Dataset schema (dataset-viewer column statistics; one record per question below):
- Id: string, length 2–6
- PostTypeId: string, 1 distinct value
- AcceptedAnswerId: string, length 2–6
- ParentId: string, 0 values (always null)
- Score: string, length 1–3
- ViewCount: string, length 1–6
- Body: string, length 34–27.1k
- Title: string, length 15–150
- ContentLicense: string, 2 distinct values
- FavoriteCount: string, 1 distinct value
- CreationDate: string, length 23
- LastActivityDate: string, length 23
- LastEditDate: string, length 23
- LastEditorUserId: string, length 2–6
- OwnerUserId: string, length 2–6
- Tags: sequence, length 1–5
- Answer: string, length 32–27.2k
- SimilarQuestion: string, length 15–150
- SimilarQuestionAnswer: string, length 44–22.3k
9512
1
9513
null
2
315
When reading about SVMs (e.g. on the German Wikipedia) there is a sentence like "an SVM is a large-margin classifier". Are there other large-margin classifiers besides SVMs?
Are there other large margin classifiers than SVMs?
CC BY-SA 3.0
null
2015-12-25T09:04:20.853
2015-12-25T09:56:21.907
null
null
8820
[ "machine-learning", "classification", "svm" ]
Yes, one famous example is the family of boosting techniques such as [AdaBoost](https://en.wikipedia.org/wiki/AdaBoost), which combines many small classifiers into a big one. Here you can find more info about [margin classifiers](https://en.wikipedia.org/wiki/Margin_classifier).
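A minimal sketch of the boosting idea mentioned above, using scikit-learn's AdaBoostClassifier; the synthetic dataset and hyperparameters are illustrative choices, not part of the original answer.

```
# Sketch: AdaBoost as an alternative large-margin-style classifier.
# Dataset and hyperparameters are illustrative, not from the answer.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# AdaBoost combines many weak learners (decision stumps by default).
clf = AdaBoostClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```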
SVM hyperplane margin
After we have $$w^Tx + b = \pm \delta$$ We can always divide everything by $\delta$, $$\left( \frac{w}{\delta}\right)^Tx + \left( \frac{b}{\delta}\right)=\pm1$$ Now, we can set $\tilde{w}=\frac{w}{\delta}$ and $\tilde{b}=\frac{b}{\delta}$. $$\tilde{w}^Tx+\tilde{b}=\pm1$$ This is as if we have set $\delta=1$ from the beginning. The derivation of the distance formula has been given in equation $(19)$ in the article that you linked to and you might like to be more specific if you can't understand it. The distance should be $\frac{2\delta}{\|w\|}$ if $\delta$ is not set to be $1$.
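A quick numeric sanity check of the rescaling argument above; the particular $w$, $b$, $\delta$ and test point are arbitrary values chosen for illustration.

```
# Numeric check that dividing w and b by delta changes neither the hyperplane
# nor the margin width 2*delta/||w||. All values here are arbitrary.
import numpy as np

w = np.array([3.0, 4.0])   # ||w|| = 5
b = 2.0
delta = 2.5

w_t, b_t = w / delta, b / delta   # tilde{w}, tilde{b}

x = np.array([1.0, -1.0])         # any point
# Signed distance to the hyperplane is the same before and after rescaling.
print((w @ x + b) / np.linalg.norm(w))
print((w_t @ x + b_t) / np.linalg.norm(w_t))

# Margin width: 2*delta/||w|| equals 2/||tilde{w}||.
print(2 * delta / np.linalg.norm(w), 2 / np.linalg.norm(w_t))
```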
9528
1
9533
null
8
240
I've understood that SVMs are binary, linear classifiers (without the kernel trick). They have training data $(x_i, y_i)$ where $x_i$ is a vector and $y_i \in \{-1, 1\}$ is the class. As they are binary, linear classifiers, the task is to find a hyperplane which separates the data points with the label $-1$ from the data points with the label $+1$. Assume for now that the data points are linearly separable and we don't need slack variables. I've read that the training problem is then the following optimization problem: - ${\min_{w, b} \frac{1}{2} \|w\|^2}$ - s.t. $y_i ( \langle w, x_i \rangle + b) \geq 1$ I think I understand that minimizing $\|w\|^2$ means maximizing the margin (however, I don't understand why it is the square here. Would anything change if one tried to minimize $\|w\|$?). I also understood that $y_i ( \langle w, x_i \rangle + b) \geq 0$ means that the model has to be correct on the training data. However, there is a $1$ and not a $0$. Why?
Where exactly does $\geq 1$ come from in SVMs optimization problem constraint?
CC BY-SA 3.0
null
2015-12-26T19:42:51.460
2016-07-06T08:11:48.710
null
null
8820
[ "machine-learning", "svm" ]
First problem: Minimizing $\|w\|$ or $\|w\|^2$: It is correct that one wants to maximize the margin. This is actually done by maximizing $\frac{2}{\|w\|}$. This would be the "correct" way of doing it, but it is rather inconvenient. Let's first drop the $2$, as it is just a constant. Now if $\frac{1}{\|w\|}$ is maximal, $\|w\|$ will have to be as small as possible. We can thus find the identical solution by minimizing $\|w\|$. $\|w\|$ can be calculated by $\sqrt{w^T w}$. As the square root is a monotonic function, any point $x$ which maximizes $\sqrt{f(x)}$ will also maximize $f(x)$. To find this point $x$ we thus don't have to calculate the square root and can minimize $w^T w = \|w\|^2$. Finally, as we often have to calculate derivatives, we multiply the whole expression by a factor $\frac{1}{2}$. This is done very often, because if we derive $\frac{d}{dx} x^2 = 2 x$ and thus $\frac{d}{dx} \frac{1}{2} x^2 = x$. This is how we end up with the problem: minimize $\frac{1}{2} \|w\|^2$. tl;dr: yes, minimizing $\|w\|$ instead of $\frac{1}{2} \|w\|^2$ would work. Second problem: $\geq 0$ or $\geq 1$: As already stated in the question, $y_i \left( \langle w,x_i \rangle + b \right) \geq 0$ means that the point has to be on the correct side of the hyperplane. However this isn't enough: we want the point to be at least as far away as the margin (then the point is a support vector), or even further away. Remember the definition of the hyperplane, $\mathcal{H} = \{ x \mid \langle w,x \rangle + b = 0\}$. This description however is not unique: if we scale $w$ and $b$ by a constant $c$, then we get an equivalent description of this hyperplane. To make sure our optimization algorithm doesn't just scale $w$ and $b$ by constant factors to get a higher margin, we define that the distance of a support vector from the hyperplane is always $1$, i.e. the margin is $\frac{1}{\|w\|}$. A support vector is thus characterized by $y_i \left( \langle w,x_i \rangle + b \right) = 1 $. As already mentioned earlier, we want all points to be either a support vector, or even further away from the hyperplane. In training, we thus add the constraint $y_i \left( \langle w,x_i \rangle + b \right) \geq 1$, which ensures exactly that. tl;dr: Training points don't only need to be correct, they have to be on the margin or further away.
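The relation between $\|w\|$ and the margin described above can be checked empirically. A small sketch with scikit-learn's SVC (linear kernel) on made-up, linearly separable toy data; the large C value approximates a hard margin and is an assumption for illustration.

```
# Sketch: fit a (nearly hard-margin) linear SVM and recover the margin 2/||w||.
# Toy data and the large C value are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X_pos = rng.randn(20, 2) + [3, 3]
X_neg = rng.randn(20, 2) - [3, 3]
X = np.vstack([X_pos, X_neg])
y = np.array([1] * 20 + [-1] * 20)

clf = SVC(kernel="linear", C=1e6)   # large C ~ hard margin
clf.fit(X, y)

w = clf.coef_[0]
print("margin width 2/||w|| =", 2 / np.linalg.norm(w))
# Support vectors satisfy y_i(<w, x_i> + b) ~= 1.
print(y[clf.support_] * (X[clf.support_] @ w + clf.intercept_[0]))
```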
What is the 1 unit in the constraint of SVM: $y_i(wx_i+b) \geq 1$
The main formula for the SVM is $y_i(wx_i +b) \geq d$. In the derivation it is changed to $1$ to standardize it across all hyperplanes. Described in words, it reads: "greater than one unit of the minimum margin distance". Suppose one hyperplane has its minimum-margin point at a Euclidean distance of 4 and another has it at 4.5. Then $y_i(wx_i +b) \geq 1$ means 1 unit out of "every 4 units" for the first hyperplane and 1 unit out of "every 4.5 units" for the other. What this means: it is mostly for mathematical convenience. Another neatness it adds is that the quantity to maximize changes from $F/\|w\|$ to $1/\|w\|$, where $F$ is the distance of the point nearest to the plane. Why it does not affect the positions of the points: the plane, i.e. $(wx_i +b)$, does not change if we rescale $w$ and $b$, so we rescale them in such a way that $F$ becomes 1. This "1" corresponds to a different scale for different hyperplanes, depending on their $w$. > Added this screenshot from Support Vector Machines Succinctly. Please read it if you want a very detailed start-to-end explanation of SVM with Python code [](https://i.stack.imgur.com/Vc2Jq.png) --- Good references for SVM: [Alexandre Kowalczyk](https://www.syncfusion.com/ebooks/support_vector_machines_succinctly/introduction) [Shuzhanfan](https://shuzhanfan.github.io/2018/05/understanding-mathematics-behind-support-vector-machines/) [Professor Yaser Abu-Mostafa](https://www.youtube.com/watch?v=eHsErlPJWUU)
9529
1
9535
null
1
3842
I am going to do regression analysis with multiple variables. In my data I have n = 23 features and m = 13000 training examples. Here is the plot of my training data (area of houses against price): [](https://i.stack.imgur.com/9lggy.png) There are 13000 training examples on the plot. As you can see it is relatively noisy data. My question is which regression algorithm is more appropriate and reasonable to use in my case. I mean is it more logical to use simple linear regression or some nonlinear regression algorithm. To be more clear I provide some examples. Here is some unrelated example of linear regression fit: [](https://i.stack.imgur.com/DyTKO.png) And some unrelated example of nonlinear regression fit: [](https://i.stack.imgur.com/uB7pm.png) And now I provide some hypothetic regression lines for my data: [](https://i.stack.imgur.com/q8FzC.png) AFAIK primitive linear regression for my data will generate very high error cost because it is very noisy and scattered data. On the other hand, there is no apparent nonlinear pattern (for example sinusoidal). What regression algorithm will be more reasonable to use in my case (house prices data) in order to get more or less appropriate houses' price prediction and why this algorithm (linear or nonlinear) is more reasonable?
How to select regression algorithm for noisy (scattered) data?
CC BY-SA 3.0
null
2015-12-26T20:10:00.473
2015-12-27T16:31:59.260
null
null
14525
[ "machine-learning", "regression", "linear-regression" ]
The model I would use is the one that minimizes the accumulated quadratic error. Both models you are considering, linear and quadratic, look reasonable; you can compute which one has the lowest error. If you want to use a more advanced method you can use [RANSAC](https://en.m.wikipedia.org/wiki/RANSAC). It is an iterative method for regression that assumes there are outliers and removes them from the optimization, so your model should be more accurate than just using the first approach I mentioned.
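A minimal sketch of the RANSAC idea with scikit-learn's RANSACRegressor; the synthetic noisy data with outliers stands in for the asker's housing data and is made up for illustration.

```
# Sketch: robust linear fit with RANSAC vs. ordinary least squares.
# Synthetic data with gross outliers stands in for the noisy housing data.
import numpy as np
from sklearn.linear_model import LinearRegression, RANSACRegressor

rng = np.random.RandomState(0)
X = rng.uniform(50, 300, size=(200, 1))           # e.g. house area
y = 1000 * X.ravel() + rng.normal(0, 20000, 200)  # noisy linear price
y[:20] += 500000                                  # a few gross outliers

ols = LinearRegression().fit(X, y)
ransac = RANSACRegressor(random_state=0).fit(X, y)  # LinearRegression inside by default

print("OLS slope:   ", ols.coef_[0])
print("RANSAC slope:", ransac.estimator_.coef_[0])
print("inlier fraction:", ransac.inlier_mask_.mean())
```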
regression with noisy target variable
It depends on how much noise: - If it's only a little noise, say for instance 2% of the target values are off by a small value, then you can safely ignore it since the regression method will rely on the most frequent patterns anyway. - If it's a lot of noise, like 50% of the target values being totally random, then unless you can detect and remove the noisy instances you can forget it: the dataset is useless. In general, ML algorithms are based on statistical principles; to some extent their job is to avoid the noise and focus on the regular patterns. But there are two things to pay attention to: - Is the noise truly random, or does it introduce some biases in the data? The latter is a much more serious issue. - Noisy data is even more likely to cause overfitting, so extra precaution should be taken against it: depending on the data, it might be necessary to reduce the number of features and/or the complexity of the model.
9532
1
15108
null
5
1055
The cost vs. iteration graph while using the ReLU activation function has a number of spikes [](https://i.stack.imgur.com/Zz7U1.png) The accuracy on the MNIST dataset (both train and test) is around 95%, and when using sigmoid as the activation function I get a smooth downward-sloping curve, so I think the implementation is correct. Are these spikes expected for ReLU? How would you explain this property?
spikes in the cost vs iteration graph using ReLU activation function
CC BY-SA 3.0
null
2015-12-27T02:19:02.473
2016-11-14T16:06:06.620
null
null
12250
[ "neural-network" ]
When I run my implementation of MNIST digit recognition I even get those spikes with a sigmoid transfer function. You have to ask yourself: how bad are those spikes? I presume you use stochastic gradient descent, right? For every iteration you consider a batch (a subset of the samples) and train on that. Some of those batches contain very difficult pictures and your network will fail on them, resulting in a high cost. This in turn leads to a temporarily large adaptation of your weights and biases. You see that it drops again quite quickly because the other samples will send it back in the right direction. I think you get more spikes because your ReLU network is simply better at generalizing, and thus after a while you will get more (but lower) peaks as the network gets better and better. And in the end the most important criterion is your accuracy, not the cost ;). Accuracy has fewer problems with the spikes because it takes into account all the samples instead of a single batch. Concluding: stochastic gradient descent will go to a low point, but because of its random walk it might bump into some peaks; in the end it will find a nice low cost. If you still have some questions, feel free to ask; I am still quite new here and still have to learn how to give good answers.
How does cost function change by choice of activation function (ReLU, Sigmoid, Softmax)?
You need to discriminate between two types of neural networks. If your output variable is continuous you can use linear, ReLU, tanh, logistic-sigmoid,... as activation functions, because these functions map continuous inputs to continuous outputs. If your output is discrete / categorical you can use the signum (binary) or softmax (multiclass) function as the activation function for the output layer. The cost function is often a function that compares the real outputs $y_n$ and the predicted outputs $\hat{y}(x_n)$ for the input $x_n$ for all $n=1,...,N$. Let us introduce the comparison function $D(y_n,\hat{y}(x_n))$. The comparison function has a low value if the predicted output is almost equal to the real output and a high value if the outputs are not similar. Assuming all the observations are equally important, we can sum the values of the comparison function applied to all observations and obtain the integrated loss $$J=\sum_{n=1}^ND(y_n,\hat{y}(x_n))$$ for the whole data set. In order to see the influence of the activation function $g$ in the last layer we summarize the transfer function from the input to the last layer as $f(x_n)$. Then the predicted output $\hat{y}(x_n)$ can be written as $$\hat{y}(x_n)=g(f(x_n)).$$ Hence, the activation function at the output has an effect on the integrated loss $J$. For example, if you choose $\tanh$ as the output activation you will bound your outputs to the interval $(-1,1)$, which will be a bad choice if your outputs can be from $\mathbb{R}$, and your cost function will probably have a very high value while training. A better choice would be a linear activation function at the output layer.
9536
1
9543
null
5
483
A 1024×1024 pixel image has around one million pixels. If I wanted to connect each pixel to an R, G, B input neuron, then more than 3 million neurons would be needed. It would be really hard to train a neural network which has millions of inputs. How is it possible to reduce the number of neurons?
How is it possible to process an image with a few neurons?
CC BY-SA 3.0
null
2015-12-27T16:52:01.120
2015-12-31T10:24:18.860
null
null
15024
[ "machine-learning", "neural-network", "deep-learning", "image-classification" ]
There are several ways to make this large number of inputs trainable: - Use CNNs - Auto-Encoders (see Reducing the Dimensionality of Data with Neural Networks) ## Dimensionality reduction of the input - Scale the image down - PCA / LDA ## Troll-Answer If you really meant "only a few neurons" then you might want to have a look at [Spiking neural networks](https://en.wikipedia.org/wiki/Spiking_neural_network). Those are incredibly computationally intensive, need a lot of hand-crafting and still get worse performance than normal neural networks for most tasks ... but you only need very few of them.
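A minimal sketch of the input-side dimensionality reduction options listed above (downscaling plus PCA); the image size, subsampling factor and component count are arbitrary choices, and the random array stands in for the question's 1024×1024 RGB images.

```
# Sketch: shrink the input before it reaches the network.
# Sizes and the number of PCA components are arbitrary illustration values.
import numpy as np
from sklearn.decomposition import PCA

images = np.random.rand(100, 64, 64, 3)   # stand-in dataset (real case: 1024x1024x3)

# Option 1: scale the images down (naive strided subsampling here; in practice
# use cv2.resize or PIL for proper interpolation).
small = images[:, ::4, ::4, :]            # 64x64 -> 16x16
print(small.shape)

# Option 2: PCA on the flattened, downscaled images.
flat = small.reshape(len(small), -1)      # 100 x (16*16*3)
codes = PCA(n_components=50).fit_transform(flat)
print(codes.shape)                        # 100 x 50 inputs for the network
```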
Why is there only one type of artificial neuron?
The example you cited (using x^2 instead of x) is an idea more popular outside the deep learning community, called feature engineering. The trend in neural network modeling is instead to: - Play with the weights (w) and fine-tune them. - Not change the input vector (x) but feed it to the network directly. - If a single-layer neural network is not good enough, add more layers. - Introduce non-linearities using activation functions. - And in general, not hand-roll features (like x^2) but let the neural network discover such features.
9558
1
9566
null
2
793
I don't have a background in neural networks, but various studies have shown that neural networks (feed-forward / recurrent) outperform n-gram language models for predicting words in a sequence. However, in an application to text messaging or any text-based conversation, the language used will most likely be more informal or colloquial. Can a neural network still perform better than an n-gram LM, considering the data to be fed in are text messages (colloquial phrases)? If so, please enlighten me, thanks.
Neural Networks for Predictive typing
CC BY-SA 3.0
null
2015-12-30T16:11:20.963
2015-12-31T12:37:01.817
null
null
15079
[ "machine-learning", "neural-network", "nlp", "language-model" ]
A neural network is in principle a good choice when you have A LOT of similar data and classification tasks. Predicting the next character (or word... which is just multiple characters) is such a scenario. I don't think it really matters which kind of language you have, as long as you have enough training data of the same kind. See [The Unreasonable Effectiveness of Recurrent Neural Networks](http://karpathy.github.io/2015/05/21/rnn-effectiveness/) for a nice article where a recurrent neural network (RNN) was used as a character predictor to write complete texts. They also have code on [github.com/karpathy/char-rnn](https://github.com/karpathy/char-rnn) ready to train / run. You can feed it with a start string and ask for the next characters / words.
Types of Recurrent Neural Networks
They match up fairly well. The first Goodfellow description is Karpathy's final "many to many" image. The output at each time step is based on the previous hidden state of the net and the input. The third Goodfellow description directly corresponds to Karpathy's "many to one" image. This model reads an entire input sequence, and then produces one output. The only difference is that the second description from Goodfellow's text isn't captured by Karpathy's image. Here's my rendition of what that description states. [](https://i.stack.imgur.com/SWsKl.jpg)
9562
1
9563
null
3
680
I'm new to this field and I started working with data using R. Because of that, I find R much easier for approaching a data project. However, apparently employers want you to know an object-oriented programming language (a language like Python). So would it be smart to use Python only when I need to deal with a complex programming process, like replacing the NAs in the Titanic/Kaggle project with the average based on name, and to use R for everything else? So use them both interchangeably? Besides the fact that Python is more programming-oriented, I don't see why somebody would use it over R...
Using R and Python together
CC BY-SA 4.0
null
2015-12-31T03:53:38.003
2019-06-09T15:20:13.980
2019-06-09T15:20:13.980
29169
15010
[ "r", "python" ]
Several clarifications: - you can program with object-oriented (OOP) concepts in R, even though OOP in R has slightly different syntax from other languages. Methods do not bind to objects. In R, different method versions will be invoked based on the input argument classes (and types). (Ref: Advanced R) - you can also replace NAs with the mean / any statistic / value in R using a mask if you store the data in a dataframe (See SO post) - There is no problem using them interchangeably. I use R from Python using the package RPy2. I assume it is equally easy to do it the other way round. At the end of the day, any language is only as good as how much the users know about it. Use the one that you are more familiar with and try to learn it properly using the vast resources available online.
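A minimal sketch of calling R from Python with rpy2, as the answer mentions; the R expressions here are trivial placeholders, not anything from the Titanic example.

```
# Sketch: mixing R and Python via rpy2 (the R code here is a trivial placeholder).
import rpy2.robjects as robjects

# Run an arbitrary R expression and pull the result back into Python.
result = robjects.r('mean(c(1, 2, 3, NA), na.rm = TRUE)')
print(float(result[0]))   # -> 2.0

# Call an R function object directly.
r_sum = robjects.r['sum']
print(int(r_sum(robjects.IntVector([1, 2, 3]))[0]))   # -> 6
```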
Which is better for a beginner: R or Python?
It's a perennial debate. Python is more readable and quicker to learn, no question. It's also a bit more general-purpose language than R. R, on the other hand, has special statistical packages for just about anything you could even dream of doing. There are some stats it can do that Python doesn't have a library for (though I suspect people are working on that). What's more important in the early stages of learning data science is the more fundamental theory: mathematics, linear algebra, calculus, and statistics. A firm grasp of those areas is a considerably larger share of the learning path than a particular language. Having said that, I do happen to prefer Python because of the readability. The fact is, that very often other people will need to come in after you and read what you wrote. For that matter, you sometimes have to read your own code! Readability is more important than the ability to do fantastic things with one-liners. As for the advanced stats, that comes into play most often in the medical fields; there you definitely find a preponderance of R over Python.
9573
1
9574
null
5
7870
On [this](https://en.wikipedia.org/wiki/Support_vector_machine) wiki page, I came across the following phrase: > When data is not labeled, a supervised learning is not possible, and an unsupervised learning is required I cannot figure out why supervised learning is not possible. I'd appreciate any help resolving this ambiguity.
supervised learning and labels
CC BY-SA 3.0
null
2016-01-01T08:11:03.380
2018-07-18T03:11:48.957
2016-12-30T07:52:27.277
15091
15115
[ "machine-learning", "supervised-learning", "unsupervised-learning" ]
The main difference between supervised and unsupervised learning is the following: In supervised learning you have a set of labelled data, meaning that you have the values of the inputs and the outputs. What you try to achieve with machine learning is to find the true relationship between them, which in mathematical terms we usually call the model. There are many different algorithms in machine learning that allow you to obtain a model of the data. The objective that you seek, and how you can use machine learning, is to predict the output given a new input, once you know the model. In unsupervised learning you don't have the data labelled. You can say that you have the inputs but not the outputs. And the objective is to find some kind of pattern in your data. You can find groups or clusters whose members you believe belong to the same group or output. Here you also have to obtain a model. And again, the objective you seek is to be able to predict the output given a new input. Finally, going back to your question, if you don't have labels you cannot use supervised learning; you have to use unsupervised learning.
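A short sketch of the contrast with scikit-learn (the toy data is made up): a supervised classifier needs both the inputs and the labels, while a clustering algorithm is fit on the inputs alone.

```
# Sketch: supervised fit(X, y) vs. unsupervised fit(X) on toy data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

X = np.array([[0.0, 0.1], [0.2, 0.0], [5.0, 5.1], [5.2, 4.9]])
y = np.array([0, 0, 1, 1])            # labels are available -> supervised

clf = LogisticRegression().fit(X, y)  # needs inputs AND outputs
print(clf.predict([[0.1, 0.1], [5.0, 5.0]]))

km = KMeans(n_clusters=2, n_init=10).fit(X)  # no labels -> finds structure itself
print(km.labels_)
```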
Semi Supervised Learning without label propagation
One strategy that seems good here is instance-level constrained clustering. These methods are semi-supervised algorithms that use "must-link" and "cannot-link" constraints between instances of known labels. So in your example, you would bind the 4 pairs (red, blue), (red, yellow), (blue, yellow), and (man, woman) as "must-link", and the 6 pairs (red, man), (red, woman), ..., (yellow, woman) as "cannot-link". The results are similar to unsupervised clustering. For example, if you were to use DBSCAN (ignoring the labels/constraints), you would not need to specify the number of clusters/groups you're trying to achieve, and the algorithm would even find "outliers". In fact, there is a version of DBSCAN that supports instance-level constraints, called C-DBSCAN. It is described in the work "Density-based semi-supervised clustering" by Ruiz et al (2010). I do not know of any out-of-the-box implementations available, but I have a working version of C-DBSCAN I implemented for an experiment. However, it is not documented, nor is it performant/production-level. You can find it at my [lab's repository](https://github.com/MaLL-UFSCar/CDenStream) if you're interested (it also contains C-DenStream, which is the data-streams version of it, but that does not seem to fit your problem).
9596
1
9606
null
3
1168
So I have a large collection of blog posts containing `title`, `content`, `category`, `tags` and `geo-location` fields and I'm looking to achieve three things: - Assign a category (or multiple categories) to all the posts and any new ones. I have a strict vocabulary of categories. - Add new tags to the posts that might be relevant to the post. - Mark the post if it contains information about a place. For example: Lorem ipsum dolor sit amet San Francisco, consectetur adipiscing elit. I've been looking into different machine learning algorithms, most recently decision trees, but I don't feel that is the best algorithm to work out the problems above (or that I haven't understood them enough). Many of these posts already contain `categories`, `tags` and `geo-location` data. Some do not contain any information and some have only a few details. What would be the best machine learning algorithm to look into to solve each of the three areas?
Machine learning algorithm to classify blog posts
CC BY-SA 3.0
null
2016-01-03T23:32:48.993
2019-01-18T09:40:53.570
2016-01-04T12:20:23.690
8820
15157
[ "machine-learning", "data-mining", "classification", "algorithms" ]
# Question 1: Category prediction To predict the category of a new blog post, you could do the following: - Build an MLP (multilayer perceptron, a very simple neural network). Each category gets an output node, each tag is a binary input node. However, this will only work if the number of tags is very small. As soon as you add new tags, you will have to retrain the network. - Build an MLP with "important words" as features. - If you have internal links, you might want to have a look at "On Node Classification in Dynamic Content-based Networks". In case you're German, you might also like Über die Klassifizierung von Knoten in dynamischen Netzwerken mit Inhalt - You could take all words you currently have and see those as a vector space. Fix that vocabulary (and probably remove some meaningless words like "with", "a", "an" - these are commonly called "stopwords"). For each text, you count the different words you have in your vocabulary. A new blog post is a point in this space. Use $k$ nearest neighbors for classification. - Use combinations of different predictors by letting each predictor give a vote for a classification. ## See also - Yiming Yang, Jan O. Pedersen: A Comparative Study on Feature Selection in Text Categorization, 1997. - Scikit-learn: Working With Text Data # Question 2: Tagging texts This can be treated the same way as question 1. # Question 3: Finding locations Download a database of countries / cities (e.g. [maxmind](https://www.maxmind.com/de/free-world-cities-database)) and just search for a match.
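A minimal sketch of the bag-of-words + $k$-nearest-neighbour idea from the list above, using scikit-learn; the mini-corpus and category names are invented for illustration and TF-IDF is used as one common way to fix the vocabulary.

```
# Sketch: bag-of-words vectors + kNN for category prediction.
# The mini-corpus and category names are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

posts = [
    "new gpu benchmark and overclocking guide",
    "quarterly earnings beat analyst expectations",
    "how to finetune a neural network on images",
    "stock market falls on inflation worries",
]
categories = ["tech", "finance", "tech", "finance"]

model = make_pipeline(
    TfidfVectorizer(stop_words="english"),  # fixes the vocabulary, drops stopwords
    KNeighborsClassifier(n_neighbors=1),
)
model.fit(posts, categories)
print(model.predict(["neural network training on a new gpu"]))  # expected: ['tech']
```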
Text to Text classification
## I think that is because logistic regression with ~3k labels is not a good choice
You are right, but let me rephrase it a bit better: in general, classification with ~3k labels is not a good choice! You basically have a search/recommendation problem: given your input, you find the best-fitting ticket/dashboard and assign it. It is a very interesting ML project actually! Here is a confident starting point; if it does not work, please come back with results and I will update the answer: ## If you want to go Unsupervised Query-Document Matching - Use a simple TF-IDF to vectorise your text - Apply a dimensionality reduction to reduce the high-dimensional sparse vectors to low-dimensional dense vectors. If you use matrix factorisation for this, you are basically doing the famous classic LSA - In that vector space, find the closest label to your query and assign it to the query Topic Modeling - Apply a simple LDA to model topics for the corpus - Given a query, find the best-matching topic for that query and assign the query to that topic (cluster) - Please note that LDA finds intrinsic topics. So if your labels are different from the topics it finds, you need to rely on the labels and ignore this solution ## A little bit more Supervised - Create a dataset from your corpus (or maybe you already have it) in which sentence pairs (titles, descriptions, etc.) that belong to the same topic/label have label $1$, sentence pairs which belong to different topics/classes/labels have label $-1$, and sentence pairs with a neutral relation have the label $0$. I put an example as a PS at the end. - Feed this data to S-BERT to fine-tune the pre-trained model - Read this, learn it and use it for finding the most similar ticket/dashboard to the query PS: what the data for S-BERT looks like (I just made up some dummy examples! hope you get the idea)
```
sentence1: He is a man
sentence2: He is male
label: 1

sentence1: programming is hard
sentence2: Maradona was a magician
label: -1

sentence1: don't know what to write here
sentence2: never mind, I think you got what I mean
label: 0
.
.
.
```
9598
1
9599
null
10
4528
I have a dataset containing data on temperature, precipitation and soybean yields for a farm for 10 years (2005 - 2014). I would like to predict yields for 2015 based on this data. Please note that the dataset has DAILY values for temperature and precipitation, but only 1 value per year for the yield, since harvesting of crop happens at end of growing season of crop. I want to build a regression or some other machine learning based model to predict 2015 yields, based on a regression/some other model derived by studying the relation between yields and temperature and precipitation in previous years. I am familiar with performing machine learning using scikit-learn. However, not sure how to represent this problem. The tricky part here is that temperature and precipitation are daily but yield is just 1 value per year. How do I approach this?
Building a machine learning model to predict crop yields based on environmental data
CC BY-SA 3.0
null
2016-01-04T00:17:58.200
2020-07-31T14:32:22.763
null
null
12985
[ "python", "scikit-learn", "pandas" ]
For starters, you can predict the yield for the upcoming year based on the daily data for the previous year. You can estimate the model parameters by considering each year's worth of data as one "point", then validate the model using cross-validation. You can extend this model by considering more than the past year, but if you look back too far you'll have trouble validating your model and will overfit.
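The "each year is one point, then cross-validate" idea can be sketched with scikit-learn; here the yearly feature matrix is a random placeholder for aggregated weather data, and Ridge regression is an arbitrary model choice.

```
# Sketch: each year is one training "point"; validate by leaving one year out.
# The yearly feature matrix and yields below are random placeholders.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.RandomState(0)
X = rng.rand(10, 4)           # 10 years x 4 aggregated weather features
y = rng.normal(3.0, 0.4, 10)  # one soybean yield per year

scores = cross_val_score(Ridge(), X, y, cv=LeaveOneOut(),
                         scoring="neg_mean_absolute_error")
print("MAE per held-out year:", -scores)
```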
Using machine learning technique to predict commodity prices
Based on your question there are a couple of things I would assume in order to answer it: - As you need to predict the commodity price, the data collected is time-series data. - Since you want to use another commodity to predict it, you don't have any past data for the product you want to predict. The answer could be derived by performing some exploratory analysis on the existing data, i.e., based on your business understanding you need to decide which product is similar to the new product. This kind of technique is used to understand the sales of a new product / how it is going to perform after launch. Techniques which can be used here are time-series analysis methods like ARIMA; if seasonality is present then SARIMA; if there is no trend then exponential smoothing (too many spikes); and there are other models like autoregression, moving average, and Croston if there are 0's, etc. This is one way of looking at your problem.
9604
1
9623
null
5
7344
I have seen many examples online regarding the MNIST dataset, but it's all in black and white. In that case, a 2D array can be constructed where the values at each array element represent the intensity of the corresponding pixel. However, what if I want to do colored images? What's the best way to represent the RGB data? There's a very brief discussion of it [here](http://neuralnetworksanddeeplearning.com/chap6.html#exercise_683491), which I quote below. However, I still don't get how the RGB data should be organized. Additionally, is there some OpenCV library/command we should use to preprocess the colored images? > the feature detectors in the second convolutional-pooling layer have access to all the features from the previous layer, but only within their particular local receptive field* *This issue would have arisen in the first layer if the input images were in color. In that case we'd have 3 input features for each pixel, corresponding to red, green and blue channels in the input image. So we'd allow the feature detectors to have access to all color information, but only within a given local receptive field.
How to prepare colored images for neural networks?
CC BY-SA 3.0
null
2016-01-04T10:38:13.970
2016-01-04T22:49:17.717
null
null
13625
[ "neural-network", "deep-learning", "image-classification" ]
Your R, G, and B pixel values can be broken into 3 separate channels (and in most cases this is done for you). These channels are treated no differently than feature maps in higher levels of the network. Convolution extends naturally to more than 2 dimensions. Imagine the greyscale, single-channel example. Say you have N feature maps to learn in the first layer. Then the output of this layer (and therefore the input to the second layer) will be comprised of N channels, each of which is the result of convolving a feature map with each window in your image. Having 3 channels in your first layer is no different. This tutorial does a nice job of explaining convolution in general. [http://deeplearning.net/tutorial/lenet.html](http://deeplearning.net/tutorial/lenet.html)
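A short sketch of what "treating R, G, B as three input channels" looks like in code; the array shapes are illustrative, and note that OpenCV stores channels in BGR order.

```
# Sketch: an RGB image as a 3-channel input tensor.
# Shapes are illustrative; OpenCV stores channels in BGR order.
import numpy as np
import cv2

img = (np.random.rand(32, 32, 3) * 255).astype(np.uint8)  # stand-in colour image

b, g, r = cv2.split(img)                       # three 32x32 single-channel arrays
channels_first = np.transpose(img, (2, 0, 1))  # (3, 32, 32), the usual CNN layout
print(b.shape, channels_first.shape)

# A first-layer filter then has shape (n_channels, k, k), e.g. (3, 5, 5),
# so each feature detector sees all three colours within its receptive field.
kernel = np.random.randn(3, 5, 5)
patch = channels_first[:, :5, :5]
print(np.sum(kernel * patch))                  # one output value of the convolution
```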
How to prepare/augment images for neural network?
The idea with Neural Networks is that they need little pre-processing since the heavy lifting is done by the algorithm, which is the one in charge of learning the features. The winners of the Data Science Bowl 2015 have a great write-up regarding their approach, so most of this answer's content was taken from: [Classifying plankton with deep neural networks](https://benanne.github.io/2015/03/17/plankton.html). I suggest you read it, especially the part about pre-processing and data augmentation. - Resize Images As for different sizes, resolutions or distances you can do the following. You can simply rescale the largest side of each image to a fixed length. Another option is to use OpenCV or SciPy; the following will resize the image to have 100 cols (width) and 50 rows (height):
```
resized_image = cv2.resize(image, (100, 50))
```
Yet another option is to use the scipy module:
```
small = scipy.misc.imresize(image, 0.5)
```
- Data Augmentation Data augmentation always improves performance, though the amount depends on the dataset. If you want to augment the data to artificially increase the size of the dataset, you can do the following if the case applies (it wouldn't apply if, for example, they were images of houses or people, which would lose information when rotated by 180°, though flipping them like a mirror would still be fine): - rotation: random with angle between 0° and 360° (uniform) - translation: random with shift between -10 and 10 pixels (uniform) - rescaling: random with scale factor between 1/1.6 and 1.6 (log-uniform) - flipping: yes or no (bernoulli) - shearing: random with angle between -20° and 20° (uniform) - stretching: random with stretch factor between 1/1.3 and 1.3 (log-uniform) You can see the results on the Data Science Bowl images. Pre-processed images [](https://i.stack.imgur.com/0S0Y0.png) augmented versions of the same images [](https://i.stack.imgur.com/KJXZK.png) - Other techniques These deal with other image properties like lighting and are closer to a simple pre-processing step than to the main algorithm. Check the full list on: [UFLDL Tutorial](http://ufldl.stanford.edu/tutorial/unsupervised/PCAWhitening/)
9612
1
9618
null
3
443
I have a dataset containing data on temperature, precipitation, and soybean yields for a farm for 10 years (2005 - 2014). I would like to predict yields for 2015 based on this data. Please note that the dataset has DAILY values for temperature and precipitation, but only 1 value per year for the yield (since harvesting of the crop happens at the end of its growing season). I would like to build a regression or some other machine learning based model to predict 2015 yields, based on a regression/some other model derived by studying the relation between yields and temperature and precipitation in previous years. As per [Building a machine learning model to predict crop yields based on environmental data](https://datascience.stackexchange.com/questions/9598/building-a-machine-learning-model-to-predict-crop-yields-based-on-environmental), I am using `sklearn.cross_validation.LabelKFold` to assign each year the same label. The question is: since I have a single target value per year, do I need to interpolate to fill in target values for all the other days of the year? Should I just use the same target value for each day of the year?
Assigning values to missing target vector values in scikit-learn
CC BY-SA 4.0
null
2016-01-04T15:17:44.630
2019-06-07T16:52:18.083
2019-06-07T16:52:18.083
29169
12985
[ "python", "scikit-learn", "pandas" ]
The model likely won't have much predictive power if the input is a single day. No weather patterns longer than one day can be captured that way. Instead you should aggregate the days together. You can come up with different features that describe your larger, aggregated unit of time (months, year). For example mean precipitation is a very simple one. Binning the data and using counts within those bins would also work. More advanced options would roll the time all the way up to a full year and learn a feature set at that level.
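A small sketch of the aggregation and binning suggestions above, using pandas; the column names, bin edges and fake daily data are assumptions for illustration. The resulting one-row-per-year table can then be joined with the single yearly yield value.

```
# Sketch: turn a year of daily values into one fixed-length feature vector
# via summary stats and histogram-style bin counts (bins chosen arbitrarily).
import numpy as np
import pandas as pd

rng = np.random.RandomState(1)
days = pd.DataFrame({
    "year": np.repeat([2005, 2006], 365),
    "temp": rng.normal(20, 8, 730),
    "precip": rng.gamma(2.0, 3.0, 730),
})

def yearly_features(g):
    temp_bins = np.histogram(g["temp"], bins=[-10, 0, 10, 20, 30, 50])[0]
    feats = {"precip_mean": g["precip"].mean(), "precip_total": g["precip"].sum()}
    feats.update({f"temp_bin_{i}": c for i, c in enumerate(temp_bins)})
    return pd.Series(feats)

features = days.groupby("year").apply(yearly_features)
print(features)   # one row per year, ready to pair with the yearly yield target
```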
Scikit Learn Missing Data - Categorical values
We would need more information on the prediction problem and the features to be able to give something more precise. Anyhow, I am surprised no answer so far included all possible options since they aren't that many: - get rid of incomplete observations or features --- obviously, only viable if there are few incomplete cases since you lose too much information otherwise - replace NAs with some value like -1 --- this depends on the classifier you use; if your classifier supports categorical variables, you can create a new category for those NAs for example. In some continuous variables, sometimes there are some values that make sense (for instance, in text mining classification, if you have a title-length feature but you have no title, it might make sense to replace with title-length=0) - fill up the missing data This last point encompasses too many things: - replace NAs with the median (this is the usual lazy approach; sklearn has a class for this) - if time series, replace with an average of the previous and following values -- in pandas, this can be done using DataFrame.resample(). - use the $k$ closest neighbors. build a KNN model using the other variables and then do the average of those neighbors (if you use euclidean distance, you probably should normalize first). I never seen this done, but you probably could try predicting the missing NAs using another model as well. But all this depends very much on what you are doing. For instance, if you have performed clustering analysis and you know your data is made up of clusters, you could use the median within each cluster. Possibly other solutions could include things like multimodal or multiview models. These are recent techniques that can cope with missing modalities, and you can see a feature, or subset of features, as a modality. For instance, you could build a different classifier for various subsets of your features (using the complete cases in each of those subsets) and then build another classifier on top of that to merge those probabilities. I would only try these techniques if most of your data is missing. There are more advanced deep learning versions of this using autoencoders.
9632
1
9640
null
7
15582
I have been looking for a while for examples of how I could find the points at which a function achieves its minimum using a genetic algorithm approach in Python. I looked at DEAP documentation, but the examples there were pretty hard for me to follow. For example: ``` def function(x,y): return x*y+3*x-x**2 ``` I am looking for some references on how I can make a genetic algorithm in which I can feed some initial random values for both x and y (not coming from the same dimensions). Can someone with experience creating and using genetic algorithms give me some guidance on this?
Simple example of genetic alg minimization
CC BY-SA 3.0
null
2016-01-05T10:57:35.867
2016-01-05T19:47:52.763
2016-01-05T16:58:45.380
13413
14946
[ "python", "optimization", "genetic-algorithms" ]
[Here is a trivial example](http://lethain.com/genetic-algorithms-cool-name-damn-simple/), which captures the essence of genetic algorithms more meaningfully than the polynomial you provided, which is solvable via [stochastic gradient descent](https://en.wikipedia.org/wiki/Stochastic_gradient_descent), a simpler minimization technique. For this reason, I am instead suggesting this excellent article and example by Will Larson. [Quoted from the original article](http://lethain.com/genetic-algorithms-cool-name-damn-simple/):
> Defining a Problem to Optimize
> Now we're going to put together a simple example of using a genetic algorithm in Python. We're going to optimize a very simple problem: trying to create a list of N numbers that equal X when summed together.
> If we set N = 5 and X = 200, then these would all be appropriate solutions.
> lst = [40,40,40,40,40]
> lst = [50,50,50,25,25]
> lst = [200,0,0,0,0]
Take a look at the [entire article](http://lethain.com/genetic-algorithms-cool-name-damn-simple/), but here is the complete code (note that the quoted code is written for Python 2: `xrange`, the `print` statement and the built-in `reduce`):
```
"""
# Example usage
from genetic import *
target = 371
p_count = 100
i_length = 6
i_min = 0
i_max = 100
p = population(p_count, i_length, i_min, i_max)
fitness_history = [grade(p, target),]
for i in xrange(100):
    p = evolve(p, target)
    fitness_history.append(grade(p, target))

for datum in fitness_history:
    print datum
"""
from random import randint, random
from operator import add

def individual(length, min, max):
    'Create a member of the population.'
    return [ randint(min,max) for x in xrange(length) ]

def population(count, length, min, max):
    """
    Create a number of individuals (i.e. a population).

    count: the number of individuals in the population
    length: the number of values per individual
    min: the minimum possible value in an individual's list of values
    max: the maximum possible value in an individual's list of values
    """
    return [ individual(length, min, max) for x in xrange(count) ]

def fitness(individual, target):
    """
    Determine the fitness of an individual. Higher is better.

    individual: the individual to evaluate
    target: the target number individuals are aiming for
    """
    sum = reduce(add, individual, 0)
    return abs(target-sum)

def grade(pop, target):
    'Find average fitness for a population.'
    summed = reduce(add, (fitness(x, target) for x in pop))
    return summed / (len(pop) * 1.0)

def evolve(pop, target, retain=0.2, random_select=0.05, mutate=0.01):
    graded = [ (fitness(x, target), x) for x in pop]
    graded = [ x[1] for x in sorted(graded)]
    retain_length = int(len(graded)*retain)
    parents = graded[:retain_length]

    # randomly add other individuals to
    # promote genetic diversity
    for individual in graded[retain_length:]:
        if random_select > random():
            parents.append(individual)

    # mutate some individuals
    for individual in parents:
        if mutate > random():
            pos_to_mutate = randint(0, len(individual)-1)
            # this mutation is not ideal, because it
            # restricts the range of possible values,
            # but the function is unaware of the min/max
            # values used to create the individuals
            individual[pos_to_mutate] = randint(
                min(individual), max(individual))

    # crossover parents to create children
    parents_length = len(parents)
    desired_length = len(pop) - parents_length
    children = []
    while len(children) < desired_length:
        male = randint(0, parents_length-1)
        female = randint(0, parents_length-1)
        if male != female:
            male = parents[male]
            female = parents[female]
            half = len(male) / 2
            child = male[:half] + female[half:]
            children.append(child)

    parents.extend(children)
    return parents
```
I think it could be quite pedagogically useful to also solve your original problem using this algorithm and then also construct a solution using `stochastic grid search` or `stochastic gradient descent`; you will gain a deep understanding of the juxtaposition of those three algorithms. Hope this helps!
Understanding genetic algorithms
A genetic algorithm is an algorithm based on biological evolution, on how nature evolved. It does exactly that, evolve the algorithm so that "it" finds the best solution to the problem at hand. You can use a genetic algorithm to find a solution to a problem which you don't know the answer, you know the answer but want to know a different one, or you are just lazy. The common steps for a genetic algorithm are: - Generate a random population of elements - Evaluate fitness of each element (how good are they against the solution) - Take the best elements for the next generation - Generate child elements using the above elements - Mutate (randomly) each child. This can also be done before step 4 with parents. - Repeat from step 2 for n number of generations until the solution is found, the maximum number of generations have been reached or the fitness is not changing anymore (local minima). Two basic problems with GAs: - To get trapped in a local minima (or local maximum depending on the point of view). This means that you might find a good answer (or not even a good one) but it will never reach the best answer because fitness values are not changing or not getting better. This is fought using Mutation in step 5 to keep diversity in the population so it doesn't get stuck. - They are generally slower than other approaches. If you can, take a look at [this](http://www.springer.com/gb/book/9781484203293) book. It is not free, but worth a look. Alternatively, take a look at [this](http://natureofcode.com/book/chapter-9-the-evolution-of-code/) online book for free, it is a great source and he has his own youtube channel. It's more basic than the other book I recommend, but will help you get started with GAs. To answer the other part of your question, the GA should be used to find a model, but will not act as a model. For example, if you have a neural network you can train it using the backpropagation method, but you could also train it using a GA. The GA will not use anything of the backpropagation mathematics, it could be used to generate weights in its neurons, and evaluate the answer (last layer). Weights will evolve to get closer to the solution. In this scenario, the model will still be the NN, but you used a different algorithm to find the best NN. Hope this helps.
9636
1
9638
null
2
76
I've no experience in data science so this will be one of those questions... I have data from >100k purchases made via a webshop regarding a catalogue of more than 100 items. The history of purchases, flattened out, looks like
```
Item1  Item2  ...  ItemN  Sex  State
    5      0          0   M    NY
   25     15          0   F    IL
    0      1          1   ?    NY
```
By playing around with the data, I can deduce simple facts like "90% of all purchases include at least 3 Item1", "If there are at least 4 of Item2, it is likely that Item3 is 0" or "60% of all customers from NY are male, but only 40% of those from IL are". Given the amount of combinations and data there is, the most obvious question: how can I approach wringing out more information from a data set like the above? I'm mostly interested in how one item does or does not entail inclusion of another...
Mine webshop history for clusters
CC BY-SA 3.0
null
2016-01-05T13:27:19.253
2016-01-05T14:42:39.350
null
null
15196
[ "data-mining", "classification", "clustering" ]
[Frequent Item-Set Mining](https://en.wikipedia.org/wiki/Association_rule_learning) is what you are looking for. You can see the tree structure of your frequent itemsets and the association rules afterwards. For your data I'd suggest looking at it as a whole for a while to get a sense of what you have in hand. Playing with concepts like probability distributions, [Entropy](https://en.wikipedia.org/wiki/Entropy_%28information_theory%29), etc. would be really helpful, in case you can reduce the size of your feature set. [PCA](https://en.wikipedia.org/wiki/Principal_component_analysis) also gives you the opportunity of projecting your data into a low-dimensional space, and you can also look at plots of the first several PCs in 2-D or 3-D to get an impression of your data. Before all of the above, I strongly suggest checking whether you have [Missing Values](https://www.utexas.edu/cola/prc/_files/cs/Missing-Data.pdf) and, if so, trying to cope with them.
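A minimal sketch of frequent item-set mining; the library choice (mlxtend), the support/confidence thresholds and the toy transactions are assumptions for illustration, not part of the answer.

```
# Sketch: frequent itemsets + association rules on toy basket data.
# Library choice (mlxtend), thresholds, and transactions are illustrative.
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# One-hot encoded purchase history: did the order contain the item at all?
baskets = pd.DataFrame(
    [[1, 0, 0], [1, 1, 0], [0, 1, 1], [1, 1, 0], [1, 0, 1]],
    columns=["Item1", "Item2", "Item3"],
).astype(bool)

itemsets = apriori(baskets, min_support=0.4, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.6)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```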
Web services to mine the social web?
Twitter's API is one of the best sources of social network data. You can extract pretty much everything you can imagine from Twitter; you just need an account and a developer ID. The documentation is rather large, so I will let you navigate it. [https://dev.twitter.com/overview/documentation](https://dev.twitter.com/overview/documentation) As usual, there are wrappers that make your life easier. - python-twitter - twitteR There are also companies that offer detailed Twitter analytics and historical datasets for a fee. - Gnip - Datasift Check them out!
9651
1
9657
null
1
1385
Suppose I have a JSON file like this:
```
[
  {
    "id": "0",
    "name": "name0",
    "first_sent": "date0",
    "analytics": [
      { "a": 1, ... },
      { "a": 2, ... }
    ]
  }
]
```
and I want to parse it with Pandas, so I load it with
```
df = pd.read_json('file.json')
```
It's all good until I try to access and count the number of dictionaries in the "analytics" field for each item, for which I haven't found any better way than
```
for i in range(df.shape[0]):
    num = len(df[i:i+1]['analytics'][i])
```
But this looks totally inelegant and misses the point of using Pandas in the first place. I need to be able to access the fields within "analytics" for each item. The question is how to use Pandas to access fields within a field (which maps to a Series object), without reverting to non-Pandas approaches. The head of the DataFrame looks like this (only the fields 'analytics' and 'id' reported):
```
0    [{u'a': 0.0, u'b...
1    [{u'a': 0.01, u'b...
2    [{u'a': 0.4, u'b...
3    [{u'a': 0.2, u'b...
Name: analytics, dtype: object
0    '0'
1    '1'
2    '2'
3    '3'
```
The first number is obviously the index, the string is the 'id', and it is clear that 'analytics' appears as a Series.
Pandas: access fields within field in a DataFrame
CC BY-SA 3.0
null
2016-01-06T14:21:05.260
2020-08-03T15:11:50.693
2016-01-06T16:11:28.687
982
982
[ "python", "pandas" ]
Multi-indexing might be helpful. See [this](http://pandas.pydata.org/pandas-docs/stable/advanced.html). But the below was the immediate solution that came to mind. I think it's a little more elegant than what you came up with (fewer obscure numbers, more interpretable natural language):
```
import pandas as pd

df = pd.read_json('test_file.json')
df = df.append(df)  # just to give us an extra row to loop through below
df.reset_index(inplace=True)  # not really important except to distinguish your rows

for _, row in df.iterrows():
    currNumbDict = len(row['analytics'])
    print(currNumbDict)
```
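As a follow-up, a more vectorised alternative (assuming the 'analytics' column holds Python lists of dicts, as in the question) avoids the explicit loop entirely:

```
# Count the dicts in each row's 'analytics' list without an explicit loop.
# Assumes df['analytics'] holds Python lists, as in the question's data.
counts = df['analytics'].apply(len)
print(counts)

# Individual fields inside 'analytics' can be reached the same way, e.g. the
# first dict's 'a' value per row (guarding against empty lists):
first_a = df['analytics'].apply(lambda lst: lst[0]['a'] if lst else None)
print(first_a)
```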
Get a portion of a long field in Pandas?
Fairly simply, for 'A' only:
```
max_chars = 300
for index, row in df.iterrows():
    print(row['A'][:max_chars], row['B'])
```
9652
1
9654
null
2
276
I have a number of large datasets (10GBs), each with data fetched from a NoSQL database that I have downloaded to my desktop. I would like to write a Python program to run some custom data analysis (plots - preferably interactive) and export custom reports in html or pdf. I was wondering how people do the following: 1) Store the data. For the moment I have plain text files (each file has rows of a fixed number of columns - most of the data are categorical). Would it make sense to save those in some database (SQL) or hdf5? Any hints on which is preferable? 2) Which plotting library would you propose for the graphs? I have seen that Bokeh and matplotlib support interactive widgets but I don't know what people normally use. 3) Could I export the analysis results to an IPython notebook and then to HTML programmatically?
Writing custom data analysis program
CC BY-SA 3.0
null
2016-01-06T15:07:08.563
2016-01-06T16:30:57.220
2016-01-06T16:05:26.653
14487
14487
[ "python", "bigdata" ]
> 1) Store the data. For the moment I have plain text files (each file has rows of a fixed number of columns - most of the data are categorical). Would it make sense to save those in some database (SQL) or hdf5? Any hints on which is preferable? Yes, it would make sense to store in a local database, rather than using large csv/text files. As you say that the data is derived from a NoSQL source, I assume unstructured data. So, using a SQL/relational store is out of the question. As you say you are using Python, I would suggest you use [TinyDB](http://tinydb.readthedocs.org/en/latest/), which is both light-weight and easy to handle. > 2) Which plotting library would you propose for the graphs? I have seen that Bokeh and matplotlib support interactive widgets but I don't know what people normally use. Matplotlib would be good enough. Actually, this question is more opinion-based than anything else. There are a lot of visualization libraries you can use, like Bokeh, Seaborn, etc. > 3) Could I export the analysis results to an IPython notebook and then to HTML programmatically? Yes, you can do the analytics directly in an IPython (Jupyter) notebook, which also supports Markdown and HTML cells. In addition, you can also use [widgets and interactive visualization with Jupyter Ipy notebooks](https://github.com/ipython/ipywidgets) and Matplotlib. [Tutorials for the same](https://github.com/ipython/ipywidgets/blob/master/examples/notebooks/Index.ipynb)
Need help with python code as part of a data analysis project
All of your plots are appearing on top of each other. You need to invoke plt.subplot(xxx) before you create each plot. For info on how the xxx command works, go to [the MATLAB documentation](http://www.mathworks.com/help/matlab/ref/subplot.html). You might end up with multiple figures - see [this page](https://stackoverflow.com/questions/21321764/matplotlib-multiple-plots-on-one-figure) for info about that.
9670
1
9671
null
3
435
I need to pull the names of companies out of resumes. Thousands of them. I was thinking of using NLTK to create a list of possible companies, and then cross-referencing the list of strings with something like SEC.gov. I've already been able to successfully pull the candidate's name and contact info off the resumes with some RegEx, but this one has me quite stumped. What I'm thinking is that I could use NLTK to create a list of strings of proper nouns from the resumes, and then search SEC.gov, or some other database. This is a link to the SEC page I would be searching: SEC company search page
```
Read Resume1
Get all potential company names as list of strings potentialCompanies
IF searching for string1 in SEC gets result, THEN add to candidateCompanies
ELSE remove from potentialCompanies, go to next string
```
My Questions To people that have used NLTK, would there be a better way of getting the potential companies from the text besides using proper nouns? Would there be a better place to search for companies than the SEC site? I have never done any web scraping before, and don't really know where to start if it is needed. (I had posted this on Stack Overflow but they told me that it might be better suited for here...)
Python: validating the existence of NLTK data with database search
CC BY-SA 3.0
null
2016-01-07T10:04:14.607
2016-01-07T10:14:52.510
2016-01-07T10:06:46.510
11097
15233
[ "python", "nlp", "nltk" ]
NLTK has a built-in NER model that will extract potential Organizations from text; you can read about it (and see examples) in the [NLTK book](http://www.nltk.org/book/ch07.html) (look for section "5 Named Entity Recognition"). However, if your input text has organizations in a very specific context that wasn't seen by the NLTK NER model, performance might be quite low. In that case you should be looking into training your own NER model, which would extract company names. For that you would need to manually mark up a small amount of your dataset.
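A minimal sketch of the built-in NLTK NER chunker mentioned above, pulling out ORGANIZATION chunks; the sample sentence is invented, and the download calls fetch the required models on the first run (exact data-package names can vary slightly between NLTK versions).

```
# Sketch: extract candidate organization names with NLTK's built-in NER chunker.
# The example sentence is invented; downloads are only needed on the first run.
import nltk

for pkg in ["punkt", "averaged_perceptron_tagger", "maxent_ne_chunker", "words"]:
    nltk.download(pkg, quiet=True)

text = "Jane Doe worked at Goldman Sachs and later joined General Electric in Boston."
tree = nltk.ne_chunk(nltk.pos_tag(nltk.word_tokenize(text)))

orgs = [" ".join(token for token, _ in subtree.leaves())
        for subtree in tree.subtrees()
        if subtree.label() == "ORGANIZATION"]
print(orgs)   # candidate company names to cross-check against SEC data
```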
An exhaustive, representative test database in phrase search algorithm
Based on the example I assume that the target is personal names. Let's be clear: there's no such thing as an exhaustive dataset containing all possible personal names in the world. Also, a crucial part of the question is: what kind of names? In which language? Personal names in English are pretty different from names in Chinese, for instance. And there is also the difficult question of [transliteration](https://en.wikipedia.org/wiki/Transliteration) of proper names. That being said, a few resources exist. They would usually be found by searching for "personal names", "record linkage", or "named entity matching/coresolution". The following ones probably don't cover all the requirements, but it's a start: - Febrl (see also here) - I found this paper, which presents a large resource, and the corresponding resource description, but couldn't find the data. - The EMM news explorer has an interesting database of named entities, including personal names with all their spelling variants/transliterations.
9672
1
10185
null
5
1801
I've just read [The Cascade-Correlation Learning Architecture](http://papers.nips.cc/paper/207-the-cascade-correlation-learning-architecture.pdf) by Scott E. Fahlman and Christian Lebiere. I think I've got the overall concept (or at least the "cascade" part - a [4min YouTube video](https://www.youtube.com/watch?v=1E3XZr-bzZ4) how I think it works): - Start with a minimal network with input and output units only - Learn those weights with standard algorithms (e.g. gradient descent - they seem to use another training objective which I don't quite understand, so it is gradient ascent in the paper) - When the network doesn't improve, add a single new hidden unit. This unit gets input from all input nodes and all hidden nodes which were added before. Its output goes to all output nodes only. - Repeat step 3 However, I don't understand the details of step 3: The input weights to hidden units are frozen (indicated by boxes in the paper). When exactly do they get frozen? Are they just initialized by random and never learned at all? I also don't understand this paragraph: > To create a new hidden unit, we begin with a candidate unit that receives trainable input connections from all of the network's external inputs and from all pre-existing hidden units. The output of this candidate unit is not yet connected to the active network. We run a number of passes over the examples of the training set, adjusting the candidate unit's input weights after each pass. The goal of this adjustment is to maximize $S$, the sum over all output units $o$ of the magnitude of the correlation (or, more precisely, the covariance) between $V$, the candidate unit's value, and Eo, the residual output error observed at unit o. We define S as $$S = \sum_{o} | \sum_p (V_p - \bar V) (E_{p,o} - \bar{E_o}) |$$ where $o$ is the network output at which the error is measured and p is the training pattern. The quantities $\bar V$ and $\bar{E_o}$ are the values of $V$ and $E_o$ averaged over all patterns. What is an "residual output error"? Is $V_p$ simply the activation of the unit given the pattern $p$? What does the term $S$ mean and why do we want to maximize it?
How exactly does adding a new unit work in Cascade Correlation?
CC BY-SA 3.0
null
2016-01-07T10:26:48.333
2018-09-15T00:13:20.923
2016-01-07T19:44:29.623
8820
8820
[ "machine-learning", "neural-network" ]
I've been reading up on cascade correlation quite a bit recently and made a Python implementation [https://github.com/DanielSlater/CascadeCorrelation](https://github.com/DanielSlater/CascadeCorrelation) (though it still needs a bit of cleaning up/extra work and has a bunch of me mucking around with using Particle Swarm Optimization for selecting candidates, so it's definitely not production ready). To try and explain step 3: - Start by creating a number of candidate hidden nodes with random weights. These have incoming connections from all existing hidden nodes and input nodes. - We then use that equation $$S=\sum_o \left | \sum_p(V_p - \overline{V})(E_{p,o} - \overline{E_o}) \right |$$ to train the candidate nodes. - The residual output error is the difference between the output of the network and the target value (think sum of squared errors without the square). - $S$ is the correlation between the activation of our candidate node and the residual error. - $V_p$ is the activation of the candidate node given input $p$. - After a bit of backprop training against $S$ we choose our best candidate. This becomes a new hidden node. - This is when its weights are frozen. That is to say: after random initialization, then backprop training, then selecting the best candidate.
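As an illustration (my own sketch, not taken from the repository above), the candidate score $S$ can be computed in a few lines of numpy, assuming `v` holds the candidate unit's activations over all training patterns and `E` the residual output errors per pattern and output unit:

```python
import numpy as np

def candidate_score(v, E):
    """S = sum_o | sum_p (V_p - V_bar) * (E_{p,o} - E_bar_o) |

    v: shape (n_patterns,)            candidate activations V_p
    E: shape (n_patterns, n_outputs)  residual output errors E_{p,o}
    """
    v_centered = v - v.mean()        # V_p - V_bar
    E_centered = E - E.mean(axis=0)  # E_{p,o} - E_bar_o
    return np.abs(v_centered @ E_centered).sum()
```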
Is correlation needed when building a model?
Not really, no. Sort of. It depends on how complex your model/data is. It's entirely possible to have a situation where a feature taken in isolation will not be correlated with the target variable, but multiple features considered together will. This is why univariate correlation is unreliable for feature selection. A trivial case that demonstrates this is a bivariate model performing a binary classification where the positive class is bounded by the right upper and left lower quadrants, and the negative class is bounded by the left upper and right lower quadranta (i.e. the "XOR" pattern): [](https://i.stack.imgur.com/L5huJ.png) If the input features have the same sign (x>0 & y>0 or x<0 & y<0), it's the positive class, else it's the negative class. But either feature in isolation is completely useless and uncorrelated with the target. Additionally, modern models like deep neural networks are effectively capable of "learning their own features", i.e. constructing extremely complex features by developing abstractions from the raw inputs. The "final" features learned by such a model will likely be correlated with the target, but the input features need not be. For example, if you consider the imagenet task (classifying a photo as a member of one of 10,000 classes), I'd be very surprised to learn that there's any correlation between the values of specific pixels and any target class. That's just not how we interpret photos. The value of the pixel in position [25, 10] should not have any correlation with whether or not the picture is a photo of a dog. But, if we think of the entire network before the output layer as a feature engineering system (such that the "classifier" is just the output layer), then the "features" provided by the penultimate layer (the inputs to "The Classifier") probably have some correlation with the target. TL;DR: If a feature is correlated with the target, it probably contains information about the target that will be useful for modeling. But that does not mean uncorrelated features are useless. Reporting correlation when it's there is a simple way to demonstrate that there's a signal in that variable. But lack of correlation doesn't necessarily mean you should throw those features away. In fact, correlation doesn't even mean you should necessarily use that feature either: you can have multiple features correlated with the target that are highly correlated with each other, in which case you would probably only want one or a handful of that group in your model.
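To make the XOR point concrete, here is a small self-contained sketch (my own, with arbitrary sample sizes): each feature on its own has roughly zero correlation with the target, yet a non-linear model separates the classes almost perfectly.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(2000, 2))
y = (np.sign(X[:, 0]) == np.sign(X[:, 1])).astype(int)  # positive class: same sign

print(np.corrcoef(X[:, 0], y)[0, 1])  # ~0: feature 1 alone looks useless
print(np.corrcoef(X[:, 1], y)[0, 1])  # ~0: feature 2 alone looks useless

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:1500], y[:1500])
print(clf.score(X[1500:], y[1500:]))  # close to 1.0: together they are predictive
```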
9680
1
9706
null
3
596
I am looking for a method to approximate how similar a test set (i.e., the test set features) is to a train set. For example, something like: for each row in the test set, is there a similar enough data point in the train set? I've been thinking about using a mixture model approach, but I haven't been able to find a good reference on this. Can anyone suggest a good approach, or provide good references for how to use mixture models for this application?
Approximating density of test set in train
CC BY-SA 3.0
null
2016-01-07T15:11:21.470
2016-01-08T22:18:16.683
2016-01-07T18:05:58.310
13413
14946
[ "machine-learning" ]
The approach that comes to mind is to calculate the Kullback-Leibler divergence between the kernel density estimations of your train dataset and of your test dataset. The kernel density estimation of each of your datasets will give you an approximation to the pdfs of your datasets. The Kullback-Leibler divergence will give you a number that represents the divergence in bits from one distribution to another (if you use base 2 for your logarithm). Below are some references I think you would find useful. [https://en.wikipedia.org/wiki/Kernel_density_estimation](https://en.wikipedia.org/wiki/Kernel_density_estimation) [https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence](https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence) [https://jakevdp.github.io/blog/2013/12/01/kernel-density-estimation/](https://jakevdp.github.io/blog/2013/12/01/kernel-density-estimation/) If you would like me to show the math behind this method, feel free to ask. EDIT: Added math as asked by the author of the question. Let $\hat x_1, \hat x_2, \hat x_3, \dots, \hat x_n$ be your training dataset while $x_1, x_2, x_3, \dots, x_n$ is your testing dataset, where both $\hat x_i$ and $x_i$ belong to $\mathbb{R}^d$. $$\hat f(x;H)=\frac{1}{n} \sum_{i=1}^{n}K(x-\hat x_i;H)$$ $$f(x;H)=\frac{1}{n} \sum_{i=1}^{n}K(x-x_i;H)$$ $\hat f$ and $f$ represent the kernel density estimates for the training set and testing set respectively. The parameter $H$ is the bandwidth parameter and is a symmetric positive definite $d \times d$ matrix. $K(u;H)$ can be rewritten as $$|H|^{-\frac{1}{2}} K(H^{-\frac{1}{2}} u)$$ where $K$ can be any kernel function. I would recommend, for simplicity, the standard multivariate normal kernel. Okay, so now that we have the kernel density estimates of both our training and our testing dataset, we can use the Kullback-Leibler divergence to estimate the difference between the two. Optimally we would like to calculate the Kullback-Leibler divergence with respect to every point in our space. Mathematically speaking: $$\int_X f(x;H) \ \log_2\left(\frac{f(x;H)}{\hat f(x;H)}\right)dx$$ But this is computationally impractical. We can approximate this integral by sampling a set of points from $f(x;H)$ and then computing the discrete sum $$\sum_{x \in X} f(x;H) \ \log_2\left(\frac{f(x;H)}{\hat f(x;H)}\right)$$ Quick note: to sample from a kernel density estimator, uniformly randomly select a point from the respective dataset, then sample from the kernel of choice centered around the chosen point.
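A rough sketch of this procedure with scikit-learn (the bandwidth and sample count are arbitrary choices of mine, and the sum is approximated by a Monte-Carlo average of $\log_2 \frac{f}{\hat f}$ over samples drawn from the test-set KDE, which is a standard alternative to the explicitly weighted sum written above):

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def approx_kl_bits(train, test, bandwidth=0.5, n_samples=5000, seed=0):
    kde_train = KernelDensity(bandwidth=bandwidth).fit(train)  # \hat f
    kde_test = KernelDensity(bandwidth=bandwidth).fit(test)    # f
    samples = kde_test.sample(n_samples, random_state=seed)
    log_f = kde_test.score_samples(samples)        # log f(x;H)
    log_f_hat = kde_train.score_samples(samples)   # log \hat f(x;H)
    return np.mean(log_f - log_f_hat) / np.log(2)  # KL(f || \hat f) in bits
```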
Test set larger than train set
- Do I correctly understand that the test data is the whole dataset, whereas training is only a subset of it? Training and test data must not overlap. The test is a measure of quality on unseen, unfamiliar data. - In the case of imbalanced data and two-class classification, the naive classifier that always predicts the most probable class has a quality of 891 / 1121. Any sensible model should beat this score. - To handle imbalanced data you can use several approaches - undersampling the majority class, oversampling the minority class https://machinelearningmastery.com/random-oversampling-and-undersampling-for-imbalanced-classification/. Also, many classifiers have a weights attribute which can be set to balanced - this penalizes mistakes on the minority class more heavily. - In the case of imbalanced data, measure not only accuracy, but precision and recall as well https://en.wikipedia.org/wiki/Precision_and_recall
9683
1
9694
null
2
376
Suppose A's possible values are ON or OFF. Suppose I represent it as: if A is ON then feature f=1 else f=0 Or, suppose I represent it with 2 features, where: - if A is ON then f1=1 and f2=0 - if A is OFF then f1=0 and f2=1 How does this kind of representation affect neural networks?
How data representation affects neural networks?
CC BY-SA 3.0
null
2016-01-07T16:49:38.773
2016-01-08T05:57:34.127
2016-01-07T17:22:27.467
11097
15244
[ "neural-network" ]
It will have very little effect. The answer most will give is that it will have no effect, but adding one more feature will decrease the ratio of records to features, so it will slightly increase the bias and hence make your model slightly less accurate. Unless, of course, you have overfit your model, in which case it will make your model slightly more accurate (a good data scientist would never do this because they understand the importance of cross-validation :-). If you normalize your data and then attempt some sort of dimensionality reduction, your algorithm will immediately eliminate the feature that you added since it is perfectly negatively (linearly) correlated with the first feature. In this case it will have no effect. Please also consider the following: I always see big red flags when someone asks a very fundamental data science question with the words `neural network` included. Neural networks are very powerful and receive a great deal of attention in the media and on Kaggle, but they take more data to train, are difficult to configure, and require much more computing power. If you are just starting out, I suggest getting a foundation in linear regression, logistic regression, clustering, SVMs, decision trees, random forests, and naive Bayes before delving into artificial neural networks. Just some food for thought. Hope this helps!
Applications of Neural networks?
Yes, it is applied in most fields: - A simple search on Google Scholar will provide a list that includes: fraud detection, stock market trading decisions, inventory classification and many more, including the link below: [neural network inventory management](https://scholar.google.com/scholar?hl=en&as_sdt=0,11&q=neural%20network%20inventory%20management) - As for finance, we can look at: credit scoring, bankruptcy forecasting, financial forecasting and many more. A Google Scholar link is included below: [neural network finance](https://scholar.google.com/scholar?hl=en&as_sdt=0%2C11&q=neural%20network%20finance&btnG=) Also including a working example of simple credit scoring with a NN: [Using neural networks for credit scoring: a simple example](https://www.r-bloggers.com/using-neural-networks-for-credit-scoring-a-simple-example/)
9701
1
9704
null
0
100
I am looking for a good Python API for time series models such as ARIMA. Please list some well tested APIs, and a few more advanced models suitable for financial time-series analysis.
Please list some well tested api's for arima model
CC BY-SA 3.0
null
2016-01-08T11:51:00.473
2016-01-08T16:42:36.143
null
null
837
[ "python", "predictive-modeling", "time-series" ]
Statsmodels: [Statsmodels](http://statsmodels.sourceforge.net/) is your best bet for a Python library that includes ARIMA. I have used it fairly extensively and am quite happy with it. But it's certainly not as well tested as R-based ARIMA models. R: If you want something "well-tested" then your best bet is likely to use [Rpy2](http://rpy.sourceforge.net/) to call an [R](https://cran.r-project.org/)-based [ARIMA library](https://stat.ethz.ch/R-manual/R-devel/library/stats/html/arima.html) from Python. Rpy2 can be a bit tricky because of version reconciliation between Python, R and Rpy2. Here's a [tutorial on calling R from python using Rpy2](https://sites.google.com/site/aslugsguidetopython/data-analysis/pandas/calling-r-from-python). Hope this helps!
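A minimal statsmodels sketch (the import path below is the one exposed by recent statsmodels releases - older versions placed ARIMA under statsmodels.tsa.arima_model - and the series is made up):

```python
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

sales = pd.Series([10, 12, 9, 14, 13, 15, 11, 16, 14, 18],
                  index=pd.date_range("2015-01-01", periods=10, freq="D"))

fit = ARIMA(sales, order=(1, 1, 1)).fit()  # (p, d, q)
print(fit.summary())
print(fit.forecast(steps=3))               # 3-step-ahead forecast
```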
Is there a Feature selection process for ARIMA model?
In some sense, it is common to do feature selection before you fit the ARIMA model, or at the very least, it is natural (in my opinion). The problem is that there seems to be little development in automatic feature selection techniques for statistical time series models that can use exogenous variables (like ARIMA). Thus, it is not clear as to how we can do feature selection. To make things worse, auto.arima doesn't do any feature selection on exogenous variables, it just uses AICc to find the most optimal order of your model (in a stepwise fashion in its default setting). If you include exogenous variables in your model, they will always be included in all models in the selection process. Basically, one way to do variable selection would be to try all possible combinations of exogenous variables, use auto.arima to find the "best" orders based on AICc, record this model's AICc (recall that AICc penalizes models that have large amounts of fitted parameters that do not increase the model's likelihood by a justifiable amount), and then pick the absolute best model out of all combinations of exogenous variables. Kind of a pain, and possibly very time consuming. I hope this helps.
9715
1
9717
null
5
285
I have been doing machine learning for a while, but bits and pieces come together even after some time of practice. In neural networks, you adjust the weights by doing one pass (forward pass), and then computing the partial derivatives for the weights (backward pass) after each training example - and subtracting those partial derivatives from the initial weights. In turn, the calculation of the new weights is mathematically complex (you need to compute the partial derivatives of the weights, for which you compute the error at every layer of the neural net except the input layer). Is that not by definition an online algorithm, where cost and new weights are calculated after each training example? Thanks!
is neural networks an online algorithm by nature?
CC BY-SA 3.0
0
2016-01-09T16:07:08.427
2016-01-10T19:06:31.767
2016-01-09T16:42:05.333
9197
9197
[ "machine-learning", "neural-network", "online-learning" ]
You can train after each example, or after each epoch. This is the difference between stochastic gradient descent and batch gradient descent. See p. 84 of Sebastian Raschka's Python Machine Learning book for more.
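To make the distinction concrete, here is a toy sketch (mine, not from the book) of the two update schedules for a linear model with squared error:

```python
import numpy as np

def sgd_epoch(w, X, y, lr=0.01):
    # "online"/stochastic: one weight update per training example
    for x_i, y_i in zip(X, y):
        w = w - lr * (w @ x_i - y_i) * x_i
    return w

def batch_epoch(w, X, y, lr=0.01):
    # batch: a single weight update from the gradient over the whole epoch
    return w - lr * X.T @ (X @ w - y) / len(y)
```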
Neural network or other algorithms?
This is more a question of how to select the correct machine learning algorithm; I would refer you to the following blog: [Which machine learning algorithm should I use?](https://blogs.sas.com/content/subconsciousmusings/2017/04/12/machine-learning-algorithm-use/) Regression algorithms model the relationship between variables, iteratively refined using a measure of error in the predictions. The most popular examples are: - Ordinary Least Squares Regression (OLSR) - Linear Regression - Logistic Regression - etc ... On the other hand, Artificial Neural Network models are inspired by the structure and/or function of biological neural networks. "Neural networks currently provide the best solutions to many problems in image recognition, speech recognition, and natural language processing." [Neural Networks and Deep Learning](http://neuralnetworksanddeeplearning.com/). Neural networks are hard to train; thus my recommendation not to start with a neural network.
9726
1
9743
null
0
53
I am curious how good such a procedure could be: I get the predictions of some 10 learners trained on the train set and also predicted on the train set. Then I column-bind those predictions to the original train set. Could this be a valid procedure for improving the learning process?
Predicted features combined with original ones
CC BY-SA 3.0
null
2016-01-10T17:10:59.837
2016-01-11T11:07:23.760
null
null
14946
[ "predictive-modeling" ]
This procedure exists and is called stacked generalization, or simply stacking. See the stacking section of the [wikipedia page](https://en.wikipedia.org/wiki/Ensemble_learning). Starting from there you can read more by following the references from the page. The first paper on the subject was published by Wolpert in 1992. [Later edit] Do not combine the results with the original features; keep only the predictions, combined with the target.
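A hedged sketch of the usual way to build the level-one features with out-of-fold predictions, so the base learners are never scored on the exact rows they were trained on (which is the leakage risk in the procedure described in the question); `X` and `y` are placeholders for your data:

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

def stacked_features(X, y, base_learners):
    # one column of out-of-fold predicted probabilities per base learner
    cols = [cross_val_predict(est, X, y, cv=5, method="predict_proba")[:, 1]
            for est in base_learners]
    return np.column_stack(cols)

# base = [RandomForestClassifier(n_estimators=100), SVC(probability=True)]
# meta = LogisticRegression().fit(stacked_features(X, y, base), y)
```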
Predicting Missing Features
A simple approach could be the following: suppose $i \in \{0,1\}^d$ is the vector for which you want to predict which of the $0$ entries could be $1$, and $j \in J$ are the rest of the feature vectors. Take the $k$ nearest neighbors, under some suitable distance ([Jaccard](https://en.wikipedia.org/wiki/Jaccard_index), [Hamming](https://en.wikipedia.org/wiki/Hamming_distance), [Manhattan distance](https://en.wikipedia.org/wiki/Taxicab_geometry)). For each $0$ entry the probability could be the percentage of the $k$ nearest neighbors that have $1$ in the corresponding entry. This problem has been extensively studied in the [collaborative filtering](https://en.wikipedia.org/wiki/Collaborative_filtering) community, the best known example being the [Netflix Prize](https://www.netflixprize.com/). This [blog post](https://medium.com/@victorkohler/item-item-collaborative-filtering-with-binary-or-unary-data-e8f0b465b2c3) provides a nice explanation of this approach for binary data. Another, more involved, approach is [matrix completion](https://en.wikipedia.org/wiki/Matrix_completion); in particular check this [reference](https://papers.nips.cc/paper/5005-probabilistic-low-rank-matrix-completion-with-adaptive-spectral-regularization-algorithms.pdf). If you are into deep learning, check [this](https://dl.acm.org/citation.cfm?id=3141950).
9731
1
10295
null
1
407
First a theoretical question and then a practical one. Is neural net backpropagation the computation of the weight derivatives, or the computation of the new weights (that is, the original weights minus the weight derivatives times the learning rate - simplified)? It may well be a semantics issue, but it is important nevertheless. Also, if anyone is familiar with Torch's nn class: ``` gradInput = module:backward(input, gradOutput) ``` Is gradInput the weight set for the next forward pass, or is it the derivatives of the weights of the previous forward pass? Thanks!
Torch and conceptual question about neural nets backpropagation
CC BY-SA 3.0
null
2016-01-11T01:07:17.193
2017-03-30T21:07:03.073
null
null
9197
[ "neural-network", "backpropagation" ]
I have been using torch for a few months now but I will give it a go (apologies if incorrect). Yes, a weight $w$ is updated as follows: $$ w_{new} = w_{old} - \gamma \partial E/ \partial w_{old} $$ where $ \gamma $ is your learning rate and $E$ is the error calculated using something like `criterion:forward(output,target)`. The criterion could be, for example, `nn.MSECriterion()`. To calculate $\partial E/\partial W$ you need $\partial E/\partial y$, i.e. `gradOutput = criterion:backward(output,target)` (the gradient with respect to the output), as well as the input `input` to the net, i.e. your $X$ (e.g. image data), to generate the recursive set of equations which multiply with `gradOutput`. `model:backward(input, gradOutput)` therefore serves to prepare the weight update for the next `model:forward(input)`, as it generates a big derivative tensor $dE/dW_{old}$. This is then combined with an optimiser such as `optim.sgd` using `optimMethod` and the old weights $W_{old}$ to generate the new weights in the first equation. Of course you can just update the weights without an optimiser with `model:updateParameters(learningRate)`, but you miss useful stuff like momentum, weight decay etc. Got a bit side tracked there but hope this helps.
Purpose of backpropagation in neural networks
> Is backpropagation just fancy term for weights being optimized on every iteration? Almost. Backpropagation is a fancy term for using the chain rule. It becomes more useful to think of it as a separate thing when you have multiple layers, as unlike your example where you apply the chain rule once, you do need to apply it multiple times, and it is most convenient to apply it layer-by-layer in reverse order to the feed forward steps. For instance, if you have two layers, $l$ and $l-1$ with weight matrix $W^{(l)}$ linking them, non-activated sum for a neuron in each layer $z_i^{(l)}$ and activation function $f()$, then you can link the gradients at the sums (often called logits as they may be passed to logistic activation function) between layers with a general equation: $$ \frac{\partial L}{\partial z^{(l-1)}_j} = f'(z^{(l-1)}_j) \sum_{i=1}^{N^{(l)}} W_{ij}^{(l)} \frac{\partial L}{\partial z^{(l)}_i}$$ This is just two steps of the chain rule applied to generic equations of the feed-forward network. It does not provide the gradients of the weights, which is what you eventually need - there is a separate step for that - but it does link together layers, and is a necessary step to eventually obtain the weights. This equation can be turned into an algorithm that progressively works back through layers - that is back propagation. > To be more precise, what's the point of automatic differentiation when we could simply plug in variables and calculate gradient on every step, correct? That is exactly what automatic differentiation is doing. Essentially "automatic differentiation" = "the chain rule", applied to function labels in a directed graph of functions.
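As a tiny numerical sketch of the layer-to-layer equation above (my own, for a single hidden layer with a sigmoid activation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hidden_logit_grad(z_prev, W_next, dL_dz_next):
    # dL/dz^(l-1)_j = f'(z^(l-1)_j) * sum_i W_ij^(l) * dL/dz^(l)_i
    # z_prev:     (n_prev,)      pre-activation sums of layer l-1
    # W_next:     (n_l, n_prev)  weights linking layer l-1 to layer l
    # dL_dz_next: (n_l,)         gradients at layer l's sums
    f_prime = sigmoid(z_prev) * (1.0 - sigmoid(z_prev))
    return f_prime * (W_next.T @ dL_dz_next)
```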
9734
1
9764
null
4
128
I recently came across this term `recurrent heavy subgraph` in a talk. I don't seem to understand what it means and Google doesn't seem to show any good results. Can someone explain what this means in detail.
What is a Recurrent Heavy Subgraph?
CC BY-SA 3.0
null
2016-01-11T05:56:19.680
2016-01-12T16:40:48.607
2016-01-12T16:40:48.607
2932
15324
[ "data-mining", "graphs", "terminology" ]
The term may best be expressed as a Recurrent, Heavy Subgraph. That is, a subgraph which is both Recurrent and Heavy. Heaviness of a subgraph refers to heavily connected vertices - that is, nodes which are connected many times ("many" being relative to the network in question). Recurrent refers to the propensity of a subgraph to occur more than once. Thus, a Recurrent Heavy Subgraph is a densely connected set of vertices which occurs several times in the overall network. These subgraphs are often used to determine properties of a network. For example: in a network of email interactions within a company organized into 4-person teams with one member acting as the lead, each team's email activity (if they email between themselves enough to be considered "heavy") could be described as a Heavy Subgraph. The fact that these subgraphs occur many times in the network makes them Recurrent Heavy Subgraphs. If one were searching for structure in the network, noticing that these recurrent, heavy subgraphs exist would go a long way toward determining the organization of the network as a whole.
Can someone explain to me the structure of a plain Recurrent Neural Network?
The architecture of a RNN is called recurrent because it applies the same function at each step. So all the cells on the graph actually represent the same computation, but not the same state. Each green square in your figure represent the computation. $$ s^{(t)} = f(s^{(t-1)}, x^{(t)}, \theta) $$ Where $f$ is the function of the RNN, $\theta$ are parameters, $s^{(t)}$ is the RNN state at step $t$ and $x^{(t)}$ is the input at step $t$ in the input sequence. What you see represented in the figure is actually what is called the unfolded representation, that is the same RNN cell applied to the input sequence one input vector at a time. I recommend you to read [the chapter 10 on RNN](https://www.deeplearningbook.org/contents/rnn.html) Deep Learning book. The following figure is from this book and summarize the idea. [](https://i.stack.imgur.com/yba4U.png)
9735
1
9737
null
8
5658
While reading [Ensemble methods](http://scikit-learn.org/stable/modules/ensemble.html) in the scikit-learn docs, I found that it says > bagging methods work best with strong and complex models (e.g., fully developed decision trees), in contrast with boosting methods which usually work best with weak models (e.g., shallow decision trees). But searching on Google always returns information about `Decision Tree` in general. - I'd like to know the details of the two trees mentioned in the doc, i.e. what "fully developed" and "shallow" mean. Update: - About why bagging works best with fully developed trees and why boosting works best with shallow trees. At first I thought complex models (e.g., fully developed decision trees) meant that the data set has a complex format that is called "fully developed decision trees". After I read the above quote over 20 times and rapaio's answer, I think my poor English led me down the wrong road (a misunderstanding). I also mistook shallow for shadow, which confused me for a long time... Now I understand the meanings of fully developed and shallow. I think the quote is saying that bagging works best with (already trained) models whose algorithm is complex, while boosting only needs simple models. Both bagging and boosting need many estimators, e.g. n_estimators=100 in the scikit-learn examples. If n_estimators=100: bagging needs 100 fully developed decision tree estimators (models), and boosting needs 100 shallow decision tree estimators (models). Are my thoughts right? I hope my update can help other non-native speakers. - "e.g." means "for example", so there are other models that can be used for bagging and boosting. How about changing the model to SVM or something else? Or do both of them need a tree-based model?
what is the difference between "fully developed decision trees" and "shallow decision trees"?
CC BY-SA 3.0
null
2016-01-11T07:07:23.557
2017-04-13T08:49:48.457
2017-03-09T01:08:47.527
15325
15325
[ "scikit-learn", "decision-trees", "ensemble-modeling" ]
[Later edit - rephrased everything]

### Types of trees

A shallow tree is a small tree (in most cases it has a small depth). A fully grown tree is a big tree (in most cases it has a large depth). Suppose you have a training set of data which looks like a non-linear structure. [](https://i.stack.imgur.com/7Y2k1.png)

### Bias-variance decomposition as a way to see the learning error

Considering the bias-variance decomposition, we know that the learning error has 3 components:

$$Err = \text{Bias}^2 + \text{Var} + \epsilon$$

Bias is the error produced by the fitted model when it is not capable of representing the true function; it is in general associated with underfitting. Var is the error produced by the fitted model due to sampled data; it describes how unstable the model is if the training data changes, and it is in general associated with overfitting. $\epsilon$ is the irreducible error which envelops the true function; this can't be learned. [](https://onlinecourses.science.psu.edu/stat857/sites/onlinecourses.science.psu.edu.stat857/files/lesson04/model_complexity.png)

Considering our shallow tree, we can say that the model has low variance, since changing the sample does not change the model too much. It needs too many changed data points to be considered unstable. At the same time we can say that it has a high bias, since it really can't represent the sine function which is the true model. We can also say that it has low complexity: it can be described by 3 constants and 3 regions.

Consequently, the fully grown tree has low bias. It is very complex, since it can be described only using many regions and many constants on those regions. This is why it has low bias. The complexity of the model also impacts variance, which is high: in some regions at least, a single point sampled differently can change the shape of the fitted tree.

As a general rule of thumb, when a model has low bias it also has high variance, and when it has low variance it has high bias. This is not always true, but it happens very often, and intuitively it is a correct idea. The reason is that when you are getting close to the points in the sample you learn the patterns but also the errors from the sample; when you are far away from the sample you are instead very stable, since you do not incorporate the errors from the sample.

### How can we build ensembles using those kinds of trees?

Bagging

The statistical model behind bagging is based on bootstrapping, which is a statistical procedure to evaluate the error of a statistic. Assuming you have a sample and you want to evaluate the error of a statistical estimate, the bootstrapping procedure allows you to approximate the distribution of the estimator. But trees are only a simple function of the sample: "split the space into regions and predict with the average", i.e. a statistic. Thus if one builds multiple trees from bootstrap samples and averages them, the trees can be considered i.i.d. and the same principle works to reduce variance. Because of that, bagging allows one to reduce the variance without affecting the bias too much.

Why does it need full-depth trees? The reason is simple, although perhaps not so obvious at first sight: it needs classifiers with high variance in order to reduce it. This procedure does not affect the bias. If the underlying models have low variance and high bias, bagging will slightly reduce the already small variance and do nothing for the bias.

Boosting

How does boosting work? Many compare boosting with model averaging, but the comparison is flawed. The idea of boosting is that the ensemble is an iterative procedure. It is true that the final rule for classification looks like a weighted average of some weak models, but the point is that those models were built iteratively. It has nothing to do with bagging and how it works. Any $k$-th tree is built using information learned from all previous $k-1$ trees. So we initially have a weak classifier which we fit to data, where all the points have the same importance. This importance can be changed by weights as in AdaBoost or by residuals as in gradient boosting; it really does not matter. The next weak classifier will not treat all the points in the same way: those previously classified correctly have smaller importance than those classified incorrectly. The consequence is that the model enriches its complexity, its ability to reproduce more complex surfaces. This translates into the fact that it reduces the bias, since it can go closer to the data. A similar intuition applies in reverse: if the classifier already has low bias, what will happen when I boost it? Probably an unbearable overfit, that's all.

### Which one is better?

There is no clear winner. It depends too much on the data set and on other parameters. For example, bagging can't hurt. It is possible for it to be useless, but it usually does not hurt performance. Boosting can lead to overfitting. That is because you can eventually go too close to the data. There is a lot of literature which says that when the irreducible error is high, bagging is much better and boosting does not progress too much.

### Can we decrease both variance and bias?

Pure bootstrap and bagging approaches serve a single purpose: either reduce variance or reduce bias. However, modern implementations changed various things in how those approaches work. Sampling can be used in boosting and it seems to work towards reducing the variance as well. There are bagging procedures which take some ideas from boosting, for example iterative bagging (or adaptive bagging) published by Breiman. So, the answer is yes, it is possible.

### Can we use learners other than trees?

Of course. Often you will see boosting use this approach. I read some papers on bagging SVMs or other learners as well. I tried to bag some SVMs myself, but without much success. Trees are the preferred way, however, for a very simple reason: they are simple to build, simple to adapt and simple to control in their effect.

My personal opinion is that not everything has been said yet regarding ensembles of trees.

PS: a last note on the number of weak classifiers: this depends entirely on the complexity of the data set combined with the complexity of the learner. There is no recipe. Often 20 of them are enough to get most of the information, and additional ones are only for tiny tuning.
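As a concrete scikit-learn sketch of the contrast discussed above (the parameter values are arbitrary, and in older scikit-learn releases the constructor argument is named base_estimator instead of estimator):

```python
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

# bagging: fully developed (unlimited-depth) trees
bagging = BaggingClassifier(
    estimator=DecisionTreeClassifier(max_depth=None),
    n_estimators=100)

# boosting: shallow trees (stumps)
boosting = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=1),
    n_estimators=100)

# bagging.fit(X_train, y_train); boosting.fit(X_train, y_train)
```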
What are the factors to consider when setting the depth of a decision tree?
Yes, but it also means you're likely to overfit to the training data, so you need to find the value that strikes a balance between accuracy and properly fitting the data. Deciding on the proper setting of the `max_depth` parameter is the task of the tuning process, via either Grid Search or Randomised Search with cross-validation. This page from the scikit-learn documentation explains the process well: [https://scikit-learn.org/stable/modules/grid_search.html](https://scikit-learn.org/stable/modules/grid_search.html)
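For example, a minimal grid search over max_depth (the candidate values below are placeholders) could look like this:

```python
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

search = GridSearchCV(DecisionTreeClassifier(),
                      param_grid={"max_depth": [2, 4, 6, 8, 10, None]},
                      cv=5)
# search.fit(X, y)
# print(search.best_params_, search.best_score_)
```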
9736
1
9751
null
25
18387
What are the hallmarks or properties that indicate that a certain learning problem can be tackled using support vector machines? In other words, what is it that, when you see a learning problem, makes you go "oh I should definitely use SVMs for this" rather than neural networks or decision trees or anything else?
What kinds of learning problems are suitable for Support Vector Machines?
CC BY-SA 4.0
null
2016-01-11T07:16:58.747
2021-11-17T18:46:49.397
2021-11-17T18:46:49.397
29169
11044
[ "machine-learning", "svm", "supervised-learning", "unsupervised-learning" ]
SVMs can be used for classification (distinguishing between several groups or classes) and regression (obtaining a mathematical model to predict something). They can be applied to both linear and non-linear problems. Until 2006 they were the best general-purpose algorithm for machine learning. I was trying to find a paper that compared many implementations of the best-known algorithms: SVM, neural nets, trees, etc. I couldn't find it, sorry (you will have to believe me, unfortunately). In the paper, the algorithm that got the best performance was SVM, with the library libsvm. In 2006 Hinton came up with deep learning and neural nets. He improved the current state of the art by at least 30%, which is a huge advancement. However, deep learning only gets good performance for huge training sets. If you have a small training set I would suggest using SVM. Furthermore you can find here a useful infographic about [when to use different machine learning algorithms](http://scikit-learn.org/stable/tutorial/machine_learning_map/) by scikit-learn. However, to the best of my knowledge there is no agreement in the scientific community that if a problem has X, Y and Z properties then it's better to use SVM. I would suggest trying different methods. Also, please don't forget that SVMs or neural nets are just methods to compute a model. The features you use are just as important.
Mathematical formulation of Support Vector Machines?
Your understanding is right.

> deriving the margin to be $\frac{2}{|w|}$

We know that $w \cdot x + b = 1$. If we move from a point $z$ on $w \cdot x + b = 1$ to the line $w \cdot x + b = 0$, we land on a point $\lambda$. The distance we have passed between the two lines $w \cdot x + b = 1$ and $w \cdot x + b = 0$ is the margin between them, which we call $\gamma$.

To calculate the margin, note that we have moved from $z$ in the opposite direction of $w$ to reach $\lambda$, hence $\lambda = z - \gamma \cdot \frac{w}{|w|}$ (we have moved in the opposite direction of $w$; we only want the direction, so we normalize $w$ to the unit vector $\frac{w}{|w|}$). Since the point $\lambda$ lies on the decision boundary, it must satisfy the line $w \cdot x + b = 0$. Hence we substitute it in this line in place of $x$: $$w \cdot x + b = 0$$ $$w \cdot (z - \gamma \cdot \frac{w}{|w|}) + b = 0$$ $$w \cdot z + b - \gamma \cdot \frac{w \cdot w}{|w|} = 0$$ $$w \cdot z + b = \gamma \cdot \frac{w \cdot w}{|w|}$$ we know that $w \cdot z + b = 1$ ($z$ is a point on $w \cdot x + b = 1$) $$1 = \gamma \cdot \frac{w \cdot w}{|w|}$$ $$\gamma = \frac{|w|}{w \cdot w}$$ we also know that $w \cdot w = |w|^2$, hence: $$\gamma = \frac{1}{|w|}$$ Why is there a 2 in your formula instead of 1? Because I have calculated the margin between the middle line and the upper one, not the whole margin.

> How can $y_i(w^Tx+b)\ge1\;\;\forall\;x_i$?

We want to classify the points in the +1 part as +1 and the points in the -1 part as -1. Since $(w^Tx_i+b)$ is the predicted value and $y_i$ is the actual value for each point, if a point is classified correctly then the predicted and actual values have the same sign, so their product $y_i(w^Tx_i+b)$ should be positive (the condition >= 0 is substituted by >= 1 because it is a stronger condition). The transpose is there to be able to calculate the dot product; above I just wanted to show the logic of the dot product, hence I didn't write the transpose. --- For calculating the total distance between the lines $w \cdot x + b = -1$ and $w \cdot x + b = 1$: either you can multiply the calculated margin by 2, or, if you want to find it directly, you can consider a point $\alpha$ on the line $w \cdot x + b = -1$. We know that the distance between these two lines is twice the value of $\gamma$, hence if we want to move from the point $z$ to $\alpha$ we have $$\alpha = z - 2 \cdot \gamma \cdot \frac{w}{|w|}$$ and we can calculate the total margin (the passed length) from there. Derived from the ML course of UCSD by Prof. Sanjoy Dasgupta.
9738
1
9740
null
2
805
I am trying to learn TensorFlow, and I could not understand how it uses the batch in [this](https://www.tensorflow.org/versions/master/tutorials/mnist/pros/index.html#start-tensorflow-interactivesession) example:

```
cross_entropy = -tf.reduce_sum(y_*tf.log(y_conv))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
sess.run(tf.initialize_all_variables())
for i in range(20000):
  batch = mnist.train.next_batch(50)
  if i%100 == 0:
    train_accuracy = accuracy.eval(feed_dict={
        x:batch[0], y_: batch[1], keep_prob: 1.0})
    print("step %d, training accuracy %g"%(i, train_accuracy))
  train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})

print("test accuracy %g"%accuracy.eval(feed_dict={
    x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
```

My question is: why does it get a batch of 50 training examples, but seemingly use only the first one for training? Maybe I did not understand the code correctly.
Question about train example code for TensorFlow
CC BY-SA 3.0
null
2016-01-11T09:24:36.577
2016-01-11T10:29:30.257
null
null
3167
[ "tensorflow" ]
If I understood you correctly, you are asking about this line of code: ``` train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5}) ``` Here you only specify which part of the batch is used for the features and which for the predicted class: batch[0] holds the features and batch[1] the labels, and both contain all 50 examples, so the whole batch is used in each training step.
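One quick way to convince yourself of this (shapes below assume the MNIST tutorial setup from the question, with 784 pixels and 10 classes):

```python
# `mnist` is the dataset object from the tutorial, e.g.
# from tensorflow.examples.tutorials.mnist import input_data
# mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
batch = mnist.train.next_batch(50)
print(batch[0].shape)  # (50, 784) -> pixel features of all 50 examples
print(batch[1].shape)  # (50, 10)  -> one-hot labels of all 50 examples
```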
Neural network using Tensorflow
Most of the critical points in a neural network are not local minima, as can be seen in [this question](https://datascience.stackexchange.com/q/22853/50727). Although it is not impossible to fall into a local minimum, the probability of it happening is so low that in practice it does not happen, except for very special cases such as a single-layer perceptron. Since local minima are so hard to find, it is highly unlikely to come across the global minimum in your optimization method. All that we do in deep learning is decrease the loss function to find fairly good parameters; finding local and global minima is extremely unlikely.
9758
1
9760
null
11
9774
In the graph below, - x-axis => Data set size - y-axis => Cross-validation score [](https://i.stack.imgur.com/4VAFr.jpg) - The red line is for the training data - The green line is for the testing data In a tutorial that I'm following, the author says that the point where the red line and the green line overlap means > Collecting more data is unlikely to increase the generalization performance and we're in a region that we are likely to underfit the data. Therefore it makes sense to try out with a model with more capacity I cannot quite understand the meaning of the bold phrase and how it happens. I appreciate any help.
Overfitting/Underfitting with Data set size
CC BY-SA 3.0
null
2016-01-12T09:57:38.883
2020-02-19T13:45:45.400
2020-06-16T11:08:43.077
-1
15115
[ "machine-learning", "cross-validation" ]
Underfitting means that you still have capacity left for improving your learning, while overfitting means that you have used more capacity than the learning problem needs. The green region is where the testing score is still rising, i.e. you should keep providing capacity (either data points or model complexity) to gain better results. The further the green line goes, the flatter it becomes, i.e. you are reaching the point where the provided capacity (which is data) is enough, and it is better to try providing the other type of capacity, which is model complexity. If that does not improve your test score, or even reduces it, it means the data-complexity combination was already close to optimal and you can stop training.
Overfitting in machine learning
I can tell from your screenshot that you are plotting the validation accuracy. When you overfit your training accuracy should be very high, but your validation accuracy should get lower and lower. Or if you think in terms of error rather than accuracy you should see the following plot in case of overfitting. In the figure below the x-axis contains the training progress, i.e. the number of training iterations. The training error (blue) keeps decreasing, while the validation error (red) starts increasing at the point where you start overfitting. [](https://i.stack.imgur.com/TVkSt.png) This picture is from the wikipedia article on overfitting by the way: [https://en.wikipedia.org/wiki/Overfitting](https://en.wikipedia.org/wiki/Overfitting) Have a look. So to answer your question: No, I don't think you are overfitting. If increasing the number of features would make the overfitting more and more significant the validation accuracy should be falling, not stay constant. In your case it seems that more features are simply no longer adding additional benefit for the classification.
9761
1
9763
null
1
76
I want to cluster a 5-feature dataset. First, to explore the data, I computed a correlation matrix to see if some features were highly correlated so I could reduce them. Then I saw a feature that has close to zero correlation against all the other features. This got me wondering if I should exclude this parameter, since it acts as a kind of "noise" relative to all the other features. What's your opinion?
Feature selection for clustering regarding zero-correlated feature
CC BY-SA 3.0
null
2016-01-12T13:01:58.427
2016-01-12T18:38:23.660
null
null
14560
[ "clustering", "feature-selection", "correlation" ]
Lack of correlation with other features is not a reason to omit a feature. On the contrary, it is usually a reason to keep the feature because it may provide unique information. Typically, highly correlated features provide redundant information and feature reduction techniques (e.g., [Principal Components Analysis](https://en.wikipedia.org/wiki/Principal_component_analysis)) are used to remove the redundancy. While it is possible that the uncorrelated feature is noise, you should not make that assumption. It could be that the uncorrelated feature is the only one containing information and the other 4 features are all correlated noise.
model selection in clustering
To answer your initial question: yes, you can use the silhouette score with different clustering methods. You could also use the Davies-Bouldin index or the Dunn index. Regarding over-fitting (this is my personal suggestion), you could train the model n times on different versions of the same data to see if the clustering is the same even though the values are changed. Short example: if you have to cluster 5 apples and 6 oranges, the clusters should be the same for 10 apples and 12 oranges. You can find a bit more detail on this here: [https://datascience.stackexchange.com/a/20292/103857](https://datascience.stackexchange.com/a/20292/103857) For your third query: calculate distances between data points, as appropriate to your problem. Then plot your data points in two dimensions instead of fifteen, preserving distances as far as possible. This is probably the key aspect of your question. Read up on multidimensional scaling (MDS) for this. Finally, color your points according to cluster membership. (source for third query: [https://stats.stackexchange.com/a/173823](https://stats.stackexchange.com/a/173823)) Regarding PCA, it's subjective. PCA works well with high correlation. If your dimensions are like apples and oranges then you're directly affecting your model's performance, so do keep that in check. A bit of EDA would help before you dive into that.
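A small sketch of the first point, comparing two clusterings of the same data with the silhouette score (the blobs dataset below is just a stand-in for your own feature matrix):

```python
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)  # stand-in data
for model in (KMeans(n_clusters=3, n_init=10),
              AgglomerativeClustering(n_clusters=3)):
    labels = model.fit_predict(X)
    print(type(model).__name__, silhouette_score(X, labels))
```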
9762
1
9766
null
5
3738
I have the following variables along with sales data going back a few years:

- date # simple date, can be split into year, month etc.
- shipping_time (0-6 weeks) # 0 weeks means in stock, more weeks means the product is out of stock but a shipment is on the way to the warehouse. Longer shipping times have a significant impact on sales.
- sales # amount of products sold

I need to predict the sales (which vary seasonally) while taking into account the shipping time. What would be a simple regression model that would produce reasonable results? I tried linear regression with only date and sales, but this does not account for seasonality, so the prediction is rather weak. Edit: As a measure of accuracy, I will withhold a random sample of data from the input and compare against the result. Extra points if it can be easily done in python/scipy. Data can look like this

```
--------------------------------------------------
| date          | delivery_time | sales          |
--------------------------------------------------
| 2015-01-01    | 0             | 10             |
--------------------------------------------------
| 2015-01-01    | 7             | 2              |
--------------------------------------------------
| 2015-01-02    | 7             | 3              |
...
```
Best regression model to use for sales prediction
CC BY-SA 3.0
null
2016-01-12T13:54:34.180
2017-07-25T20:52:35.583
2016-01-12T15:21:36.423
15356
15356
[ "predictive-modeling", "regression" ]
This is a pretty classic ARIMA dataset. ARIMA is implemented in the StatsModels package for Python, the documentation for which is available [here](http://statsmodels.sourceforge.net/stable/index.html). An ARIMA model with seasonal adjustment may be the simplest reasonably successful forecast for a complex time series such as sales forecasting. It may (probably will) be that you need to combine the method with an additional model layer to detect additional fluctuation beyond the auto-regressive function of your sales trend. Unfortunately, simple linear regression models tend to fare quite poorly on time series data.
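A hedged sketch with statsmodels' SARIMAX, which adds seasonal terms and lets you pass the shipping time as an exogenous regressor (the file name and the orders below are placeholders to tune for your data):

```python
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

df = pd.read_csv("sales.csv", parse_dates=["date"]).set_index("date")  # hypothetical file
model = SARIMAX(df["sales"],
                exog=df[["delivery_time"]],
                order=(1, 1, 1),
                seasonal_order=(1, 1, 1, 12))  # e.g. yearly season on monthly data
fit = model.fit(disp=False)
print(fit.summary())
```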
sales price prediction
So firstly, what do you mean by "classifier for price prediction"? You can predict the price as a number, that would like be different for different cars, but if you want to predict a class of price (like, high, low and medium for instance), you would need a column for that (and you can ignore the column for price, as you are not predicting the price, you're predicting the price class). Stage 1. Pre-processing the data Assuming you have the column in the dataset which you want to predict for, you first want to do feature selection. That is, not all features in the data would be important or relevant for predicting the price. For example, in your dataset, the first column/feature ("index") is irrelevant for the price of the car. But how do we prove that? Or, how do we computationally select them (using some measure), especially when they're not as trivial as "index"? We generally check the statistical properties of the features for that. I copied the data you provided in the question, and here's some things for you to start with: ``` import pandas as pd data = pd.read_csv('ex.csv') data ``` [](https://i.stack.imgur.com/llRbm.png) ``` data.describe() # to check the statistical properties of the features, like mean, std dev, etc ``` [](https://i.stack.imgur.com/zln51.png) Then, you could do a simple percentage count of the unique observations in each feature, and maybe you could get some insight about the features that way: ``` for column in data.select_dtypes(include=['object']).columns: display(pd.crosstab(index=data[column], columns='% observations', normalize='columns')) ``` [](https://i.stack.imgur.com/zJDqM.png) Then you could do a histogram analysis of the features and hopefully that gives you some more insight. For example, assuming you have sufficiently enough data, you'd normally expect the histogram of a feature to follow the normal or gaussian distribution. But if its doesn't, then you can further drill down into those features to understand why, and that might lead you to keep or discard those features from the model you're going to build. ``` hist = data.hist(figsize=(10, 10)) ``` [](https://i.stack.imgur.com/ko9D4.png) Then we can do correlation analysis of the features: ``` data.corr().style.background_gradient() ``` [](https://i.stack.imgur.com/sfY0P.png) Or, if you want a more fancy visualization: ``` import seaborn as sns sns.heatmap(data.corr(), annot=True) ``` [](https://i.stack.imgur.com/afk32.png) After doing all these, hopefully you have figured out which features to discard and which to keep for your model. These are of course "manual" methods of feature selection; there are other more complex methods for feature selection like SHAPLEY values, etc, which you can explore. Stage 2 - Building a model and training it Firstly, you need to pick a technique/method using which you want to do the prediction. The simplest one, since you have only one target variable (i.e., only one feature you're predicting, which is the price or the price class), the simplest one would be linear regression, and the most complicated ones would be some deep learning model build with CNN or RNN. So, instead of showing you how to make predictions with the simplest one, i.e., linear regression, let me show you a middle-of-the-road algorithm in terms of complexity which is quite popular and a widely used method in many machine learning tasks, the accelerated gradient boost, or xgboost, algorithm. 
We need to import some libraries for this: ``` from sklearn.model_selection import train_test_split import xgboost import numpy as np X = data.drop(['price'], axis=1) # take all the features except the target variable y = data['price'] # the target variable ``` Then, we create a train/test split with 80-20 split randomly. That is, we randomly take 80% data for training and 20% for testing: ``` X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2) ``` You can of course do a 70-30 split if you want, and definitely try out different splits at both ends of the spectrum to see what happens - that way you'll learn more about why a 70-30 or 80-20 split is good and, say, a 50-50 split is not that good. Then, if there are missing values in your data, fill them with a high negative value so that it doesn't have any impact in the model. You can also choose to fill them with something else, depending on your goal. ``` X_train.fillna((-999), inplace=True) X_test.fillna((-999), inplace=True) ``` Some more preprocessing steps: ``` # Some of values are float or integer and some object. This is why we need to cast them: from sklearn import preprocessing for f in X_train.columns: if X_train[f].dtype=='object': lbl = preprocessing.LabelEncoder() lbl.fit(list(X_train[f].values)) X_train[f] = lbl.transform(list(X_train[f].values)) for f in X_test.columns: if X_test[f].dtype=='object': lbl = preprocessing.LabelEncoder() lbl.fit(list(X_test[f].values)) X_test[f] = lbl.transform(list(X_test[f].values)) X_train=np.array(X_train) X_test=np.array(X_test) X_train = X_train.astype(float) X_test = X_test.astype(float) d_train = xgboost.DMatrix(X_train, label=y_train, feature_names=list(X)) d_test = xgboost.DMatrix(X_test, label=y_test, feature_names=list(X)) ``` Finally, we can make our model and train it: ``` params = { "eta": 0.01, # something called the learning rate - read up about optimization and gradient descent to understand more about this "subsample": 0.5, "base_score": np.mean(y_train) } # these params are optional - if you don't feed the train function below with the params, it will take the default values model = xgboost.train(params, d_train, 5000, evals = [(d_test, "test")], verbose_eval=100, early_stopping_rounds=50) ``` You can check the root mean square error (RMSE) that this function returns at the end to see how good or bad the training has been (low RMSE is good, high RMSE is bad - but there's no max RMSE value, it can be arbitrarily high). There are other methods to check the error, and you can explore them (like MAE, etc), but this is probably the simplest one. Anyway, the above code will return something like this: ``` [16:56:02] C:\Users\Administrator\Desktop\xgboost\src\tree\updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 0 pruned nodes, max_depth=0 [0] test-rmse:2275 Will train until test-rmse hasn't improved in 50 rounds. 
[... many similar "tree pruning end" log lines omitted ...]
C:\Users\Administrator\Desktop\xgboost\src\tree\updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 0 pruned nodes, max_depth=0 [16:56:03] C:\Users\Administrator\Desktop\xgboost\src\tree\updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 0 pruned nodes, max_depth=0 [16:56:03] C:\Users\Administrator\Desktop\xgboost\src\tree\updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 0 pruned nodes, max_depth=2 Stopping. Best iteration: [0] test-rmse:1571.88 ``` It ran the algo iteratively 5000 times, printing out the result every 100 lines (that's what those numbers are in the train method). To see what each of the parameters mean, you can read [here](https://xgboost.readthedocs.io/en/latest/parameter.html#general-parameters). You can also use linear regression, if you want, with xgboost, like so: ``` xg_reg = xgboost.XGBRegressor(objective ='reg:linear', colsample_bytree = 0.3, learning_rate = 0.1, max_depth = 5, alpha = 10, n_estimators = 10) xg_reg.fit(X_train,y_train) preds = xg_reg.predict(X_test) print(preds) # these are the predicted prices for the test data >>> array([2293.7073, 2891.9692, 3822.3757], dtype=float32) ``` And we can check the RMSE like so: ``` from sklearn.metrics import mean_squared_error rmse = np.sqrt(mean_squared_error(y_test, preds)) print("RMSE: %f" % (rmse)) >>> RMSE: 1542.541395 ``` Note that RMSE in the 2 methods is quite close (1571.88 vs 1542.54). This is like a sanity check for us that no matter which method we use, if we use it correctly, we should get similar results. Stage 3 - testing and evaluation of the model - k-fold Cross Validation Finally its time to see how our model performs on test data: ``` params = {"objective":"reg:linear",'colsample_bytree': 0.3,'learning_rate': 0.1, 'max_depth': 5, 'alpha': 10} cv_results = xgboost.cv(dtrain=d_train, params=params, nfold=3, num_boost_round=50,early_stopping_rounds=10,metrics="rmse", as_pandas=True) ``` This will again give you quite a few lines of output like when training: ``` [17:38:33] C:\Users\Administrator\Desktop\xgboost\src\tree\updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 0 pruned nodes, max_depth=0 [17:38:33] C:\Users\Administrator\Desktop\xgboost\src\tree\updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 0 pruned nodes, max_depth=1 [17:38:33] C:\Users\Administrator\Desktop\xgboost\src\tree\updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 0 pruned nodes, max_depth=1 [17:38:33] C:\Users\Administrator\Desktop\xgboost\src\tree\updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 0 pruned nodes, max_depth=0 [17:38:33] C:\Users\Administrator\Desktop\xgboost\src\tree\updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 0 pruned nodes, max_depth=1 [17:38:33] C:\Users\Administrator\Desktop\xgboost\src\tree\updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 0 pruned nodes, max_depth=1 [17:38:33] C:\Users\Administrator\Desktop\xgboost\src\tree\updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 0 pruned nodes, max_depth=0 [17:38:33] C:\Users\Administrator\Desktop\xgboost\src\tree\updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 0 pruned nodes, max_depth=1 [17:38:33] C:\Users\Administrator\Desktop\xgboost\src\tree\updater_prune.cc:74: tree pruning end, 1 roots, 2 extra nodes, 0 pruned nodes, max_depth=1 [17:38:33] C:\Users\Administrator\Desktop\xgboost\src\tree\updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 0 pruned nodes, max_depth=0 [17:38:33] 
... (the same pruning message repeats for every tree in every fold; the repeated lines are truncated here) ...
```
This is how it looks in each of the rounds of the boosting:
```
print(cv_results)
```
[](https://i.stack.imgur.com/WP6Zs.png)
So, that's it. We have the predicted values.
P.S. Stage 2.5 - Visualizing the model (Optional)
Did you know that we can also visualize the model?
```
import matplotlib.pyplot as plt
xgboost.plot_tree(xg_reg, num_trees=0)
plt.show()
```
[](https://i.stack.imgur.com/5OwCw.png)
It shows the tree structure that the trained model follows to make its decisions. You can also see the importance of each feature in the dataset with respect to the model:
```
xgboost.plot_importance(xg_reg)
plt.show()
```
[](https://i.stack.imgur.com/Rn7qS.png)
These visualizations are of course not required for making the predictions, but they may sometimes give you useful insights about your predictions.
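As a small, hedged addition: since `xgboost.cv(..., as_pandas=True)` returns an ordinary pandas DataFrame (with columns such as `test-rmse-mean` under the default naming), you can summarise the run instead of scrolling through every round. A minimal sketch, assuming the `cv_results` object from above:
```
# Mean test RMSE across the 3 folds at the final boosting round
print(cv_results["test-rmse-mean"].tail(1))

# Best (lowest) mean test RMSE reached during the run
print(cv_results["test-rmse-mean"].min())
```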
9783
1
9787
null
4
1074
I'm new to machine learning and have spent the last couple of months having a blast using scikit-learn to try to understand the basics of building feature sets and predictive models. Now I'm trying to use ML on a data set not to predict future values but to understand the importance and direction (positive or negative) of each feature. My features (X) are boolean and integer values that describe a product. My target (y) is the sales of the product. I have ~15,000 observations with 16 features apiece. With my limited ML knowledge to this point, I'm confident that I can predict (with some level of accuracy) a new y based on a new set of features X. However, I'm struggling to coherently identify, report on and present the importance and direction of each feature that makes up X. Thus far, I've taken a two-step approach: - Use a linear regression to observe coefficients - Use a random forest to observe feature importance The code First, I try to get the directional impact of each feature: ``` from sklearn import linear_model linreg = linear_model.LinearRegression() linreg.fit(X, y) coef = linreg.coef_ ... ``` Second, I try to get the importance of each feature: ``` from sklearn import ensemble forest = ensemble.RandomForestRegressor() forest.fit(X, y) importance = forest.feature_importances_ ... ``` Then I multiply the two derived values together for each feature and end up with some value that might be the information I'm looking for! I'd love to know if I'm on the right track with any of this. Is this a common use case for ML? Are there tools, ideas, packages I should focus on to help guide me? Thank you very much.
Using machine learning specifically for feature analysis, not predictions
CC BY-SA 3.0
null
2016-01-14T04:12:54.237
2022-02-01T06:13:20.777
null
null
15398
[ "machine-learning", "scikit-learn" ]
You don't need the linear regression to understand the effect of features in your random forest; you're better off looking at the partial dependence plots directly. These show what you get when you hold all the other variables fixed and vary one at a time. You can plot these using `sklearn.ensemble.partial_dependence.plot_partial_dependence`. Take a look at the [documentation](http://scikit-learn.org/stable/modules/ensemble.html#partial-dependence) for an example of how to use it. Another type of model that can be useful for exploratory data analysis is a `DecisionTreeClassifier`; you can produce a graphical representation of it using `export_graphviz`.
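To make this concrete, here is a minimal, self-contained sketch of a partial dependence plot. Note it is only an illustration: the module path has moved in newer scikit-learn releases, so this uses the current `sklearn.inspection` API rather than the path named above, and the synthetic data simply stands in for the real 16-feature product table:
```
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import PartialDependenceDisplay

# Synthetic stand-in for the real data: 16 numeric features, one continuous target
X, y = make_regression(n_samples=1000, n_features=16, noise=10.0, random_state=0)

forest = RandomForestRegressor(n_estimators=200, random_state=0)
forest.fit(X, y)

# Partial dependence of the prediction on the first three features (by column index)
PartialDependenceDisplay.from_estimator(forest, X, features=[0, 1, 2])
plt.show()
```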
Non-prediction Applications of Machine learning
The most obvious applications are indeed the supervised learning approaches (surrogate models, prediction). But there is much more than that! Other usual applications include: - Clustering: This can be seen as a different kind of prediction, but not in the classic supervised learning fashion. For instance, I have been using a clustering algorithm on a 3D geometry (CAD file) to make almost adjacent elements become actually aligned. - Anomaly detection - Study of extreme values (how to learn the behaviour of extreme values without actually observing such values) - Feature importance (determine which sets of features impacts the result the most) or, more generally, data mining - Inference (use a dataset to improve the prior model prediction) Of course, most of these applications are more or less close to prediction, but not trivial prediction.
9785
1
10417
null
28
23205
Given a sentence: "When I open the ?? door it starts heating automatically" I would like to get a list of possible words for ?? together with a probability. The basic concept used in the word2vec model is to "predict" a word given the surrounding context. Once the model is built, what is the right operation on the context vectors to perform this prediction task on new sentences? Is it simply a linear sum? ``` model.most_similar(positive=['When','I','open','the','door','it','starts' , 'heating','automatically']) ```
Predicting a word using Word2vec model
CC BY-SA 4.0
null
2016-01-14T07:13:45.810
2021-02-09T20:17:39.023
2021-02-09T20:17:39.023
29169
15402
[ "nlp", "predictive-modeling", "word-embeddings" ]
Word2vec comes in two model architectures, CBOW and skip-gram. Let's take the CBOW model, since your question is posed the same way: predict the target word given the surrounding words. Fundamentally, the model learns input and output weight matrices, which depend on the input context words and the output target word, with the help of a hidden layer. Back-propagation is used to update the weights based on the error between the predicted output vector and the true target. In other words, predicting the target word from the given context words is the training objective used to obtain the optimal weight matrices for the given data. To answer the second part, it is a bit more complex than just a linear sum. - Obtain the word vectors of all context words - Average them to obtain the hidden layer vector h of size Nx1 - Obtain the output matrix syn1 (word2vec.c or gensim), which is of size VxN - Multiply syn1 by h; the resulting vector z has size Vx1 - Compute the probability vector y = softmax(z) of size Vx1, where the index with the highest probability corresponds to the one-hot representation of the target word in the vocabulary. V denotes the size of the vocabulary and N denotes the size of the embedding vector. Source : [http://cs224d.stanford.edu/lecture_notes/LectureNotes1.pdf](http://cs224d.stanford.edu/lecture_notes/LectureNotes1.pdf) Update: Long short-term memory models currently do a great job of predicting next words. [seq2seq](https://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf) models are explained in the [tensorflow tutorial](https://www.tensorflow.org/tutorials/seq2seq). There is also a [blog post](https://chunml.github.io/ChunML.github.io/project/Creating-Text-Generator-Using-Recurrent-Neural-Network/) about text generation.
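As an illustration only: gensim exposes exactly this computation through `predict_output_word`, which ranks vocabulary words by their probability of being the missing centre word. The tiny corpus and hyperparameters below are made-up placeholders (a real model needs far more text), and the method relies on the default negative-sampling training:
```
from gensim.models import Word2Vec

# Made-up toy corpus purely for illustration
sentences = [
    ["when", "i", "open", "the", "oven", "door", "it", "starts", "heating"],
    ["when", "i", "open", "the", "fridge", "door", "it", "starts", "cooling"],
    ["the", "oven", "door", "was", "hot"],
    ["the", "fridge", "door", "was", "cold"],
]
model = Word2Vec(sentences, vector_size=25, window=3, min_count=1, sg=0, epochs=200)

# Rank vocabulary words by their probability of filling the gap, given the context words
context = ["when", "i", "open", "the", "door", "it", "starts", "heating"]
print(model.predict_output_word(context, topn=5))
```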
Features of word vectors in Word2Vec
1- The number of features: In terms of the neural network model, it represents the number of neurons in the projection (hidden) layer. As the projection layer is built upon the distributional hypothesis, the numerical vector for each word signifies its relation to its context words. These features are learnt by the neural network, as this is an unsupervised method. Each vector has several sets of semantic characteristics. For instance, let's take the classical example, `V(King) - V(Man) + V(Woman) ~ V(Queen)`, with each word represented by a 300-d vector. `V(King)` will have semantic characteristics of royalty, kingdom, masculinity and humanity in the vector in a certain order. `V(Man)` will have masculinity, humanity and work in a certain order. Thus, when `V(King) - V(Man)` is computed, the masculinity and humanity characteristics get nullified, and when `V(Woman)`, which has femininity and humanity characteristics, is added, the result is a vector very similar to `V(Queen)`. The interesting thing is that these characteristics are encoded in the vector in a certain order, so that numerical computations such as addition and subtraction work nicely. This is due to the nature of the unsupervised learning method in the neural network. 2- There are two approximation algorithms, `hierarchical softmax` and `negative sampling`. When the negative-sampling parameter is given, negative sampling is used. In the case of hierarchical softmax, for each word vector its context words are given positive outputs and all other words in the vocabulary are given negative outputs. The issue of time complexity is resolved by negative sampling: rather than the whole vocabulary, only a sampled part of the vocabulary is given negative outputs and the vectors are trained, which is much faster than the former method.
9791
1
9792
null
1
216
Given a person's name, e.g. 'Adjutor Ferguson', how can one determine whether it is male or female? One solution came to my mind: ``` I have found a person NLP training dataset here: mbejda.github.io. Via machine learning software like Apache Mahout, train a model on it and then provide real data. ``` But I am not sure about the accuracy of the results. Maybe another approach exists? (e.g. scikit-learn.org)
How to define person's gender from the fullname?
CC BY-SA 3.0
null
2016-01-14T13:48:53.553
2016-01-14T14:34:22.810
null
null
15410
[ "classification", "nlp", "algorithms" ]
That dataset looks like a good starting point. Keep in mind that when you make your own dataset from those datasets you'll want to keep the male to female ratio balanced if you want it to predict both well. It should not matter what machine learning software you use (Apache Mahout, scikit-learn, weka, etc.). Pick one that fits your language of choice since speed will probably not be too much of a concern with the smallish dataset size. As for features, you'd generally use ngrams as your baseline for NLP classification tasks. If you use ngrams here you won't end up with anything very interesting because the model won't generalize to any unseen names. I'd suggest as a feature baseline that you try character ngrams, and maybe something like syllable ngrams for something slightly more advanced (for syllable tokenization see [https://stackoverflow.com/questions/405161/detecting-syllables-in-a-word](https://stackoverflow.com/questions/405161/detecting-syllables-in-a-word)).
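For illustration, a minimal character n-gram baseline in scikit-learn might look like the sketch below. The four names and labels are invented placeholders; you would fit on the full labelled dataset mentioned above:
```
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data; replace with the real name/gender dataset
names = ["adjutor ferguson", "maria lopez", "john smith", "anna schmidt"]
labels = ["male", "female", "male", "female"]

# Character n-grams (within word boundaries) let the model generalize to unseen names
model = make_pipeline(
    CountVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(names, labels)
print(model.predict(["adjutor ferguson"]))
```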
What predictive model to use to impute Gender?
I agree with Simon's advice. I find that the gains that you obtain from using any external method of imputation is often inferior to an internal method, and on top of this, exposes you to even more potential of severely screwing up with respect to data leakage. That being said, besides using an algorithm that automatically handles missing data for you (which often are models based off trees/rules, though they do not all use the same method of imputation), there are external based methods that might be of interest. I find that as you get more "fancier" the results are not enough of an improvement compared to the computational pain it is to use them. Starting with the simplest; 1) Mode imputation; simply use the most common gender in your training data set. For your test dataset, use the most common gender that exists in your training data set. Since there are 5x more males than females, this would result in you almost certainly assigning male to all observations with missing gender. Obviously, this doesn't use a whole lot of information besides the observed frequency of the class, but this method is pretty common and often "good enough". 2) kNN imputation; take the k most closest neighbours (that do not have missing genders) to the observation that you wish to impute gender for. Then, simply treat each of these k neighbours as a committee of "voters" who use their own gender as their vote. Weight each vote by how close they are (based off other variables that aren't missing) to the observation with the missing gender value. Whichever gender wins in votes gives you the imputed gender. This method to me, is a clear improvement over method 1) and is also quite fast. However, this will require you to center and scale your data (because we are using distances to define "closeness") and k is now a tuning parameter which further complicates matters. 3) Random Forest imputation; initially, use method 1) to temporarily fill in your missing genders (just mode impute). Then, run a random forest algorithm on the imputed dataset, generating N trees. Compute what is referred to as the "proximity matrix", where each $(i,j), i \ne j $ entry in this matrix (diagonal entries are all 0) is equal to the number of times observations $i$ and $j$ fall in the same terminal node through the entire forest divided by the number of trees in the forest. Using these proximities as weights, calculate a weighted vote of all the observations that do not have missing genders using their genders as their "vote". Change any prior "temporary" imputed genders from the initial mode imputation to what has been calculated by the random forest if they differ. Repeat (fit another random forest again), using the imputed genders from the previous random forest, until all observations converge to a single gender or until some stopping criteria. This method is incredibly costly but is probably pretty accurate (I haven't used it much because it is slow). You will also have to deal with an additional tuning parameter; namely how many variables you wish to randomly select in each split. 4) MICE: I haven't really studied this method too closely, but you seem to have mentioned it. One thing I will say is that all of these methods can be used with any kinds of missing data; categorical (like gender) or continuous (like birth_date, though for method 1) you would probably use mean/median imputation instead for continuous variables, and for methods 2 and 3) you would no longer use a "vote" but a weighted average). 
Ultimately, MICE is just one of many methods of imputation that you can use which is why one needs to properly validate their modelling choices within cross validation if you choose to use an external method of imputation. If you have the time, try a bunch of methods and use the highest performing one. Otherwise, use a method that seems "reasonable enough" given time constraints.
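As a rough illustration of methods 1) and 2) above, here is a hedged sketch in pandas/scikit-learn. The toy dataframe is invented, and in practice you would also centre and scale the numeric predictors before the kNN step, as noted in the answer:
```
import pandas as pd
from sklearn.neighbors import KNeighborsClassifier

# Invented toy data: age and income are observed predictors, gender has gaps
df = pd.DataFrame({
    "age":    [25, 32, 47, 51, 38, 29, 44],
    "income": [40, 55, 90, 85, 60, 45, 80],
    "gender": ["m", "f", "m", None, "f", None, "m"],
})

# Method 1: mode imputation
print(df["gender"].fillna(df["gender"].mode()[0]).tolist())

# Method 2: kNN "vote" using the rows where gender is known
known = df[df["gender"].notna()]
unknown = df[df["gender"].isna()]
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(known[["age", "income"]], known["gender"])
df.loc[df["gender"].isna(), "gender"] = knn.predict(unknown[["age", "income"]])
print(df)
```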
9793
1
9796
null
1
1216
I am new to scikit-learn. I went through the examples given in the docs and I downloaded the script for recognizing images of hand-written digits. When I made the script to run on my laptop, I got the following errors: ``` Traceback (most recent call last): File "C:\Python34\plot_digits_classification.py", line 22, in <module> from sklearn import datasets, svm, metrics File "C:\Python34\lib\site-packages\sklearn\__init__.py", line 57, in<module> from .base import clone File "C:\Python34\lib\site-packages\sklearn\base.py", line 11, in <module> from .utils.fixes import signature File "C:\Python34\lib\site-packages\sklearn\utils\__init__.py", line 11, in <module> from .validation import (as_float_array, File "C:\Python34\lib\site-packages\sklearn\utils\validation.py", line 16, in <module> from ..utils.fixes import signature File "C:\Python34\lib\site-packages\sklearn\utils\fixes.py", line 324, in <module> from scipy.sparse.linalg import lsqr as sparse_lsqr File "C:\Python34\lib\site-packages\scipy\sparse\linalg\__init__.py", line 109, in <module> from .isolve import * File "C:\Python34\lib\site-packages\scipy\sparse\linalg\isolve\__init__.py", line 6, in <module> from .iterative import * File "C:\Python34\lib\site-packages\scipy\sparse\linalg\isolve\iterative.py", line 7, in <module> from . import _iterative ImportError: DLL load failed: The specified module could not be found. ``` Please help me. Also, I want to know if I want to load data for prediction, how should I do that? For example, if I want to test a hand written digit that is stored somewhere on my disk, how to prepare that data for loading and passing into this model for prediction?
Running examples from scikit-learn tutorials
CC BY-SA 4.0
null
2016-01-14T16:37:46.247
2020-12-30T16:04:09.527
2020-12-30T16:04:09.527
85045
15412
[ "machine-learning", "python", "scikit-learn", "prediction" ]
I had installed an older version of numpy; that was the problem. If you have installed scikit-learn using Windows binaries, then you must first install numpy+mkl from the Windows binaries site. It is a prerequisite for scikit-learn.
First steps with Python and scikit-learn
In Python 3 the `print` function must have parentheses, so `print(clf.predict([[150, 0]]))` will work
9813
1
9814
null
1
763
I am new to machine learning. I want to develop a face recognition system using scikit-learn. [This](http://scikit-learn.org/stable/auto_examples/applications/face_recognition.html#example-applications-face-recognition-py) is the example given in the scikit-learn tutorials. I don't understand how the input is provided to the program. How should I load a particular image and run the program to predict its label?
Face Recognition using Eigenfaces and SVM
CC BY-SA 4.0
null
2016-01-17T04:36:02.373
2020-12-30T17:06:42.760
2020-12-30T17:06:42.760
85045
15412
[ "machine-learning", "python", "scikit-learn", "svm", "object-recognition" ]
Take a look at the code that you linked to: ``` # Download the data, if not already on disk and load it as numpy arrays lfw_people = fetch_lfw_people(min_faces_per_person=70, resize=0.4) ``` [fetch_lfw_people](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.fetch_lfw_people.html#sklearn.datasets.fetch_lfw_people) is a routine that loads the data and is detailed [here](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.fetch_lfw_people.html#sklearn.datasets.fetch_lfw_people). Hope this helps!
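To spell out what that loader gives you, here is a minimal sketch using only the documented attributes of the returned bunch object (the first call downloads the dataset over the network):
```
from sklearn.datasets import fetch_lfw_people

lfw_people = fetch_lfw_people(min_faces_per_person=70, resize=0.4)

X = lfw_people.data            # each row is one flattened grayscale face image
y = lfw_people.target          # integer label of the person in each image
names = lfw_people.target_names

print(X.shape)                 # (n_samples, n_pixels)
print(names[y[0]])             # name of the person in the first image
```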
Support Vectors of SVM
Yes, they are support vectors. This is because they contribute (or serve as support) to the computation of the hyperplane that separates the positive and negative regions. This hyperplane is given by: $$\mathbf{w}^T x + b = 0 $$ Where $x$ is our $\text{n-dimensional}$ input vector that contains the $n$ features of our problem (the input samples during the learning process), and $b$ and $\mathbf{w} = (w_1,w_2,...,w_n)^T$ are the parameters that are optimized. More concretely, $b$ is the intercept of the hyperplane and $(w_1,w_2,...,w_n)^T$ is its normal vector (justification in [Mathworld](https://mathworld.wolfram.com/Plane.html)). But why do the misclassified points contribute? More detailed justifications can be found in the [Andrew Ng notes about SVM](http://cs229.stanford.edu/notes/cs229-notes3.pdf) which I recommend. To understand why they contribute, we need to have a look at the cost function, $J$, that is used when the data is not linearly separable, like the case of the question (this cost function is also used to prevent the influence of outliers): $$J = \frac{1}{2}\Vert \mathbf{w}\Vert^2 + C \sum_{i=1}^m \xi_i$$ $$\text{subject to}\begin{cases} y^{(i)}(\mathbf{w}^Tx^{(i)}+b)\geq 1-\xi_i, \,\,\,\,\,\,\,\,\,\, i = 1,...,m\\ \xi_i \geq 0 \end{cases}$$ Where $\xi_i$ is the slack of the input sample $x_i$, being $\xi_i = 0$ only when the input sample $x_i$ is correctly classified and presents a functional margin $\geq1$. In order to solve this in an efficient way, it can be proved that by applying Lagrange duality to the problem presented before (minimizing $J$ w.r.t. $\mathbf{w}$ and $b$), we end up with an equivalent problem of maximizing the next function w.r.t. $\alpha$: $$ \max_{\alpha}\,\,\,\, \sum_{i=1}^m\alpha_i - \frac{1}{2}\sum_{j=1}^m\sum_{i=1}^my^{(i)}y^{(j)}\alpha_i\alpha_jx_i^Tx_j$$ $$\text{subject to}\begin{cases} 0\leq \alpha_i\leq C, \,\,\,\,\,\,\,\,\,\, i = 1,...,m\\ \sum_{i=1}^m\alpha_iy^{(i)}=0 \end{cases}$$ Where each $\alpha_i$ is the Lagrange multiplier associated with the input sample $x_i$. Furthermore, it can be proved that, once we have determined the optimal values of the Lagrange multipliers, the normal vector of the hyperplane can be computed by: $$ \mathbf{w}=\sum_{i=1}^m \alpha_i y^{(i)}x^{(i)}$$ Now we can see that only the vectors (samples) with an associated value of $\alpha_i\neq0$ will contribute to the computation of the hyperplane. These vectors are the support vectors. Now the question is: when is $\alpha_i \neq 0$? As explained in the notes linked above, the values of $\alpha_i \neq 0$ are derived from the KKT conditions, which need to be satisfied in order to find the values of $\alpha_i$ that minimize our cost function. These are: - $\alpha_i = 0 \implies y^{(i)}(\mathbf{w}^Tx^{(i)}+b)\geq 1$ - $\alpha_i = C \implies y^{(i)}(\mathbf{w}^Tx^{(i)}+b)\leq 1$ - $0<\alpha_i < C \implies y^{(i)}(\mathbf{w}^Tx^{(i)}+b)= 1$ So, in conclusion, the vectors (samples) that lie on the margins (condition number 3 and condition number 2 when $=1$), the vectors that are correctly classified but lie between the margins and the hyperplane (condition number 2 when $<1$) and the vectors that are misclassified (also condition number 2 when $<1$) are the ones with $\alpha_i \neq0$ and therefore contribute to the computation of the hyperplane $\rightarrow$ they are support vectors.
9818
1
9820
null
60
16539
Neural networks get top results in Computer Vision tasks (see [MNIST](http://yann.lecun.com/exdb/mnist/), [ILSVRC](http://www.image-net.org/challenges/LSVRC/), [Kaggle Galaxy Challenge](http://blog.kaggle.com/2014/04/18/winning-the-galaxy-challenge-with-convnets/)). They seem to outperform every other approach in Computer Vision. But there are also other tasks: - Kaggle Molecular Activity Challenge - Regression: Kaggle Rain prediction, also the 2nd place - Grasp and Lift 2nd also third place - Identify hand motions from EEG recordings I'm not too sure about ASR (automatic speech recognition) and machine translation, but I think I've also heard that (recurrent) neural networks (start to) outperform other approaches. I am currently learning about Bayesian Networks and I wonder in which cases those models are usually applied. So my question is: Is there any challenge / (Kaggle) competition, where the state of the art are Bayesian Networks or at least very similar models? (Side note: I've also seen [decision trees](http://blog.kaggle.com/2015/12/21/rossmann-store-sales-winners-interview-1st-place-gert/), [2](http://blog.kaggle.com/2015/11/09/profiling-top-kagglers-gilberto-titericz-new-1-in-the-world/), [3](http://blog.kaggle.com/2015/10/30/dato-winners-interview-2nd-place-mortehu/), [4](http://blog.kaggle.com/2015/10/21/recruit-coupon-purchase-winners-interview-2nd-place-halla-yang/), [5](http://blog.kaggle.com/2015/10/20/caterpillar-winners-interview-3rd-place-team-shift-workers/), [6](http://blog.kaggle.com/2015/09/28/liberty-mutual-property-inspection-winners-interview-qingchen-wang/), [7](http://blog.kaggle.com/2015/09/22/caterpillar-winners-interview-1st-place-gilberto-josef-leustagos-mario/) win in several recent Kaggle challenges)
Is there any domain where Bayesian Networks outperform neural networks?
CC BY-SA 3.0
null
2016-01-17T13:04:57.100
2022-12-07T19:52:40.220
2016-01-18T12:16:29.323
8820
8820
[ "machine-learning", "pgm" ]
One of the areas where Bayesian approaches are often used is where one needs interpretability of the prediction system. You don't want to give doctors a neural net and say that it's 95% accurate. You rather want to explain the assumptions your method makes, as well as the decision process the method uses. A similar area is when you have strong prior domain knowledge and want to use it in the system.
How can we use Neural Networks for Decision Making instead of Bayesian Networks or Decision Trees?
I would like more details regarding your actual problem, but here is my suggestion for applying artificial neural networks to decision making. - One way of approaching this problem is by using the prior as one of the parameters for a deep neural network. This could be treated similarly to a classification problem on the basis of supervised learning. - The desired decisions and their parameters can be labelled, and the output layer can have as many neurons as the desired number of decisions. Then a series of training and validation runs can be conducted until the training reaches the expected accuracy. Best, Sangathamilan Ravichandran.
9826
1
10786
null
7
5438
I have a set of categories and I want to compare a document vector with the word vectors of the categories to find the best matching category. Is it possible to compare a word vector with a document vector? If yes, is there any literature which gives a proof of concept for this?
Can we compare a word2vec vector with a doc2vec vector?
CC BY-SA 4.0
null
2016-01-18T05:37:40.897
2018-12-15T05:12:19.713
2018-12-15T05:12:19.713
8501
13518
[ "deep-learning", "information-retrieval", "word-embeddings", "word2vec" ]
In paragraph vector, the vector tries to grasp the semantic meaning of all the words in the context by placing the vector itself in each and every context. Thus, finally, the paragraph vector contains the semantic meaning of all the words in the contexts it was trained on. When we compare this to word2vec, each word vector in word2vec preserves its own semantic meaning. Thus, summing up all the vectors or averaging them will result in a vector which could have all the semantics preserved. This is sensible, because when we add the vectors (transport + water) the result nearly equals ship or boat, which means summing the vectors sums up the semantics. Before the paragraph vector paper got published, people used averaged word vectors as sentence vectors. To be honest, in my work these averaged vectors work better than document vectors. So, with these things in mind, that is how the two can be compared.
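For illustration, here is a minimal sketch of that idea with gensim and numpy: average the word vectors of a document and compare the result against a category's word vector with cosine similarity. The toy corpus and category words are invented placeholders; in practice you would use a large corpus or pretrained vectors:
```
import numpy as np
from gensim.models import Word2Vec

# Toy corpus purely for illustration
sentences = [
    ["the", "boat", "crossed", "the", "water"],
    ["the", "ship", "is", "a", "means", "of", "transport"],
    ["the", "train", "is", "a", "fast", "means", "of", "transport"],
]
model = Word2Vec(sentences, vector_size=20, window=3, min_count=1, epochs=200)

def doc_vector(tokens):
    # Average of the word vectors for the tokens present in the vocabulary
    vecs = [model.wv[t] for t in tokens if t in model.wv]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

doc = ["the", "ship", "crossed", "the", "water"]
for category in ["transport", "train"]:
    print(category, cosine(doc_vector(doc), model.wv[category]))
```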
Doc2Vec or Word2vec for word embedding
As the name implies, doc2vec generates vectors representing documents (sentences, paragraphs) but not single words. So training doc2vec won't give you word vectors but document vectors. This means you can't replace word2vec by doc2vec at all. Here's how the authors of the [underlying paper](https://arxiv.org/abs/1405.4053) describe what doc2vec does: > Our algorithm represents each document by a dense vector which is trained to predict words in the document.
9832
1
10358
null
67
70997
It seems to me that the $V$ function can be easily expressed by the $Q$ function and thus the $V$ function seems to be superfluous to me. However, I'm new to reinforcement learning so I guess I got something wrong. ## Definitions Q- and V-learning are in the context of [Markov Decision Processes](https://en.wikipedia.org/wiki/Markov_decision_process#Definition). A MDP is a 5-tuple $(S, A, P, R, \gamma)$ with - $S$ is a set of states (typically finite) - $A$ is a set of actions (typically finite) - $P(s, s', a) = P(s_{t+1} = s' | s_t = s, a_t = a)$ is the probability to get from state $s$ to state $s'$ with action $a$. - $R(s, s', a) \in \mathbb{R}$ is the immediate reward after going from state $s$ to state $s'$ with action $a$. (It seems to me that usually only $s'$ matters). - $\gamma \in [0, 1]$ is called discount factor and determines if one focuses on immediate rewards ($\gamma = 0$), the total reward ($\gamma = 1$) or some trade-off. A policy $\pi$, according to [Reinforcement Learning: An Introduction](http://incompleteideas.net/book/the-book-2nd.html) by Sutton and Barto is a function $\pi: S \rightarrow A$ (this could be probabilistic). According to [Mario Martins slides](http://www.cs.upc.edu/~mmartin/Ag4-4x.pdf), the $V$ function is $$V^\pi(s) = E_\pi \{R_t | s_t = s\} = E_\pi \{\sum_{k=0}^\infty \gamma^k r_{t+k+1} | s_t = s\}$$ and the Q function is $$Q^\pi(s, a) = E_\pi \{R_t | s_t = s, a_t = a\} = E_\pi \{\sum_{k=0}^\infty \gamma^k r_{t+k+1} | s_t = s, a_t=a\}$$ ## My thoughts The $V$ function states what the expected overall value (not reward!) of a state $s$ under the policy $\pi$ is. The $Q$ function states what the value of a state $s$ and an action $a$ under the policy $\pi$ is. This means, $$Q^\pi(s, \pi(s)) = V^\pi(s)$$ Right? So why do we have the value function at all? (I guess I mixed up something)
What is the Q function and what is the V function in reinforcement learning?
CC BY-SA 3.0
null
2016-01-18T13:51:25.520
2023-02-09T02:47:02.507
2017-12-07T17:01:09.630
836
8820
[ "machine-learning", "reinforcement-learning" ]
Q-values are a great way to make actions explicit, so you can deal with problems where the transition function is not available (model-free). However, when your action space is large, things are not so nice and Q-values are not so convenient. Think of a huge number of actions or even continuous action spaces. From a sampling perspective, the dimensionality of $Q(s, a)$ is higher than that of $V(s)$, so it might get harder to get enough $(s, a)$ samples in comparison with $(s)$. If you have access to the transition function, sometimes $V$ is good. There are also other uses where both are combined. For instance, the advantage function $A(s, a) = Q(s, a) - V(s)$. If you are interested, you can find a recent example using advantage functions here: > Dueling Network Architectures for Deep Reinforcement Learning by Ziyu Wang, Tom Schaul, Matteo Hessel, Hado van Hasselt, Marc Lanctot and Nando de Freitas.
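To make the relationship from the question concrete in code, here is a tiny tabular sketch (the Q-table values are invented): under a greedy policy, $V(s) = \max_a Q(s, a)$, and the advantage is just their difference.
```
import numpy as np

# Invented Q-table: 3 states x 2 actions
Q = np.array([[1.0, 2.0],
              [0.5, 0.3],
              [4.0, 4.0]])

# State values under the greedy policy pi(s) = argmax_a Q(s, a)
V = Q.max(axis=1)

# Advantage of each action in each state
A = Q - V[:, None]

print(V)  # [2.  0.5 4. ]
print(A)  # non-positive everywhere; 0 for the greedy actions
```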
Definition of the Q* function in reinforcement learning
The reward of action $a$ is drawn from a stationary probability distribution with mean $q_*(a)$. This is independent of time $t$. However, the estimate of $q_*(a)$ at time $t$, denoted by $Q_t(a)$, is dependent on time $t$. > Or are we to understand q∗ as taking the expected reward across all t? The expectation is not over time, but over a probability distribution with mean $q_*(a)$. For example, in the 10-armed bandit problem, the reward for each of the 10 actions comes from a Normal distribution with mean $q_*(a), a= 1,...,10$ and variance 1.
9850
1
9870
null
59
52996
I am using [TensorFlow](https://en.wikipedia.org/wiki/TensorFlow) for experiments mainly with neural networks. Although I have done quite some experiments (XOR-Problem, MNIST, some Regression stuff, ...) now, I struggle with choosing the "correct" cost function for specific problems because overall I could be considered a beginner. Before coming to TensorFlow I coded some fully-connected MLPs and some recurrent networks on my own with [Python](https://en.wikipedia.org/wiki/Python_(programming_language)) and [NumPy](https://en.wikipedia.org/wiki/NumPy) but mostly I had problems where a simple squared error and a simple gradient descient was sufficient. However, since TensorFlow offers quite a lot of cost functions itself as well as building custom cost functions, I would like to know if there is some kind of tutorial maybe specifically for cost functions on neural networks? (I've already done like half of the official TensorFlow tutorials but they're not really explaining why specific cost functions or learners are used for specific problems - at least not for beginners) To give some examples: ``` cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(y_output, y_train)) ``` I guess it applies the softmax function on both inputs so that the sum of one vector equals 1. But what exactly is cross entropy with logits? I thought it sums up the values and calculates the cross entropy...so some metric measurement?! Wouldn't this be very much the same if I normalize the output, sum it up and take the squared error? Additionally, why is this used e.g. for MNIST (or even much harder problems)? When I want to classify like 10 or maybe even 1000 classes, doesn't summing up the values completely destroy any information about which class actually was the output? `cost = tf.nn.l2_loss(vector)` What is this for? I thought l2 loss is pretty much the squared error but TensorFlow's API tells that it's input is just one tensor. Doesn't get the idea at all?! Besides I saw this for cross entropy pretty often: ``` cross_entropy = -tf.reduce_sum(y_train * tf.log(y_output)) ``` ...but why is this used? Isn't the loss in cross entropy mathematically this: ``` -1/n * sum(y_train * log(y_output) + (1 - y_train) * log(1 - y_output)) ``` Where is the `(1 - y_train) * log(1 - y_output)` part in most TensorFlow examples? Isn't it missing? --- Answers: I know this question is quite open, but I do not expect to get like 10 pages with every single problem/cost function listed in detail. I just need a short summary about when to use which cost function (in general or in TensorFlow, doesn't matter much to me) and some explanation about this topic. And/or some source(s) for beginners ;)
Neural networks: which cost function to use?
CC BY-SA 3.0
null
2016-01-19T11:48:29.337
2019-11-13T04:45:11.677
null
null
13809
[ "machine-learning", "python", "neural-network", "statistics", "tensorflow" ]
This answer is on the general side of cost functions, not related to TensorFlow, and will mostly address the "some explanation about this topic" part of your question. In most examples/tutorial I followed, the cost function used was somewhat arbitrary. The point was more to introduce the reader to a specific method, not to the cost function specifically. It should not stop you to follow the tutorial to be familiar with the tools, but my answer should help you on how to choose the cost function for your own problems. If you want answers regarding Cross-Entropy, Logit, L2 norms, or anything specific, I advise you to post multiple, more specific questions. This will increase the probability that someone with specific knowledge will see your question. --- Choosing the right cost function for achieving the desired result is a critical point of machine learning problems. The basic approach, if you do not know exactly what you want out of your method, is to use [Mean Square Error (Wikipedia)](https://en.wikipedia.org/wiki/Mean_squared_error) for regression problems and Percentage of error for classification problems. However, if you want good results out of your method, you need to define good, and thus define the adequate cost function. This comes from both domain knowledge (what is your data, what are you trying to achieve), and knowledge of the tools at your disposal. I do not believe I can guide you through the cost functions already implemented in TensorFlow, as I have very little knowledge of the tool, but I can give you an example on how to write and assess different cost functions. --- To illustrate the various differences between cost functions, let us use the example of the binary classification problem, where we want, for each sample $x_n$, the class $f(x_n) \in \{0,1\}$. Starting with computational properties; how two functions measuring the "same thing" could lead to different results. Take the following, simple cost function; the percentage of error. If you have $N$ samples, $f(y_n)$ is the predicted class and $y_n$ the true class, you want to minimize - $\frac{1}{N} \sum_n \left\{ \begin{array}{ll} 1 & \text{ if } f(x_n) \not= y_n\\ 0 & \text{ otherwise}\\ \end{array} \right. = \sum_n y_n[1-f(x_n)] + [1-y_n]f(x_n)$. This cost function has the benefit of being easily interpretable. However, it is not smooth; if you have only two samples, the function "jumps" from 0, to 0.5, to 1. This will lead to inconsistencies if you try to use gradient descent on this function. One way to avoid it is to change the cost function to use probabilities of assignment; $p(y_n = 1 | x_n)$. The function becomes - $\frac{1}{N} \sum_n y_n p(y_n = 0 | x_n) + (1 - y_n) p(y_n = 1 | x_n)$. This function is smoother, and will work better with a gradient descent approach. You will get a 'finer' model. However, it has other problem; if you have a sample that is ambiguous, let say that you do not have enough information to say anything better than $p(y_n = 1 | x_n) = 0.5$. Then, using gradient descent on this cost function will lead to a model which increases this probability as much as possible, and thus, maybe, overfit. Another problem of this function is that if $p(y_n = 1 | x_n) = 1$ while $y_n = 0$, you are certain to be right, but you are wrong. In order to avoid this issue, you can take the log of the probability, $\log p(y_n | x_n)$. 
As $\log(0) = \infty$ and $\log(1) = 0$, the following function does not have the problem described in the previous paragraph: - $\frac{1}{N} \sum_n y_n \log p(y_n = 0 | x_n) + (1 - y_n) \log p(y_n = 1 | x_n)$. This should illustrate that in order to optimize the same thing, the percentage of error, different definitions might yield different results if they are easier to make sense of, computationally. It is possible for cost functions $A$ and $B$ to measure the same concept, but $A$ might lead your method to better results than $B$. --- Now let see how different costs function can measure different concepts. In the context of information retrieval, as in google search (if we ignore ranking), we want the returned results to - have high precision, not return irrelevant information - have high recall, return as much relevant results as possible - Precision and Recall (Wikipedia) Note that if your algorithm returns everything, it will return every relevant result possible, and thus have high recall, but have very poor precision. On the other hand, if it returns only one element, the one that it is the most certain is relevant, it will have high precision but low recall. In order to judge such algorithms, the common cost function is the [$F$-score (Wikipedia)](https://en.wikipedia.org/wiki/F1_score). The common case is the $F_1$-score, which gives equal weight to precision and recall, but the general case it the $F_\beta$-score, and you can tweak $\beta$ to get - Higher recall, if you use $\beta > 1$ - Higher precision, if you use $\beta < 1$. In such scenario, choosing the cost function is choosing what trade-off your algorithm should do. Another example that is often brought up is the case of medical diagnosis, you can choose a cost function that punishes more false negatives or false positives depending on what is preferable: - More healthy people being classified as sick (But then, we might treat healthy people, which is costly and might hurt them if they are actually not sick) - More sick people being classified as healthy (But then, they might die without treatment) --- In conclusion, defining the cost function is defining the goal of your algorithm. The algorithm defines how to get there. --- Side note: Some cost functions have nice algorithm ways to get to their goals. For example, a nice way to the minimum of the [Hinge loss (Wikipedia)](https://en.wikipedia.org/wiki/Hinge_loss) exists, by solving the dual problem in [SVM (Wikipedia)](https://en.wikipedia.org/wiki/Support_vector_machine)
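As a small numerical companion to the cost functions discussed above, here is a hedged numpy sketch on made-up labels and predicted probabilities. Note one assumption: the log variant is written here in the standard negative log-likelihood (cross-entropy) form over the correct class, which is the version usually implemented, while the text writes it in terms of the wrong-class probability:
```
import numpy as np

y = np.array([1, 0, 1, 1, 0])              # invented true classes
p1 = np.array([0.9, 0.2, 0.6, 0.4, 0.1])   # invented predicted P(y = 1 | x)

# 1) Percentage of error (0/1 loss on thresholded predictions)
pred = (p1 >= 0.5).astype(int)
zero_one = np.mean(pred != y)

# 2) Average probability assigned to the wrong class (the smooth variant in the text)
prob_cost = np.mean(y * (1 - p1) + (1 - y) * p1)

# 3) Cross-entropy / negative log-likelihood
eps = 1e-12  # avoid log(0)
log_cost = -np.mean(y * np.log(p1 + eps) + (1 - y) * np.log(1 - p1 + eps))

print(zero_one, prob_cost, log_cost)
```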
Two different cost functions for neural networks, how they can give the same result?
I don't remember exactly what the book has mentioned, but I guess the difference between the two is due to having one or multiple features. I guess it is already mentioned in the book. They are the same. One is for multi-dimensional input, and the other is for one-dimensional. One sigma is iterating over the features and the other is iterating over the examples. You can put $k$ to one to achieve the simpler formula.
9854
1
9861
null
0
1932
In a binary classification, how can I use sklearn.naive_bayes python module to predict the class of inputs with 5 categorical variables (not binary)?
sklearn.naive_bayes VS categorical variables
CC BY-SA 3.0
null
2016-01-19T15:14:38.403
2016-01-19T21:48:19.997
null
null
3433
[ "machine-learning", "data-mining", "python", "scikit-learn" ]
[One-hot encode](https://en.wikipedia.org/wiki/One-hot) the categorical variables and use Bernoulli naive Bayes. One-hot encoding is the usual trick for representing categorical variables.
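A minimal sketch of that recipe in scikit-learn might look like this (the two categorical columns and their values are invented placeholders for the five categorical variables in the question):
```
import pandas as pd
from sklearn.naive_bayes import BernoulliNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

# Invented toy data with categorical features and a binary target
X = pd.DataFrame({
    "colour": ["red", "blue", "red", "green", "blue", "green"],
    "size":   ["s",   "m",    "l",   "s",     "m",    "l"],
})
y = [0, 1, 0, 1, 1, 0]

model = make_pipeline(
    OneHotEncoder(handle_unknown="ignore"),
    BernoulliNB(),
)
model.fit(X, y)
print(model.predict(pd.DataFrame({"colour": ["red"], "size": ["m"]})))
```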
Naive Bayes for Categorical Features (Non Binary)
In a recent scikit-learn release (v0.22.1), the developers have added Categorical Naive Bayes to their list of Naive Bayes implementations: [https://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.CategoricalNB.html](https://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.CategoricalNB.html) You could also use my implementation of categorical and/or Gaussian Naive Bayes: [https://github.com/remykarem/mixed-naive-bayes](https://github.com/remykarem/mixed-naive-bayes)
9860
1
9878
null
0
7878
I am trying to read in a .csv file containing some data. I only need to read in specific chunks of rows from the file, such as lines 15-20, lines 45-50, and so on. However, the file contains copyright text such as `©1990-2016 AAR,All rights reserved` in several places. Such lines seem to be producing the error `ValueError: No columns to parse from file`, because when I just copy lines without such information using `pd.read_csv()`, it works fine. My goal is to automate the process of downloading these files from the web and reading them into pandas to grab chunks of rows and then do some processing with them, so I can't just manually specify the windows of text lacking such characters. Here is what I tried: `pd.read_csv("filename.csv", encoding="utf-8", skiprows=14)` and `pd.read_csv("filename.csv", encoding="utf-16", skiprows=15)`, after looking at similar answers on Stack Exchange, but this didn't work. Can anyone give me some guidance on this?
How can I read in a .csv file with special characters in it in pandas?
CC BY-SA 3.0
null
2016-01-19T21:26:28.150
2016-01-21T07:42:08.003
2016-01-21T03:05:23.423
13413
3314
[ "python", "pandas", "scraping" ]
There is a `df.drop` command that can be used as follows to remove certain rows (in this case, 15 & 16): `df.drop(df.index[[15,16]])` If the rows you don't need are regular (e.g. you never need row 15) then this is a quick and dirty solution. If you want to drop arbitrary rows containing some value, a boolean filter does the trick: `df = df[df.column_name != "©1990-2016 AAR"]` (or equivalently `df = df.drop(df[df.column_name == "©1990-2016 AAR"].index)`).
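Because the junk lines here can break parsing before any dataframe even exists, another hedged option is to filter the raw text first and only then hand it to pandas. A minimal sketch (the file name is a placeholder):
```
import io
import pandas as pd

# Keep only lines that do not contain the copyright marker, then parse the rest
with open("filename.csv", encoding="utf-8") as f:
    clean = "".join(line for line in f if "©" not in line)

df = pd.read_csv(io.StringIO(clean))
print(df.head())
```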
removing special character from CSV file
``` from pandas import read_csv, concat, Series from ast import literal_eval df = read_csv('file.csv', header=None, names=['name','value']) split = df.value.apply(literal_eval).apply(Series).set_index(df.name) part1 = split.iloc[:, :3] part2 = split.iloc[:, 3:6] part3 = split.iloc[:, 6:] part2.columns = part3.columns = range(3) stacked = concat([part1, part2, part3]) ``` Note that this yields a different order than what you requested: ``` aad 1 4 77 bchfg 4 1 7 cad 1 2 7 mcfg 0 1 0 aad 4 0 0 bchfg 8 0 0 cad 6 0 0 mcfg 0 0 5 aad 0 0 3 bchfg 0 1 0 cad 0 0 3 mcfg 0 1 1 ```
9862
1
9868
null
3
3458
I was trying to build a 0-1 classifier using the xgboost R package. My question is: how are predictions made? For example, in random forests, trees "vote" for each option and the final prediction is based on the majority. As regards xgboost, the regression case is simple since the prediction of the whole model is equal to the sum of the predictions of the weak learners (boosted trees), but what about classification? Does the xgboost classifier work the same way as the random forest? (I don't think so, since it can return predictive probabilities, not class membership.)
Classification using xgboost - predictions
CC BY-SA 3.0
null
2016-01-19T22:19:43.050
2016-01-20T12:46:55.283
2016-01-20T12:30:32.187
11097
13384
[ "classification", "predictive-modeling", "xgboost" ]
The gradient boosting algorithm creates a set of decision trees. The prediction process used [here](https://gist.github.com/shanebutler/5456942) follows these steps: - for each tree, create a temporary "predicted variable" by applying the tree to the new data set. - use a formula to aggregate all these trees. Depending on the model: bernoulli: 1/(1 + exp(-(intercept + SUM(temporary pred)))) poisson, gamma: exp(intercept + SUM(temporary pred)) adaboost: 1/(1 + exp(-2*(intercept + SUM(temporary pred)))) The temporary "predicted variable" has no meaning on its own; it only becomes a probability after the aggregation formula above. The more trees you have, the smoother your prediction (as for each tree, only a finite set of values is spread across your observations). The R process is probably optimised, but this is enough to understand the concept. In the h2o implementation of gradient boosting, the output is a 0/1 flag. I think the [F1 score](https://en.wikipedia.org/wiki/F1_score) is used by default to convert the probability into a flag. I'll do some searching/testing to confirm that. In that same implementation, one of the default outputs for a binary outcome is a confusion matrix, which is a great way to assess your model (and open a whole new bunch of questions). The intercept is "the initial predicted value to which trees make adjustments". Basically, just an initial adjustment. In addition: [h2o.gbm documentation](http://h2o-release.s3.amazonaws.com/h2o/rel-tibshirani/8/docs-website/h2o-docs/booklets/GBM_Vignette.pdf)
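To illustrate the bernoulli-style aggregation numerically (hedged: the question concerns the R package, while this sketch uses xgboost's Python API to show the same relationship), xgboost can return the raw summed score via `output_margin=True`, and the logistic transform of it reproduces the probabilities the classifier reports:
```
import numpy as np
import xgboost as xgb
from sklearn.datasets import make_classification

# Toy binary classification data, purely for illustration
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
dtrain = xgb.DMatrix(X, label=y)

bst = xgb.train({"objective": "binary:logistic", "max_depth": 2}, dtrain, num_boost_round=20)

margin = bst.predict(dtrain, output_margin=True)  # raw sum of the trees' contributions
prob = bst.predict(dtrain)                        # what predict() normally returns

# The logistic transform of the margin reproduces the reported probabilities
print(np.allclose(prob, 1.0 / (1.0 + np.exp(-margin)), atol=1e-6))
```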
XGBoost Predictions
I do not know python API, but I guess with the following line you overwrite the trained object with a freshly created booster object (0.5 is a default prediction) ``` bst = xgb.Booster({'nthread': 4}) ```
9865
1
9867
null
13
29268
As discussed with Sean in [this Meta post](https://datascience.meta.stackexchange.com/q/2200/11097), I thought it would be nice to have a question which can help people who were confused like me, to know about the differences between text mining and NLP! So, what are the differences between [nlp](/questions/tagged/nlp) and [text-mining](/questions/tagged/text-mining)? --- I have included my understanding as an answer. If possible, please explain your answer with a brief example!
What is the difference between NLP and text mining?
CC BY-SA 3.0
null
2016-01-20T06:33:54.923
2020-03-10T10:11:24.800
2017-03-16T16:42:03.127
-1
11097
[ "nlp", "text-mining" ]
I agree with Sean's answer. [NLP](https://en.wikipedia.org/wiki/Natural_language_processing) and [text mining](https://en.wikipedia.org/wiki/Text_mining) are usually used for different goals. Also, there is indeed an overlap, and both definitions are vague. Other than the difference in goals, there is a difference in methods. Text mining techniques are usually shallow and do not consider the text structure. Usually, text mining will use bag-of-words, n-grams and possibly stemming on top of that. NLP methods usually involve the text structure. There you can find sentence splitting, part-of-speech tagging and parse tree construction. Also, NLP methods provide several techniques to capture context and meaning from text. A typical text mining method will consider the following sentences to indicate happiness, while typical NLP methods detect that they do not - I am not happy - I will be happy when it will rain - If it will rain, I'll be happy. - She asked whether I am happy - Are you happy?
NLP vs Keyword-Search. which one is the best?
To add to @noe's answer, you will face more issues when working with real data. Here are some examples: - You might find sticky (stuck-together) words. Ex: python,django,fastapi. - You might find an alternative word form. Ex: Python3.7. - Sentences might be longer in real life than in your training data. It will depend on how you prepare the sentences, the kind of data you're extracting skills from, and your resources (for instance, whether you have a GPU or only a CPU, how much RAM, etc.).
9880
1
10306
null
3
151
I am trying to analyze soccer's data set: ``` W_OVER_2_5 PREDICTED MATCH_DATE LEAGUE HOME AWAY MATCH_HOME MATCH_DRAW MATCH_AWAY MATCH_U2_50 MATCH_O2_50 0 0 1105135200 5 260 289 2.05 3.00 4.50 1.65 2.30 0 1 1105308000 16 715 700 2.50 3.30 3.05 1.80 2.14 1 1 1105308000 11 445 479 1.36 5.25 12.00 2.15 1.78 0 1 1105308000 11 453 474 3.00 3.35 2.62 1.75 2.20.... ``` Now, I selected 'the best estimator' - ``` LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True, intercept_scaling=1, max_iter=100, multi_class='ovr', n_jobs=1, penalty='l1', random_state=None, solver='liblinear', tol=0.0001, verbose=1, warm_start=False) ``` with best coefs - ``` 1. -2.40477246e-10 2. -5.57611571e-02 3. -1.32010761e-04 4. 1.51666398e-03 5. 7.54521399e-02 6. 6.38889247e-02 7. -2.25746953e-01 8. -3.79313902e-01 9. 3.70514297e-02 ``` Now, I have a question - how should I understand coefs in terms of real strategy? As example, ``` If `MATCH_HOME` is min among all [`MATCH_HOME`, `MATCH_DRAW`, `MATCH_WAY`] AND `MATCH_O2_50' = 1 THEN PREDICTED := 1 ELSE PREDICTED := 0 ``` PS. I would very appreciated for any science papers about that thematic :)
How should I convert Logistic Regression's coefs into action strategy?
CC BY-SA 3.0
null
2016-01-21T11:02:06.503
2016-02-18T14:25:11.430
2016-01-21T11:11:58.667
11097
14684
[ "machine-learning", "classification", "python", "predictive-modeling", "scikit-learn" ]
To understand the coefficients you just need to understand how the logistic regression model that you fit uses the coefficients to make predictions. No, it does not work like a decision tree. It's a linear model. Really, predictions are based on the dot product of the coefficients and the values from some new instance to predict. This is just the sum of their products. The higher the dot product, the more positive the prediction. So you can understand it as computing something like `-2.40477246e-10 * MATCH_HOME + -5.57611571e-02 * MATCH_AWAY + ...` (I don't know what coefficients go with what feature in your model.) That generally means that inputs with bigger coefficients matter more, and inputs with positive coefficients correlate positively with a positive prediction. That's most of what you can interpret here. The first of those conclusions is only really valid if inputs have been normalized to be on the same scale though. I'm not clear that you've done that here. You should also in general use L1 regularization if you intend to interpret the coefficients this way.
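To see the dot-product interpretation directly in code, here is a hedged sketch with scikit-learn on invented toy data: the manually computed score matches what `decision_function` returns, and the logistic transform of it gives `predict_proba`.
```
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy data standing in for the 9 match features
X, y = make_classification(n_samples=300, n_features=9, random_state=0)

clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0).fit(X, y)

x_new = X[:1]                                       # one "new match"
score = np.dot(x_new, clf.coef_.ravel()) + clf.intercept_

print(np.allclose(score, clf.decision_function(x_new)))          # True
print(1.0 / (1.0 + np.exp(-score)), clf.predict_proba(x_new)[:, 1])
```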
Re: Logistic Regression
This kind of problem is called a data imbalance issue. It is very common in the financial industry, e.g. banks and insurance companies (for fraud detection), and in health care (e.g. cancer cell detection). To overcome such issues, we use techniques like over-sampling or under-sampling. Over-sampling tries to increase the minority records by duplicating them to balance the data. Under-sampling tries to decrease the majority records by removing some records which are not significant, again to balance the data. There are different algorithms for implementing these. You can go through [Link-1](https://datascience.stackexchange.com/questions/24610/smote-and-multi-class-oversampling/24664#24664) and [Link-2](https://datascience.stackexchange.com/questions/24905/best-methods-to-solve-class-imbalance-problem-and-why/24912#24912) for an explanation and implementation. Let me know if you need anything else.
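As a concrete illustration of the over-/under-sampling idea (not part of the original answer; uses the `imbalanced-learn` package on synthetic data):
```
from collections import Counter

from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler

# Synthetic imbalanced data: roughly 5% positives.
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
print("original:", Counter(y))

# Over-sampling: synthesize new minority examples.
X_over, y_over = SMOTE(random_state=0).fit_resample(X, y)
print("after SMOTE:", Counter(y_over))

# Under-sampling: drop majority examples instead.
X_under, y_under = RandomUnderSampler(random_state=0).fit_resample(X, y)
print("after under-sampling:", Counter(y_under))
```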
9886
1
9902
null
6
1349
I have 800 responses to an open-ended survey question. Each response is categorized into 3 categories based on a list of 70 categories. These categories are things like "stronger leadership", "better customer service", "programs", etc. My question is, can I use this as a training data set in order to develop a model that I can use in the future as we get more survey responses? We would like to be able to tag, label, or classify each survey response into (up to) 3 of the 70 categories. Is this even possible? Or do I have to use Naive Bayes with simple word features? Can you please guide me to tutorials, examples, etc.? I am using R for this exercise.
Classifying survey response text with SVM
CC BY-SA 3.0
null
2016-01-21T16:08:59.477
2016-01-22T08:36:18.620
2016-01-21T16:36:58.603
15609
15609
[ "machine-learning", "r", "text-mining", "svm" ]
Assigning ~3 of 70 categories means you would be performing [multi-label classification](https://en.wikipedia.org/wiki/Multi-label_classification). In the end, it doesn't make much difference if you use Naive Bayes or SVM; they are both families of algorithms that translate provided independent variables (your feature space) into hopefully correct dependent variables (target classes). The question is how to construct a good feature space. The state of the art approaches in text mining are (or were) first tokenizing words, stripping punctuation and [stop words](https://en.wikipedia.org/wiki/Stop_words), [stemming](https://stackoverflow.com/questions/16069406/text-mining-with-the-tm-package-word-stemming) or [lemmatizing](https://stackoverflow.com/questions/28214148/how-to-perform-lemmatization-in-r) them, creating a [bag-of-words model](https://en.wikipedia.org/wiki/Bag-of-words_model) of those words' [relative frequencies](https://en.wikipedia.org/wiki/Tf%E2%80%93idf) and perhaps the frequencies of those words' [bigrams or trigrams](https://en.wikipedia.org/wiki/N-gram). Then run your classification learners on that. Assume the resulting feature space table might get really wide (lots of words and combinations of words), so you might want to consider some form of [dimensionality reduction](https://stats.stackexchange.com/questions/3048/how-to-do-dimensionality-reduction-in-r). Of course, you will have to repeat the same filtering process with exact same parameters for each new survey you want to classify. Here's another good batch of answers on [multi-label text classification](https://stats.stackexchange.com/questions/12907/off-the-shelf-tool-for-multi-label-classification).
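The question is tagged R, but as a language-agnostic illustration of the pipeline described above (tokenize, weight with TF-IDF, train one binary classifier per category), here is a minimal Python/scikit-learn sketch with made-up responses and categories:
```
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.svm import LinearSVC

# Hypothetical survey responses, each tagged with up to 3 of the 70 categories.
responses = [
    "we need stronger leadership and clearer goals",
    "better customer service and faster responses",
    "more training programs and stronger leadership",
]
labels = [["stronger leadership"],
          ["better customer service"],
          ["programs", "stronger leadership"]]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)                 # one indicator column per category

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(responses)       # bag-of-words with TF-IDF weights

# One binary SVM per category ("binary relevance" approach to multi-label).
clf = OneVsRestClassifier(LinearSVC()).fit(X, Y)

new = vectorizer.transform(["leadership could be much stronger"])
print(mlb.inverse_transform(clf.predict(new)))
```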
Decision tree and SVM for text classification - theory
Similarly to NB or kNN, the DT and SVM algorithms work with the features which are provided as input. So whenever ML is applied to text it's important to understand how the unstructured text is transformed into structured data, i.e. how text instances are represented with features. There are many options, but traditionally a document is represented as as a vector over the full vocabulary. A very simple version of this is a boolean vector: a cell $v_i$ contains 1 if the word $w_i$ occurs in the document and 0 otherwise. The DT training will generate the tree the usual way, so in this case the conditions at the nodes will be `v_i == 1`, representing whether the word $w_i$ is present or not. If the values in the vector are say TFIDF weights, the conditions might look like `v_i > 3.5` for instance. Similarly for SVM: the algorithm will find the optimal way to separate the instances in a multi-dimensional space: each dimension actually represents a single word, but the algorithm itself doesn't know (and doesn't care) about that.
9889
1
9897
null
3
108
I'm new to ML. I'm taking over a classification project which involves analyzing data for customers who returned a product, and I need to determine the return reason (~10 categories). This data was captured at the counter and could include words like: LGTM (looks good to me), NFF (no fault found), etc. I have a training set of 1000 records, and when using the Google Prediction API I get a "classificationAccuracy" value of "0.82" and 10 labels. Questions: 1. Any recommended API to analyze this type of data? 2. What is a good "classificationAccuracy" value? Thank you
Analyzing customer response
CC BY-SA 3.0
null
2016-01-21T17:59:08.770
2017-03-08T22:31:39.140
2017-03-08T22:31:39.140
15613
15613
[ "machine-learning", "data-mining", "classification", "google-prediction-api" ]
[http://scikit-learn.org/stable/tutorial/text_analytics/working_with_text_data.html](http://scikit-learn.org/stable/tutorial/text_analytics/working_with_text_data.html) You can use the above tutorial to get acquainted with text classification. Afterwards it should be easier to formulate nontrivial questions and move even further.
Predicting customers purchase
I recommend using an LSTM RNN or a CNN to predict the most popular product of the ongoing month based on the past purchasing history. I have created a working product ranking model for the online shopping site [lands'end](https://landsend.com) in the United States. In order to achieve the goal of this solution, you need to follow several steps. - Collect the dataset: I think you have already gathered the dataset for this part. - Process the dataset for supervised learning: clean up the dataset (remove rows that contain NaN columns), encode the products (map the categories and products into a row matrix with elements of 1 and 0), normalize (rescale all real numbers into [0, 1]) and split the dataset into training, validation and test data. - Build a deep learning model like an LSTM RNN: select the deep learning framework (I recommend TensorFlow or Keras) and set the dimensions of the model, i.e. the number of layers, the number of neurons per layer, the optimizer and the metrics. - Train and test the model. - Get the predicted ranking of the categories by month. Protip: the most important thing for getting a better result is which column is set as the output for ranking. You should sum up the total purchase count per product or per category by month or week.
9898
1
9901
null
2
202
When I run k-means on my dataset, I notice that some centroids become stale, in that they are no longer the closest centroid to any point after some iterations. Right now I am skipping these stale centroids in the next iteration because I think they no longer represent any useful subset of the data; however, I wanted to know if there are other reasonable ways to deal with these centroids.
What to do with stale centroids in K-means
CC BY-SA 3.0
null
2016-01-22T04:47:28.773
2016-01-22T08:12:36.267
null
null
15631
[ "clustering", "k-means", "unsupervised-learning" ]
k-means finds only a local optimum. Thus a wrong number of clusters, or simply some random state of equilibrium in the attracting forces, could lead to empty clusters. Technically k-means does not provide a procedure for that, but you can enrich the algorithm with no problem. There are two approaches which I found useful: - remove the stale cluster, choose a random instance from your data set and create a new cluster with its centroid at the chosen random point - remove the stale cluster, choose the point farthest from any other centroid, and create a new cluster with its centroid at that point Both procedures can lead to indefinite running time, but if the number of these adjustments is finite (and usually it is) then it will converge with no problem. To guard yourself from infinite running time you can set an upper bound on the number of adjustments. The procedure itself is not practical if you have a huge data set and a large number of clusters; the running time can become prohibitive. Another way to decrease the chances of that happening is to use a better initialization procedure, like k-means++. In fact the second suggestion is an idea from k-means++. There are no guarantees, however. Finally, a note regarding implementation. If you can't change the code of the algorithm to make those improvements on the fly, the only option which comes to my mind is to start a new clustering procedure where you initialize the centroid positions of the non-stale clusters and apply one of the procedures above for the stale ones.
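A from-scratch sketch of the first suggestion (re-seeding an empty cluster with a random data point); it is purely illustrative, not production code:
```
import numpy as np

def kmeans_with_reseeding(X, k, n_iter=100, rng=np.random.default_rng(0)):
    """Plain Lloyd iterations; empty ("stale") clusters get a random point as new centroid."""
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each point to its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members) == 0:
                # Stale cluster: re-seed with a random data point
                # (alternative: the point farthest from all other centroids).
                centroids[j] = X[rng.integers(len(X))]
            else:
                centroids[j] = members.mean(axis=0)
    return centroids, labels

X = np.random.default_rng(1).normal(size=(300, 2))
centroids, labels = kmeans_with_reseeding(X, k=5)
print(np.bincount(labels, minlength=5))
```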
K-means: What are some good ways to choose an efficient set of initial centroids?
An approach that yields more consistent results is [K-means++](http://en.wikipedia.org/wiki/K-means%2B%2B). This approach acknowledges that there is probably a better choice of initial centroid locations than simple random assignment. Specifically, K-means tends to perform better when centroids are seeded in such a way that doesn't clump them together in space. In short, the method is as follows: - Choose one of your data points at random as an initial centroid. - Calculate $D(x)$, the distance between your initial centroid and all other data points, $x$. - Choose your next centroid from the remaining datapoints with probability proportional to $D(x)^2$ - Repeat until all centroids have been assigned. Note: $D(x)$ should be updated as more centroids are added. It should be set to be the distance between a data point and the nearest centroid. You may also be interested to read [this paper](http://ilpubs.stanford.edu:8090/778/1/2006-13.pdf) that proposes the method and describes its overall expected performance.
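A small from-scratch sketch of the seeding procedure above (purely illustrative; scikit-learn's `KMeans` already does this with `init='k-means++'`):
```
import numpy as np

def kmeanspp_init(X, k, rng=np.random.default_rng(0)):
    """Pick k initial centroids with probability proportional to D(x)^2."""
    centroids = [X[rng.integers(len(X))]]            # step 1: first centroid at random
    for _ in range(k - 1):
        # D(x): distance from each point to its nearest already-chosen centroid.
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centroids], axis=0)
        probs = d2 / d2.sum()                        # step 3: probability proportional to D(x)^2
        centroids.append(X[rng.choice(len(X), p=probs)])
    return np.array(centroids)

X = np.random.default_rng(1).normal(size=(500, 2))
print(kmeanspp_init(X, k=4))
```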
9899
1
10259
null
7
271
Here my goals are: - Find out whether Product 5 (the new product) is really influencing the other products' sales (Products 1 to 4) or not. - If it is influencing other product sales, by how much? I'm new to R and have tried several related posts but didn't find an exact answer to my question. I love R and am learning something new every day, which helps us take data-driven decisions. My sample dataset is like below (week and Product 1 to Product 5 sales per week). Here my new product is Product 5, launched in week 5. ``` Week Product-1 Product-2 Product-3 Product-4 Product-5 1 2 4 5 5 2 4 4 6 4 3 4 4 6 5 4 4 4 6 6 4 5 4 6 5 3 5 6 2 7 6 4 3 7 3 8 7 5 6 8 2 9 9 3 6 ``` ## Here my questions are - What is the best process or model to show the influence of Product 5 (statistically)? - Do I need to run cointegration tests before I run correlations? For example, some of these products may never be correlated with Product 5 (example: cockle growth vs. growth in electricity demand). - How do I distinguish correlation vs. causation in this mix? - Since my new product launched in week 5, where should I start my correlations? From week 5 or from earlier weeks? - Do I need to test for stationarity first and make the data stationary?
Time series data: how do I measure the influence of new product sales on existing product sales (statistically)?
CC BY-SA 3.0
null
2016-01-22T05:10:26.040
2016-02-16T22:15:52.360
2016-01-24T21:26:45.070
15527
9663
[ "machine-learning", "r", "time-series" ]
You could build an ARIMAX model. This would permit including autoregressive (AR) terms as well as the sales of product 5 as an exogenous input (X). Denoting the sales of product $i$ at time $t$ by $s^i_t$, a potential model is $s^1_t=\alpha_1 s^1_{t-1} + \alpha_2 s^1_{t-2} + \ldots + \beta_0 s^5_t + \beta_1 s^5_{t-1} + \ldots $ Note that you may need to make the series stationary first, but see more on that below. You could estimate this model with the [seasonal](https://cran.r-project.org/web/packages/seasonal/) R package that relies on the [X-13ARIMA-SEATS software developed by the US Census Bureau](https://www.census.gov/srd/www/x13as/). I would recommend ensuring that your time series are all stationary; see for example [this post](https://stats.stackexchange.com/questions/27332/how-to-know-if-a-time-series-is-stationary-or-non-stationary) before you use X13. I would also run cointegration tests. For more explanation see this [excellent post](http://www.econ.uiuc.edu/~econ508/R/e-ta8_R.html). Since you only have data on product 5 from week 5 onward, I would start modeling in week 5, but you could include autoregressive (AR) terms related to the sales of product 1 prior to week 5.
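If you prefer Python, a rough sketch of the same idea with `statsmodels`' `SARIMAX` (made-up weekly series, placeholder orders; product 5 enters as the exogenous regressor):
```
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical weekly sales, starting at the week product 5 launched.
rng = np.random.default_rng(0)
product5 = pd.Series(rng.poisson(5, size=30).astype(float), name="product5")
product1 = pd.Series(10 + 0.5 * product5 + rng.normal(0, 1, size=30), name="product1")

# ARIMAX: AR/MA terms for product 1, product 5 as exogenous input.
model = SARIMAX(product1, exog=product5, order=(1, 0, 0))
result = model.fit(disp=False)
print(result.summary())
print(result.params)   # the exog coefficient estimates product 5's influence
```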
Analyzing time series association
Here are some approaches I would try: - Shift humidity by n days, choosing the shift that minimizes the Pearson correlation coefficient. - Extract frequencies using the Fourier transform, etc. - Do power spectral density estimation using Welch's method, etc.
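A rough sketch of the first and third suggestions in Python (synthetic series; pandas for the lagged correlation, `scipy.signal.welch` for the spectral density):
```
import numpy as np
import pandas as pd
from scipy.signal import welch

rng = np.random.default_rng(0)
a = pd.Series(rng.normal(size=365))
b = a.shift(7).fillna(0) + rng.normal(scale=0.1, size=365)   # b lags a by ~7 days

# Lagged correlation: try every shift and keep the strongest one.
corrs = {lag: a.corr(b.shift(lag)) for lag in range(-30, 31)}
best_lag = max(corrs, key=lambda k: abs(corrs[k]))
print("best lag:", best_lag, "corr:", corrs[best_lag])

# Power spectral density via Welch's method.
freqs, psd = welch(b.to_numpy(), fs=1.0, nperseg=128)
print("dominant frequency:", freqs[np.argmax(psd)])
```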
9918
1
9922
null
5
3138
I am computing a co-occurrence matrix for a fixed window size in Python, using scipy's lil_matrix for storing the counts, and computing the counts by sliding the context window over each word and then counting within the window. The code is taking too much time even for a relatively small corpus (a 100 MB Wikipedia dump). The code is: ``` def gen_coocur(self, window_size=5): ''' Generates the co-occurrence matrix ''' # self.vocab is precomputed. coocur_matrix = lil_matrix((len(self.vocab)+1, len(self.vocab)+1), dtype=np.float64) for page in self.wiki_extract.get_page(): # word_tokenize is the tokenizer from nltk doc_tokens = word_tokenize(page.decode('utf-8')) N = len(doc_tokens) for token in self.vocab: for i in xrange(0, window_size): if (token in doc_tokens[0:i] or token in doc_tokens[i:(i+window_size+1)]) and token != doc_tokens[i]: coocur_matrix[self.vocab[doc_tokens[i]], self.vocab[token]] += 1 for i in xrange(window_size, (N-window_size)): if token in doc_tokens[(i-window_size):(i+window_size+1)] and token != doc_tokens[i]: coocur_matrix[self.vocab[doc_tokens[i]], self.vocab[token]] += 1 for i in xrange(N-window_size, N): if (token in doc_tokens[i:N] or token in doc_tokens[i-window_size:N]) and token != doc_tokens[i]: coocur_matrix[self.vocab[doc_tokens[i]], self.vocab[token]] += 1 ``` self.vocab is a dictionary which maps words -> wordId. How can I optimize this code to run faster?
Optimizing co-occurrence matrix computation
CC BY-SA 3.0
null
2016-01-23T13:04:17.457
2016-08-20T22:33:35.367
2016-01-23T13:22:19.480
13518
13518
[ "python", "text-mining" ]
From easiest to hardest: - Try running it in pypy or numba - Find a faster implementation. Unfortunately I can not recommend one. - Parallelize the loop over the documents. Not so hard since your vocabulary is precomputed. (Even if it weren't you could get away with it using the hashing trick.) Combine this with the first bullet. - Rewrite the inner loop in Cython. - Rewrite the whole thing in a faster language like C++ or Scala.
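For the "faster implementation" bullet, one option is to restructure the counting so it is linear in the document length: loop over each position and only its window, accumulate pairs in a `Counter`, and build the sparse matrix at the end. A rough sketch (assuming the same word-to-id `vocab` dictionary):
```
from collections import Counter
from scipy.sparse import lil_matrix

def gen_cooccur(docs, vocab, window_size=5):
    """docs: iterable of token lists; vocab: dict mapping word -> id."""
    counts = Counter()
    for tokens in docs:
        ids = [vocab[t] for t in tokens if t in vocab]
        for i, center in enumerate(ids):
            # Only look at tokens inside the window around position i.
            for j in range(max(0, i - window_size), min(len(ids), i + window_size + 1)):
                if i != j:
                    counts[(center, ids[j])] += 1
    matrix = lil_matrix((len(vocab) + 1, len(vocab) + 1))
    for (r, c), v in counts.items():
        matrix[r, c] = v
    return matrix

vocab = {"the": 0, "cat": 1, "sat": 2, "mat": 3}
print(gen_cooccur([["the", "cat", "sat", "the", "mat"]], vocab).toarray())
```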
Clustering of sparse matrix with many co-variates
You need to apply PCA and reduce your data to lower dimensions; then applying a classic clustering technique (e.g. k-means or DBSCAN) works, depending on how your samples are distributed. I strongly recommend you visualize the data in 2 or 3 dimensions and do a brief visual inspection. It gives you an insight into what is going on there. However, the final number of dimensions you get out of PCA might be chosen to be more than 2 or 3 (it usually is). ## Steps - Normalize features if needed - Apply PCA and take the number of PCs which explain 85% of the variance - Optional: Visualize the embedded data in 2 or 3 dimensions to get a feeling for the distribution of the samples (it does not give anything more than an intuition, and even that intuition cannot be validated) - Apply a clustering algorithm on the result of (2)
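A compact sketch of steps 1, 2 and 4 with scikit-learn (synthetic data; the 85% variance threshold is passed directly to `PCA`):
```
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X, _ = make_blobs(n_samples=500, n_features=20, centers=4, random_state=0)

X_norm = StandardScaler().fit_transform(X)        # 1. normalize features
pca = PCA(n_components=0.85)                      # 2. keep PCs explaining 85% of variance
X_reduced = pca.fit_transform(X_norm)
print("kept", pca.n_components_, "components")

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_reduced)  # 4. cluster
print(labels[:20])
```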
9930
1
9932
null
8
7917
A recent paper by He et al. ([Deep Residual Learning for Image Recognition](http://arxiv.org/pdf/1512.03385v1.pdf), Microsoft Research, 2015) reports networks with over 1000 layers (not neurons!). I am trying to understand the paper, but I stumble over the word "residual". Could somebody please give me an explanation / definition of what residual means in this case? ## Examples > We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. [...] Instead of hoping each few stacked layers directly fit a desired underlying mapping, we explicitly let these layers fit a residual mapping. Formally, denoting the desired underlying mapping as $\mathcal{H}(x)$, we let the stacked nonlinear layers fit another mapping of $\mathcal{F}(x) := \mathcal{H}(x)−x$. The original mapping is recast into $\mathcal{F}(x)+x$. We hypothesize that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping
What is a "residual mapping"?
CC BY-SA 3.0
null
2016-01-24T16:49:11.727
2016-01-24T18:58:14.580
null
null
8820
[ "machine-learning", "neural-network" ]
It's $F(x)$; the difference between the mapping $H(x)$ and its input $x$. It's a [common term in mathematics](https://en.wikipedia.org/wiki/Residual_%28numerical_analysis%29) ([DE](https://de.wikipedia.org/wiki/Residuum_%28Numerische_Mathematik%29)).
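In code the idea is just a skip connection: the stacked layers learn $F(x)$ and the block outputs $F(x)+x$. A minimal Keras-style sketch (illustrative only, not the exact architecture from the paper):
```
from tensorflow import keras
from tensorflow.keras import layers

def residual_block(x, units):
    """Two dense layers learn the residual F(x); the block returns F(x) + x."""
    f = layers.Dense(units, activation="relu")(x)
    f = layers.Dense(units)(f)                               # F(x)
    return layers.Activation("relu")(layers.Add()([f, x]))   # H(x) = F(x) + x

inputs = keras.Input(shape=(64,))
outputs = residual_block(inputs, 64)
model = keras.Model(inputs, outputs)
model.summary()
```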
Is the graphic of deep residual networks wrong?
(Per the diagram), $F(x)$ here is simply the entire two-layer non-linear chain that is operating on the input $x$. Then, the final output is simply $F(x) + x = H(x)$. That's it! The thing that may be confusing you is that $F(.)$. In this case, they do not mean for $F$ to simply encompass one operation. Instead, it encompasses any set of operations processing $x$, up until you add $x$ back. Hope that helps! PS: It is also common to see this type of nomenclature in a lot of DNN literature, whereby one refers to an entire deep non-linear chain as $D(x)$. For example in Generative Adversarial Networks, (GAN)s, $D(x)$ refers to the entire deep net devoted to the discrimination process, while $G(x)$ refers to the entire net devoted to the noise shaping. In both cases, they are composed of entire functions/nets, and do not signify simply one operation.
9943
1
10206
null
2
9402
My ultimate goal is to use Jupyter together with Python for data analysis using Spark. The current hurdle I face is loading the external `spark_csv` library. I am using Mac OS and Anaconda as the Python distribution. In particular, the following: ``` from pyspark import SparkContext from pyspark.sql import SQLContext sc = SparkContext('local', 'pyspark') sqlContext = SQLContext(sc) df = sqlContext.read.format('com.databricks.spark.csv').options(header='true').load('file.csv') df.show() ``` when invoked from Jupyter yields: ``` Py4JJavaError: An error occurred while calling o22.load. : java.lang.ClassNotFoundException: Failed to find data source: com.databricks.spark.csv. Please find packages at http://spark-packages.org ``` Here are more details: ## Setting Spark together with Jupyter I managed to set up Spark/PySpark in Jupyter/IPython (using Python 3.x). ## System initial setting On my OS X I installed Python using Anaconda. The default version of Python I have currently installed is 3.4.4 (Anaconda 2.4.0). Note that I have also installed a 2.x version of Python using `conda create -n python2 python=2.7`. ## Installing Spark This is actually the simplest step; download the latest binaries into `~/Applications` or some other directory of your choice. Next, untar the archive `tar -xzf spark-X.Y.Z-bin-hadoopX.Y.tgz`. For easy access to Spark create a symbolic link to Spark: ``` ln -s ~/Applications/spark-X.Y.Z-bin-hadoopX.Y ~/Applications/spark ``` Lastly, add the Spark symbolic link to the PATH: ``` export SPARK_HOME=~/Applications/spark export PATH=$SPARK_HOME/bin:$PATH ``` You can now run Spark/PySpark locally: simply invoke `spark-shell` or `pyspark`. ## Setting Jupyter In order to use Spark from within a Jupyter notebook, prepend the following to `PYTHONPATH`: ``` export PYTHONPATH=$SPARK_HOME/python/lib/py4j-0.8.2.1-src.zip:$SPARK_HOME/python/:$PYTHONPATH ``` Further details can be found [here](https://github.com/drorata/Jupyter-Spark-Python-setup).
Use spark_csv inside Jupyter and using Python
CC BY-SA 3.0
null
2016-01-25T13:57:24.367
2016-05-10T13:57:56.713
2016-05-10T13:57:56.713
21
3591
[ "python", "apache-spark", "pyspark", "jupyter" ]
Assuming the rest of your configuration is correct, all you have to do is to make the `spark-csv` jar available to your program. There are a few ways you can achieve this: - manually download the required jars, including spark-csv and a csv parser (for example org.apache.commons.commons-csv), and put them somewhere on the CLASSPATH. - use the --packages option (with the Scala version which has been used to build Spark; pre-built versions use 2.10), either via the PYSPARK_SUBMIT_ARGS environment variable: ``` export PACKAGES="com.databricks:spark-csv_2.11:1.3.0" export PYSPARK_SUBMIT_ARGS="--packages ${PACKAGES} pyspark-shell" ``` or by adding the Gradle-style string to spark.jars.packages in conf/spark-defaults.conf: ``` spark.jars.packages com.databricks:spark-csv_2.11:1.3.0 ```
AttributeError: type object 'DataFrame' has no attribute 'read_csv'
`read_csv()` is not available on DataFrame. to read csvs using pandas: ``` import pandas as pd data = pd.read_csv("file_name") ``` If you check `type(data)`, it will be pandas DataFrame.
9949
1
9952
null
2
955
I have a big .CSV database of 25k users with various attributes of each user's latest activity and events during the past 6 weeks. This is an example of the data: ``` username (B) (C) (D) (E) nicole 524 329 203 787 asteria 197 186 286 120 ``` I want to create a common behavior pattern based on the values of the attributes of each user, run an algorithm to find a common pattern that defines this group's behavior, and find out whether there is any correlation in the dimensions' values and which dimensions define this list of users. I am fully aware that correlation does not necessarily equal causation. Now I see several challenges in front of me and would greatly appreciate some input from others, or some good resources to find further information. What is the model for this problem? What kind of algorithm is best to deal with this situation? What tools do you recommend using for the project? Any ideas would be great.
What model should I use to find a common pattern for a specific user group based on the other dimensions?
CC BY-SA 3.0
null
2016-01-25T18:41:09.973
2016-02-09T11:21:50.280
2016-02-09T11:21:50.280
11097
15695
[ "machine-learning", "r", "data-mining", "bigdata", "predictive-modeling" ]
The most common approach is to create handmade business rules, based on univariate and multivariate analysis of the variables. Basically, do some frequency counts and see if you could isolate some subset of your data just by looking at one or two variables. Then, when you have your labels, create a linear or similar model with this new variable as output, for example a [linear discriminant analysis](https://en.wikipedia.org/wiki/Linear_discriminant_analysis). The analysis will supply you with new insights on your groups. If you want to rely on an algorithm, two solutions: As you don't seem to have a lot of variables, an unsupervised segmentation could do the job. For example, a k-nearest neighbor or a decision tree are basic and good approaches. With a few more variables, what I like to do is a [principal component analysis](https://en.wikipedia.org/wiki/Principal_component_analysis) and then an unsupervised classification to define your groups on the result of the PCA. Note that PCA + handmade rules based on the analysis of your PCA result may be enough. Each time, in the end, run a discriminant analysis and a profile of your groups to assess the quality of your results.
Clustering of users in a dataset
If your objective is to find clusters of users, then you are interested in finding groups of "similar" reviewers. Therefore you should: - Retain information which relates to the users in a meaningful way - e.g. votes_for_user. - Discard information which has no meaningful relationship to a user - e.g. user_id (unless perhaps it contains some information such as time / order). - Be mindful of fields which may contain implicit relationships involving a user - e.g. vote may be a result of the interaction between user and ISBN.
9950
1
9967
null
22
14248
In NLP, there is the concept of `Gazetteer` which can be quite useful for creating annotations. As far as I understand: > A gazetteer consists of a set of lists containing names of entities such as cities, organisations, days of the week, etc. These lists are used to find occurrences of these names in text, e.g. for the task of named entity recognition. So it is essentially a lookup. Isn't this kind of a cheat? If we use a `Gazetteer` for detecting named entities, then there is not much `Natural Language Processing` going on. Ideally, I would want to detect named entities using `NLP` techniques. Otherwise, how is it any better than a regex pattern matcher?
NLP - Is Gazetteer a cheat?
CC BY-SA 4.0
null
2016-01-25T18:41:24.083
2019-06-09T16:46:31.853
2019-06-09T16:46:31.853
29169
15735
[ "nlp", "named-entity-recognition" ]
A gazetteer, or any other intentionally fixed-size feature, seems a very popular approach in academic papers, when you have a problem of finite size, for example NER on a fixed corpus, or POS tagging, or anything else. I would not consider it cheating unless the only feature you use is gazetteer matching. However, when you train any kind of NLP model which relies on a dictionary during training, you may get real-world performance way lower than your initial testing would report, unless you can include all objects of interest in the gazetteer (and then why do you need the model?). The trained model will rely on that feature at some point and, in cases where the other features are too weak or not descriptive, new objects of interest will not be recognized. If you do use a gazetteer in your models, you should make sure that that feature has a counter feature to let the model balance itself, so that a simple dictionary match won't be the only feature of the positive class (and, more importantly, the gazetteer should match not only positive examples but also negative ones). For example, assume you have a full set of infinite variations of all person names, which makes general person NER irrelevant, but now you try to decide whether the object mentioned in the text is capable of singing. You will rely on a feature of inclusion in your person gazetteer, which will give you a lot of false positives; then, you will add a verb-centric feature of "is subject of verb sing", and that would probably give you false positives from all kinds of objects like birds, your tummy when you're hungry, and a drunk fellow who thinks he can sing (but let's be honest, he cannot) -- but that verb-centric feature will balance with your person gazetteer to assign the positive class 'Singer' to persons and not to animals or other objects. Though it doesn't solve the case of the drunk performer.
Please let me know if I am on the right track to being an NLP Expert
You are definitely doing a great job of getting your basics down. I really like Patrick Winston's [AI Course](https://www.youtube.com/watch?v=TjZBTDzGeGg), he does a great job of conceptualizing the math behind these problems, which is the only place I think Ng lacks. Find a ton of papers you think are interesting, and read them top to bottom. Here is one from spotify on [NLP](http://benanne.github.io/2014/08/05/spotify-cnns.html)(super awesome) Most importantly, IMO, the thing you need to start doing is applying the stuff you learn, to problems you think are interesting. Do a few run throughs of other stuff on github and then start doing your own! Good luck, hope that was helpful:)
9958
1
9960
null
2
131
[http://pastebin.com/K0eq8cyZ](http://pastebin.com/K0eq8cyZ) I went through each season of "It's Always Sunny in Philadelphia" and determined the character groupings (D=Dennis, F=Frank, C=Charlie, M=Mac, B=Sweet Dee) for each episode. I also starred "winners" for some episodes. How best could I organize this data, in what type of database, and what data science tools would extract the most information out of it? I was thinking of making an SQL table like so: ``` (1) (2) (3) (4) (5) Episode# | Dennis | Frank | Charlie | Mac | Sweet Dee 008 | 5 | 3,4 | 2,4 | 2,3 | 1 010 | 5 | 3,4,6| 2,4,6 |2,3,6| 1 ``` ...where all the values are arrays of ints. 6 represents that the character won the episode and each number represents one of the 5 characters. Thoughts?
What would be the best way to structure and mine this set of data?
CC BY-SA 3.0
null
2016-01-25T23:45:28.583
2016-01-26T06:09:04.190
null
null
13165
[ "machine-learning", "clustering", "dataset", "visualization", "data-cleaning" ]
> How best could I organize this data, in what type of database? A simple relational database should do, but you could also use a "fancy" graph database if you want. One table for the users, and one for the "interactions". Each interaction would have foreign key columns for the two participants, labeled winner and loser, and the number of the episode the interaction it occurred. > Also any ideas on the best way to visually represent this data? A graphical representation for [social network analysis](https://en.wikipedia.org/wiki/Social_network_analysis) suggests itself. Here are [some](https://www.researchgate.net/publication/225765648_Social_network_analysis_in_a_movie_using_character-net) [papers](http://bi.snu.ac.kr/Publications/Conferences/International/ASONAM2015_CJNan.pdf) and a [subreddit](https://www.reddit.com/r/sna/) for inspiration. In your case, there is a concept of competition with clear winners/losers, so you could make your graph directed. Have the characters be the nodes, and add directed edges from the winning party to the losing party for each interaction. Collapse repeated interactions, etc. This approach would let you quickly identify overall winners and losers, as well as simply who interacts with whom.
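A small sketch of the directed-graph idea with `networkx` (made-up interactions; repeated interactions are collapsed into an edge weight):
```
import networkx as nx

# (winner, loser, episode) tuples, made up for illustration.
interactions = [("Dennis", "Dee", 8), ("Frank", "Charlie", 8),
                ("Dennis", "Dee", 10), ("Mac", "Charlie", 10)]

G = nx.DiGraph()
for winner, loser, episode in interactions:
    if G.has_edge(winner, loser):
        G[winner][loser]["weight"] += 1          # collapse repeated interactions
    else:
        G.add_edge(winner, loser, weight=1)

# Out-degree ~ wins, in-degree ~ losses.
print(sorted(G.out_degree(weight="weight"), key=lambda x: -x[1]))
print(sorted(G.in_degree(weight="weight"), key=lambda x: -x[1]))
```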
How to structure unstructured data
In [Natural Language Processing](https://en.wikipedia.org/wiki/Natural_language_processing) it's crucial to choose the representation of the data and the design of the system based on the intended task; there is no generic method to represent text data which fits every application. This is not a simple technical problem, it's an important part of designing the system. The simplest method to structure text data is to represent the sentence or document as a [bag of words](https://en.wikipedia.org/wiki/Bag-of-words_model) (BoW), i.e. a set containing all the tokens in the sentence or document. Such a set can be represented with one-hot encoding (OHE) over the full vocabulary (all the words in all the documents) in order to obtain structured data (features). Many preprocessing variants can be applied: remove stop words, replace words with their lemma, filter out rare words, etc. (don't neglect them, these preprocessing options can have a huge impact on performance). Despite their simplicity, BoW models usually preserve the semantic information of the document reasonably well. However they cannot handle any complex linguistic structure: negations, multiword expressions, etc.
9959
1
9970
null
1
918
When using an autoencoder to create non-linear, dimensionally reduced features, is it more common to use the output of the network (the prediction of the input features) or the weights from the hidden layer (or one of the hidden layers, if there are multiple)? If the hidden layer is used, do you use the hidden layer activations as features, or the weights from the hidden layer to the output?
Autoencoders for feature creation
CC BY-SA 3.0
null
2016-01-26T01:51:21.140
2016-01-26T14:12:26.117
null
null
1138
[ "machine-learning", "neural-network" ]
When you want to use Auto-Encoders (AEs) for dimensionality reduction, you usually add a bottleneck layer. This means, for example, you have 1234-dimensional data. You feed this into your AE, and - as it is an AE - you have an output of dimension 1234. However, you might have many layers in that network and one of them has significantly fewer dimensions. Let's say you have the topology `1234:1024:784:1024:1234`. You train it like this, but you only use the weights from the `1234:1024:784` part. When you get new input, you just feed it into this network. You can see it as a kind of preprocessing. For the later stages, this is a black box. This is mainly useful when you have a lot of unlabeled data. It is called Semi-Supervised Learning (SSL).
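A minimal Keras sketch of the `1234:1024:784:1024:1234` idea (random data as a stand-in for real features); after training, only the encoder half is kept and used as a fixed preprocessing step:
```
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

X = np.random.rand(1000, 1234).astype("float32")   # placeholder for real 1234-dim data

inputs = keras.Input(shape=(1234,))
h = layers.Dense(1024, activation="relu")(inputs)
code = layers.Dense(784, activation="relu")(h)          # bottleneck
h = layers.Dense(1024, activation="relu")(code)
outputs = layers.Dense(1234, activation="sigmoid")(h)   # reconstruct the input

autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=5, batch_size=64, verbose=0)

# Keep only the 1234:1024:784 part as the feature extractor.
encoder = keras.Model(inputs, code)
features = encoder.predict(X[:10])
print(features.shape)   # (10, 784)
```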
using simple autoencoder for feature selection
- Autoencoders normally aren't linear models. If you make them linear (i.e. you create a shallow autoencoder with linear activations) then you get exactly a PCA result. The power of neural networks is their non-linearity; if you want to stick with linearity, go for PCA imho. - Keep a train-validation-test split, and try different configurations of hyperparameters, checking their performance on the validation data. Alternatively there are many libraries, such as hyperopt, that let you implement more sophisticated Bayesian hyperparameter searches, but unless you want to be published at a conference or win some competition it's a bit overkill. If you're still interested, the internet is full of tutorials like this one.
9976
1
10035
null
2
192
I am looking for ways to perform clustering on an aggregated category while still using individual data. For example, assume we have a multitude users that we have acquired through different channels. I would like to cluster groups of channels together, so that it is easier to deal with similar channels in similar ways. However, if I aggregate the user data into averages for each channel, then because these averages are heavily driven by outliers (high-spenders), it doesn't give that much insight to the behavior of the majority of users from that channel. Is there a way to use the individual user data but still gain insight into and cluster based on the channel they come from?
Perform clustering on individual data in categories
CC BY-SA 3.0
null
2016-01-26T22:07:26.397
2016-01-30T23:19:11.103
null
null
15768
[ "r", "clustering" ]
As always, clustering is about the meaning of distance, since that encodes at least some part of your question. So your question is which channels are similar, where similarity is defined on users. Usually you do not assume any nesting of your instances, but what you have is basically a nesting of users in channels. So you have to incorporate this kind of nesting into the distance / similarity function. I would start with some observations on the similarity function. We denote the similarity between instances $i$ and $j$ with $d(i,j)$. Usually this function obeys the condition $d(i,j)\ge0$ for any $i$ and $j$. Note that we usually have equality only on identical data points. The main consequence of this observation is that we basically have an identity at user level. What you want is an identity at channel level. One way would be to define the similarity function like: $$d_c(i,j) = I_{c_i\ne c_j}d(i,j)$$ where $I_{c_i\ne c_j}$ is $1$ when the channel of instance $i$ is different than the channel of instance $j$. The main effect is that now all the points from the same channel are considered identical, since the distance between them is zero. When the indicator function is $1$ the distance is given by your real business distance, which should be constructed by you and answer your question. If you use this kind of function in a hierarchical clustering, it will basically first find some clusters which are identical to your grouping on channels, and later on it will join clusters which are similar. This kind of approach will work even with a k-means algorithm and perhaps with most clustering approaches. A slightly different approach would be to define your $I$ function to return $1-\lambda$ when the channels are different and $\lambda$ when the channels are equal, with $\lambda$ a positive value close to $0$. This will not guarantee that all the clients will go into the same cluster, but it gives you a slider which you can use to fine tune the compromise between "all users of the same channel go into the same cluster" and a distance measure which is more robust to outliers. A totally different approach from an implementation point of view would be to define a more complex function directly on channel samples. This is similar to how hierarchical clustering works, since in order to join two clusters it needs a distance function which measures inter-cluster similarity. See more on [linkage criteria](https://en.wikipedia.org/wiki/Hierarchical_clustering#Linkage_criteria), for example average linkage clustering. Note that I said the approach is different only algorithmically; I would bet that the results would be similar to the first approach. A totally different approach would be to use a more robust criterion. It is known that the sample average is not robust, since one point can blow away the estimate. The median, instead, is much more stable. You can use a median or a trimmed mean to have more robust aggregation values. This has the advantage that the clustering would be much faster, since you would work with channels instead of clients, so the running time for computing the clustering would be reduced. And finally, another approach which comes to my mind would be to go further with comparing channels, but this time using a distance based on statistical tests. I will give you a scenario to clarify. Suppose that your users have an attribute named Weight, which as expected would be a continuous variable.
How could you define a distance between the Weight of users of one channel and the Weight of users of another channel? If you can assume a distribution, like a Gaussian, on that weight, you can build a two-sample t-test on the two samples, which are the two clusters. If you can't assume a distribution you can employ a homogeneity / independence test like the [two sample KS test](https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test#Two-sample_Kolmogorov.E2.80.93Smirnov_test). The KS test is valuable since it does not assume any distribution and is sensitive to changes in both shape and location. If you have nominal attributes you can employ a [chi-square independence test](https://en.wikipedia.org/wiki/Pearson's_chi-squared_test#Test_of_independence). Be careful with what you use from these tests. In order to have an equal contribution from each attribute used in the distance function you have to use p-values. Also note that if the test is significant it will have a small p-value, since the null hypothesis for both tests is the independence assumption, which can be translated as the same marginal for both samples. So, smaller p-values mean bigger distance. You can use $1-\text{p_value}$ or you can even try $\frac{1}{\text{p_value}}$.
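To make the two ideas concrete, here is a rough sketch of (a) the indicator-weighted distance $d_c(i,j)$ and (b) a KS-test-based channel distance, using synthetic data and 1 - p_value as the distance:
```
import numpy as np
from scipy.spatial.distance import euclidean
from scipy.stats import ks_2samp

# (a) user-level distance that is 0 for users from the same channel
def d_c(x_i, x_j, channel_i, channel_j):
    return 0.0 if channel_i == channel_j else euclidean(x_i, x_j)

print(d_c([1.0, 2.0], [3.0, 4.0], "A", "A"))   # 0.0
print(d_c([1.0, 2.0], [3.0, 4.0], "A", "B"))   # > 0

# (b) channel-level distance from a two-sample KS test on one attribute (e.g. Weight)
rng = np.random.default_rng(0)
weights_channel_a = rng.normal(70, 10, size=200)
weights_channel_b = rng.normal(80, 12, size=150)
stat, p_value = ks_2samp(weights_channel_a, weights_channel_b)
print("channel distance:", 1 - p_value)        # small p-value -> large distance
```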
Clustering ordered categorical data
You can have categories that contain a logic that could be mapped to a numeric value, and it seems to be your case. That's why you should consider those ratings from a mathematical point of view and assign a numerical scale that would be comprehensible to your algorithm. For instance: ``` AAA+ => 1 AAA => 2 AAA- => 3 AA+ => 4 AA => 5 AA- => 6 ``` etc. In this way, countries rated AAA+ in 2022 and AA- in 2021 should be close to countries rated AAA in 2022 and AA in 2021, because [1,6] is similar to [2,5] from a numeric point of view. However, if you consider those ratings as separate categories like this: ``` AAA+ => col_AAA+= True, col_AAA=False, col_AAA-=False, col_AA+=False,... AAA => col_AAA+= False, col_AAA=True, col_AAA-=False, col_AA+=False,... ``` etc. you would have more data to deal with, and the algorithm would not see any ranking between the columns and hence would not produce a good clustering. I recommend using numeric values for any feature that can have a scale, and using categories just in the case of independent ones (for instance, sea_access=Yes/No, or opec_member=Yes/No). In some cases, you can also implement an intermediate solution like this one: ``` AAA+ => col_A= 1, col_B=0, col_C=0, ... AAA => col_A= 2, col_B=0, col_C=0, ... ... BBB+ => col_A= 0, col_B=1, col_C=0, ... BBB => col_A= 0, col_B=2, col_C=0, ... ``` etc. It could be interesting if you want to make a clear difference between rating groups (ex: going from AAA to A+ is not as bad as going from A- to BBB+). Note: clustering could be difficult if you consider too many years, even with algorithms like UMAP or t-SNE. That's why a good option is to consider a few years to begin with, or to simplify with smoothing algorithms.
9999
1
10012
null
4
2733
I'm trying to train an algorithm to copy some of the top traders on various forex social trading sites. The problem is that the traders only trade around, say, 10 times per month, so even if I only look at minute-resolution numbers that's .02% of the time [ 10/(60*24*30)*100 ]. I've tried using a random forest and it gives an error rate of around 2%, which is unacceptable, and from what I've read most machine learning algorithms have similar error rates. Does anyone know of a better approach?
What's a good machine learning algorithm for low frequency trading?
CC BY-SA 3.0
null
2016-01-27T19:56:50.257
2016-01-28T21:54:28.953
null
null
15797
[ "machine-learning", "classification", "random-forest" ]
Random forests, GBM or even the newer and fancier xgboost are not the best candidates for binary classification (predicting ups and downs) for stock prediction or forex trading, or at least not as the main algorithm. The reason is that, for this particular problem, they require a huge number of trees (and tree depth in the case of GBM or xgboost) to obtain reasonable accuracy (Breiman suggested using at least 5000 trees and to "not be stingy", and in fact in his main ML paper on RF he used 50,000 trees per run). However, some quants use random forests as feature selectors while others use them to generate new features. It all depends on the characteristics of the data. I would suggest you read this [question and answers on quant.stackexchange](https://quant.stackexchange.com/questions/9313/machine-learning-vs-regression-and-or-why-still-use-the-latter/9317#9317) where people discuss which methods are the best and when to use them, among them ISOMAP, Laplacian eigenmaps, ANNs and swarm optimization. Check out the [machine-learning tag on the same site](https://quant.stackexchange.com/tags/machine-learning/hot); there you might find information related to your particular dataset.
Reinforcement Learning algorithm for Optimized Trade Execution
> Why do we need n in the cost function update rule. Aren't we visiting each state exactly once? The update is assuming a static distribution and estimating the average value. As each estimate is made available, it is weighted less of the total each time. The formula means that the first sample is weighted $1$, second $\frac{1}{2}$, third $\frac{1}{3}$ which is what you need to get the mean value when you apply the changes due to the samples serially whilst maintaining the best estimate of the mean at each step. This is a little odd in my experience of RL, because it assumes the bootstrap values (the max over next step) come from a final distribution to weight everything equally like this. But I think it is OK due to working back from final step, hence each bootstrap value should be fully estimated before going backwards to previous time step. > If I understand correctly, we should run this algorithm on every episode (in the experiment in the paper they had 45000 episodes) This looks like an algorithm that you run on the whole data set, where each episode is the same length $T$. So you run each timestep (starting with the end time step and working backwards since the ultimate reward is established at the end of the episode, so this is more efficient), and sample from every episode at that timestep in the `While (not end of data)` loop. The values are therefore combined inside the loop at that stage, and there is no need to add anything to the algorithm to combine episodes.
10015
1
10498
null
6
12426
There are many resources online about how to implement an MLP in TensorFlow, and most of the samples do work :) But I am interested in a particular one, that I learned from [https://www.coursera.org/learn/machine-learning](https://www.coursera.org/learn/machine-learning), which uses a cost function defined as follows: $ J(\theta) = \frac{1}{m} \sum_{i=1}^{m} \sum_{k=1}^{K} \left[ -y_k^{(i)} \log\left((h_\theta(x^{(i)}))_k\right) - (1 - y_k^{(i)}) \log\left(1 - (h_\theta(x^{(i)}))_k\right) \right] $ $h_\theta$ is the sigmoid function. And here's my implementation: ``` # one hidden layer MLP x = tf.placeholder(tf.float32, shape=[None, 784]) y = tf.placeholder(tf.float32, shape=[None, 10]) W_h1 = tf.Variable(tf.random_normal([784, 512])) h1 = tf.nn.sigmoid(tf.matmul(x, W_h1)) W_out = tf.Variable(tf.random_normal([512, 10])) y_ = tf.matmul(h1, W_out) # cross_entropy = tf.nn.sigmoid_cross_entropy_with_logits(y_, y) cross_entropy = tf.reduce_sum(- y * tf.log(y_) - (1 - y) * tf.log(1 - y_), 1) loss = tf.reduce_mean(cross_entropy) train_step = tf.train.GradientDescentOptimizer(0.05).minimize(loss) correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) # train with tf.Session() as s: s.run(tf.initialize_all_variables()) for i in range(10000): batch_x, batch_y = mnist.train.next_batch(100) s.run(train_step, feed_dict={x: batch_x, y: batch_y}) if i % 100 == 0: train_accuracy = accuracy.eval(feed_dict={x: batch_x, y: batch_y}) print('step {0}, training accuracy {1}'.format(i, train_accuracy)) ``` I think the definition of the layers is correct, but the problem is in the cross_entropy. If I use the first one, the one that is commented out, the model converges quickly; but if I use the 2nd one, which I think/hope is the translation of the previous equation, the model won't converge.
Implement MLP in tensorflow
CC BY-SA 3.0
0
2016-01-29T03:10:22.990
2016-03-03T03:11:39.900
null
null
3167
[ "machine-learning", "tensorflow" ]
You made three mistakes: - You omitted the offset terms before the nonlinear transformations (variables b_1 and b_out). This increases the representative power of the neural network. - You omitted the softmax transformation at the top layer. This makes the output a probability distributions, so you can calculate the cross-entropy, which is the usual cost function for classification. - You used the binary form of the cross-entropy when you should have used the multi-class form. When I run this I get accuracies over 90%: ``` import tensorflow as tf from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets('/tmp/MNIST_data', one_hot=True) x = tf.placeholder(tf.float32, shape=[None, 784]) y = tf.placeholder(tf.float32, shape=[None, 10]) W_h1 = tf.Variable(tf.random_normal([784, 512])) b_1 = tf.Variable(tf.random_normal([512])) h1 = tf.nn.sigmoid(tf.matmul(x, W_h1) + b_1) W_out = tf.Variable(tf.random_normal([512, 10])) b_out = tf.Variable(tf.random_normal([10])) y_ = tf.nn.softmax(tf.matmul(h1, W_out) + b_out) # cross_entropy = tf.nn.sigmoid_cross_entropy_with_logits(y_, y) cross_entropy = tf.reduce_sum(- y * tf.log(y_), 1) loss = tf.reduce_mean(cross_entropy) train_step = tf.train.GradientDescentOptimizer(0.05).minimize(loss) correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) # train with tf.Session() as s: s.run(tf.initialize_all_variables()) for i in range(10000): batch_x, batch_y = mnist.train.next_batch(100) s.run(train_step, feed_dict={x: batch_x, y: batch_y}) if i % 1000 == 0: train_accuracy = accuracy.eval(feed_dict={x: batch_x, y: batch_y}) print('step {0}, training accuracy {1}'.format(i, train_accuracy)) ```
Using tensorflow for any type of dataset
TensorFlow is a general purpose library for numerical computation using data flow graphs. It is primarily used for neural networks but can be used for any mathematical operations on multidimensional data arrays (tensors). Thus, TensorFlow can be used to estimate binary logistic regression with explanatory categorical variables. An example can be found in the TensorFlow tutorials [here](https://www.tensorflow.org/tutorials/wide#defining_the_logistic_regression_model).
10025
1
20112
null
12
6917
I was looking into the possibility to classify sound (for example sounds of animals) using spectrograms. The idea is to use a deep convolutional neural networks to recognize segments in the spectrogram and output one (or many) class labels. This is not a new idea (see for example [whale sound classification](http://danielnouri.org/notes/2014/01/10/using-deep-learning-to-listen-for-whales/) or [music style recognition](http://papers.nips.cc/paper/3674-unsupervised-feature-learning-for-audio-classification-using-convolutional-deep-belief-networks.pdf)). The problem that I'm facing is that I have sound files of different length and therefore spectrograms of different sizes. So far, every approach I have seen uses a fixed size sound sample but I can't do that because my sound file might be 10 seconds or 2 minutes long. With, for example, a bird sound in the beginning and a frog sound at the end (output should be "Bird, Frog"). My current solution would be to add a temporal component to the neural network (creating more of a recurrent neural network) but I would like to keep it simple for now. Any ideas, links, tutorials, ...?
Deep Learning with Spectrograms for sound recognition
CC BY-SA 3.0
null
2016-01-29T15:39:26.277
2017-07-02T11:46:50.503
2017-07-02T11:46:50.503
836
15847
[ "deep-learning", "multilabel-classification", "audio-recognition" ]
RNNs were not producing good enough results and are also hard to train so I went with CNNs. Because a specific animal sound is only a few seconds long we can divide the spectrogram into chunks. I used a length of 3 seconds. We then perform classification on each chunk and average the outputs to create a single prediction per audio file. This works really well and is also simple to implement. A more in-depth explanation can be found here: [http://ceur-ws.org/Vol-1609/16090547.pdf](http://ceur-ws.org/Vol-1609/16090547.pdf)
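A rough sketch of the chunk-and-average step (the CNN itself is replaced by a dummy prediction function here; shapes and the number of frames per 3-second chunk are illustrative):
```
import numpy as np

def predict_file(predict_fn, spectrogram, chunk_frames):
    """Split a (freq_bins, time_frames) spectrogram into fixed-length chunks,
    classify each chunk, and average the per-chunk class probabilities."""
    n_chunks = spectrogram.shape[1] // chunk_frames
    if n_chunks == 0:
        raise ValueError("audio shorter than one chunk")
    chunks = np.stack([spectrogram[:, i * chunk_frames:(i + 1) * chunk_frames]
                       for i in range(n_chunks)])
    probs = predict_fn(chunks)            # shape: (n_chunks, n_classes)
    return probs.mean(axis=0)             # one averaged prediction per audio file

def dummy_cnn(chunks):
    # Stand-in for a trained CNN: random "probabilities" per chunk.
    return np.random.dirichlet(np.ones(10), size=len(chunks))

spec = np.random.rand(128, 1000)          # e.g. 128 mel bins, 1000 time frames
avg = predict_file(dummy_cnn, spec, chunk_frames=130)   # ~3 s worth of frames
print(avg.argmax(), avg.max())
```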
Is 10,000 images for one class of spectrograms good enough for music classification model?
Agreed with Emre. One thing that can be helpful when asking the question is to look at comparable datasets since there's usually some around. For spectrograms for example, there are [150,000 samples w/ 12 labels here](https://www.kaggle.com/ollmer/labels-spectrograms-exploration/data), [2890 samples here](http://deepsound.io/dcgan_spectrograms.html), and [1 million samples with various labels here](http://mirg.city.ac.uk/codeapps/the-magnatagatune-dataset). Find a few examples and gauge a rough order of magnitude of samples per class given how close the tasks are to yours. That should give you at-least a starting point. Then, if lets say one set is very large, usually someone will have [written a paper](https://arxiv.org/pdf/1703.01789) using that set, and that can help start both the architecture and size jumping off point (and maybe a baseline for transfer learning) of your task.
10037
1
10041
null
2
87
I am trying to synthesize client data in order to do clustering. My problem is that for one customer I have several rows. I would like to synthesize the information to get one row per customer. This clustering is about how customers use a loyalty program. Here is a picture of my table. By column (left to right): 1) CustomerID 2) Date at which they use their points 3) Category number (e.g. 1 is gift card, 2 is a flight, etc.) 4) How many points they used 5) How many items they purchased with points [](https://i.stack.imgur.com/pCvIh.png) My question is how I could have one customer per row without losing information. Maybe a pivot table? But I don't know exactly how that works. I am new to statistics, by the way. Thank you Cédric
Filter Data for clustering
CC BY-SA 3.0
null
2016-01-31T10:05:49.300
2016-01-31T21:03:18.233
null
null
15879
[ "clustering" ]
If you can afford to do the full join once, do it and learn which columns are useful through [feature selection](https://en.wikipedia.org/wiki/Feature_selection). Then you can only SELECT these columns for subsequent iterations, when the database is updated. Here's a survey: [Feature Selection for Clustering: A Review](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.295.8115&rep=rep1&type=pdf)
Clustering mixed data
Try HAC with Gower's similarity. It is a very heuristic approach - there is nothing going to save you from weighting variables - but worth a try.
10040
1
10042
null
5
1693
I have a classification problem for which a feedforward, fully connected neural net works reasonably well (two classes, true positive and true negative rate close to 80%). I want to get these rates to 90%, and more features is one of the catalysts for improvements I can think of. Do autoencoders to learn additional, interesting features work well for problems that do not involve images?
do autoencoders work well for non images?
CC BY-SA 3.0
null
2016-01-31T19:09:31.433
2016-08-30T15:03:57.933
null
null
9197
[ "classification", "neural-network", "autoencoder" ]
Yes, but no-one can tell if they will work well for your problem, so just try it and see. Don't give up if it does not work at first, because training neural networks requires some practice; there are lots of parameters, and not every configuration will work well. Even the optimization algorithm is a hyperparameter.
Does it make sense to train a CNN as an autoencoder?
Yes, it makes sense to use CNNs with autoencoders or other unsupervised methods. Indeed, different ways of combining CNNs with unsupervised training have been tried for EEG data, including using (convolutional and/or stacked) autoencoders. Examples: [Deep Feature Learning for EEG Recordings](https://arxiv.org/abs/1511.04306) uses convolutional autoencoders with custom constraints to improve generalization across subjects and trials. [EEG-based prediction of driver's cognitive performance by deep convolutional neural network](http://www.sciencedirect.com/science/article/pii/S0923596516300832) uses convolutional deep belief networks on single electrodes and combines them with fully connected layers. [A novel deep learning approach for classification of EEG motor imagery signals](http://iopscience.iop.org/article/10.1088/1741-2560/14/1/016003/meta;jsessionid=A1E21FCD5521812068616ABB45BFF236.c4.iopscience.cld.iop.org) uses fully connected stacked autoencoders on the output of a supervisedly trained (fairly shallow) CNN. But also purely supervised CNNs have had success on EEG data, see for example: [EEGNet: A Compact Convolutional Network for EEG-based Brain-Computer Interfaces](https://arxiv.org/abs/1611.08024) [Deep learning with convolutional neural networks for brain mapping and decoding of movement-related information from the human EEG](https://arxiv.org/abs/1703.05051) (disclosure: I am the first author of this work, more related work see p. 44) Note that the EEGNet paper shows that also with a smaller number of trials, purely supervised training of their CNN can outperform their baselines (see Figure 3). Also in our experience on a dataset with only 288 training trials, purely supervised CNNs work fine, slightly outperforming a traditional filter bank common spatial patterns baseline.
10047
1
10051
null
6
1829
[Model Selection and Train/Validation/Test Sets - Stanford University | Coursera:](https://www.coursera.org/learn/machine-learning/lecture/QGKbr/model-selection-and-train-validation-test-sets) At 10:59~11:10 > One final note: I should say that in the machine learning as of this practice today, there aren't many people that will do that early thing that I talked about, and said that, you know... Is my comprehension correct? The English subtitles on Coursera are sometimes not correct. As far as I know, what the Chinese subtitle means here is the opposite of what the English one says, so I am not sure whether Andrew Ng said "there aren't" or "there are". Thanks for reading. --- I would also like to ask about another one. [Diagnosing Bias vs. Variance - Stanford University | Coursera:](https://www.coursera.org/learn/machine-learning/lecture/yCAup/diagnosing-bias-vs-variance) At 02:34~02:36, what Andrew Ng says is not quite clear, and neither is the English subtitle. My comprehension is as follows: > If d equals 1,.... to be high training error. It's not complete. Would anyone like to identify that? Thank you...
On coursera what exactly does Andrew Ng say in videos Lectures 60 & 61 of machine learning?
CC BY-SA 3.0
null
2016-02-01T12:58:14.967
2016-02-01T17:49:53.790
2016-02-01T17:49:53.790
15908
15908
[ "machine-learning", "cross-validation", "model-selection" ]
No, he actually says the opposite:
> One final note: I should say that in the machine learning as of this practice today, there are many people that will do that early thing that I talked about, and said that, you know...
Then he says (the "early thing" he talked about):
> selecting your model as a test set and then using the same test set to report the error ... unfortunately many people do that
---
In this lesson he explains separating the data set into:
- a training set to train the model;
- a cross-validation set to find the right parameters;
- a test set to find the final generalization error (of the function with the best parameter values found using the cross-validation set).
So Andrew Ng is complaining that many people use the same data set to find the right parameters, and then report the error on that data set as the final generalization error.
Source of Arthur Samuel's definition of machine learning
The exact quote exists in neither the [1959 paper](https://ieeexplore.ieee.org/abstract/document/5389202) nor the [1967 paper](https://ieeexplore.ieee.org/document/5391906) (second version). These are the closest quotes from the 1959 paper: > A computer can be programmed so that it will learn to play a better game of checkers than can be played by the person who wrote the program. And > Programming computers to learn from experience should eventually eliminate the need for much of this detailed programming effort. Also, [Wiki page](https://en.wikipedia.org/wiki/Arthur_Samuel) of Arthur Samuel states that: > He coined the term "machine learning" in 1959 and references the 1959 paper. Either the quote is created as a gist of Arthur Samuel's 1959 paper, or it is said but not written by him. In my opinion, the former is more probable, since it is not even remotely mentioned in the 1967 paper.
10048
1
10052
null
35
69410
I am working on research, where need to classify one of three event WINNER=(`win`, `draw`, `lose`) ``` WINNER LEAGUE HOME AWAY MATCH_HOME MATCH_DRAW MATCH_AWAY MATCH_U2_50 MATCH_O2_50 3 13 550 571 1.86 3.34 4.23 1.66 2.11 3 7 322 334 7.55 4.1 1.4 2.17 1.61 ``` My current model is: ``` def build_model(input_dim, output_classes): model = Sequential() model.add(Dense(input_dim=input_dim, output_dim=12, activation=relu)) model.add(Dropout(0.5)) model.add(Dense(output_dim=output_classes, activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adadelta') return model ``` - I am not sure that is the correct one for multi-class classification - What is the best setup for binary classification? EDIT: #2 - Like that? ``` model.add(Dense(input_dim=input_dim, output_dim=12, activation='sigmoid')) model.add(Dropout(0.5)) model.add(Dense(output_dim=output_classes, activation='softmax')) model.compile(loss='binary_crossentropy', optimizer='adadelta') ```
What is the best Keras model for multi-class classification?
CC BY-SA 3.0
null
2016-02-01T15:18:33.907
2020-01-06T08:02:10.323
2017-05-03T15:19:41.160
31513
14684
[ "python", "neural-network", "classification", "clustering", "keras" ]
Your choices of `activation='softmax'` in the last layer and compile choice of `loss='categorical_crossentropy'` are good for a model to predict multiple mutually-exclusive classes. Regarding more general choices, there is rarely a "right" way to construct the architecture. Instead that should be something you test with different meta-params (such as layer sizes, number of layers, amount of drop-out), and should be results-driven (including any limits you might have on resource use for training time/memory use etc). Use a cross-validation set to help choose a suitable architecture. Once done, to get a more accurate measure of your model's general performance, you should use a separate test set. Data held out from your training set separate to the CV set should be used for this. A reasonable split might be 60/20/20 train/cv/test, depending on how much data you have, and how much you need to report an accurate final figure. For Question #2, you can either just have two outputs with a softmax final similar to now, or you can have final layer with one output, `activation='sigmoid'` and `loss='binary_crossentropy'`. Purely from a gut feel from what might work with this data, I would suggest trying with `'tanh'` or `'sigmoid'` activations in the hidden layer, instead of `'relu'`, and I would also suggest increasing the number of hidden neurons (e.g. 100) and reducing the amount of dropout (e.g. 0.2). Caveat: Gut feeling on neural network architecture is not scientific. Try it, and test it.
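To make the binary-classification suggestion concrete, here is a minimal sketch combining the ideas above (one sigmoid output with `binary_crossentropy`, a `tanh` hidden layer of around 100 units, dropout 0.2). It uses the current Keras argument names rather than the older `output_dim` style from the question, and the sizes are just the starting points suggested above, not tuned values:
```
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout

def build_binary_model(input_dim):
    model = Sequential()
    model.add(Dense(100, input_dim=input_dim, activation='tanh'))  # try 'sigmoid' here as well
    model.add(Dropout(0.2))
    model.add(Dense(1, activation='sigmoid'))                      # single output for two classes
    model.compile(loss='binary_crossentropy', optimizer='adadelta',
                  metrics=['accuracy'])
    return model
```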
Which Keras metric for multiclass classification
One option is to implement F1 score in Keras: ``` from tensorflow.keras import backend as K def f1(y_true, y_pred): def recall_m(y_true, y_pred): TP = K.sum(K.round(K.clip(y_true * y_pred, 0, 1))) Positives = K.sum(K.round(K.clip(y_true, 0, 1))) recall = TP / (Positives+K.epsilon()) return recall def precision_m(y_true, y_pred): TP = K.sum(K.round(K.clip(y_true * y_pred, 0, 1))) Pred_Positives = K.sum(K.round(K.clip(y_pred, 0, 1))) precision = TP / (Pred_Positives+K.epsilon()) return precision precision, recall = precision_m(y_true, y_pred), recall_m(y_true, y_pred) return 2*((precision*recall)/(precision+recall+K.epsilon())) ```
10049
1
10056
null
5
171
I am new to natural language processing and I have not heard of a problem similar to mine yet. I was wondering if anyone could refer me to a method for solving my problem, or tell me how this problem is referred to in the academic literature, so that I can look for resources online. Here is the problem: From some text (wikipedia articles, for example), I would like to extract the hierarchy of different concepts that can be found in it. By hierarchy I mean a tree wherein A is a descendant of B if A or one of A's parents (transitive) is defined by B. For instance, normal distribution would be a descendant of probability (since normal distribution is defined using probabilities) and probability would be a descendant (or child) of mathematics. Since it is transitive, normal distribution would also be a child of mathematics. One way I thought about solving this is by looking at the number of times a word A is used alone (called A), the words A and B are used together (called A AND B, 'together' could be, for instance, in the same article or in the same paragraph, or in the same sentence), and the number of times the word B is used alone (called B). Let A be mathematics and B be probability. Then, if the ratios (A AND B)/A and (A AND B)/B are low, it could imply that there is no direct link between A and B (but a link could exist through transitivity). Conversely, if A is bigger than B, A is a bigger concept than B. If A and B are almost the same then they are probably siblings (children of the same parent). Let's take 3 examples:
- Mathematics (A) and carrot (B). A AND B is really low compared to A and B, so there is no direct link between them (or only an indirect link by transitivity).
- Mathematics (A) and probabilities (B). A AND B is quite high compared to B, and A is much bigger than B, so B should be a child of A (probabilities is a child of mathematics).
- Topology (A) and Probabilities (B). A AND B is relatively high (the texts that present the different areas of mathematics will likely speak about the two), A and B are about the same order of magnitude, so A and B should be children of the same parent. Indeed, Topology and Probabilities are the children of Mathematics.
This way of solving the problem is far from perfect; for instance 'the' (A) and 'probability' (B) would probably end up saying probability is a child of the (because A AND B is huge and A is much bigger than B). If anyone knows some papers on this or has any ideas on how I might solve this problem, I would appreciate some direction. Also, does my solution seem viable? How could it be improved?
Inferring Relational Hierarchies of Words
CC BY-SA 3.0
null
2016-02-01T16:05:41.110
2016-02-01T21:43:53.117
2016-02-01T18:35:50.087
13413
10500
[ "nlp", "unsupervised-learning", "nltk" ]
Look up taxonomy/ontology construction/induction. Relevant papers: - Automatic Taxonomy Construction from Keywords via Scalable Bayesian Rose Trees - Topic Models for Taxonomies - OntoLearn Reloaded. A Graph-Based Algorithm for Taxonomy Induction - Ontology Population and Enrichment: State of the Art - Probabilistic Topic Models for Learning Terminological Ontologies
Correlation between words, then texts
One method you can try first is cosine similarity. It works by counting the number of occurrences of each word in the vocabulary for each individual document. Next, you put these counts into vectors and then you take the cosine of the angle between them. If you have more than one text on a topic, you can combine them into a single text for the purpose of finding the cosine similarity between various topics. If you have many texts that are examples of each topic, then you can create a supervised machine learning model. However, since your purpose is interpretation instead of prediction, I would recommend a technique like cosine similarity before getting insight from interpreting a predictive model.
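As a quick illustration of the cosine-similarity idea, here is a minimal scikit-learn sketch; the two document strings are placeholders for the combined texts of two topics:
```
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["combined text for topic A ...",   # placeholder documents
        "combined text for topic B ..."]

X = CountVectorizer().fit_transform(docs)  # word-count vectors over a shared vocabulary
sim = cosine_similarity(X[0], X[1])        # cosine of the angle between the two count vectors
print(sim[0, 0])                           # in [0, 1]; higher means more similar topics
```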
10050
1
10086
null
3
357
TLDR: Please help me understand the graph representation of the network in the image below. Hi, this is pretty stupid, but I'm just have trouble visualising what I'm actually doing with this neural network. I've read about neural networks and multilayer perceptrons for some time and I'm just getting started with actually using them. I started with a super simple example, just to get warmed up but now I've confused myself. I artificially generated some data and used nntools in matlab to attempt to "predict" the results. I built a neural network with the following parameters: - feed forward backprop network. - Gradient Descent training algorithm. - Gradient Descent learniing algorithm. - Performance/loss function of mean squared error. - two layers: first with three neurons and Tansig activation function. the second with one neuron and linear activation. I end up with something looking like this: [](https://i.stack.imgur.com/EPajk.png) However, I don't know what this actually represents, I'm all sorts of confused right now. Could someone please explain/upload an image/draw some ascii to represent the neurons and edges in the above network? It would really help clear my head. Currently I think it's like this: ``` T L o / \ / \ IN > o--o--o--o > OUT \ / \ / o ``` With linear activations in columns L and Tanh activations in columns T. Is that right? Doesn't make sense to me.
Simple ANN visualisation
CC BY-SA 3.0
null
2016-02-01T16:17:26.047
2016-02-04T09:58:07.610
2016-02-01T19:50:51.213
14617
14617
[ "neural-network" ]
I believe this is the representation you're after, please excuse the rough sketch but I think it explains the structure appropriately. - Single input going to three hidden units, each with a bias and tansig activation. - The outputs of the hidden layer are summed (via linear activation) with a bias to produce the output. [](https://i.stack.imgur.com/CdMAf.jpg)
Which ANN structure to use?
A Siamese network (a network with multiple outputs) will work for such a case.
10060
1
10067
null
6
3059
Our weapons: I am experimenting with k-means and Hadoop, where I am chained to these options for various reasons (e.g. [Help me win this war!](https://askubuntu.com/questions/725444/help-me-win-this-war)).
---
The battlefield: I have articles, which belong to c categories, where c is fixed. I am vectorizing the contents of the articles to TF-IDF features. Now I am running a naive k-means algorithm, which takes `c` centroids to begin with and starts, iteratively, grouping articles (i.e. rows of the TF-IDF matrix, where you can see [here](https://stackoverflow.com/questions/35109424/how-to-make-tf-idf-matrix-dense) how I built it), until convergence occurs.
---
Special notes:
- Initial centroids: Tried random picks from within each category, or the mean of all the articles from each category.
- Distance function: Euclidean.
---
Question(s): The accuracy is poor, as expected. Can I do any better by making another choice for the initial centroids and/or picking another distance function?
---
print "Hello Data Science site!" :)
Improve k-means accuracy
CC BY-SA 3.0
null
2016-02-02T01:42:38.053
2016-02-02T18:28:38.880
2017-05-23T12:38:53.587
-1
15927
[ "python", "text-mining", "apache-hadoop", "k-means", "distance" ]
Great question, @gsamaras! The way you've set up this experiment makes a lot of sense to me, from a design point of view, but I think there are a couple of aspects you can still examine. First, it's possible that uninformative features are distracting your classifier, leading to poorer results. In text analytics, we often talk about [stop word](https://en.wikipedia.org/wiki/Stop_words) filtering, which is just the process of removing such text (e.g., the, and, or, etc.). There are standard stop word lists you can easily find online (e.g., [this one](http://www.ranks.nl/stopwords)), but they can sometimes be heavy-handed. The best approach is to build a table relating feature frequency to class, as this will get at domain-specific features that you won't likely find in such look-up tables. There is varying evidence as to the efficacy of stop word removal in the literature, but I think these findings are mostly classifier-specific (for example, support vector machines tend to be less affected by uninformative features than a naive Bayes classifier is; I suspect k-means falls into the latter category). Second, you might consider a different feature modeling approach, rather than tf-idf. Nothing against tf-idf--it works fine for many problems--but I like to start with binary feature modeling, unless I have experimental evidence showing a more complex approach leads to better results. That said, it's possible that k-means could respond strangely to the switch from a floating-point feature space to a binary one. It's certainly an easily-testable hypothesis! Finally, you might look at the expected class distribution in your data set. Are all classes equally likely? If not, you may get better results from either a sampling approach, or using a different distance metric. [k-means is known to respond poorly in skewed class situations](https://www.erpublication.org/admin/vol_issue1/upload%20Image/IJETR021251.pdf), so this is something to consider as well! There is probably research available in your specific domain describing how others have handled this situation.
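If it helps, both suggestions (stop-word filtering and binary features) are one-liners with scikit-learn; `articles` below is a placeholder for your list of raw article strings:
```
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer

# tf-idf with a standard English stop-word list removed
tfidf_features = TfidfVectorizer(stop_words='english').fit_transform(articles)

# binary presence/absence features, as an alternative to tf-idf
binary_features = CountVectorizer(stop_words='english', binary=True).fit_transform(articles)
```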
K-Means Clustering too crowded
The figure looks exactly how it should.
```
plt.scatter(X[:, 0], X[:, 0], c=labels, s=50, cmap='rainbow');
```
You are plotting a value against itself; in two dimensions that will give you a line of the form y = x. Check what you want to plot: having only one feature doesn't mean that K-Means wouldn't work, you will still get a clustering, but the plot is correct as it stands.
10077
1
15497
null
24
23726
Perhaps this is too broad, but I am looking for references on how to use deep learning in a text summarization task. I have already implemented text summarization using standard word-frequency approaches and sentence-ranking, but I'd like to explore the possibility of using deep learning techniques for this task. I have also gone through some implementations given on [wildml.com](http://www.wildml.com/2015/12/implementing-a-cnn-for-text-classification-in-tensorflow/) using Convolutional Neural Networks (CNN) for sentiment analysis; I'd like to know how one could use libraries such as TensorFlow or Theano for text summarization and keyword extraction. It's been about a week since I started experimenting with neural nets, and I am really excited to see how the performance of these libraries compares to my previous approaches to this problem. I am particularly looking for some interesting papers and GitHub projects related to text summarization using these frameworks. Can anyone provide me with some references?
Keyword/phrase extraction from Text using Deep Learning libraries
CC BY-SA 3.0
null
2016-02-03T10:56:51.447
2018-03-09T02:03:26.293
2016-02-03T17:00:31.027
13413
13508
[ "neural-network", "text-mining", "deep-learning", "beginner", "tensorflow" ]
The [Google Research Blog](https://catalog.ldc.upenn.edu/LDC2012T21) should be helpful in the context of [TensorFlow](https://www.tensorflow.org/). In the above article, there is a reference to the [Annotated English Gigaword dataset](https://catalog.ldc.upenn.edu/LDC2012T21) which is routinely used for text summarization. The 2014 paper by [Sutskever et al](https://arxiv.org/abs/1409.3215) titled Sequence to Sequence Learning with Neural Networks could be a meaningful start on your journey as it turns out that for shorter texts, summarization can be learned end-to-end with a deep learning technique. Lastly, [here](https://github.com/tensorflow/models/tree/master/research/textsum) is a great Github repository demonstrating text summarization while making use of TensorFlow.
Text extraction from documents using NLP or Deep Learning
Jurafsky and Martin's [NLP textbook](https://web.stanford.edu/~jurafsky/slp3/) has a [chapter about information extraction](https://web.stanford.edu/~jurafsky/slp3/17.pdf) that should be a good starting point. For example, if you want to extract company names it will tell you how to do that. > A paralegal would go through the entire document and highlight important points from the document. What you need to do depends heavily on what your definition of "important" is here. It would help if you can give some specific examples.
10085
1
10087
null
10
1875
Which of the below set of steps options is the correct one when creating a predictive model? Option 1: First eliminate the most obviously bad predictors, and preprocess the remaining if needed, then train various models with cross-validation, pick the few best ones, identify the top predictors each one has used, then retrain those models with those predictors only and evaluate accuracy again with cross-validation, then pick the best one and train it on the full training set using its key predictors and then use it to predict the test set. Option 2: First eliminate the most obviously bad predictors, then preprocess the remaining if needed, then use a feature selection technique like recursive feature selection (eg. RFE with rf ) with cross-validation for example to identify the ideal number of key predictors and what these predictors are, then train different model types with cross-validation and see which one gives the best accuracy with those top predictors identified earlier. Then train the best one of those models again with those predictors on the full training set and then use it to predict the test set.
Machine Learning Steps
CC BY-SA 3.0
null
2016-02-04T08:43:12.847
2016-06-10T15:02:25.400
2016-06-10T15:02:25.400
20381
15984
[ "machine-learning", "predictive-modeling" ]
I found both of your options slightly faulty. So, this is generally (very broadly) how a predictive modelling workflow looks:
- Data Cleaning: Takes the most time, but every second spent here is worth it. The cleaner your data gets through this step, the less time you will spend overall.
- Splitting the data set: The data set is split into training and testing sets, which are used for modelling and prediction respectively. You will also need a further split for a cross-validation set.
- Transformation and Reduction: Involves processes like transformations, mean and median scaling, etc.
- Feature Selection: This can be done in a lot of ways, like threshold selection, subset selection, etc.
- Designing the predictive model: Design the predictive model on the training data, depending on the features you have at hand.
- Cross Validation
- Final Prediction and Validation
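As a rough illustration of how these steps line up in code, here is a minimal scikit-learn sketch; the estimator, the `k=10` feature count, and the 80/20 split are placeholder choices, not recommendations for your specific data:
```
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest
from sklearn.linear_model import LogisticRegression

# X, y: the cleaned feature matrix and labels (placeholders)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),       # transformation
    ("select", SelectKBest(k=10)),     # feature selection
    ("model", LogisticRegression()),   # predictive model
])

cv_scores = cross_val_score(pipe, X_train, y_train, cv=5)  # cross-validation on training data
pipe.fit(X_train, y_train)
test_score = pipe.score(X_test, y_test)                    # final check on the held-out test set
```
Keeping the feature selection inside the pipeline ensures the test set stays untouched until the very end.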
How to learn Machine Learning
- Online Course: Andrew Ng, Machine Learning Course from Coursera. - Book: Tom Mitchell, Machine Learning, McGraw-Hill, 1997.
10091
1
10092
null
6
9251
I have a set of images that are considered good quality and another set that are considered bad quality. I have to train a classification model so that any new image can be classified as good/bad. SVM seems to be the best approach to do it. I know how to do it in MATLAB, but can anyone suggest how to do it in Python? What are the libraries? For SVM there is scikit-learn, but what about feature extraction of images and PCA?
Image classification in python
CC BY-SA 3.0
null
2016-02-05T07:50:28.437
2018-03-09T03:05:24.763
2016-02-05T08:19:06.823
11097
13046
[ "python", "image-classification" ]
As this question highly overlaps with a similar question I have already answered, I would include that answer here (linked in the comments underneath the question): In images, some frequently used techniques for feature extraction are binarizing and blurring. Binarizing: converts the image array into 1s and 0s. This is done while converting the image to a 2D image. Even gray-scaling can also be used. It gives you a numerical matrix of the image. Grayscale takes much less space when stored on disk. This is how you do it in Python:
```
from PIL import Image
%matplotlib inline

#Import an image
image = Image.open("xyz.jpg")
image
```
Example Image:
[](https://i.stack.imgur.com/mkf97.jpg)
Now, convert into gray-scale:
```
im = image.convert('L')
im
```
will return you this image:
[](https://i.stack.imgur.com/AGxy6.png)
And the matrix can be seen by running this:
```
from numpy import array

im_array = array(im)
im_array
```
The array would look something like this:
```
array([[213, 213, 213, ..., 176, 176, 176],
       [213, 213, 213, ..., 176, 176, 176],
       [213, 213, 213, ..., 175, 175, 175],
       ...,
       [173, 173, 173, ..., 204, 204, 204],
       [173, 173, 173, ..., 205, 205, 204],
       [173, 173, 173, ..., 205, 205, 205]], dtype=uint8)
```
Now, use a histogram plot and/or a contour plot to have a look at the image features:
```
from pylab import *

# create a new figure
figure()
gray()
# show contours with origin upper left corner
contour(im, origin='image')
axis('equal')
axis('off')

figure()
hist(im_array.flatten(), 128)
show()
```
This would return you a plot, which looks something like this:
[](https://i.stack.imgur.com/56A0K.png)
[](https://i.stack.imgur.com/tWpNy.png)
Blurring: The blurring algorithm takes a weighted average of neighbouring pixels to incorporate the surrounding colors into every pixel. It enhances the contours and helps in understanding the features and their importance better. And this is how you do it in Python:
```
from PIL import ImageFilter

figure()
p = image.convert("L").filter(ImageFilter.GaussianBlur(radius = 2))
p.show()
```
And the blurred image is:
[](https://i.stack.imgur.com/0Dx8q.jpg)
So, these are some ways in which you can do feature engineering. And for advanced methods, you have to understand the basics of Computer Vision and neural networks, and also the different types of filters and their significance and the math behind them.
---
The entire analytics is done with the [PIL package](http://www.pythonware.com/products/pil/). I wouldn't claim that it's a one-stop shop for Image analytics, but for a starter to novice level, it is pretty much it.
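To close the loop on the SVM/PCA part of the question, here is a minimal scikit-learn sketch; `X` (one flattened grayscale image per row), `y` (1 = good, 0 = bad), the number of PCA components, and the kernel are all placeholder assumptions:
```
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

# X: array of flattened grayscale images, y: quality labels (placeholders)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = make_pipeline(PCA(n_components=50), SVC(kernel='rbf'))
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))  # accuracy on held-out images
```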
Feature extraction of images in Python
In images, some frequently used techniques for feature extraction are binarizing and blurring Binarizing: converts the image array into 1s and 0s. This is done while converting the image to a 2D image. Even gray-scaling can also be used. It gives you a numerical matrix of the image. Grayscale takes much lesser space when stored on Disc. This is how you do it in Python: ``` from PIL import Image %matplotlib inline #Import an image image = Image.open("xyz.jpg") image ``` Example Image: [](https://i.stack.imgur.com/mkf97.jpg) Now, convert into gray-scale: ``` im = image.convert('L') im ``` will return you this image: [](https://i.stack.imgur.com/AGxy6.png) And the matrix can be seen by running this: ``` array(im) ``` The array would look something like this: ``` array([[213, 213, 213, ..., 176, 176, 176], [213, 213, 213, ..., 176, 176, 176], [213, 213, 213, ..., 175, 175, 175], ..., [173, 173, 173, ..., 204, 204, 204], [173, 173, 173, ..., 205, 205, 204], [173, 173, 173, ..., 205, 205, 205]], dtype=uint8) ``` Now, use a histogram plot and/or a contour plot to have a look at the image features: ``` from pylab import * # create a new figure figure() gray() # show contours with origin upper left corner contour(im, origin='image') axis('equal') axis('off') figure() hist(im_array.flatten(), 128) show() ``` This would return you a plot, which looks something like this: [](https://i.stack.imgur.com/56A0K.png) [](https://i.stack.imgur.com/tWpNy.png) Blurring: Blurring algorithm takes weighted average of neighbouring pixels to incorporate surroundings color into every pixel. It enhances the contours better and helps in understanding the features and their importance better. And this is how you do it in Python: ``` from PIL import * figure() p = image.convert("L").filter(ImageFilter.GaussianBlur(radius = 2)) p.show() ``` And the blurred image is: [](https://i.stack.imgur.com/0Dx8q.jpg) So, these are some ways in which you can do feature engineering. And for advanced methods, you have to understand the basics of Computer Vision and neural networks, and also the different types of filters and their significance and the math behind them.
10093
1
13184
null
7
28812
I have a task for which I would like to find the confidence level given the z value. I have a sample population. From that population, given its distribution, I would like to find the confidence level of a given value of that population. In other words, given a value of the population, I would like to know if it is within 95% (confidence level) of the whole population, or 68%, or 50%, and so on. Usually, we can find the z value and the confidence interval for a given confidence level, as explained here: [How to find the confidence Interval](http://www.wikihow.com/Calculate-Confidence-Interval). But I would like to find the confidence level given the z value (which in this case is a given value from the population). How can I tackle this? If possible it should be in Python or in R.
How to find a confidence level given the z value
CC BY-SA 3.0
null
2016-02-05T13:52:08.660
2020-04-22T10:07:43.387
null
null
10240
[ "r", "python", "statistics" ]
OK, for a 95% confidence interval, you want to know how many standard deviations away from the mean your point estimate is (the "z-score"). To get that, you take off the 5% "tails". Working in percentile form you have 100-95 which yields a value of 5, or 0.05 in decimal form. Divide that in half to get 0.025 and then, in R, use the qnorm function to get the z-star ("critical value"). Since you only care about one "side" of the curve (the values on either side are mirror images of each other) and you want a positive number, pass the argument lower.tail=FALSE. So, in the end, it would look like this: ``` qnorm(.025,lower.tail=FALSE) ``` yielding a value of 1.959964 You then plug that value into the equation for the margin of error to finish things up. If you want to go the other direction, from a "critical value" to a probability, use the pnorm function. Something like: ``` pnorm(1.959964,lower.tail=FALSE) ``` which will give you back 0.025
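Since the question also allows Python, here is a sketch of the equivalent calls with `scipy.stats` (assuming SciPy is acceptable); the 95% level is just the example used above:
```
from scipy.stats import norm

z_star = norm.ppf(1 - 0.05 / 2)  # critical value for a 95% interval -> about 1.959964 (like qnorm)
p_tail = norm.sf(1.959964)       # upper-tail probability for a given z -> about 0.025 (like pnorm with lower.tail=FALSE)
print(z_star, p_tail)
```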
Calculating Confidence Interval at a certain confidence level
The CI is defined as the interval which contains your mean with probability $\alpha$. Given that you are using a model whose underlying assumption is normality (amongst others), the interval is obtained by comparing probabilities in the context of a normal distribution. The function stats.norm.cdf returns the cumulative probability up to the standardized value $\frac{loc - \text{mean}}{sd}$. The interval returned is $(\bar{X} - Z_{\alpha}\sigma,\bar{X} + Z_{\alpha}\sigma)$. We can't just return $(-2 sd+mean,2 sd+mean)$ because it would not be general in terms of probability; we use tables which say $0.95 \rightarrow 2$ because we (humans) cannot calculate the number for every probability, but the stats.norm.cdf function can. $Z_{\alpha}$ is difficult for us to tabulate for all possible values (what if we need 90%, 95%, 99%, 99.9% or 70% probability?)
10100
1
10101
null
6
2586
After setting up a [2-noded Hadoop cluster](https://gsamaras.wordpress.com/code/hadoop-cluster-with-pc-and-virtualbox/), understanding [Hadoop and Python](http://www.michael-noll.com/tutorials/writing-an-hadoop-mapreduce-program-in-python/) and based on this naive [implementation](https://codereview.stackexchange.com/questions/80050/k-means-clustering-algorithm-in-python), I ended up with this code: ``` def kmeans(data, k, c=None): if c is not None: centroids = c else: centroids = [] centroids = randomize_centroids(data, centroids, k) old_centroids = [[] for i in range(k)] iterations = 0 while not (has_converged(centroids, old_centroids, iterations)): iterations += 1 clusters = [[] for i in range(k)] # assign data points to clusters clusters = euclidean_dist(data, centroids, clusters) # recalculate centroids index = 0 for cluster in clusters: old_centroids[index] = centroids[index] centroids[index] = np.mean(cluster, axis=0).tolist() index += 1 print("The total number of data instances is: " + str(len(data))) ``` I have tested it for serial execution and it is OK. How to make it distributed in Hadoop? In other words, what should go to the reducer and what to the mapper? Please note that if possible, I would like to follow the tutorial's style, since it's something I have understood.
How to make k-means distributed?
CC BY-SA 3.0
null
2016-02-06T02:38:20.750
2020-08-11T08:46:37.487
2017-04-13T12:41:09.900
-1
15927
[ "python", "apache-hadoop", "k-means", "map-reduce", "distributed" ]
Unless you are trying to do this as a learning exercise, just use Spark which has ML libraries made for distributed computing. See [here](http://spark.apache.org/docs/latest/mllib-clustering.html)
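For reference, the MLlib API the link above points to looks roughly like this in PySpark (a sketch adapted from the official k-means example; the file name, number of clusters, and iteration count are placeholders):
```
from numpy import array
from pyspark.mllib.clustering import KMeans

# sc is an existing SparkContext; each line of the file is a space-separated vector
data = sc.textFile("kmeans_data.txt")
parsed = data.map(lambda line: array([float(x) for x in line.split(' ')]))

model = KMeans.train(parsed, 3, maxIterations=10, initializationMode="random")
print(model.clusterCenters)   # learned centroids
labels = model.predict(parsed)  # RDD of cluster indices, computed on the cluster
```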
Distributed k-means in Spark
In that link you posted, you can look at the python full solution [here](http://www.cs.berkeley.edu/~rxin/ampcamp-ecnu/machine-learning-with-spark.html#solution_11) at the end and go through it to see what all is distributed. In short, some parts are distributed, like reading data from the file, but the very important parts like the distance computation are not. Running down, we see: > sc = SparkContext("local[6]", "PythonKMeans") This instantiates the context and creates a local cluster which the jobs will be submitted to > lines = sc.textFile(..) This is still setting up. No operations have taken place yet. You can verify this by putting timing statements in the code > data = lines.map(lambda x: (x.split("#")[0], parseVector(x.split("#")[1]))) The lambda here will be applied to lines, so this operation will split the file in parallel. Note that the actual line also has a `cache()` at the end (see [cache](http://spark.apache.org/docs/latest/quick-start.html#caching)]). `data` is just a reference to the spark object in memory. (I may be wrong here, but I think the operation still doesn't happen yet) > count = data.count() This forces the parallel computation to start, and the count to be stored. At the end, the reference data is still valid, and we'll use it for further computations. I'll stop with detailed explanations here, but wherever `data` is being used is a possible parallel computation. The python code itself is single threaded, and interfaces with the Spark cluster. An interesting line is: > tempDist = sum(np.sum((centroids[x] - y) ** 2) for (x, y) in newCentroids.iteritems()) `centroids` is an object in python memory, as is `newCentroids`. So, at this point, all computations are being done in memory (and on the client, typically clients are slim, i.e. have limited capabilities, or the client is an SSH shell, so the computers resources are shared. You should ideally never do any computation here), so no parallelization is being used. You could optimize this method further by doing this computation in parallel. Ideally you want the python program to never directly handle individual points' $x$ and $y$ values.
10103
1
28772
null
28
19718
I would like to do dimensionality reduction on nearly 1 million vectors, each with 200 dimensions (`doc2vec`). I am using the `TSNE` implementation from the `sklearn.manifold` module for it, and the major problem is time complexity. Even with `method = barnes_hut`, the speed of computation is still low. Sometimes it even runs out of memory. I am running it on a 48 core processor with 130G RAM. Is there a method to run it in parallel or make use of the plentiful resources to speed up the process?
Improve the speed of t-sne implementation in python for huge data
CC BY-SA 3.0
null
2016-02-06T14:19:10.243
2020-08-06T13:00:54.440
2016-02-06T14:23:23.533
11097
16024
[ "python", "bigdata", "nlp", "scikit-learn", "dimensionality-reduction" ]
You must look at [this Multicore implementation](https://github.com/DmitryUlyanov/Multicore-TSNE) of t-SNE. I actually tried it and can vouch for its superior performance.
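Usage is close to a drop-in replacement for the scikit-learn class; the import path and `n_jobs` argument below are taken from that project's README as I remember it, so double-check them against the repository:
```
from MulticoreTSNE import MulticoreTSNE as TSNE

tsne = TSNE(n_jobs=48)             # one worker per core on the 48-core machine
embedding = tsne.fit_transform(X)  # X: the (n_samples, 200) doc2vec matrix
```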
t-SNE Python implementation: Kullback-Leibler divergence
The TSNE source in scikit-learn is in pure Python. The `fit_transform()` method actually calls a private `_fit()` function which then calls a private `_tsne()` function. That `_tsne()` function has a local variable `error` which is printed out at the end of the fit. Seems like you could pretty easily change one or two lines of source code to have that value returned to `fit_transform()`.
10108
1
10117
null
8
6774
I presently receive files from a device in a semi-csv format. I have written a simple recursive descent parser for getting information out of these files. Every time the device updates firmware, I have a new version of the parser for the changes the update brings. Down the road, we will be taking data from other devices, which means another parser and more firmware updates. I'm wondering if I could define a basic structure of "this is the data I need" and use a neural network to get the parsed data without having to write a parser for each new file type that comes in. Is this a pipe dream or is it a valid application of machine learning? I'm much more of a software engineer than I am a data scientist, but I'm starting to dip my toes into the machine learning realm. Thanks in advance.
Is parsing files an application of machine learning?
CC BY-SA 3.0
null
2016-02-06T18:29:08.427
2018-01-30T06:56:42.813
null
null
16032
[ "machine-learning", "parsing" ]
I would answer the question at two levels. The first level is "can it be done using machine learning?" I would say that machine learning is essentially about learning. So given that you prepare sufficient examples of sample documents and the output to expect from those documents, you can train a network to learn the structure of documents and extract the relevant information. The more general form of extracting information from documents is a well-researched problem and is more commonly known as [Information Retrieval](https://en.wikipedia.org/wiki/Information_retrieval). And it is not limited to just machine learning techniques, you can use Natural Language Processing tools as well. So, in its general form, it is actually being done in practice. Coming to the second level, "should you be doing it using machine learning?". I would agree to what @NeilSlater said. The better and more feasible approach would be to use good programming practices so that you can reuse parts of your parser as your dataset evolves.
Is machine learning the right tool for this job?
There is a lack of detail here, but the problem is very interesting; it sounds a bit like a QA chat-bot (you may read more about these). My thoughts on what is needed for this solution to be viable:
- There should be a sufficient number of users of your wizard to collect the necessary amount of data if you want to use deep learning here. If you only have several hundred cases where the wizard is used, forget it and use some simple statistical analysis (like linear regression, naive Bayes, etc.).
- For the collected data to be variable enough to cover different cases, there should be some randomization in collecting the data: sometimes the wizard should ask questions in a different order and maybe even ask irrelevant questions.
- As for the architecture, if you want to classify sequences of questions/answers, then the most obvious choice is a variant of an RNN. Or maybe you want to classify how good the next question is, in which case a decision tree/random forest could be used. Or maybe you need reinforcement learning? The point is that you should think carefully about what you want to classify and then test it, maybe in several iterations.
So, if the wizard is complex enough and the amount of data you have or can collect is big, then the answer is most probably yes.
10112
1
10124
null
4
1911
I apologize for the lack of terminology; I'm no computer scientist. I have a problem of validating paths in a directed graph with complex nodes. The full description is the following:
- I have a decent set (about 1K) of directed graphs;
- Each node contains a complex data structure (it is a hierarchical data structure, not a picture or sound);
- I have some paths in those graphs known as "correct" paths (based mostly on data in the nodes);
- And I have some paths in those graphs known as "incorrect" paths (with a classification of why each is incorrect).
I'd like to predict, given a graph with those complex nodes and a path, whether this path is "correct". Which machine learning algorithm will suit me best? In general, what approach should I use?
Edit:
- Each full graph either has all paths processed (correct/incorrect) or is completely blank (no path is processed);
- Correctness depends on both the position of a node in the graph AND the data in the node;
- Humans would need heuristics to decide or guess which paths are correct;
- Most of the paths are "correct";
- I hope to convert human heuristics into some kind of "correctness" recognition.
Which machine learning approach/algorithm do I choose for path validation?
CC BY-SA 3.0
null
2016-02-06T22:17:04.323
2019-04-15T18:28:42.470
2016-02-07T20:12:55.557
16042
16042
[ "machine-learning" ]
This is a good question but it's rather complicated. I can suggest two approaches: - Graphical models; specifically Bayesian networks since your graph is directed. - Recurrent neural networks. Here's a talk on popular recent model: Sequence to Sequence Learning with Neural Networks.
What clustering algorithm is appropriate for clustering paths?
Like Ricardo mentioned in his comment on your question, the main step here is finding a distance metric between paths. Then you can experiment with different clustering algorithms and see what works. What comes to mind is [dynamic time warping](https://en.wikipedia.org/wiki/Dynamic_time_warping?oldformat=true) (DTW). DTW gives you a way to find a measure of "distance" (it is actually not strictly a distance metric, but it is close) between two time series. One very useful thing is that it can be used to compare two time series that are of different lengths. There are many good [blog posts](https://jeremykun.com/2012/07/25/dynamic-time-warping/) on DTW, so I won't try to give yet another explanation of it. There are also many [python implementations](https://github.com/wannesm/dtaidistance) of it. And a lot of work has gone into making the algorithm very fast. DTW is a strange algorithm--in some ways very simplistic, but typically works well. Once you modify the algorithm to deal with paths, you can construct the distance matrix and use that for clustering. One common clustering algorithm that is used in conjunction with DTW is [spectral clustering](https://www.wikiwand.com/en/Spectral_clustering), since the distance matrix can be used directly (instead of the matrix of data points, which we don't have here).
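For completeness, the core DTW recurrence is short enough to sketch directly; this is the classic O(n*m) dynamic program, written here for paths of 2-D points (the helper name and the Euclidean point-to-point cost are my own choices, not part of any particular library):
```
import numpy as np

def dtw_distance(path_a, path_b):
    # path_a, path_b: sequences of points (e.g. 2-D coordinates), possibly of different lengths
    a = np.asarray(path_a, dtype=float)
    b = np.asarray(path_b, dtype=float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])  # point-to-point distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```
Computing this for every pair of paths gives the distance matrix; one common way to feed it to spectral clustering is to convert distances to affinities first, e.g. `np.exp(-dist / dist.std())`, since spectral clustering expects similarities rather than distances.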
10151
1
10210
null
5
5397
I am currently reading: > Stephen Jose Hanson: Meiosis Networks, 1990. and I stumbled over this: > It is possible to precisely characterize the search problem in terms of the resources or degrees of freedom in the learning model. If the task the learning system is to perform is classification then the system can be analyzed in terms of its ability to dichotomize stimulus points in feature space. Dichotomization Capability: Network Capacity Using a linear fan-in or hyperplane type neuron we can characterize the degrees of freedom inherent in a network of units with thresholded output. For example, with linear boundaries, consider 4 points, well distributed in a 2-dimensional feature space. There are exactly 14 linearly separable dichotomies that can be formed with the 4 target points. However, there are actually 16 ($2^4$) possible dichotomies of 4 points in 2 dimensions consequently, the number of possible dichotomies or arbitrary categories that are linearly implementable can be thought of as a capacity of the linear network in $k$ dimensions with $n$ examples. What is a "dichotomy" in this case? (Side question: what is a fan-in type neuron?)
What is a Dichotomy?
CC BY-SA 3.0
null
2016-02-09T09:32:54.870
2016-02-11T21:29:01.537
2016-02-11T21:29:01.537
12527
8820
[ "machine-learning", "neural-network" ]
In a machine learning context, a [dichotomy](https://en.wikipedia.org/wiki/Dichotomy) is simply a split of a set into two mutually exclusive subsets whose union is the original set. The point being made in your quoted text is that for four points, a linear boundary can not form all possible dichotomies (i.e., it does not [shatter](https://en.wikipedia.org/wiki/Shattered_set) the set). For example, if the four points are arranged on the corners of a square, a linear boundary can be used to create all possible dichotomies except it cannot produce a boundary that splits the two points lying along one diagonal from the other two points (and vice versa), as you indicated in your own answer.
Terminology question
A model, in general, can be described as a representation of a process. In machine learning, the model refers to something that applies a machine learning algorithm to the given data and gives numerical outputs to make predictions on that data. It is the algorithm that learns the pattern in the data to make the predictions, not the model. The entire process of making the algorithm learn from the data with the highest accuracy is called creating the model. Since machine learning algorithms are based on mathematics, we can refer to a machine learning model as a mathematical representation of the process used to solve the problem at hand, which is to learn patterns in the data to make predictions. To make the model more successful in the real world, we try to increase its accuracy using machine learning techniques like hyperparameter tuning, adding regularization, etc.
10162
1
10167
null
11
20335
I would like to use the knn distance plot to be able to figure out which eps value should I choose for the DBSCAN algorithm. Based on [this](http://www.sthda.com/english/wiki/dbscan-density-based-clustering-for-discovering-clusters-in-large-datasets-with-noise-unsupervised-machine-learning) page: > The idea is to calculate, the average of the distances of every point to its k nearest neighbors. The value of k will be specified by the user and corresponds to MinPts. Next, these k-distances are plotted in an ascending order. The aim is to determine the “knee”, which corresponds to the optimal eps parameter. Using python with numpy/sklearn, I have the following points, with the following distance for 6-knn: ``` X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]]) nbrs = NearestNeighbors(n_neighbors=len(X)).fit(X) distances, indices = nbrs.kneighbors(X) # Indices [[0 1 2 3 4 5] [1 0 2 3 4 5] [2 1 0 3 4 5] [3 4 5 0 1 2] [4 3 5 0 1 2] [5 4 3 0 1 2]] # Distances [[ 0. 1. 2.23606798 2.82842712 3.60555128 5. ] [ 0. 1. 1.41421356 3.60555128 4.47213595 5.83095189] [ 0. 1.41421356 2.23606798 5. 5.83095189 7.21110255] [ 0. 1. 2.23606798 2.82842712 3.60555128 5. ] [ 0. 1. 1.41421356 3.60555128 4.47213595 5.83095189] [ 0. 1.41421356 2.23606798 5. 5.83095189 7.21110255]] ``` then I computed the average distance: ``` distances.mean() 2.9269575028354495 ``` The problem is I don't understand how exactly could I represent the same plot as them with distances in y-axis and number of points according to the distances on the x-axis using python. Thank for your help.
Knn distance plot for determining eps of DBSCAN
CC BY-SA 3.0
null
2016-02-09T16:29:52.363
2019-04-22T06:49:49.127
2016-03-02T15:50:11.347
8878
13915
[ "python", "clustering", "parameter-estimation", "dbscan" ]
You - take the last column of that matrix - sort it in descending order - plot index vs. distance - hope to see a knee (if the distance does not work well, there might be none)
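In code, with the `distances` matrix already computed in the question, that recipe is roughly (matplotlib assumed):
```
import numpy as np
import matplotlib.pyplot as plt

k_dist = np.sort(distances[:, -1])[::-1]  # k-th neighbour distance per point, sorted descending
plt.plot(range(len(k_dist)), k_dist)
plt.xlabel("points sorted by k-NN distance")
plt.ylabel("k-NN distance")
plt.show()                                # look for the knee to pick eps
```
If `n_neighbors` is set to something other than MinPts, pick the column that corresponds to the MinPts-th neighbour instead of the last one.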
Understanding and find the best eps value for DBSCAN
KMeans and DBSCAN are two different types of clustering techniques. The elbow method you used to get the best cluster count applies to K-Means only. You used that value, i.e. K=4, to assign colors to the scatterplot, but the parameter is not used in the DBSCAN fit method; in fact it is not a valid parameter for DBSCAN. You will have to adjust `eps` to control the number of clusters. A fit with eps=6 resulted in 112 clusters. You only need these few lines of code:
```
dbscan2 = DBSCAN(eps=6, min_samples=10).fit(data3)

fig = plt.figure(figsize=(12,7))
# max(dbscan2.labels_) # This is the number of clusters
plt.scatter(data3[:,0], data3[:,1], s=10, edgecolors='none', c=dbscan2.labels_, alpha=0.5, cmap='hsv')
```
[](https://i.stack.imgur.com/tiWER.png)
10168
1
11072
null
6
4118
I'm captivated by autoencoders and really like the idea of convolution. It seems though that both Theano and TensorFlow only support conv2d to go from an array of 2D-RGB (n 3D arrays) to an array of higher-depth images. That makes sense from the traditional tensor-product math, c_ijkl = sum{a_ijk*b_klm}, but means it's hard to 'de-convolve' an image. In both cases, if I have an image (in #batch, depth, height, width form), I can do a conv to get (#batch, num_filters, height/k, width/k). I'd really like to do the opposite, like going from (#batch, some_items, height/k, width/k) to (#batch, depth, height, width). TensorFlow had the hidden deconv2d function for a while (in 0.6, I think, undocumented), but I'd like to know if there's a math trick I can use to get a bigger output in the last two dimensions after a convolution than the input. I'd settle for a series of differentiable operations, like conv -> resize, but I want to avoid just doing a dense matrix multiplication -> resize like I've been doing so far. EDIT: As of today (2016/02/17) TensorFlow 0.7 has the tf.depth_to_space method, which helps greatly in this endeavor. ([https://www.tensorflow.org/api_docs/python/tf/depth_to_space](https://www.tensorflow.org/api_docs/python/tf/depth_to_space)) I would still love a Theano based solution, too, to complete my understanding of the material.
Stuck on deconvolution in Theano and TensorFlow
CC BY-SA 3.0
null
2016-02-09T19:43:01.207
2021-02-16T15:50:54.480
2021-02-16T15:50:54.480
29169
16135
[ "tensorflow", "convolutional-neural-network", "autoencoder", "theano" ]
Things have changed in TensorFlow since this question was asked, but here is a link to doing [conv2d_transpose](https://github.com/loliverhennigh/All-Convnet-Autoencoder-Example). I think that's what you are looking for.
How can I implement deconvolution on CNN (TensorFlow)?
Deconvolution has a very simple structure: unpooling → deconv, like this:
```
import tensorflow as tf
# pooled, h, P, W and the self.* tensors are defined elsewhere in the surrounding model

# Unpooling, implemented via the gradient of the pooling op
Ps = (tf.gradients(pooled, h))[0]
unpooled = tf.multiply(Ps, P)

# Deconv: build the output shape from the original embedded input
batch_size = tf.shape(self.input_x)[0]
ds = [batch_size]
ds.append(self.embedded_chars_expanded.get_shape()[1])
ds.append(self.embedded_chars_expanded.get_shape()[2])
ds.append(self.embedded_chars_expanded.get_shape()[3])
deconv_shape = tf.stack(ds)

deconv = tf.nn.conv2d_transpose(
    unpooled,
    W,
    deconv_shape,
    strides=[1, 1, 1, 1],
    padding='VALID',
    name="Deconv"
)
```
10178
1
10180
null
0
187
I have a pandas data frame of the form: ``` r1 r2 r3 r4 r5 0 1 12 0 4 1 1 2 9 2 32 5 0 0 0 12 14 3 1 23 0 2 43 5 2 9 3 5 1 1 0 0 0 0 1 1 0 0 0 0 ``` And I want to check if any column: `r1, r2, r3, r4, r5` significantly differs from any of the other. Should I do a t test or an anova? And how would I set it up for the computation?
t test or anova
CC BY-SA 3.0
null
2016-02-10T16:37:43.837
2016-02-10T16:46:04.273
null
null
10584
[ "python", "statistics", "pandas" ]
This is a typical statistics problem. When you have multiple 'classes' that you assume are normally distributed, you first run an ANOVA. Then, if and only if the ANOVA is significant, run post-hoc pairwise t-tests with an appropriate correction (e.g. Bonferroni).
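For the data frame in the question (call it `df`), one way to set this up with SciPy is the following sketch; the 0.05 significance level is just the conventional choice:
```
from itertools import combinations
from scipy import stats

cols = ['r1', 'r2', 'r3', 'r4', 'r5']
f_val, p_val = stats.f_oneway(*[df[c] for c in cols])  # one-way ANOVA across the five columns

if p_val < 0.05:                    # only go pairwise if the ANOVA is significant
    pairs = list(combinations(cols, 2))
    alpha = 0.05 / len(pairs)       # Bonferroni-corrected threshold
    for a, b in pairs:
        t, p = stats.ttest_ind(df[a], df[b])
        print(a, b, p, p < alpha)   # flag pairs that differ significantly
```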
How to interpret ANOVA results?
f_val is the F Statistic value. Mathematically it is $ F = \frac{MS_{Between}}{MS_{Within}}$ The null hypothesis for your ANOVA is $H_0: \mu_{Entire home/apt} = \mu_{Private room} = \mu_{Shared room} $ which means all means ($\mu_i$ s) are equal and there is no need of grouping using explanatory variable vs $H_A:$ At least one $\mu_i$ is different. There is a need for grouping The p-value for this test was very very low, hence python returned 0. Anything less than 0.05 is considered low enough to reject the null hypothesis. Independent variables are also called explanatory variables. I believe price is a dependent variable.
10188
1
10192
null
113
68465
I'm just getting started with some machine learning, and until now I have been dealing with linear regression over one variable. I have learnt that there is a hypothesis, which is: $h_\theta(x)=\theta_0+\theta_1x$ To find out good values for the parameters $\theta_0$ and $\theta_1$ we want to minimize the difference between the calculated result and the actual result of our test data. So we subtract $h_\theta(x^{(i)})-y^{(i)}$ for all $i$ from $1$ to $m$. Hence we calculate the sum over this difference and then calculate the average by multiplying the sum by $\frac{1}{m}$. So far, so good. This would result in: $\frac{1}{m}\sum_{i=1}^mh_\theta(x^{(i)})-y^{(i)}$ But this is not what has been suggested. Instead the course suggests to take the square value of the difference, and to multiply by $\frac{1}{2m}$. So the formula is: $\frac{1}{2m}\sum_{i=1}^m(h_\theta(x^{(i)})-y^{(i)})^2$ Why is that? Why do we use the square function here, and why do we multiply by $\frac{1}{2m}$ instead of $\frac{1}{m}$?
Why do cost functions use the square error?
CC BY-SA 3.0
null
2016-02-10T21:52:30.730
2019-02-25T11:03:05.913
2018-01-01T17:27:08.807
44002
16148
[ "machine-learning", "linear-regression", "loss-function" ]
Your loss function would not work because it incentivizes setting $\theta_1$ to any finite value and $\theta_0$ to $-\infty$.

Let's call $r(x,y)=\frac{1}{m}\sum_{i=1}^m \left(h_\theta\left(x^{(i)}\right) -y^{(i)}\right)$ the residual for $h$.

Your goal is to make $r$ as close to zero as possible, not just minimize it. A high negative value is just as bad as a high positive value.

EDIT: You can counter this by artificially limiting the parameter space $\mathbf{\Theta}$ (e.g. you want $|\theta_0| < 10$). In this case, the optimal parameters would lie on certain points on the boundary of the parameter space. See [https://math.stackexchange.com/q/896388/12467](https://math.stackexchange.com/q/896388/12467). This is not what you want.

## Why do we use the square loss

The squared error forces $h(x)$ and $y$ to match. Writing $u = h(x)$ and $v = y$, it's minimized at $u=v$, if possible, and is always $\ge 0$, because it's the square of the real number $u-v$.

$|u-v|$ would also work for the above purpose, as would $(u-v)^{2n}$, with $n$ some positive integer. The first of these is actually used (it's called the $\ell_1$ loss; you might also come across the $\ell_2$ loss, which is another name for squared error).

So, why is the squared loss better than these? This is a deep question related to the link between Frequentist and Bayesian inference. In short, the squared error relates to Gaussian Noise.

If your data does not fit all points exactly, i.e. $h(x)-y$ is not zero for some point no matter what $\theta$ you choose (as will always happen in practice), that might be because of noise. In any complex system there will be many small independent causes for the difference between your model $h$ and reality $y$: measurement error, environmental factors etc. By the [Central Limit Theorem](https://en.wikipedia.org/wiki/Central_limit_theorem) (CLT), the total noise would be distributed Normally, i.e. according to the Gaussian distribution. We want to pick the best fit $\theta$ taking this noise distribution into account. Assume $R = h(X)-Y$, the part of $\mathbf{y}$ that your model cannot explain, follows the Gaussian distribution $\mathcal{N}(\mu,\sigma)$. We're using capitals because we're talking about random variables now.

The Gaussian distribution has two parameters, mean $\mu = \mathbb{E}[R] = \frac{1}{m} \sum_i \left(h_\theta(X^{(i)})-Y^{(i)}\right)$ and variance $\sigma^2 = \mathbb{E}[R^2] = \frac{1}{m} \sum_i \left(h_\theta(X^{(i)})-Y^{(i)}\right)^2$. See [here](https://math.stackexchange.com/questions/518281/how-to-derive-the-mean-and-variance-of-a-gaussian-random-variable) to understand these terms better.

- Consider $\mu$, it is the systematic error of our measurements. Use $h'(x) = h(x) - \mu$ to correct for systematic error, so that $\mu' = \mathbb{E}[R']=0$ (exercise for the reader). Nothing else to do here.

- $\sigma$ represents the random error, also called noise. Once we've taken care of the systematic noise component as in the previous point, the best predictor is obtained when $\sigma^2 = \frac{1}{m} \sum_i \left(h_\theta(X^{(i)})-Y^{(i)}\right)^2$ is minimized. Put another way, the best predictor is the one with the tightest distribution (smallest variance) around the predicted value. Minimizing the least squared loss is the same thing as minimizing the variance! That explains why the least squared loss works for a wide range of problems. The underlying noise is very often Gaussian, because of the CLT, and minimizing the squared error turns out to be the right thing to do!

To simultaneously take both the mean and variance into account, we include a bias term in our classifier (to handle systematic error $\mu$), then minimize the square loss.

Followup questions:

- Least squares loss = Gaussian error. Does every other loss function also correspond to some noise distribution? Yes. For example, the $\ell_1$ loss (minimizing absolute value instead of squared error) corresponds to the Laplace distribution (Look at the formula for the PDF in the infobox -- it's just the Gaussian with $|x-\mu|$ instead of $(x-\mu)^2$). A popular loss for probability distributions is the KL-divergence. The Gaussian distribution is very well motivated because of the Central Limit Theorem, which we discussed earlier. When is the Laplace distribution the right noise model? There are some circumstances where it comes about naturally, but it's more commonly used as a regularizer to enforce sparsity: the $\ell_1$ loss is the least convex among all convex losses.

  As Jan mentions in the comments, the minimizer of squared deviations is the mean and the minimizer of the sum of absolute deviations is the median. Why would we want to find the median of the residuals instead of the mean? Unlike the mean, the median isn't thrown off by one very large outlier. So, the $\ell_1$ loss is used for increased robustness. Sometimes a combination of the two is used.

- Are there situations where we minimize both the Mean and Variance? Yes. Look up Bias-Variance Trade-off. Here, we are looking at a set of classifiers $h_\theta \in H$ and asking which among them is best. If we ask which set of classifiers is the best for a problem, minimizing both the bias and variance becomes important. It turns out that there is always a trade-off between them and we use regularization to achieve a compromise.

## Regarding the $\frac{1}{2}$ term

The 1/2 does not matter and actually, neither does the $m$ - they're both constants. The optimal value of $\theta$ would remain the same in both cases.

- The expression for the gradient becomes prettier with the $\frac{1}{2}$, because the 2 from the square term cancels out. When writing code or algorithms, we're usually concerned more with the gradient, so it helps to keep it concise. You can check progress just by checking the norm of the gradient. The loss function itself is sometimes omitted from code because it is used only for validation of the final answer.

- The $m$ is useful if you solve this problem with gradient descent. Then your gradient becomes the average of $m$ terms instead of a sum, so its scale does not change when you add more data points. I've run into this problem before: I test code with a small number of points and it works fine, but when you test it with the entire dataset there is loss of precision and sometimes over/under-flows, i.e. your gradient becomes nan or inf. To avoid that, just normalize w.r.t. the number of data points.

- These aesthetic decisions are used here to maintain consistency with future equations where you'll add regularization terms. If you include the $m$, the regularization parameter $\lambda$ will not depend on the dataset size $m$ and it will be more interpretable across problems.
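To make the $\frac{1}{2}$ point concrete (a small addition, assuming the linear hypothesis $h_\theta(x)=\theta_0+\theta_1 x$ from the question with the convention $x_0^{(i)}=1$), differentiating the cost gives

$$\frac{\partial}{\partial \theta_j}\,\frac{1}{2m}\sum_{i=1}^m\left(h_\theta(x^{(i)})-y^{(i)}\right)^2 \;=\; \frac{1}{m}\sum_{i=1}^m\left(h_\theta(x^{(i)})-y^{(i)}\right)x_j^{(i)},$$

so the $2$ produced by the square exactly cancels the $\frac{1}{2}$ and the gradient-descent update stays clean.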
To simultaneously take both the mean and variance into account, we include a bias term in our classifier (to handle systematic error $\mu$), then minimize the square loss. Followup questions: - Least squares loss = Gaussian error. Does every other loss function also correspond to some noise distribution? Yes. For example, the $\ell_1$ loss (minimizing absolute value instead of squared error) corresponds to the Laplace distribution (Look at the formula for the PDF in the infobox -- it's just the Gaussian with $|x-\mu|$ instead of $(x-\mu)^2$). A popular loss for probability distributions is the KL-divergence. -The Gaussian distribution is very well motivated because of the Central Limit Theorem, which we discussed earlier. When is the Laplace distribution the right noise model? There are some circumstances where it comes about naturally, but it's more commonly as a regularizer to enforce sparsity: the $\ell_1$ loss is the least convex among all convex losses. As Jan mentions in the comments, the minimizer of squared deviations is the mean and the minimizer of the sum of absolute deviations is the median. Why would we want to find the median of the residuals instead of the mean? Unlike the mean, the median isn't thrown off by one very large outlier. So, the $\ell_1$ loss is used for increased robustness. Sometimes a combination of the two is used. - Are there situations where we minimize both the Mean and Variance? Yes. Look up Bias-Variance Trade-off. Here, we are looking at a set of classifiers $h_\theta \in H$ and asking which among them is best. If we ask which set of classifiers is the best for a problem, minimizing both the bias and variance becomes important. It turns out that there is always a trade-off between them and we use regularization to achieve a compromise. ## Regarding the $\frac{1}{2}$ term The 1/2 does not matter and actually, neither does the $m$ - they're both constants. The optimal value of $\theta$ would remain the same in both cases. - The expression for the gradient becomes prettier with the $\frac{1}{2}$, because the 2 from the square term cancels out. When writing code or algorithms, we're usually concerned more with the gradient, so it helps to keep it concise. You can check progress just by checking the norm of the gradient. The loss function itself is sometimes omitted from code because it is used only for validation of the final answer. - The $m$ is useful if you solve this problem with gradient descent. Then your gradient becomes the average of $m$ terms instead of a sum, so its' scale does not change when you add more data points. I've run into this problem before: I test code with a small number of points and it works fine, but when you test it with the entire dataset there is loss of precision and sometimes over/under-flows, i.e. your gradient becomes nan or inf. To avoid that, just normalize w.r.t. number of data points. - These aesthetic decisions are used here to maintain consistency with future equations where you'll add regularization terms. If you include the $m$, the regularization parameter $\lambda$ will not depend on the dataset size $m$ and it will be more interpretable across problems.
Why do we have to divide by 2 in the ML squared error cost function?
It is simple. When you take the derivative of the cost function, which is used to update the parameters during gradient descent, the $2$ from the exponent cancels with the $\frac{1}{2}$ multiplier, so the derivation is cleaner. This technique, or something similar, is widely used in math in order "to make the derivations mathematically more convenient". You can simply remove the multiplier, see [here](http://mccormickml.com/2014/03/04/gradient-descent-derivation/) for example, and expect the same result.
10189
1
10194
null
5
3392
I want to implement the K-means algorithm in Spark. I am looking for a starting point and I found [Berkeley's naive implementation](http://www.cs.berkeley.edu/~rxin/ampcamp-ecnu/machine-learning-with-spark.html). However, is that implementation distributed? I see no MapReduce operations. Or maybe, when it is submitted to Spark, the framework actually does the necessary work under the hood to distribute the algorithm? I also found that [Spark shows mapreduce the exit](http://www.infoworld.com/article/2982475/big-data/spark-15-shows-mapreduce-the-exit.html), and I am using Spark 1.6. --- EDIT: This code produces a runtime error; check [here](https://stackoverflow.com/questions/35373478/mapping-of-elements-gone-bad).
Distributed k-means in Spark
CC BY-SA 3.0
null
2016-02-10T22:53:49.620
2016-02-12T23:02:16.177
2017-05-23T12:38:53.587
-1
15927
[ "clustering", "k-means", "apache-spark", "distributed", "scala" ]
In that link you posted, you can look at the full Python solution [here](http://www.cs.berkeley.edu/~rxin/ampcamp-ecnu/machine-learning-with-spark.html#solution_11) at the end and go through it to see what is distributed. In short, some parts are distributed, like reading data from the file, but the very important parts like the distance computation are not.

Running down, we see:

> sc = SparkContext("local[6]", "PythonKMeans")

This instantiates the context and creates a local cluster which the jobs will be submitted to.

> lines = sc.textFile(..)

This is still setting up. No operations have taken place yet. You can verify this by putting timing statements in the code.

> data = lines.map(lambda x: (x.split("#")[0], parseVector(x.split("#")[1])))

The lambda here will be applied to `lines`, so this operation will split the file in parallel. Note that the actual line also has a `cache()` at the end (see [cache](http://spark.apache.org/docs/latest/quick-start.html#caching)). `data` is just a reference to the Spark object in memory. (I may be wrong here, but I think the operation still doesn't happen yet.)

> count = data.count()

This forces the parallel computation to start, and the count to be stored. At the end, the reference `data` is still valid, and we'll use it for further computations.

I'll stop with detailed explanations here, but wherever `data` is being used, a parallel computation is possible. The Python code itself is single-threaded and interfaces with the Spark cluster.

An interesting line is:

> tempDist = sum(np.sum((centroids[x] - y) ** 2) for (x, y) in newCentroids.iteritems())

`centroids` is an object in Python memory, as is `newCentroids`. So, at this point, all computations are being done in memory and on the client (typically clients are slim, i.e. have limited capabilities, or the client is an SSH shell, so the computer's resources are shared; you should ideally never do any heavy computation here), so no parallelization is being used. You could optimize this method further by doing this computation in parallel, as in the sketch below. Ideally you want the Python program to never directly handle individual points' $x$ and $y$ values.
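A hedged PySpark sketch (not from the linked tutorial) of keeping the heavy per-point work on the cluster and only shipping the k new centroids back to the driver; it assumes `sc` is an existing SparkContext, `points` is an RDD of NumPy vectors, and `centroids` is a small list of k NumPy arrays.

```python
import numpy as np

def closest_centroid(p, centers):
    # index of the nearest centroid to point p
    return int(np.argmin([np.sum((p - c) ** 2) for c in centers]))

centroids_bc = sc.broadcast(centroids)          # ship the small object to the workers once

assigned = points.map(lambda p: (closest_centroid(p, centroids_bc.value), (p, 1)))
sums = assigned.reduceByKey(lambda a, b: (a[0] + b[0], a[1] + b[1]))
new_centroids = sums.mapValues(lambda s: s[0] / s[1]).collectAsMap()  # only k items return

# The convergence check is now over k small vectors, which is cheap on the driver.
tempDist = sum(np.sum((centroids[i] - new_centroids[i]) ** 2) for i in new_centroids)
```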
SPARK RDD - Clustering - K-Means
This is well answered in this earlier question: [https://stackoverflow.com/q/31447141/1060350](https://stackoverflow.com/q/31447141/1060350) Beware that Spark k-means is slow. If your data fits into main memory (i.e. a few gigabytes, which means billions of vectors!) then other tools such as ELKI that don't have the cluster overhead will be much faster. Use Spark only for preprocessing the data, e.g. if you have several TB of JSON and you need to first extract the numbers out of the JSON — that is where Spark shines. Once your data is vectors, use ELKI instead; it's much faster.
10193
1
10196
null
8
24648
I'm trying to work out if I'm correctly interpreting a decision tree found online. - The dependent variable of this decision tree is Credit Rating which has two classes, Bad or Good. The root of this tree contains all 2464 observations in this dataset. - The most influential attribute to determine how to classify a good or bad credit rating is the Income Level attribute. - The majority of the people (454 out of 553) in our sample that had a less than low income also had a bad credit rating. If I was to launch a premium credit card without a limit I should ignore these people. - If I were to use this decision tree for predictions to classify new observations, are the largest number of class in a leaf used as the prediction? E.g. Observation x has medium income, 7 credit cards and 34 years old. Would the predicted classification for credit rating = "Good" - Another new observation could be Observation Y, which has less than low income so their credit rating = "Bad" Is this the correct way to interpret a decision tree or have I got this completely wrong? [](https://i.stack.imgur.com/QI0QU.jpg)
How to interpret a decision tree correctly?
CC BY-SA 3.0
null
2016-02-11T01:47:47.487
2016-02-11T06:34:10.360
2016-02-11T02:22:52.397
11097
16179
[ "predictive-modeling", "decision-trees" ]
Let me evaluate each of your observations one by one, so that it is clearer:

> The dependent variable of this decision tree is Credit Rating which has two classes, Bad or Good. The root of this tree contains all 2464 observations in this dataset.

If `Good, Bad` is what you mean by credit rating, then yes. And you are right with the conclusion that all 2464 observations are contained in the root of the tree.

> The most influential attribute to determine how to classify a good or bad credit rating is the Income Level attribute.

Debatable. It depends on how you consider something to be influential. Some might argue that the number of cards is the most influential, and some might agree with your point. So, you are both right and wrong here.

> The majority of the people (454 out of 553) in our sample that had a less than low income also had a bad credit rating. If I was to launch a premium credit card without a limit I should ignore these people.

Yes, but it would also be better if you considered the probability of a bad credit rating for these people. Even then, the decision would turn out to be NO for this class, which makes your observation correct again.

> If I were to use this decision tree for predictions to classify new observations, are the largest number of class in a leaf used as the prediction? E.g. Observation x has medium income, 7 credit cards and 34 years old. Would the predicted classification for credit rating = "Good"

It depends on the probability. So, [calculate the probability](https://social.msdn.microsoft.com/Forums/sqlserver/en-US/97c9ce39-024f-450f-8b21-a2d2961d8be7/decision-trees-how-is-prediction-probability-calculated?forum=sqldatamining) from the leaves and then make a decision depending on that. Or, much simpler, use a library like sklearn's decision tree classifier to do that for you (see the sketch below).

> Another new observation could be Observation Y, which has less than low income so their credit rating = "Bad"

Again, same as the explanation above.

> Is this the correct way to interpret a decision tree or have I got this completely wrong?

Yes, this is a correct way of interpreting decision trees. Your judgement might sway when it comes to the selection of influential variables, but that is dependent on a lot of factors, including the problem statement, the construction of the tree, the analyst's judgement, etc.
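To illustrate the "calculate the probability from the leaves" advice, here is a small, self-contained sklearn sketch; the data is invented, since the credit dataset behind the pictured tree is not available here.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Toy stand-in for the credit data: income level (0=low, 1=medium, 2=high),
# number of credit cards, age -> rating (0=Bad, 1=Good).
rng = np.random.default_rng(0)
X = np.c_[rng.integers(0, 3, 500), rng.integers(0, 10, 500), rng.integers(18, 70, 500)]
y = ((X[:, 0] >= 1) & (X[:, 1] < 6)).astype(int)   # made-up labelling rule

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

new_obs = np.array([[1, 7, 34]])     # medium income, 7 cards, 34 years old
print(clf.predict(new_obs))          # majority class of the leaf the observation falls into
print(clf.predict_proba(new_obs))    # class proportions in that leaf
```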
Interpreting 'values' of a Decision Tree
This answer was submitted by the user @Vlad_Z These values represent the weighted observations for each class, i.e. number of observations per class multiplied by the respective class weight. Since your class weights aren't integers, the resulting values are the way they are. If you want to get class counts, you can simply divide your values by class weights.
10197
1
10285
null
3
340
I would like to develop a soccer field segmentation method. For this purpose, I prepared a training image data set and annotated field and non-field pixels. Following is a gr-chromacity plot of all training samples, colored with respect to their labels. [](https://i.stack.imgur.com/QObI3.png) I want to train a classifier for inferring the label of a new sample. The first approach comes to my mind is using Gaussian mixture models to model both distributions. Would you recommend another method for this purpose?
Soccer Field Segmentation
CC BY-SA 3.0
null
2016-02-11T09:47:09.200
2016-02-17T17:37:15.280
null
null
16195
[ "machine-learning", "classification", "statistics", "svm", "naive-bayes-classifier" ]
I would not suggest GMM at this point, as the distribution of points in the space is not well-shaped enough. Even if you want to use it, it's better to look at your data in PC space first (i.e. using PCA). My suggestions would be:

1) Think of your features. What are they? Are you going to use these gr-chromacity values as features? If yes, you should know that kernel methods work better here, as the features are highly nonlinear. The image shows that you need a feature mapping anyway.

2) It seems you have already thought of kernel methods, as you put SVM as a tag. You can use it for classification; it might work better than GMM here. Also think of probabilistic graphical models, as they have been used intensively for image segmentation and your images are structured enough (a football field has a fixed position in the image anyway).

3) If you have a raw labeled dataset, I'd recommend thinking of smarter features for segmentation. In gr-chromacity you already lose some information about colors, which is the most important thing for you here. I would recommend taking the position of pixels into account as well. Then a PCA on the new data may reveal some more linearly separable classes.
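As a concrete starting point for (1) and (2), here is a hedged sklearn sketch of an RBF-kernel SVM on per-pixel features; the feature columns and arrays are placeholders, since the questioner's annotated pixels are not shown here.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder features: one row per labelled pixel, e.g. [g_chroma, r_chroma, x_pos, y_pos]
X = np.random.rand(2000, 4)               # stand-in for the annotated pixels
y = (X[:, 0] + X[:, 1] > 1).astype(int)   # stand-in for field / non-field labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
grid = GridSearchCV(model, {"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.1, 1]}, cv=3)
grid.fit(X_tr, y_tr)
print(grid.best_params_, grid.score(X_te, y_te))
```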
Active Mask - image segmentation
The [defaultdict](https://docs.python.org/3/library/collections.html#collections.defaultdict) in this case is meant to return the next increasing integer each time a new element is encountered in `P.flat()`, but also to return the same integer each time that same element is found again. You should have encountered the error:

> TypeError: first argument must be callable or None

`defaultdict` expects a callable. It will call the passed-in function each time there is a lookup miss, and set the `dict` entry to that value. To fix that error you need to remove the `()` from the `__next__()` to leave:

```
mis = defaultdict(range(1, P.max() + 1).__iter__().__next__)
```

However this is overly complicated for what it is doing, and can be simplified a bit by recognizing that there is no reason to have the upper limit in the `range`; it is just an increasing integer, which can be done with [itertools.count()](https://docs.python.org/3/library/itertools.html#itertools.count) like:

```
import itertools as it

mis = defaultdict(it.count(1).__next__)
```
10204
1
10207
null
9
3725
When implementing mini-batch gradient descent for neural networks, is it important to take random elements in each mini-batch? Or is it enough to shuffle the elements at the beginning of the training once? (I'm also interested in sources which definitely say what they do.)
Should I take random elements for mini-batch gradient descent?
CC BY-SA 3.0
null
2016-02-11T16:35:19.590
2020-08-05T19:48:35.610
null
null
8820
[ "machine-learning", "neural-network" ]
It should be enough to shuffle the elements at the beginning of the training and then to read them sequentially. This really achieves the same objective as taking random elements every time, which is to break any sort of predefined structure that may exist in your original dataset (e.g. all positives in the beginning, sequential images, etc). While it would work to fetch random elements every time, this operation is typically not optimal performance-wise. Datasets are usually large and are not saved in your memory with fast random access, but rather in your slow HDD. This means sequential reads are pretty much the only option you have for good performance. Caffe for example uses LevelDB, which does not support efficient random seeking. See [this](https://github.com/BVLC/caffe/issues/1087), which confirms that the dataset is trained with images always in the same order.
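A minimal NumPy sketch of that "shuffle once, then read sequentially" scheme (my own illustration with made-up data):

```python
import numpy as np

# toy data standing in for a real training set
X = np.random.rand(1000, 10)
y = np.random.randint(0, 2, size=1000)

# shuffle once, at the beginning of training, to break any predefined structure
order = np.random.permutation(len(X))
X, y = X[order], y[order]

batch_size = 32
for epoch in range(3):
    # then simply read the data sequentially, epoch after epoch
    for start in range(0, len(X), batch_size):
        xb, yb = X[start:start + batch_size], y[start:start + batch_size]
        # ... forward pass, backward pass and parameter update would go here ...
```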
Stochastic Gradient Descent Batching
In SGD you just feed an example to your model, compute the gradient of the loss function of that example and update the weights according to the gradient of the loss of that example. In mini-batch gradient descent you feed a batch to your model, compute the gradient of the loss of that batch and update the weights according to the gradient of the loss of that batch. In fact, SGD is mini-batch gradient descent with batch size equal to 1.
10211
1
10795
null
4
7259
Does anybody know of a Python library to retrieve sentiment from Russian text? A dictionary with sentiment parameterization would be OK too. The idea is a library something like GPOMS in this [article](http://arxiv.org/abs/1010.3003).
Sentiment retriving from text (Russian)
CC BY-SA 3.0
null
2016-02-11T21:52:50.030
2019-01-10T18:52:31.710
2017-05-11T14:26:24.317
31513
9992
[ "python", "nlp", "sentiment-analysis" ]
If you are seeking a working solution, I know of an API that supports many languages, including Russian: [indico.io Text Analysis sentiment()](https://indico.io/docs#sentiment)

```
>>> import indicoio
>>> indicoio.config.api_key = YOUR_API_KEY

>>> indicoio.sentiment(u"Это круто, убивает! Хочу.", language='ru')
0.6978093435482927

>>> indicoio.sentiment(u"Ты кто такой? Давай досвидания", language='ru')
0.13258737684773209
```

Note that the `language` parameter is optional. (Obviously it's not a lib, but they offer a Python client and their free tier is generous enough.)

Update: As of Q2 2018, the Google and IBM sentiment analysis APIs still do not support Russian.
Sentimental Analysis on Twitter Data
You should look at the literature on unsupervised sentiment analysis. The paper by Peter Turney could be a good starting point: Thumbs Up or Thumbs Down? Semantic Orientation Applied to Unsupervised Classification of Reviews, Turney 2002. You can also check this if you use R: [https://datascienceplus.com/unsupervised-learning-and-text-mining-of-emotion-terms-using-r/](https://datascienceplus.com/unsupervised-learning-and-text-mining-of-emotion-terms-using-r/)
10216
1
10230
null
22
17470
I am wondering how to label (tag) sentences / paragraphs / documents with doc2vec in gensim - from a practical standpoint. Do you need to have each sentence / paragraph / document with its own unique label (e.g. "Sent_123")? This seems useful if you want to say "what words or sentences are most similar to a single specific sentence labeled "Sent_123". Can you have the labels be repeated based on content? For example if each sentence / paragraph / document is about a certain product item (and there are multiple sentence / paragraph / document for a given product item) can you label the sentences based on the item and then compute the similarity between a word or a sentence and this label (which I guess would be like an average of all those sentences that had to do with the product item)?
Doc2Vec - How to label the paragraphs (gensim)
CC BY-SA 3.0
null
2016-02-12T02:22:01.940
2016-02-12T22:27:54.073
null
null
1138
[ "machine-learning", "text-mining", "word-embeddings", "word2vec" ]
Both are possible. You can give every document a unique ID (such as a sequential serial number) as a doctag, or a shared string doctag representing something else about it, or both at the same time. The TaggedDocument constructor takes a list of tags.

(If you happen to limit yourself to plain ints ascending from 0, the Doc2Vec model will use those as direct indexes into its backing array, and you'll save a lot of memory that would otherwise be devoted to a string -> index lookup, which could be important for large datasets. But you can use string doctags or even a mixture of int and string doctags.)

You'll have to experiment with what works best for your needs. For some classification tasks, an approach that's sometimes worked better than I would have expected is skipping per-text IDs entirely, and just training the Doc2Vec model with known-class examples, with the desired classes as the doctags. You then get 'doc vectors' just for the class doctags – not every document – a potentially much smaller model. Later inferring vectors for new texts results in vectors meaningfully close to related class doc vectors.
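A hedged gensim sketch of the two tagging styles described above; the texts and product labels are invented, and some attribute names (e.g. `dv` vs `docvecs`) differ between gensim versions.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

raw = [
    ("great picture quality on this tv", "TV_A"),
    ("sound is tinny but the screen is sharp", "TV_A"),
    ("battery of this phone lasts two days", "PHONE_B"),
    ("phone camera struggles in low light", "PHONE_B"),
]

# Each document gets a unique per-document tag *and* a shared product tag.
corpus = [
    TaggedDocument(words=text.split(), tags=[f"doc_{i}", product])
    for i, (text, product) in enumerate(raw)
]

model = Doc2Vec(vector_size=50, min_count=1, epochs=40)
model.build_vocab(corpus)
model.train(corpus, total_examples=model.corpus_count, epochs=model.epochs)

new_vec = model.infer_vector("screen looks great".split())
# In gensim 4.x the doctag vectors live in model.dv (model.docvecs in 3.x).
print(model.dv.most_similar([new_vec], topn=2))
```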
Doc2vec(gensim) - How to calculate the most similar sentence and get its label?
As far as I understand, you are using the type of TV as the tag of each sentence, and you are using the doc2vec model for future classification. So, as the answer above suggests, the model will learn the semantic meaning of the type of TV (the tag). Let's suppose `s` is your future sentence for prediction; then you use `infer_vector`:

```
model = Doc2Vec.load('model.doc2vec')
infer_vector = model.infer_vector(s)   # s should be a list of tokens
similar_documents = model.docvecs.most_similar([infer_vector], topn=1)
```

Here `similar_documents` is a list of tuples, where the first element of each tuple is the label. Let me know if this helps you.
10223
1
10701
null
0
1104
I am using the rpart package in order to create a segmentation of my data using a decision tree. As the final result I want to obtain a classification of my data. For example, if rpart divides the data into 3 classes, I want to split my data into these three subsets such that I know that row n°1 is in subset 1, row n°2 in subset 3, etc. I can't find how to get this information from the results of the rpart object.

```
tree.res <- rpart(x ~ ., data, method="class", parms=list(split = "gini"))
```

I know that I can retrieve the results of this classification as follows:

```
plot(tree) #plot the tree
text(tree)
printcp(tree) #Displays CP table for Fitted Rpart Object
predict(tree) #displays prediction results of x variable
```

How can I get the information on the segmentation of my data set? As a first attempt I used:

```
predict(tree, type = "vector")
```

Is this a correct approach?
How to retrieve the clustering results of rpart
CC BY-SA 3.0
null
2016-02-12T15:25:52.077
2016-03-14T15:44:26.867
null
null
16152
[ "machine-learning", "r", "clustering", "decision-trees" ]
I found an answer to my question using the package partykit, I post it to help others. ``` tree_party <- as.party(tree) #coercing the rpart.object into partykit tree_fit <- fitted(tree_party) #a two-column data.frame with the fitted node numbers and the observed responses on the training data. tree_pred <- predict(tree_party, newdata = data, type = "node") ```
rpart and rpart2
Both `rpart` and `rpart2` implement a CART and wrap the `rpart` function from the rpart library. The difference is the constraints on the model each enforces. `rpart` uses the complexity parameter, `cp`, while `rpart2` uses the max tree depth, `maxdepth`. See: `train_model_list` section of [http://cran.r-project.org/web/packages/caret/caret.pdf](http://cran.r-project.org/web/packages/caret/caret.pdf) and `rpart.control` section of [http://cran.r-project.org/web/packages/rpart/rpart.pdf](http://cran.r-project.org/web/packages/rpart/rpart.pdf).
10248
1
10251
null
0
89
As a total beginner I am trying to apply some "predictions" on top of a bunch of CSV files which contain house transactions for the last 20 years, divided per area. What I would like to predict is the trend of the transactions for, let's say, the next year for a specific area. What general steps would you follow to analyse those data and then predict? I read different articles, but what I am looking for is a sort of "best general practice" for this sort of problem.
How to make data predictions
CC BY-SA 3.0
null
2016-02-15T08:30:39.100
2016-02-15T09:42:51.110
null
null
16280
[ "data-mining", "predictive-modeling" ]
Regression will work well if your data set is large, but only for predicting current house prices (say, for example, estimating the value of your house). That's what people generally mean when they talk about predicting house prices from current house sales data. The question of how house prices will behave in the next year is much, much more complicated, and would not depend simply on the data you currently have. You would need to involve other information and a much more complex model, which would need to include things like the current level of household debt, the inflation rate, the economic outlook, etc. Daunting. Generally, speculative prices follow some stochastic process: they depend on the current value, but diverge more and more the farther you go into the future.
How to start prediction from dataset?
The problem you are facing is a [time series](https://en.wikipedia.org/wiki/Time_series) problem. Your events are categorial which is a specific case (so most common techniques like [arima](https://en.wikipedia.org/wiki/Autoregressive_integrated_moving_average) and [Fourier transform](https://en.wikipedia.org/wiki/Fourier_transform) are irrelevant). Before getting into the analysis, try to find out whether the events among nodes are independent. If they are independent, you can break them into sequences per node and analyze them. If they are not (e.g., "Main power supply has a fault alarm" on node x indicates the same event on node y) you should the combined sequence. Sometime even when the the sequence are dependent you can gain from using the per node sequence as extra data. You data set is quite large, which means that computation will take time. You data is probably noisy, so you will probably have some mistakes. Therefore, I recommend advancing in small steps from simple models to more complex ones. Start with descriptive statistics, just to explore the data. How many events do you have? How common are they? What is the probability of the events that you try to predict? Can you remove some of the events as meaningless (e.g., by using domain knowledge)? In case you have domain knowledge that indicates that recent events are the important ones, I would have try predicting based on the n last events. Start with 1 and grow slowly since the number of combinations will grow very fast and the number of samples you will have for each combination will become small and might introduce errors. Incase that the important event are not recent, try to condition on these events in the past. In most cases such simple model will help you get a bit above the baseline but not too much. Then you will need more complex models. I recommend using [association rules](https://en.wikipedia.org/wiki/Association_rule_learning) that fit your case and have plenty of implementations. You can further advance more but try these technique first. The techniques mentioned before will give you a model the will predict the probability that a node will be down, answering your question (ii). Running it on the sequence of the nodes will enable you to predict the number of nodes that will fail answering question (i).
10288
1
10294
null
2
370
I have a data set that's a dictionary of tuples. Each key represents an ID number and each tuple is (yesvotes, totalvotes). Example: `{17: (6, 10), 18: (1, 1), 21: (0, 2), 26: (1, 1), 27: (3, 4), 13: (2, 2)}` I need to find the max key of the set. I want to assign weights so, for instance, key 17 would be ranked higher than key 18 because even though the ratio is much smaller, it has ten times the total votes. Is there an optimal way to do this? My best guess is simply calculate new ratios by `(yesvotes/totalvotes)*(totalvotes+1)` but that doesn't seem right... Is there some kind of standardized field of study concerning fair-voting?
Sort by average votes/ratings
CC BY-SA 3.0
null
2016-02-17T20:43:08.740
2016-02-18T14:26:15.610
2016-02-18T14:26:15.610
15527
13165
[ "python", "data-cleaning", "weighted-data", "data-wrangling" ]
Yes, this is a well-studied problem: rank aggregation. [Here](http://www.evanmiller.org/how-not-to-sort-by-average-rating.html) is a solution with code. The problem is that the quantity you are trying to estimate, the "score" of the item, is subject to noise. The fewer votes you have the greater the noise. Therefore you want to consider the variance of your estimates when ranking them.
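The linked article's approach is the lower bound of the Wilson score confidence interval for the "yes" proportion; a small Python sketch of it applied to the dictionary format from the question (my own illustration) could look like this:

```python
import math

def wilson_lower_bound(yes, total, z=1.96):
    """Lower bound of the Wilson score interval for the 'yes' proportion (95% by default)."""
    if total == 0:
        return 0.0
    p = yes / total
    centre = p + z * z / (2 * total)
    spread = z * math.sqrt((p * (1 - p) + z * z / (4 * total)) / total)
    return (centre - spread) / (1 + z * z / total)

votes = {17: (6, 10), 18: (1, 1), 21: (0, 2), 26: (1, 1), 27: (3, 4), 13: (2, 2)}
ranked = sorted(votes, key=lambda k: wilson_lower_bound(*votes[k]), reverse=True)
print(ranked)  # key 17 (6/10) now outranks key 18 (1/1), as the question wants
```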
Median versus Average, how to choose?
You haven't asked a proper statistical question, so the choice of mean or median as "best" as a measure of your runtime is unanswerable. Have you looked at the distribution of run times? Is the algorithm intrinsically variable in its run-time, or is it fixed in its run time but the run times differ because of noise caused by the OS doing other things? Do you want to remove that noise? What if the OS suddenly decides to swap to disk for a bit, or a big network data packet arrives, and the OS goes and does something for a few ms. You could get a long run time for one of your times, and that could pull the mean value way off. The median is a robust estimator which means a single "bad" value can't throw it off. The mean can be thrown off by a single "rogue" value. Is that what you want? Maybe you do.
10296
1
10298
null
0
60
Machine learning has been hyped since the rise of deep neural networks. It seems to me that you have to program in order to do machine learning. But isn't the process of training and labeling data the same for every problem? Why isn't there an Excel-like application that enables thousands of non-experts to do machine learning? Disclaimer: I am not a data scientist.
Why is there no end user Application, yet?
CC BY-SA 3.0
null
2016-02-18T00:08:09.197
2016-02-18T02:35:39.410
null
null
16353
[ "machine-learning" ]
Listing 2 examples:

[IBM Watson Analytics](https://www.ibm.com/marketplace/cloud/watson-analytics/us/en-us)

[Amazon ML use case](https://www.ibm.com/marketplace/cloud/watson-analytics/us/en-us)

Preparing the data for supervised learning requires skills. Not all data comes labeled and in a form that can be used directly to solve the problem at hand. Also, many more platforms/APIs are on the market now, but you certainly can't solve a problem with only one algorithm; much more is needed. Hope it helps.
What common/simple problem would work well as a web app?
Maybe you could try solving easy classification problems like with [Iris Dataset](https://scikit-learn.org/stable/auto_examples/datasets/plot_iris_dataset.html) or [Titanic Dataset](https://github.com/minsuk-heo/kaggle-titanic/tree/master/input). You'll find many tutorials dealing with those subjects, and they are basic and famous exercices for someone starting in Data Science.
10299
1
15438
null
2
6903
I am trying to find a resource to understand non-negative matrix factorization. Apart from Wikipedia, I couldn't find anything useful.
What is a good explanation of Non Negative Matrix Factorization?
CC BY-SA 3.0
null
2016-02-18T04:25:38.627
2016-11-30T23:07:55.847
2016-11-30T23:07:55.847
26596
14652
[ "nlp", "text-mining", "dimensionality-reduction", "feature-engineering", "reference-request" ]
Non-Negative Matrix Factorization (NMF) is described well in the paper by [Lee and Seung, 1999](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0ahUKEwjQvKC6zc_QAhXLMSYKHY9pDVwQFggcMAA&url=http%3A%2F%2Fwww.columbia.edu%2F~jwp2128%2FTeaching%2FW4721%2Fpapers%2Fnmf_nature.pdf&usg=AFQjCNHOf7BKOMfBKKs1wJ2SxSwfj7bgaA).

Simply Put

NMF takes as an input a [term-document matrix](https://en.wikipedia.org/wiki/Document-term_matrix) and generates a set of topics that represent weighted sets of co-occurring terms. The discovered topics form a basis that provides an efficient representation of the original documents.

About NMF

NMF is used for [feature extraction](https://en.wikipedia.org/wiki/Feature_extraction) and is generally seen to be useful when there are many attributes, particularly when the attributes are ambiguous or are not strong predictors. By combining attributes, NMF can reveal patterns, topics, or themes which have importance.

In practice, one encounters NMF typically where text is involved. Consider an example, where the same word (love) in a document could have different meanings:

- I love lettuce wraps.
- I love the way I feel when I'm on vacation in Mexico.
- I love my dog, Euclid.
- I love being a Data Scientist.

In all 4 cases, the word 'love' is used, but it has a different meaning to the reader. By combining attributes, NMF introduces context which creates additional predictive power.

$"love" + "lettuce \ wraps" \ \Rightarrow \ "pleasure \ by \ food"$

$"love" + "vacation \ in \ Mexico" \ \Rightarrow \ "pleasure \ by \ relaxation"$

$"love" + "dog" \ \Rightarrow \ "pleasure \ by \ companionship"$

$"love" + "Data \ Scientist" \ \Rightarrow \ "pleasure \ by \ occupation"$

How Does It Happen

NMF breaks down the multivariate data by creating a user-defined number of features. Each one of these features is a combination of the original attribute set. It is also key to remember that the coefficients of these linear combinations are non-negative.

Another way to think about it is that NMF breaks your original data (let's call it V) into the product of two lower-rank matrices (let's call them W and H). NMF uses an iterative approach to modify the initial values of W and H so that the product approaches V. When the approximation error converges or the user-defined number of iterations is reached, NMF terminates.

NMF data preparation

- Numeric attributes are normalized.
- Missing numerical values are replaced with the mean.
- Missing categorical values are replaced with the mode.

It is important to note that outliers can impact NMF significantly. In practice, most data scientists use a clipping transformation before binning or normalizing. In addition, NMF in many cases will benefit from normalization. As in many other algorithmic cases, to improve the matrix factorization, one needs to decrease the error tolerance (which will increase compute time).
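A small, hedged sklearn sketch of NMF on a term-document matrix; the four "love" sentences above are reused as a toy corpus, and the topic count and parameters are arbitrary choices, not anything prescribed by the paper.

```python
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "I love lettuce wraps",
    "I love the way I feel when I'm on vacation in Mexico",
    "I love my dog, Euclid",
    "I love being a Data Scientist",
]

vec = TfidfVectorizer(stop_words="english")
V = vec.fit_transform(docs)              # documents x terms matrix

nmf = NMF(n_components=2, init="nndsvda", random_state=0)
W = nmf.fit_transform(V)                 # document-topic weights
H = nmf.components_                      # topic-term weights

terms = vec.get_feature_names_out()
for k, topic in enumerate(H):
    top = topic.argsort()[::-1][:3]
    print(f"topic {k}:", [terms[i] for i in top])
```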
Advantages of matrix factorization when the number of products is low
Disadvantages of matrix factorization:

- How to encode the feature? Should it be binary (present/absent) or rather an integer (for example, summing up the number of signals the user has had with item_i)?
- Unfortunately you can't integrate metadata into your matrix factorization, neither on the item side nor on the user side.
- The cold start: you don't know what to offer to new users, or how to offer new items. So you would have to set up a bandit model in addition.

On the other hand:

- It is easy to deploy and easily scalable.
- You won't have many challenges when it comes to feature engineering.

Challenges for the multi-class classification:

- Taking care of time when training/evaluating the model: feature engineering is a bit tougher (this is also the case for matrix factorization), but here you also have to make sure your feature engineering respects time. For example, if you have a feature for how many times user_i has purchased item_j, you have to make sure the test labels (buying item j) are not already encoded in the feature set (e.g. that the user has already purchased the item or preferred that price); otherwise there would be a huge data leak in your model.
- Perhaps in your data you will have rows belonging to a single user who has purchased multiple items, so you have to take care of these mixed effects; otherwise you have to somehow make sure your model learns about each individual user (if you want to have an individual-level, high-precision recommender)! Imagine a user has purchased 25 items before; then you have 25 rows, right?
- Deployment is going to be a bit harder, as you have to do the feature engineering also for incoming data. Hopefully you won't have issues with memory usage, but these are serious side effects of models that need feature engineering.
10302
1
10362
null
9
759
I am trying to understand reinforcement learning and markov decision processes (MDP) in the case where a neural net is being used as the function approximator. I'm having difficulty with the relationship between the MDP where the environment is explored in a probabilistic manner, how this maps back to learning parameters and how the final solution/policies are found. Am I correct to assume that in the case of Q-learning, the neural-network essentially acts as a function approximator for q-value itself so many steps in the future? How does this map to updating parameters via backpropagation or other methods? Also, once the network has learned how to predict the future reward, how does this fit in with the system in terms of actually making decisions? I am assuming that the final system would not probabilistically make state transitions. Thanks
Understanding Reinforcement Learning with Neural Net (Q-learning)
CC BY-SA 3.0
null
2016-02-18T10:11:23.997
2018-12-28T16:53:00.080
2016-02-18T10:24:21.377
5144
5144
[ "machine-learning", "neural-network", "q-learning" ]
In Q-Learning, on every step you will use observations and rewards to update your Q-value function:

$$ Q_{t+1}(s_t,a_t) = Q_t(s_t,a_t) + \alpha [R_{t+1}+ \gamma \underset{a'}{\max} Q_t(s_{t+1},a') - Q_t(s_t, a_t)] $$

You are correct in saying that the neural network is just a function approximation for the q-value function. In general, the approximation part is just a standard supervised learning problem. Your network uses (s,a) as input and the output is the q-value. As q-values are adjusted, you need to train these new samples into the network. Still, you will find some issues because you are using correlated samples, and SGD will suffer.

If you are looking at the DQN paper, things are slightly different. In that case, what they are doing is putting samples in a vector (experience replay). To teach the network, they sample tuples from the vector and bootstrap using this information to obtain a new q-value that is taught to the network. When I say teaching, I mean adjusting the network parameters using stochastic gradient descent or your favourite optimisation approach. By not teaching the samples in the order they were collected by the policy, they decorrelate them, and that helps the training.

Lastly, in order to make a decision on state $s$, you choose the action that provides the highest q-value:

$$ a^*(s)= \underset{a}{argmax} \space Q(s,a) $$

If your Q-value function has been learnt completely and the environment is stationary, it is fine to be greedy at this point. However, while learning you are expected to explore. There are several approaches, with $\varepsilon$-greedy being one of the easiest and most common.
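To make the update rule and the $\varepsilon$-greedy choice concrete, here is a minimal tabular sketch in NumPy (a toy illustration of the equations above, not the DQN variant with a network and replay buffer):

```python
import numpy as np

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.99, 0.1
rng = np.random.default_rng(0)

def choose_action(s):
    # epsilon-greedy: explore with probability eps, otherwise be greedy
    if rng.random() < eps:
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[s]))

def q_update(s, a, r, s_next):
    # Q_{t+1}(s,a) = Q_t(s,a) + alpha * [r + gamma * max_a' Q_t(s',a') - Q_t(s,a)]
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])

# one fake transition, just to show the call pattern; a real loop would
# interact with an environment to produce (s, a, r, s_next) tuples
s = 0
a = choose_action(s)
r, s_next = 1.0, 1          # made-up reward and next state
q_update(s, a, r, s_next)
print(Q)
```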
Why to Train Q-function in Reinforcement Learning?
It is not true that most RL algorithms are to estimate the Q function. A famous RL algorithm is the policy-gradient method which does exactly what you mentioned. There is a neural net that takes in state and outputs distribution over what action should be taken next. The parameters of this network are then trained via policy gradient estimates, so this network is never explicitly modeling the Q-function or the predicted rewards at all. That said, the variance of these policy gradients tends to be extremely high, so in practice people use Actor-Critic training which optimizes both the policy network and a Q-network jointly to stabilize training.
10320
1
10321
null
1
5936
I have a set of points on which I performed a KMeans classification. How do I make a plot where the color of each point is based on the cluster it belongs to? EDIT: for clarification, having the set of points, I want to use the values of the array generated by `KMeans.predict()` (from sklearn) to choose the color of each point.
Colouring points based on cluster on matplotlib
CC BY-SA 3.0
null
2016-02-19T16:08:12.400
2016-02-19T16:37:04.990
2016-02-19T16:17:22.797
11097
16096
[ "python", "visualization" ]
The [sklearn documentation](http://scikit-learn.org/stable/auto_examples/cluster/plot_cluster_comparison.html) shows you how: ``` colors = np.array([x for x in 'bgrcmykbgrcmykbgrcmykbgrcmyk']) colors = np.hstack([colors] * 20) ... if hasattr(algorithm, 'cluster_centers_'): centers = algorithm.cluster_centers_ center_colors = colors[:len(centers)] plt.scatter(centers[:, 0], centers[:, 1], s=100, c=center_colors) ``` [](https://i.stack.imgur.com/quQKq.png)
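For the specific case in the question — colouring each point by the label returned by `KMeans.predict()` — a minimal sketch could be (made-up 2-D data):

```python
import matplotlib.pyplot as plt
import numpy as np
from sklearn.cluster import KMeans

X = np.random.rand(300, 2)                 # stand-in for the real points
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
labels = kmeans.predict(X)                 # one integer label per point

# c=labels maps each cluster index to a colour via the colormap
plt.scatter(X[:, 0], X[:, 1], c=labels, cmap="viridis", s=20)
plt.scatter(*kmeans.cluster_centers_.T, c="red", marker="x", s=100)
plt.show()
```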
Coloring clusters so that nearby clusters have different colors
I found that taking the centroids of each cluster, running k-nearest-neighbors, and then applying [https://en.wikipedia.org/wiki/Greedy_coloring](https://en.wikipedia.org/wiki/Greedy_coloring) works well. Just keep increasing K until the clusters stand out.

Edit: following @Fatemeh Asgarinejad's suggestion, use the minimum distance from a cluster centroid to a member of the other clusters as the distance when computing KNN. This is slower but seems to give a more robust coloring when clusters overlap or have irregular shapes.

My Python code:

```
import numpy as np
import pandas as pd
from sklearn.neighbors import NearestNeighbors

# data is a pandas data frame of data points with cluster labels
def assign_cluster_colors(data, clusters, n_colors=10, n_neighbors=8):
    centroids = data.groupby('cluster').agg({'x': np.mean, 'y': np.mean})
    color_ids = np.arange(n_colors)
    distances = np.empty(shape=(centroids.shape[0], centroids.shape[0]))

    groups = data.groupby('cluster')
    for centroid in centroids.itertuples():
        # minimum distance from this centroid to any member of each cluster
        c_dists = groups.apply(lambda r: min(np.sqrt(np.square(centroid.x - r.x) + np.square(centroid.y - r.y))))
        distances[:, centroid.Index] = c_dists

    nbrs = NearestNeighbors(n_neighbors=n_neighbors, metric='precomputed').fit(distances)
    distances, indices = nbrs.kneighbors()

    color_assignments = np.repeat(-1, len(centroids))
    for i in range(len(centroids)):
        knn = indices[i]
        knn_colors = color_assignments[knn]
        available_colors = color_ids[list(set(color_ids) - set(knn_colors))]
        if len(available_colors) > 0:
            color_assignments[i] = available_colors[0]
        else:
            raise Exception("Can't color this many neighbors with this many colors")

    centroids = centroids.reset_index()
    colors = centroids.loc[:, ['cluster']]
    colors['color'] = color_assignments
    data = data.merge(colors, on='cluster')
    return data
```
10329
1
11420
null
9
10340
Would it be possible for an amateur who is interested in getting some "hands-on" experience in designing and training deep neural networks, to use an ordinary laptop for that purpose (no GPU), or is it hopeless to get good results in reasonable time without a powerful computer/cluster/GPU? To be more specific, the laptop's CPU is a fifth-generation Intel Core i7 5500U, with 8GB RAM. Now, since I haven't specified what problems I would like to work on, I'll frame my question in a different way: which deep architectures would you recommend that I try to implement with my hardware, such that the following goal is achieved: acquiring intuition and knowledge about how and when to use techniques that were introduced in the past 10 years and were essential to the rise of deep nets (such as understanding of initialisations, drop-out, rmsprop, just to name a few). I have read about these techniques, but of course without trying them out myself I wouldn't know exactly how and when to implement them in an effective way. On the other hand, I'm afraid that if I try using a PC which isn't strong enough, then my own learning rate will be so slow that it would be meaningless to say that I've acquired any better understanding. And if I try using these techniques on shallow nets, maybe I wouldn't be building the right intuition. I imagine the process of (my) learning as follows: I implement a neural net, let it train for up to several hours, see what I've got, and repeat the process. If I do this once or twice a day, I would be happy if after, say, 6 months I have gained practical knowledge which is comparable to what a professional in the field should know.
Training Deep Nets on an Ordinary Laptop
CC BY-SA 3.0
null
2016-02-20T07:24:56.140
2016-04-29T15:41:45.560
2016-02-20T19:17:24.003
836
16424
[ "machine-learning", "deep-learning" ]
Yes, a laptop will work just fine for getting acquainted with some deep learning projects: you can pick a smallish deep learning problem and gain some tractable insight using a laptop, so give it a try. The [Theano](http://deeplearning.net/software/theano/) project has a [set of tutorials](http://deeplearning.net/tutorial/lenet.html) on digit recognition that I've played with and modded on a laptop. [Tensorflow](https://www.tensorflow.org/) also has a [set of tutorials](https://www.tensorflow.org/versions/r0.8/tutorials/index.html). I let some of the longer runs go overnight, but nothing was intractable. You might also consider availing yourself of [AWS](https://aws.amazon.com/) or one of the other cloud services. For 20-30 dollars you can perform some of the bigger calculations in the cloud on some sort of [elastic computing node](https://aws.amazon.com/ec2/). The secondary advantage is that you can also list AWS or other cloud services as a skill on your resume :-) Hope this helps!
Fine Tuning the Neural Nets
It can work either way. If you want to keep the exact feature extractors, then you should freeze everything except the "top" of the model. You can also unfreeze the whole model; the "top" of the model will be trained from scratch, and the feature extractors near the "bottom" of the model will be tweaked to work better with your dataset. The potential drawback of unfreezing the whole model is a higher potential for overfitting (and a longer, more expensive training time) > [I]s it necessary to Freeze the model and train only the top part of the model and then unfreeze some layers and again train the model or one can directly begin by unfreezing some layers? I'm not aware of any training routine that involves freezing and unfreezing different parts of the model at different times during training. People may have done this, but I'm not sure what the benefits would be.
10349
1
10383
null
4
1817
I have tried to build a model to forecast the count of a particular variable.The model that was used for the purpose was poisson .Unfortunately ,i don't have enough stat knowledge to analyze the model performance .If somebody can provide some insights as of how the model is performing,as well as some tweaks to improve the model performance will be greatly helpful. I am also willing to try out other models if it performs better. I am using python with the statmodels package to build the model. > Attaching a graph which shows the fitted and the actual values(Green shows the actual values and Blue shows the fitted values) [](https://i.stack.imgur.com/YtRYb.png) Also,providing the summary() output of the model ``` Generalized Linear Model Regression Results ============================================================================== Dep. Variable: Work_Item_Type No. Observations: 581 Model: GLM Df Residuals: 574 Model Family: Poisson Df Model: 6 Link Function: log Scale: 1.0 Method: IRLS Log-Likelihood: -16752. Date: Mon, 22 Feb 2016 Deviance: 31268. Time: 21:59:12 Pearson chi2: 1.05e+05 No. Iterations: 9 =============================================================================== coef std err z P>|z| [95.0% Conf. Int.] ------------------------------------------------------------------------------- Intercept 2.8492 0.051 55.426 0.000 2.748 2.950 Weekday -0.2066 0.032 -6.446 0.000 -0.269 -0.144 day_of_week -0.0926 0.007 -13.367 0.000 -0.106 -0.079 wom 0.1122 0.007 16.996 0.000 0.099 0.125 week -0.0411 0.001 -53.597 0.000 -0.043 -0.040 TimeDelta 0.0001 5.1e-05 2.933 0.003 4.96e-05 0.000 month_of_yr 0.2192 0.004 60.981 0.000 0.212 0.226 =============================================================================== Also attaching a sample of the dataset used clear_date Count_Work_Item_Type 7/7/2014 1 7/10/2014 1 7/11/2014 5 7/17/2014 2 7/22/2014 1 7/24/2014 1 7/29/2014 3 7/30/2014 4 8/13/2014 1 ``` Since i had only the date and the variable to be forecast i created a bunch of other variables like ``` Weekday(binomial)? Day of week Week of Month Week Time Delta (Starts from 0 increment by one until end) Month of Year ``` Also,i haven't done any kind of transformation on the variables. Please do comment if you need additional information: Thhanks
Analyze performance Poisson regression model on a time series(count forecasting)
CC BY-SA 3.0
null
2016-02-22T17:04:58.207
2020-08-03T18:11:20.417
null
null
13515
[ "python", "time-series", "forecast" ]
I'm not sure what you mean by "performance", but if what you mean is fit, the answer is clear. You need to be using the log-likelihood to differentiate between different models. Basically, when you are fitting the model you are trying to maximize the log-likelihood. Thus the log-likelihood is giving you some sense of how well the parameters of your model are doing at fitting the data. In your case, you want the log-likelihood to be as close to zero as possible.

Now this is kind of terrible advice on its own, because if you were clever enough to come up with a feature for every observation, you could get a perfect fit. That's bad, because your model would be completely useless. There are functions that take your log-likelihood as an input and transform it to penalize you for adding more variables, etc. (AIC and BIC are common examples). We won't worry about those right now. Just keep in mind not to mindlessly chase improvements in the log-likelihood.

Once you have a model that you can live with, you should run some sort of cross-validation and/or use a hold-out set. Then you can use any number of metrics to validate the predictive performance of your model. I think that is the more important of the two issues. You could calculate the mean squared error on your hold-out set:

$$MSE=\frac{1}{n}\sum_{i=1}^n(\hat y_i - y_i)^2$$

This would give you a really basic metric to assess how well your model predicts the output.
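A hedged statsmodels sketch of that hold-out evaluation, reusing column names from the question's output (the exact formula and split are assumptions, and `Mydata` is the questioner's data frame, not defined here):

```python
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Assumes `Mydata` is the daily data frame from the question.
train = Mydata.sample(frac=0.75, random_state=1)
test = Mydata.drop(train.index)

formula = ("Work_Item_Type ~ Weekday + day_of_week + wom + week "
           "+ TimeDelta + month_of_yr")
model = smf.glm(formula, data=train, family=sm.families.Poisson()).fit()
print(model.llf)                      # log-likelihood, for comparing candidate models

pred = model.predict(test)            # expected counts on the hold-out days
mse = np.mean((test["Work_Item_Type"] - pred) ** 2)
print("hold-out MSE:", mse)
```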
Regression model for a count proces
I have to quote Tukey, perhaps the grandfather of data science:

> The combination of some data and an aching desire for an answer does not ensure that a reasonable answer can be extracted from a given body of data.

I see nothing wrong with your Poisson model. In fact it's a pretty good fit to the data. The data is noisy. There is nothing you can do about it. Perhaps the noise is due to whatever else is on TV at the time, or the weather, or the phase of the moon. Whatever it is, it's not in your data. If you reasonably think the weather might be affecting your data, get the weather data and add it. If it improves the log-likelihood enough for each degree of freedom it uses, then it's doing a good job and you leave it in. This is regression modelling 101. Of course there's a zillion other things you can do. Scale the data by any old transformation you want. Fit a quadratic. A quartic. A quintic. A spline. You could include the date and possible temporal correlation effects. But always bear in mind what Tukey was saying - if your data is noisy, you won't get anything much out of it. So it goes.
10357
1
10361
null
1
62
How do linear learning systems, such as the simple "closest to the class average" algorithm or SVMs, classify datapoints that fall on the hyperplane?
How do linear learning systems classify datapoints that fall on the hyperplane
CC BY-SA 3.0
null
2016-02-23T14:28:29.563
2016-02-24T06:54:18.073
null
null
11044
[ "machine-learning", "classification", "algorithms", "svm", "supervised-learning" ]
Linear, binary classifiers can choose either class (but must do so consistently) when the datapoint to be classified lies exactly on the hyperplane. It just depends on how you programmed it. Also, it doesn't really matter: this is very unlikely to happen. In fact, if we had arbitrary-precision computing and normally distributed features, the probability of this happening would be exactly 0 (not rounded). We have IEEE 754 floats, so the probability is not 0, but it is still so small that there are much more important factors to worry about.
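A tiny illustration of the "it depends on how you programmed it" point (my own sketch): the tie on the hyperplane is resolved by whether the decision rule uses a strict or non-strict inequality.

```python
import numpy as np

w, b = np.array([1.0, -1.0]), 0.0        # some learned hyperplane
x = np.array([2.0, 2.0])                 # lies exactly on it: w.x + b == 0

def predict(x, w, b):
    # a common convention: ties (score == 0) go to the positive class
    return 1 if np.dot(w, x) + b >= 0 else -1

print(predict(x, w, b))   # -> 1; using '> 0' instead would consistently return -1
```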
Linear Learning Machines
Not sure at all since I am not very familiar with these old concepts. But I think what you call LLM (not a very common concept it seems) are algorithms that solve linearly separable data classification. SVM are algorithms that look for an hyperplane that maximize the margin with data and thus can be considered LLM. Quick reminder how a SVM solves a linearly separable problem : [](https://i.stack.imgur.com/r8dpH.png) We escape LLM's grasp when trying to solve a nonlinear classification problem, as represented on the left of the following picture. [](https://i.stack.imgur.com/C1eQ6.png) SVM can still solve these type of classification with the kernel trick, that applies a non-linear function to turn the classification into a linear problem. Hope this can help your understanding of the difference, all images are from [Wikipedia](https://en.wikipedia.org/wiki/Support-vector_machine#Linear_SVM). My apologies for not knowing how to resize these images :(
10363
1
10364
null
2
9298
In one field I have entries like 'U$ 192,0'. Working in pandas, how do I ignore the non-numerical part and get only the numerical part?
Ignoring symbols and select only numerical values with pandas
CC BY-SA 3.0
null
2016-02-23T18:25:24.797
2019-06-09T07:24:57.953
2016-02-23T19:10:43.467
16096
16096
[ "python", "data-cleaning", "pandas" ]
Use `str.strip` if the prefix is fixed or `str.replace` if not: ``` data = pandas.Series(["U$ 192.0"]) data.str.replace('^[^\d]*', '').astype(float) ``` This removes all the non-numeric characters to the left of the number, and casts to float.
Delete/Drop only the rows which has all values as NaN in pandas
The complete command is this: ``` df.dropna(axis = 0, how = 'all', inplace = True) ``` you must add `inplace = True` argument, if you want the dataframe to be actually updated. Alternatively, you would have to type: ``` df = df.dropna(axis = 0, how = 'all') ``` but that's less pythonic IMHO.
10368
1
10782
null
6
3674
I am trying to implement demo of Image Captioning system from [Keras documentation](http://keras.io/examples/). From the documentation I could understand training part. ``` max_caption_len = 16 vocab_size = 10000 # first, let's define an image model that # will encode pictures into 128-dimensional vectors. # it should be initialized with pre-trained weights. image_model = VGG-16 CNN definition image_model.load_weights('weight_file.h5') # next, let's define a RNN model that encodes sequences of words # into sequences of 128-dimensional word vectors. language_model = Sequential() language_model.add(Embedding(vocab_size, 256, input_length=max_caption_len)) language_model.add(GRU(output_dim=128, return_sequences=True)) language_model.add(TimeDistributedDense(128)) # let's repeat the image vector to turn it into a sequence. image_model.add(RepeatVector(max_caption_len)) # the output of both models will be tensors of shape (samples, max_caption_len, 128). # let's concatenate these 2 vector sequences. model = Merge([image_model, language_model], mode='concat', concat_axis=-1) # let's encode this vector sequence into a single vector model.add(GRU(256, 256, return_sequences=False)) # which will be used to compute a probability # distribution over what the next word in the caption should be! model.add(Dense(vocab_size)) model.add(Activation('softmax')) model.compile(loss='categorical_crossentropy', optimizer='rmsprop') model.fit([images, partial_captions], next_words, batch_size=16, nb_epoch=100) ``` But now I am confused in how to generate caption for test image. Input here is [image, partial_caption] pair, now for test image how to input partial caption?
Image Captioning in Keras
CC BY-SA 3.0
null
2016-02-24T04:17:12.343
2017-10-23T07:55:27.233
2017-10-23T07:55:27.233
29575
13518
[ "neural-network", "deep-learning", "image-classification", "keras" ]
This example trains an image and a partial caption to predict the next word in the caption. ``` Input: [, "<BEGIN> The cat sat on the"] Output: "mat" ``` Notice the model doesn't predict the entire output of the caption only the next word. To construct a new caption, you would have to predict multiple times for each word. ``` Input: [, "<BEGIN>"] # predict "The" Input: [, "<BEGIN> The"] # predict "cat" Input: [, "<BEGIN> The cat"] # predict "sat" ... ``` To predict the entire sequence, I believe you need to use `TimeDistributedDense` for the output layer. ``` Input: [, "<BEGIN> The cat sat on the mat"] Output: "The cat sat on the mat <END>" ``` See this issue: [https://github.com/fchollet/keras/issues/1029](https://github.com/fchollet/keras/issues/1029)
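A hedged sketch of that word-by-word generation loop using greedy decoding; `model` is the merged Keras model defined in the question, while `word_to_idx` / `idx_to_word` and the zero-padding scheme are my assumptions about how the captions were encoded.

```python
import numpy as np

def generate_caption(image, model, word_to_idx, idx_to_word, max_caption_len=16):
    """Repeatedly predict the next word and append it to the partial caption."""
    caption = ["<BEGIN>"]
    for _ in range(max_caption_len - 1):
        # encode the partial caption as a fixed-length sequence of word indices
        seq = [word_to_idx[w] for w in caption]
        seq = np.pad(seq, (0, max_caption_len - len(seq)))[None, :]
        probs = model.predict([image[None, ...], seq])[0]   # distribution over the vocabulary
        next_word = idx_to_word[int(np.argmax(probs))]      # greedy choice
        if next_word == "<END>":
            break
        caption.append(next_word)
    return " ".join(caption[1:])
```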
How to import image data into python for keras?
The [docs](https://keras.io/preprocessing/image/) for ImageDataGenerator suggest that no augmentation is done by default. So you could instantiate it without any augmentation parameters and keep the rest of your code for handling your directory structure: ``` train_datagen = ImageDataGenerator(rescale=1./255) ``` You are also allowed to write your own custom data generator and pass it to `model.fit_generator()`. [Here](https://stanford.edu/~shervine/blog/keras-how-to-generate-data-on-the-fly.html) is a nice tutorial. Or if your data fits in memory you could write some simpler code possibly using `keras.preprocessing.image.load_img` to load all the images into an array and pass them to `model.fit` instead.
10377
1
11451
null
2
1082
I’ve a model with 14 dependent variables (all of them are significant) and 678 observations. I used best subset regression and validation set (33% of data for the validation) to find which statistical model has the lowest MSE (for my curiosity). I got the following graph which surprisingly the MSE for validation data set is always lower than training data set for all the models (from 1 to 14 dependent variables). Here is the code that I used, ``` library(MASS) set.seed(1) train=sample(seq(678),452,replace=FALSE) train regfit.exh=regsubsets(HPV~. -Model.Types..code.-Year..code.,data=Mydata, nvmax=NULL,force.in = NULL, force.out = NULL, method="exhaustive") val.errors=rep(NA,14) x.test=model.matrix(HPV~.-Model.Types..code.-Year..code.,data=Mydata[-train,]) for(i in 1:14){ coefi=coef(regfit.exh,id=i) pred=x.test[,names(coefi)]%*%coefi val.errors[i]=mean((Mydata$HPV[-train]-pred)^2) } plot(sqrt(val.errors),ylab="Root MSE",ylim=c(3,12), pch=11, type="b") points(sqrt(regfit.exh$rss[-1]/452),col="blue",pch=11,type="b") legend("topright",legend=c("Training","Validation"),col=c("blue","black"),pch=11) ``` [](https://i.stack.imgur.com/R4OwJ.png) How come the validation root MSE could always beat the training?. Any feedback would be appreciated.
Comparing training and validation data set Root MSE for a best subset regression?
CC BY-SA 3.0
null
2016-02-24T17:54:04.143
2016-04-30T03:07:50.653
2016-02-25T02:33:05.247
11097
12867
[ "r", "regression" ]
Im a little late, but better late than never. It looks like your line where you find the coefficients: ``` regfit.exh=regsubsets(HPV~. -Model.Types..code.-Year..code.,data=Mydata, nvmax=NULL,force.in = NULL, force.out = NULL, method="exhaustive") ``` should have `data=Mydata[train,]` instead of `data=Mydata`. Your model had already seen your test samples, so the validation error is not an accurate assessment.
Statistical comparison of model performance when training and validation data is always the same
If your model is deterministic (no randomness), then repeating the training/testing on the exact same set of data is pointless - you will get the exact same answer every time. The benefit of cross-validation is that it provides an unbiased estimate of your model performance, and does so by using different perturbations of the train/test data. You can still do something similar, selecting 80% of your training data and testing on some subset of the test data, and repeatedly doing a resampling. There's a slight difference from traditional CV, where your train/test set are mutually exclusive and essentially define one another, whereas in this case, your training and test datasets can be defined totally independently (but that shouldn't be a problem). Incidentally, what model are you using that you expect can accurately predict data that's completely unlike what it's been trained on? Usually the point of the training data is to provide the model with examples of the data you expect to see, along with the correct output. It's not clear to me why you'd want to train a model to classify tweets in English, but then test how it performs at classifying Dutch tweets - the model has never seen Dutch, so I don't expect it would perform well. It seems this evaluation would test how similar English and Dutch are, rather than testing how good your classification model is.
10389
1
10393
null
3
270
When we have linearly inseparable datasets and we are using machine learning algorithms such as SVMs, we use kernels to implicitly map datapoints into a feature space that makes them linearly separable. But how do we know if a kernel has indeed, implicitly, been successful in making the datapoints linearly separable in the new feature space? What is the guarantee?
How do we know Kernels are successful in making data linearly Separable?
CC BY-SA 3.0
null
2016-02-25T10:30:42.690
2016-02-25T17:47:01.463
null
null
11044
[ "machine-learning", "classification", "svm", "supervised-learning" ]
You cannot guarantee this. Some data is not separable by any kernel because of duplicates. By trying too hard, you will cause overfitting. Essentially, you force the implicit mapping to be so complex it contains a copy of your training data (which is exactly what happens if you choose a too small bandwidth with RBF). If you want a good generalization performance, you will have to tolerate some errors, and use e.g. soft-margin and such techniques. Perfect separation is not something to aim for. Such a guarantee is just a guarantee of being able to overfit! Use cross-validation to reduce the risk of overfitting and find the right balance between being optimal on training data and actual performance.
How do we define a linearly separable problem?
1.) The perceptron is a non-linear transformation!

2.) A linearly separable function is only defined for Boolean functions, see [Wikipedia](https://en.wikipedia.org/wiki/Linear_separability#Linear_separability_of_Boolean_functions_in_n_variables). Therefore, yes, the statement is meant only for binary classification.

3.) For general functions, see the [universal approximation theorem](https://en.wikipedia.org/wiki/Universal_approximation_theorem).