Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags | Answer | SimilarQuestion | SimilarQuestionAnswer |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
14 | 1 | 29 | null | 26 | 1909 | I am sure data science, as it will be discussed in this forum, has several synonyms or at least related fields where large data is analyzed.
My particular question is in regards to Data Mining. I took a graduate class in Data Mining a few years back. What are the differences between Data Science and Data Mining and in particular what more would I need to look at to become proficient in Data Mining?
| Is Data Science the Same as Data Mining? | CC BY-SA 3.0 | null | 2014-05-14T01:25:59.677 | 2020-08-16T13:01:33.543 | 2014-06-17T16:17:20.473 | 322 | 66 | [
"data-mining",
"definitions"
] | [@statsRus](https://datascience.stackexchange.com/users/36/statsrus) starts to lay the groundwork for your answer in another question [What characterises the difference between data science and statistics?](https://datascience.meta.stackexchange.com/q/86/98307):
>
Data collection: web scraping and online surveys
Data manipulation: recoding messy data and extracting meaning from linguistic and social network data
Data scale: working with extremely large data sets
Data mining: finding patterns in large, complex data sets, with an emphasis on algorithmic techniques
Data communication: helping turn "machine-readable" data into "human-readable" information via visualization
## Definition
[data-mining](/questions/tagged/data-mining) can be seen as one item (or set of skills and applications) in the toolkit of the data scientist. I like how he separates the definition of mining from collection in a sort of trade-specific jargon.
However, I think that data-mining would be synonymous with data-collection in a US-English colloquial definition.
As to where to go to become proficient? I think that question is too broad as it is currently stated and would receive answers that are primarily opinion based. Perhaps if you could refine your question, it might be easier to see what you are asking.
| Is Data Science just a trend or is a long term concept? | The one thing that you can say for sure is: Nobody can say this for sure. And it might indeed be opinion-based to some extent. The introduction of terms like "Big Data" that some people consider as "hypes" or "buzzwords" doesn't make it easier to flesh out an appropriate answer here. But I'll try.
In general, interdisciplinary fields often seem to have the problem of not being taken seriously by either of the fields they are spanning. However, the more research is invested into a particular field, the greater is the urge to split this field into several sub-topics. And these sub-topics sooner or later have to be re-combined in new ways, in order to prevent an overspecialization, and to increase and broaden the applicability of techniques that are developed by the (over?)specialized experts in the different fields.
And I consider "Data Science" as such an approach to combine the expertise and findings from different fields. You described it as
>
...a mix of computer science and statistics techniques
And indeed, several questions here aim at the differentiation between data science and statistics. But a pure statistician will most likely not be able to set up a Hadoop cluster and show the results of his analysis in an interactive HTML5 dashboard. And someone who can implement a nice HTML5 dashboard might not be so familiar with the mathematical background of a Chi-Squared-Test.
It is reasonable to assume that giving students enough knowledge to apply the most important techniques from the different fields that are covered by data science will lead to new applications of these techniques, and be beneficial - also for the "purists" in these fields. The combination of these techniques is not straightforward in many cases, and can justify a branch of research of its own.
You also asked whether in 10 years, data science will be considered as "just a topic inside computer science". Again: Nobody can say for sure. But I wonder at which point people stopped asking the question whether "Computer Science" will one day be considered only as a mix of (or a subject of) Electrical Engineering and Mathematics...
|
16 | 1 | 46 | null | 17 | 432 | I use [Libsvm](http://www.csie.ntu.edu.tw/~cjlin/libsvm/) to train data and predict classification on a semantic analysis problem. But it has a performance issue on large-scale data, because semantic analysis is an n-dimensional problem.
Last year, [Liblinear](http://www.csie.ntu.edu.tw/~cjlin/liblinear/) was released, and it can solve the performance bottleneck.
But it costs too much memory. Is MapReduce the only way to solve the semantic analysis problem on big data? Or are there any other methods that can reduce the memory bottleneck of Liblinear?
| Use liblinear on big data for semantic analysis | CC BY-SA 3.0 | null | 2014-05-14T01:57:56.880 | 2014-05-17T16:24:14.523 | 2014-05-17T16:24:14.523 | 84 | 63 | [
"machine-learning",
"bigdata",
"libsvm"
] | Note that there is an early version of LIBLINEAR ported to [Apache Spark](http://spark.apache.org). See [mailing list comments](http://apache-spark-user-list.1001560.n3.nabble.com/Spark-LIBLINEAR-td5546.html) for some early details, and the [project site](http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/distributed-liblinear/).
| How to preprocess with NLP a big dataset for text classification | Let me first clarify the general principle of classification with text data. Note that I'm assuming that you're using a "traditional" method (like decision trees), as opposed to Deep Learning (DL) method.
As you correctly understand, each individual text document (instance) has to be represented as a vector of features, each feature representing a word. But there is a crucial constraint: every feature/word must be at the same position in the vector for all the documents. This is because that's how the learning algorithm can find patterns across instances. For example the decision tree algorithm might create a condition corresponding to "does the document contains the word 'cat'?", and the only way for the model to correctly detect if this condition is satisfied is if the word 'cat' is consistently represented at index $i$ in the vector for every instance.
For the record this is very similar to one-hot-encoding: the variable "word" has many possible values, each of them must be represented as a different feature.
This means that you cannot use a different index representation for every instance, as you currently do.
>
Vectors generated from those texts need to have the same dimension. Does padding them with zeroes make any sense?
As you probably understood now, no it doesn't.
>
Vectors for prediction also need to have the same dimension as those from the training
Yes, they must not only have the same dimension but also have the same exact features/words in the same order.
>
At prediction phase, those words that haven't been added to the corpus are ignored
Absolutely, any out of vocabulary word (word which doesn't appear in the training data) has to be ignored. It would be unusable anyway since the model has no idea which class it is related to.
>
Also, the vectorization doesn't make much sense since they are like [0, 1, 2, 3, 4, 1, 2, 3, 5, 1, 2, 3] and this is different to [1, 0, 2, 3, 4, 1, 2, 3, 5, 1, 2, 3] even though they both contain the same information
Indeed, you had the right intuition that there was a problem there, it's the same issue as above.
Now of course you go back to solving the problem of fitting these very long vectors in memory. So in theory the vector length is the full vocabulary size, but in practice there are several good reasons not to keep all the words, more precisely to remove the least frequent words:
- The least frequent words are difficult to use by the model. A word which appears only once (btw it's called a hapax legomenon, in case you want to impress people with fancy terms ;) ) doesn't help at all, because it might appear by chance with a particular class. Worse, it can cause overfitting: if the model creates a rule that classifies any document containing this word as class C (because in the training 100% of the documents with this word are class C, even though there's only one) and it turns out that the word has nothing specific to class C, the model will make errors. Statistically it's very risky to draw conclusions from a small sample, so the least frequent words are often "bad features".
- You're going to like this one: texts in natural language follow a Zipf distribution. This means that in any text there's a small number of distinct words which appear frequently and a high number of distinct words which appear rarely. As a result removing the least frequent words reduces the size of the vocabulary very quickly (because there are many rare words) but it doesn't remove a large proportion of the text (because the most frequent occurrences are frequent words). For example removing the words which appear only once might reduce the vocabulary size by half, while reducing the text size by only 3%.
So practically what you need to do is this:
- Calculate the word frequency for every distinct word across all the documents in the training data (only in the training data). Note that you need to store only one dict in memory so it's doable. Sort it by frequency and store it somewhere in a file.
- Decide a minimum frequency $N$ in order to obtain your reduced vocabulary by removing all the words which have frequency lower than $N$.
- Represent every document as a vector using only this predefined vocabulary (and fixed indexes, of course). Now you can train a model and evaluate it on a test set.
Note that you could try different values of $N$ (2,3,4,...) and observe which one gives the best performance (it's not necessarily the lowest one, for the reasons mentioned above). If you do that you should normally use a validation set distinct from the final test set, because evaluating several times on the test set is like "cheating" (this is called [data leakage](https://en.wikipedia.org/wiki/Leakage_(machine_learning))).
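As a small illustration of these steps, here is a hedged Python sketch using only the standard library (the toy documents, names, and threshold are illustrative, not from the original answer):
```
from collections import Counter

# Toy training documents (illustrative only)
train_docs = [["the", "cat", "sat"], ["the", "dog", "sat"], ["the", "cat", "ran"]]

# 1. Count word frequencies across the training data only
freq = Counter(word for doc in train_docs for word in doc)

# 2. Keep only words with frequency >= N to get the reduced vocabulary
N = 2
vocab = sorted(w for w, c in freq.items() if c >= N)
index = {w: i for i, w in enumerate(vocab)}   # fixed position per word

# 3. Represent every document as a fixed-length count vector
def vectorize(doc):
    vec = [0] * len(vocab)
    for word in doc:
        if word in index:                     # out-of-vocabulary words are ignored
            vec[index[word]] += 1
    return vec

X_train = [vectorize(d) for d in train_docs]
print(vocab, X_train)
```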
|
22 | 1 | 24 | null | 200 | 292233 | My data set contains a number of numeric attributes and one categorical.
Say, `NumericAttr1, NumericAttr2, ..., NumericAttrN, CategoricalAttr`,
where `CategoricalAttr` takes one of three possible values: `CategoricalAttrValue1`, `CategoricalAttrValue2` or `CategoricalAttrValue3`.
I'm using default [k-means clustering algorithm implementation for Octave](https://blog.west.uni-koblenz.de/2012-07-14/a-working-k-means-code-for-octave/).
It works with numeric data only.
So my question: is it correct to split the categorical attribute `CategoricalAttr` into three numeric (binary) variables, like `IsCategoricalAttrValue1, IsCategoricalAttrValue2, IsCategoricalAttrValue3` ?
| K-Means clustering for mixed numeric and categorical data | CC BY-SA 4.0 | null | 2014-05-14T05:58:21.927 | 2022-10-14T09:40:25.270 | 2020-08-07T14:12:08.577 | 98307 | 97 | [
"data-mining",
"clustering",
"octave",
"k-means",
"categorical-data"
] | The standard k-means algorithm isn't directly applicable to categorical data, for various reasons. The sample space for categorical data is discrete, and doesn't have a natural origin. A Euclidean distance function on such a space isn't really meaningful. As someone put it, "The fact a snake possesses neither wheels nor legs allows us to say nothing about the relative value of wheels and legs." (from [here](http://www.daylight.com/meetings/mug04/Bradshaw/why_k-modes.html))
There's a variation of k-means known as k-modes, introduced in [this paper](http://www.cs.ust.hk/~qyang/Teaching/537/Papers/huang98extensions.pdf) by Zhexue Huang, which is suitable for categorical data. Note that the solutions you get are sensitive to initial conditions, as discussed [here](http://arxiv.org/ftp/cs/papers/0603/0603120.pdf) (PDF), for instance.
Huang's paper (linked above) also has a section on "k-prototypes" which applies to data with a mix of categorical and numeric features. It uses a distance measure which mixes the Hamming distance for categorical features and the Euclidean distance for numeric features.
A Google search for "k-means mix of categorical data" turns up quite a few more recent papers on various algorithms for k-means-like clustering with a mix of categorical and numeric data. (I haven't yet read them, so I can't comment on their merits.)
---
Actually, what you suggest (converting categorical attributes to binary values, and then doing k-means as if these were numeric values) is another approach that has been tried before (predating k-modes). (See Ralambondrainy, H. 1995. A conceptual version of the k-means algorithm. Pattern Recognition Letters, 16:1147–1157.) But I believe the k-modes approach is preferred for the reasons I indicated above.
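If you want to try k-prototypes in practice, here is a hedged sketch assuming the third-party `kmodes` Python package (class names, parameters and the toy data are assumptions based on that package, not something from Huang's paper):
```
# Hedged sketch: k-prototypes on mixed numeric + categorical data,
# assuming the third-party `kmodes` package (pip install kmodes).
import numpy as np
from kmodes.kprototypes import KPrototypes

# Toy data: two numeric columns followed by one categorical column
X = np.array([
    [1.0, 2.5, "CategoricalAttrValue1"],
    [1.2, 2.7, "CategoricalAttrValue1"],
    [8.0, 9.1, "CategoricalAttrValue3"],
    [7.8, 9.0, "CategoricalAttrValue2"],
], dtype=object)

kproto = KPrototypes(n_clusters=2, init="Cao", random_state=0)
labels = kproto.fit_predict(X, categorical=[2])   # column index 2 is categorical
print(labels)
```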
| Clustering ordered categorical data | Some categories contain a logic that can be expressed as a numeric value, and that seems to be your case.
That's why you should consider those ratings from a mathematical point of view and assign a numerical scale that would be comprehensible to your algorithm.
For instance:
```
AAA+ => 1
AAA => 2
AAA- => 3
AA+ => 4
AA => 5
AA- => 6
```
etc.
In this way, countries rated AAA+ in 2022 and AA- in 2021 should be close to countries rated AAA in 2022 and AA in 2021 because [1,6] are similar to [2,5] from a numeric point of view.
However, if you consider those ratings as separate categories like this:
```
AAA+ => col_AAA+= True, col_AAA=False, col_AAA-=False, col_AA+=False,...
AAA => col_AAA+= False, col_AAA=True, col_AAA-=False, col_AA+=False,...
```
etc.
You would have more data to deal with and the algorithm would not see any ranking between columns, and hence would not produce good clustering.
I recommend using numeric values for any feature that can have a scale and using categories only for truly independent ones (for instance, sea_access=Yes/No, or opec_member=Yes/No).
In some cases, you can also implement an intermediate solution like this one:
```
AAA+ => col_A= 1, col_B=0, col_C-=0, ...
AAA => col_A= 2, col_B=0, col_C-=0, ...
...
BBB+ => col_A= 0, col_B=1, col_C-=0, ...
BBB => col_A= 0, col_B=2, col_C=0, ...
```
etc.
It could be interesting if you want to make a clear difference between rating groups (ex: going from AAA to A+ is not as bad as going from A- to BBB+).
Note: clustering could be difficult if you consider too many years, even with algorithms like UMAP or t-SNE. That's why a good option is to start with just a few years, or to simplify the data with smoothing algorithms.
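As a hedged pandas sketch of the first (ordinal) mapping above (column names and values are illustrative):
```
import pandas as pd

# Ordinal scale for the ratings, as described above
rating_scale = {"AAA+": 1, "AAA": 2, "AAA-": 3, "AA+": 4, "AA": 5, "AA-": 6}

df = pd.DataFrame({
    "country": ["A", "B", "C"],
    "rating_2021": ["AA-", "AA", "AAA"],
    "rating_2022": ["AAA+", "AAA", "AAA-"],
})

# Map each rating column to its numeric equivalent before clustering
for col in ["rating_2021", "rating_2022"]:
    df[col + "_num"] = df[col].map(rating_scale)

print(df)
```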
|
31 | 1 | 72 | null | 10 | 1760 | I have a bunch of customer profiles stored in a [elasticsearch](/questions/tagged/elasticsearch) cluster. These profiles are now used for creation of target groups for our email subscriptions.
Target groups are now formed manually using elasticsearch faceted search capabilities (like get all male customers of age 23 with one car and 3 children).
How could I search for interesting groups automatically - using data science, machine learning, clustering or something else?
[r](/questions/tagged/r) programming language seems to be a good tool for this task, but I can't form a methodology of such group search. One solution is to somehow find the largest clusters of customers and use them as target groups, so the question is:
How can I automatically choose largest clusters of similar customers (similar by parameters that I don't know at this moment)?
For example: my program will connect to elasticsearch, offload customer data to CSV and using R language script will find that large portion of customers are male with no children and another large portion of customers have a car and their eye color is brown.
| Clustering customer data stored in ElasticSearch | CC BY-SA 3.0 | null | 2014-05-14T08:38:07.007 | 2022-10-21T03:12:52.913 | 2014-05-15T05:49:39.140 | 24 | 118 | [
"data-mining",
"clustering"
] | One algorithm that can be used for this is the [k-means clustering algorithm](http://en.wikipedia.org/wiki/K-means_clustering).
Basically:
- Randomly choose k datapoints from your set, $m_1$, ..., $m_k$.
- Until convergence:
Assign your data points to $k$ clusters, where cluster $i$ is the set of points for which $m_i$ is the closest of your current means
Replace each $m_i$ by the mean of all points assigned to cluster $i$.
It is good practice to repeat this algorithm several times, then choose the outcome that minimizes distances between the points of each cluster $i$ and the center $m_i$.
Of course, you have to know `k` to start here; you can use cross-validation to choose this parameter, though.
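The question mentions R, but since the language is stated to be irrelevant, here is a hedged Python/scikit-learn sketch of the workflow (the synthetic frame stands in for the CSV offloaded from elasticsearch; column names are illustrative):
```
# Hedged sketch: k-means on customer attributes exported from Elasticsearch.
# In practice you would load the offloaded CSV instead of generating random data.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age": rng.integers(18, 70, 200),
    "n_cars": rng.integers(0, 3, 200),
    "n_children": rng.integers(0, 4, 200),
})

X = StandardScaler().fit_transform(df)             # put features on a comparable scale
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
df["cluster"] = kmeans.labels_

print(df["cluster"].value_counts())                # the largest clusters = candidate target groups
print(kmeans.cluster_centers_)                     # inspect what characterises each group
```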
| Clustering Customer Data | The answer could be anything according to your data! As you cannot post your data here, I propose to spend some time on EDA to visualize your data from various POVs and see what it looks like. My suggestions:
- Use only price and quantity for a 2-d scatter plot of your customers. In this task you may need feature scaling if the scale of prices and quantities are much different.
- In the plot above, you may use different markers and/or colors to mark category or customer (as one customer can have several entries)
- Convert "date" feature to 3 features, namely, year, month and day. (Using Python modules you may also get the weekday which might be meaningful). Then apply dimensionality reduction methods and visualize your data to get some insight about it.
- Convert date to an ordinal feature (earliest date becomes 0 or 1 and it increases by 1 for each day) and plot total sale for each customer as a time-series and see it. You may do the same for categories. These can also be plotted as cumulative time-series. This can also be done according to year and month.
All above are just supposed to give you insight about the data (sometimes this insight can give you a proper hint for the number of clusters). This insight sometimes determines the analysis approach as well.
If your time-series become very sparse then time-series analysis might not be the best option (you can make them denser by increasing the time step, e.g. weekly, monthly, yearly, etc.)
The idea in your comment is pretty nice. You can use these cumulative features and apply dimensionality reduction methods to (again) see the nature of your data. Do not limit yourself to [linear](http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html) ones. Try [nonlinear](http://scikit-learn.org/stable/modules/generated/sklearn.manifold.LocallyLinearEmbedding.html) ones as well.
You may create a [graph](https://en.wikipedia.org/wiki/Graph_theory) out of your data and try graph analysis as well. Each customer is a node, so is each product when each edge shows a purchase ([directed](https://en.wikipedia.org/wiki/Directed_graph) from customer to product) and the [weight](https://en.wikipedia.org/wiki/Glossary_of_graph_theory_terms#weighted_graph) of that edge is the price and/or quantity. Then you end up with a [bipartite graph](https://en.wikipedia.org/wiki/Bipartite_graph). [Try some analysis](http://snap.stanford.edu/class/cs224w-2016/projects/cs224w-83-final.pdf) on this graph and see if it helps.
Hope it helps and good luck!
|
61 | 1 | 62 | null | 56 | 16700 | Logic often states that by overfitting a model, its capacity to generalize is limited, though this might only mean that overfitting stops a model from improving after a certain complexity. Does overfitting cause models to become worse regardless of the complexity of data, and if so, why is this the case?
---
Related: Followup to the question above, "[When is a Model Underfitted?](https://datascience.stackexchange.com/questions/361/when-is-a-model-underfitted)"
| Why Is Overfitting Bad in Machine Learning? | CC BY-SA 3.0 | null | 2014-05-14T18:09:01.940 | 2017-09-17T02:27:31.110 | 2017-04-13T12:50:41.230 | -1 | 158 | [
"machine-learning",
"predictive-modeling"
] | Overfitting is empirically bad. Suppose you have a data set which you split in two, test and training. An overfitted model is one that performs much worse on the test dataset than on training dataset. It is often observed that models like that also in general perform worse on additional (new) test datasets than models which are not overfitted.
One way to understand that intuitively is that a model may use some relevant parts of the data (signal) and some irrelevant parts (noise). An overfitted model uses more of the noise, which increases its performance in the case of known noise (training data) and decreases its performance in the case of novel noise (test data). The difference in performance between training and test data indicates how much noise the model picks up; and picking up noise directly translates into worse performance on test data (including future data).
Summary: overfitting is bad by definition, this has not much to do with either complexity or ability to generalize, but rather has to do with mistaking noise for signal.
P.S. On the "ability to generalize" part of the question, it is very possible to have a model which has inherently limited ability to generalize due to the structure of the model (for example linear SVM, ...) but is still prone to overfitting. In a sense overfitting is just one way that generalization may fail.
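As a hedged illustration of the train/test gap described above (dataset and model choices are purely illustrative): an unconstrained decision tree memorizes the training data, including its noise, and scores noticeably worse on held-out data than a constrained one.
```
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Noisy toy classification data (flip_y adds label noise)
X, y = make_classification(n_samples=500, n_features=20, flip_y=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

deep = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)              # unconstrained: overfits
shallow = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)

print("deep tree    train/test:", deep.score(X_tr, y_tr), deep.score(X_te, y_te))
print("shallow tree train/test:", shallow.score(X_tr, y_tr), shallow.score(X_te, y_te))
```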
| Overfitting in machine learning | I can tell from your screenshot that you are plotting the validation accuracy. When you overfit your training accuracy should be very high, but your validation accuracy should get lower and lower. Or if you think in terms of error rather than accuracy you should see the following plot in case of overfitting. In the figure below the x-axis contains the training progress, i.e. the number of training iterations. The training error (blue) keeps decreasing, while the validation error (red) starts increasing at the point where you start overfitting.
[](https://i.stack.imgur.com/TVkSt.png)
This picture is from the wikipedia article on overfitting by the way: [https://en.wikipedia.org/wiki/Overfitting](https://en.wikipedia.org/wiki/Overfitting) Have a look.
So to answer your question: No, I don't think you are overfitting. If increasing the number of features would make the overfitting more and more significant the validation accuracy should be falling, not stay constant. In your case it seems that more features are simply no longer adding additional benefit for the classification.
|
86 | 1 | 101 | null | 15 | 2829 | Given website access data in the form `session_id, ip, user_agent`, and optionally timestamp, following the conditions below, how would you best cluster the sessions into unique visitors?
`session_id`: is an id given to every new visitor. It does not expire, however if the user doesn't accept cookies/clears cookies/changes browser/changes device, he will not be recognised anymore
`IP` can be shared between different users (Imagine a free wi-fi cafe, or your ISP reassigning IPs), and they will often have at least 2, home and work.
`User_agent` is the browser+OS version, allowing to distinguish between devices. For example a user is likely to use both phone and laptop, but is unlikely to use windows+apple laptops. It is unlikely that the same session id has multiple useragents.
Data might look as the fiddle here:
[http://sqlfiddle.com/#!2/c4de40/1](http://sqlfiddle.com/#!2/c4de40/1)
Of course, we are talking about assumptions, but it's about getting as close to reality as possible. For example, if we encounter the same ip and useragent in a limited time frame with a different session_id, it would be a fair assumption that it's the same user, with some edge case exceptions.
Edit: Language in which the problem is solved is irrelevant; it's mostly about logic, not implementation. Pseudocode is fine.
Edit: due to the slow nature of the fiddle, you can alternatively read/run the mysql:
```
select session_id, floor(rand()*256*256*256*256) as ip_num , floor(rand()*1000) as user_agent_id
from
(select 1+a.nr+10*b.nr as session_id, ceil(rand()*3) as nr
from
(select 1 as nr union all select 2 union all select 3 union all select 4 union all select 5
union all select 6 union all select 7 union all select 8 union all select 9 union all select 0)a
join
(select 1 as nr union all select 2 union all select 3 union all select 4 union all select 5
union all select 6 union all select 7 union all select 8 union all select 9 union all select 0)b
order by 1
)d
inner join
(select 1 as nr union all select 2 union all select 3 union all select 4 union all select 5
union all select 6 union all select 7 union all select 8 union all select 9 )e
on d.nr>=e.nr
```
| Clustering unique visitors by useragent, ip, session_id | CC BY-SA 3.0 | null | 2014-05-15T09:04:09.710 | 2014-05-15T21:41:22.703 | 2014-05-15T10:06:06.393 | 116 | 116 | [
"clustering"
] | One possibility here (and this is really an extension of what Sean Owen posted) is to define a "stable user."
For the given info you have you can imagine making a user_id that is a hash of ip and some user agent info (pseudo code):
```
uid = MD5Hash(ip + UA.device + UA.model)
```
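For what it's worth, a hedged, runnable Python version of that pseudo code (the field values are made up):
```
import hashlib

def make_uid(ip, ua_device, ua_model):
    # Concatenate the fields and hash them into a stable user id
    return hashlib.md5((ip + ua_device + ua_model).encode("utf-8")).hexdigest()

uid = make_uid("203.0.113.7", "iPhone", "iPhone12,1")
print(uid)
```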
Then you flag these ids with "stable" or "unstable" based on usage heuristics you observe for your users. This can be a threshold of # of visits in a given time window, length of time their cookies persist, some end action on your site (I realize this wasn't stated in your original log), etc...
The idea here is to separate the users that don't drop cookies from those that do.
From here you can attribute session_ids to stable uids from your logs. You will then have "left over" session_ids for unstable users that you are relatively unsure about. You may be over or under counting sessions, attributing behavior to multiple people when there is only one, etc... But this is at least limited to the users you are now "less certain" about.
You then perform analytics on your stable group and project that to the unstable group. Take a user count for example, you know the total # of sessions, but you are unsure of how many users generated those sessions. You can find the # sessions / unique stable user and use this to project the "estimated" number of unique users in the unstable group since you know the number of sessions attributed to that group.
```
projected_num_unstable_users = num_sess_unstable / num_sess_per_stable_uid
```
This doesn't help with per user level investigation on unstable users but you can at least get some mileage out of a cohort of stable users that persist for some time. You can, by various methods, project behavior and counts into the unstable group. The above is a simple example of something you might want to know. The general idea is again to define a set of users you are confident persist, measure what you want to measure, and use certain ground truths (num searches, visits, clicks, etc...) to project into the unknown user space and estimate counts for them.
This is a longstanding problem in unique user counting, logging, etc... for services that don't require log in.
| Clustering of users in a dataset | If your objective is to find clusters of users, then you are interested in finding groups of "similar" reviewers.
Therefore you should:
- Retain information which relates to the users in a meaningful way - e.g. votes_for_user.
- Discard information which has no meaningful relationship to a user - e.g. user_id (unless perhaps it contains some information such as time / order).
- Be mindful of fields which may contain implicit relationships involving a user - e.g. vote may be a result of the interaction between user and ISBN.
|
115 | 1 | 131 | null | 15 | 4194 | If I have a very long list of paper names, how could I get the abstracts of these papers from the internet or any database?
The paper names are like "Assessment of Utility in Web Mining for the Domain of Public Health".
Does anyone know of an API that can give me a solution? I tried to crawl Google Scholar; however, Google blocked my crawler.
| Is there any APIs for crawling abstract of paper? | CC BY-SA 3.0 | null | 2014-05-17T08:45:08.420 | 2021-01-25T09:43:02.103 | null | null | 212 | [
"data-mining",
"machine-learning"
] | Look it up on:
- Google Scholar link
- Citeseer link
If you get a single exact title match then you have probably found the right article, and can fill in the rest of the info from there. Both give you download links and bibtex-style output. What you would likely want to do though to get perfect metadata is download and parse the pdf (if any) and look for DOI-style identifier.
Please be nice and rate-limit your requests if you do this.
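One extra option, offered here as an assumption rather than part of the answer above: the public Crossref REST API can be queried by title and sometimes includes an abstract in the returned record. A hedged sketch with `requests`, rate-limited as advised (field availability varies per record):
```
import time
import requests

def lookup(title):
    # Hedged: assumes the public Crossref REST API at api.crossref.org
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.title": title, "rows": 1},
        timeout=30,
    )
    items = resp.json().get("message", {}).get("items", [])
    return items[0] if items else None

titles = ["Assessment of Utility in Web Mining for the Domain of Public Health"]
for t in titles:
    item = lookup(t)
    if item:
        print(item.get("DOI"), item.get("abstract", "no abstract in record"))
    time.sleep(1)   # be nice: rate-limit your requests
```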
| where can i find the algorithm of these papers? | You can email the authors to ask them if they could share their code with you, but maybe they can't for IP reasons or don't want to share it.
Papers like these are not unusual in experimental research. In theory you should be able to reproduce their system following the explanations in the paper.
However there are other tools available for biomedical NER: [MetaMap](https://metamap.nlm.nih.gov/), [cTakes](https://ctakes.apache.org/).
|
116 | 1 | 121 | null | 28 | 3243 | I have a database from my Facebook application and I am trying to use machine learning to estimate users' age based on what Facebook sites they like.
There are three crucial characteristics of my database:
- the age distribution in my training set (12k of users in sum) is skewed towards younger users (i.e. I have 1157 users aged 27, and 23 users aged 65);
- many sites have no more than 5 likers (I filtered out the FB sites with less than 5 likers).
- there are many more features than samples.
So, my questions are: what strategy would you suggest to prepare the data for further analysis? Should I perform some sort of dimensionality reduction? Which ML method would be most appropriate to use in this case?
I mainly use Python, so Python-specific hints would be greatly appreciated.
| Machine learning techniques for estimating users' age based on Facebook sites they like | CC BY-SA 3.0 | null | 2014-05-17T09:16:18.823 | 2021-02-09T04:31:08.427 | 2014-05-17T19:26:53.783 | 173 | 173 | [
"machine-learning",
"dimensionality-reduction",
"python"
] | One thing to start off with would be k-NN. The idea here is that you have a user/item matrix and for some of the users you have a reported age. The age for a person in the user item matrix might be well determined by something like the mean or median age of some nearest neighbors in the item space.
So you have each user expressed as a vector in item space, find the k nearest neighbors and assign the vector in question some summary stat of the nearest neighbor ages. You can choose k on a distance cutoff or more realistically by iteratively assigning ages to a train hold out and choosing the k that minimizes the error in that assignment.
If the dimensionality is a problem, you can easily perform reduction in this setup by singular value decomposition, choosing the m vectors that capture the most variance across the group.
In all cases since each feature is binary it seems that cosine similarity would be your go to distance metric.
I need to think a bit more about other approaches (regression, rf, etc...) given the narrow focus of your feature space (all variants of the same action, liking) I think the user/item approach might be the best.
One note of caution, if the ages you have for train are self reported you might need to correct some of them. People on facebook tend to report ages in the decade they were born. Plot a histogram of the birth dates (derived from ages) and see if you have spikes at decades like 70s, 80s, 90s.
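A hedged scikit-learn sketch of the user/item k-NN idea with SVD reduction and cosine distance (the binary like-matrix and the ages are random, purely illustrative):
```
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 5000))     # users x liked-pages (binary), illustrative
ages = rng.integers(18, 70, size=1000)        # self-reported ages, illustrative

model = make_pipeline(
    TruncatedSVD(n_components=100, random_state=0),        # dimensionality reduction
    KNeighborsRegressor(n_neighbors=20, metric="cosine"),   # cosine distance between users
)
model.fit(X[:800], ages[:800])                # "train" users with known ages
print(model.predict(X[800:805]))              # estimated ages for unseen users
```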
| Determine relationship between users and age? | If you are using pandas, all you need to do is:
```
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
corrMatrix = df.corr()
```
Then you can print the correlation matrix and also plot it using seaborn or any other plotting method.
```
sns.heatmap(corrMatrix, annot=True)
plt.show()
```
Hope this helps.
|
128 | 1 | 296 | null | 62 | 31257 | [Latent Dirichlet Allocation (LDA)](http://en.wikipedia.org/wiki/Latent_Dirichlet_allocation) and [Hierarchical Dirichlet Process (HDP)](http://en.wikipedia.org/wiki/Hierarchical_Dirichlet_process) are both topic modeling processes. The major difference is LDA requires the specification of the number of topics, and HDP doesn't. Why is that so? And what are the differences, pros, and cons of both topic modelling methods?
| Latent Dirichlet Allocation vs Hierarchical Dirichlet Process | CC BY-SA 3.0 | null | 2014-05-18T06:10:52.543 | 2021-02-04T09:10:56.807 | 2014-05-20T13:45:59.373 | 84 | 122 | [
"nlp",
"topic-model",
"lda"
] | HDP is an extension of LDA, designed to address the case where the number of mixture components (the number of "topics" in document-modeling terms) is not known a priori. So that's the reason why there's a difference.
Using LDA for document modeling, one treats each "topic" as a distribution of words in some known vocabulary. For each document a mixture of topics is drawn from a Dirichlet distribution, and then each word in the document is an independent draw from that mixture (that is, selecting a topic and then using it to generate a word).
For HDP (applied to document modeling), one also uses a Dirichlet process to capture the uncertainty in the number of topics. So a common base distribution is selected which represents the countably-infinite set of possible topics for the corpus, and then the finite distribution of topics for each document is sampled from this base distribution.
As far as pros and cons, HDP has the advantage that the maximum number of topics can be unbounded and learned from the data rather than specified in advance. I suppose though it is more complicated to implement, and unnecessary in the case where a bounded number of topics is acceptable.
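As a hedged practical illustration, assuming the `gensim` library (the exact API and the toy corpus are assumptions, not part of the original answer): LDA needs `num_topics` up front, while HDP infers the number of topics from the data.
```
from gensim.corpora import Dictionary
from gensim.models import LdaModel, HdpModel

docs = [["cat", "dog", "pet"], ["stock", "market", "trade"], ["dog", "pet", "food"]]
dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]

lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2, random_state=0)  # K fixed in advance
hdp = HdpModel(corpus=corpus, id2word=dictionary)       # number of topics learned from the data

print(lda.print_topics())
print(hdp.print_topics(num_topics=5))
```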
| Calculating optimal number of topics for topic modeling (LDA) | LDA being a probabilistic model, the results depend on the type of data and problem statement. There is nothing like a valid range for coherence score but having more than 0.4 makes sense. By fixing the number of topics, you can experiment by tuning hyper parameters like alpha and beta which will give you better distribution of topics.
>
The alpha controls the mixture of topics for any given document. Turn it down and the documents will likely have less of a mixture of topics. Turn it up and the documents will likely have more of a mixture of topics.
The beta controls the distribution of words per topic. Turn it down and the topics will likely have fewer words. Turn it up and the topics will likely have more words.
The main purpose of LDA is to find the hidden meaning of a corpus and the words which best describe that corpus.
To know more about coherence score you can refer [this](https://stats.stackexchange.com/questions/375062/how-does-topic-coherence-score-in-lda-intuitively-makes-sense)
|
129 | 1 | 166 | null | 10 | 1581 | [This question](https://stackoverflow.com/questions/879432/what-is-the-difference-between-a-generative-and-discriminative-algorithm) asks about generative vs. discriminative algorithm, but can someone give an example of the difference between these forms when applied to Natural Language Processing? How are generative and discriminative models used in NLP?
| What is generative and discriminative model? How are they used in Natural Language Processing? | CC BY-SA 3.0 | null | 2014-05-18T06:17:37.587 | 2014-05-19T11:13:48.067 | 2017-05-23T12:38:53.587 | -1 | 122 | [
"nlp",
"language-model"
] | Let's say you are predicting the topic of a document given its words.
A generative model describes how likely each topic is, and how likely words are given the topic. This is how it says documents are actually "generated" by the world -- a topic arises according to some distribution, words arise because of the topic, you have a document. Classifying documents of words W into topic T is a matter of maximizing the joint likelihood: P(T,W) = P(W|T)P(T)
A discriminative model operates by only describing how likely a topic is given the words. It says nothing about how likely the words or topic are by themselves. The task is to model P(T|W) directly and find the T that maximizes this. These approaches do not care about P(T) or P(W) directly.
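A hedged scikit-learn illustration on a toy topic task (the model and data choices are mine, not from the answer above): Naive Bayes is a generative classifier, while logistic regression is a discriminative one.
```
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression

docs = ["the cat chased the dog", "stocks fell on the market",
        "my dog loves pet food", "traders watched the stock index"]
topics = ["pets", "finance", "pets", "finance"]

X = CountVectorizer().fit_transform(docs)          # bag-of-words counts

generative = MultinomialNB().fit(X, topics)            # models P(W|T) and P(T)
discriminative = LogisticRegression().fit(X, topics)   # models P(T|W) directly

print(generative.predict(X), discriminative.predict(X))
```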
| Which type of models generalize better, generative or discriminative models? | My answer is not limited to NLP and I think NLP is no different in this aspect than other types of learning.
An interesting technical look is offered by: [On Discriminative vs. Generative Classifiers - Andrew Ng, Michael Jordan](http://robotics.stanford.edu/%7Eang/papers/nips01-discriminativegenerative.pdf).
Now a more informal opinion:
Discriminative classifiers attack the problem of learning directly. In the end, you build classifiers for prediction, which means you build an estimation of $p(y|x)$. Generative models arrive at the same estimation through Bayes' theorem, but they do so by estimating the joint probability, and the conditional is obtained as a consequence.
Intuitively, generative classifiers require more data since the space modeled is usually larger than that for a discriminative model. More parameters mean there is a need for more data. Sometimes not only the parameters but even the form of a joint distribution is harder to be modeled rather than a conditional.
But if you have enough data available, it is also to be expected that a generative model should give a more robust model. Those are intuitions. Vapnik once asked why we should go for the joint distribution when what we have to solve is the conditional. He seems to be right if you are interested only in prediction.
My opinion is that there are many factors that influence the choice between a generative model and a discriminative one, including the complexity of the formalism, the complexity of the input data, the flexibility to extend results beyond prediction, and the models themselves. If there is a superiority of discriminative models as a function of available data, it is perhaps by a small margin.
|
159 | 1 | 160 | null | 6 | 558 | I see a lot of courses in Data Science emerging in the last 2 years. Even big universities like Stanford and Columbia offer an MS specifically in Data Science. But as far as I can see, it looks like data science is just a mix of computer science and statistics techniques.
So I keep thinking about this: is it just a trend, and 10 years from now, will someone still mention Data Science as an entire field, or just as a subject/topic inside CS or stats?
What do you think?
| Is Data Science just a trend or is a long term concept? | CC BY-SA 3.0 | null | 2014-05-18T19:46:44.653 | 2014-05-18T21:05:28.990 | null | null | 199 | [
"bigdata",
"machine-learning",
"databases",
"statistics",
"education"
] | The one thing that you can say for sure is: Nobody can say this for sure. And it might indeed be opinion-based to some extent. The introduction of terms like "Big Data" that some people consider as "hypes" or "buzzwords" doesn't make it easier to flesh out an appropriate answer here. But I'll try.
In general, interdisciplinary fields often seem to have the problem of not being taken seriously by either of the fields they are spanning. However, the more research is invested into a particular field, the greater is the urge to split this field into several sub-topics. And these sub-topics sooner or later have to be re-combined in new ways, in order to prevent an overspecialization, and to increase and broaden the applicability of techniques that are developed by the (over?)specialized experts in the different fields.
And I consider "Data Science" as such an approach to combine the expertise and findings from different fields. You described it as
>
...a mix of computer science and statistics techniques
And indeed, several questions here aim at the differentiation between data science and statistics. But a pure statistician will most likely not be able to set up a Hadoop cluster and show the results of his analysis in an interactive HTML5 dashboard. And someone who can implement a nice HTML5 dashboard might not be so familiar with the mathematical background of a Chi-Squared-Test.
It is reasonable to assume that giving students enough knowledge to apply the most important techniques from the different fields that are covered by data science will lead to new applications of these techniques, and be beneficial - also for the "purists" in these fields. The combination of these techniques is not straightforward in many cases, and can justify a branch of research of its own.
You also asked whether in 10 years, data science will be considered as "just a topic inside computer science". Again: Nobody can say for sure. But I wonder at which point people stopped asking the question whether "Computer Science" will one day be considered only as a mix of (or a subject of) Electrical Engineering and Mathematics...
| Is Data Science the Same as Data Mining? | [@statsRus](https://datascience.stackexchange.com/users/36/statsrus) starts to lay the groundwork for your answer in another question [What characterises the difference between data science and statistics?](https://datascience.meta.stackexchange.com/q/86/98307):
>
Data collection: web scraping and online surveys
Data manipulation: recoding messy data and extracting meaning from linguistic and social network data
Data scale: working with extremely large data sets
Data mining: finding patterns in large, complex data sets, with an emphasis on algorithmic techniques
Data communication: helping turn "machine-readable" data into "human-readable" information via visualization
## Definition
[data-mining](/questions/tagged/data-mining) can be seen as one item (or set of skills and applications) in the toolkit of the data scientist. I like how he separates the definition of mining from collection in a sort of trade-specific jargon.
However, I think that data-mining would be synonymous with data-collection in a US-English colloquial definition.
As to where to go to become proficient? I think that question is too broad as it is currently stated and would receive answers that are primarily opinion based. Perhaps if you could refine your question, it might be easier to see what you are asking.
|
169 | 1 | 170 | null | 15 | 5505 | Assume a set of loosely structured data (e.g. Web tables/Linked Open Data), composed of many data sources. There is no common schema followed by the data and each source can use synonym attributes to describe the values (e.g. "nationality" vs "bornIn").
My goal is to find some "important" attributes that somehow "define" the entities that they describe. So, when I find the same value for such an attribute, I will know that the two descriptions are most likely about the same entity (e.g. the same person).
For example, the attribute "lastName" is more discriminative than the attribute "nationality".
How could I (statistically) find such attributes that are more important than others?
A naive solution would be to take the average IDF of the values of each attribute and make this the "importance" factor of the attribute. A similar approach would be to count how many distinct values appear for each attribute.
I have seen the term feature, or attribute selection in machine learning, but I don't want to discard the remaining attributes, I just want to put higher weights to the most important ones.
| How to specify important attributes? | CC BY-SA 3.0 | null | 2014-05-19T15:55:24.983 | 2021-03-11T20:12:24.030 | 2015-05-18T13:30:46.940 | 113 | 113 | [
"machine-learning",
"statistics",
"feature-selection"
] | A possible solution is to calculate the [information gain](http://en.wikipedia.org/wiki/Decision_tree_learning#Information_gain) associated to each attribute:
$$I_{E}(f) = - \sum\limits_{i = 1}^m f_i \log_2 f_i$$
Initially you have the whole dataset, and compute the information gain of each item. The item with the best information gain is the one you should use to partition the dataset (considering the item's values). Then, perform the same computations for each item (but the ones selected), and always choose the one which best describes/differentiates the entries from your dataset.
There are implementations available for such computations. [Decision trees](http://en.wikipedia.org/wiki/Decision_tree_learning) usually base their feature selection on the features with best information gain. You may use the resulting tree structure to find these important items.
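A hedged pandas/numpy sketch of ranking attributes by information gain with respect to a known entity label (the data and column names are illustrative, echoing the lastName/nationality example from the question):
```
import numpy as np
import pandas as pd

def entropy(series):
    p = series.value_counts(normalize=True)
    return -(p * np.log2(p)).sum()

def information_gain(df, attribute, target):
    # Entropy of the target minus the weighted entropy after splitting on the attribute
    base = entropy(df[target])
    weighted = sum(
        len(group) / len(df) * entropy(group[target])
        for _, group in df.groupby(attribute)
    )
    return base - weighted

df = pd.DataFrame({
    "lastName":    ["Smith", "Jones", "Smith", "Lee"],
    "nationality": ["US", "US", "US", "UK"],
    "entity":      ["e1", "e2", "e3", "e4"],
})
for attr in ["lastName", "nationality"]:
    print(attr, information_gain(df, attr, "entity"))   # higher gain = more "important"
```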
| Using attributes to classify/cluster user profiles | Right now, I only have time for a very brief answer, but I'll try to expand on it later on.
What you want to do is a clustering, since you want to discover some labels for your data. (As opposed to a classification, where you would have labels for at least some of the data and you would like to label the rest).
In order to perform a clustering on your users, you need to have them as some kind of points in an abstract space. Then you will measure distances between points, and say that points that are "near" are "similar", and label them according to their place in that space.
You need to transform your data into something that looks like a user profile, i.e.: a user ID, followed by a vector of numbers that represent the features of this user. In your case, each feature could be a "category of website" or a "category of product", and the number could be the amount of dollars spent in that feature. Or a feature could be a combination of web and product, of course.
As an example, let us imagine the user profile with just three features:
- dollars spent in "techy" webs,
- dollars spent on "fashion" products,
- and dollars spent on "aggressive" video games on "family-oriented" webs (who knows).
In order to build those profiles, you need to map the "categories" and "keywords" that you have, which are too plentiful, into the features you think are relevant. Look into [topic modeling](http://scikit-learn.org/stable/auto_examples/applications/topics_extraction_with_nmf.html) or [semantic similarity](http://en.wikipedia.org/wiki/Semantic_similarity) to do so. Once that map is built, it will state that all dollars spent on webs with keywords "gadget", "electronics", "programming", and X others, should all be aggregated into our first feature; and so on.
Do not be afraid of "imposing" the features! You will need to refine them and maybe completely change them once you have clustered the users.
Once you have user profiles, proceed to cluster them using [k-means](http://en.wikipedia.org/wiki/K-means_clustering) or whatever else you think is interesting. Whatever technique you use, you will be interested in getting the "representative" point for each cluster. This is usually the geometric "center" of the points in that cluster.
Plot those "representative" points, and also plot how they compare to other clusters. Using a [radar chart](http://en.wikipedia.org/wiki/Radar_chart) is very useful here. Wherever there is a salient feature (something in the representative that is very marked, and is also very prominent in its comparison to other clusters) is a good candidate to help you label the cluster with some catchy phrase ("nerds", "fashionistas", "aggressive moms" ...).
Remember that a clustering problem is an open problem, so there is no "right" solution! And I think my answer is quite long already; check also about normalization of the profiles and filtering outliers.
|
186 | 1 | 187 | null | 9 | 345 | I'm learning [Support Vector Machines](http://en.wikipedia.org/wiki/Support_vector_machine), and I'm unable to understand how a class label is chosen for a data point in a binary classifier. Is it chosen by consensus with respect to the classification in each dimension of the separating hyperplane?
| Using SVM as a binary classifier, is the label for a data point chosen by consensus? | CC BY-SA 3.0 | null | 2014-05-21T15:12:18.980 | 2014-05-21T15:39:54.830 | 2014-05-21T15:26:02.533 | 84 | 133 | [
"svm",
"classification",
"binary"
] | The term consensus, as far as I'm concerned, is used rather for cases when you have more than one source of metric/measure/choice from which to make a decision. And, in order to choose a possible result, you perform some average evaluation/consensus over the values available.
This is not the case for SVM. The algorithm is based on a [quadratic optimization](http://upload.wikimedia.org/wikipedia/commons/2/2a/Svm_max_sep_hyperplane_with_margin.png), that maximizes the distance from the closest documents of two different classes, using a hyperplane to make the split.
![Hyperplane separating two different classes](https://i.stack.imgur.com/CCO7Z.png)
So, the only consensus here is the resulting hyperplane, computed from the closest documents of each class. In other words, the classes are attributed to each point by calculating the distance from the point to the hyperplane derived. If the distance is positive, it belongs to a certain class, otherwise, it belongs to the other one.
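As a hedged scikit-learn illustration of this (the library call and toy data are assumptions, not part of the original answer): the predicted label is simply determined by which side of the hyperplane the point falls on, i.e. the sign of the decision function.
```
import numpy as np
from sklearn.svm import SVC

X = np.array([[0, 0], [1, 1], [4, 4], [5, 5]])
y = np.array([-1, -1, 1, 1])

clf = SVC(kernel="linear").fit(X, y)

X_new = np.array([[0.5, 0.2], [4.5, 4.8]])
signed_dist = clf.decision_function(X_new)   # signed distance (up to scaling) to the hyperplane
labels = np.where(signed_dist > 0, 1, -1)    # positive side -> one class, negative -> the other
print(signed_dist, labels, clf.predict(X_new))
```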
| in binary classification where class labels are {-1, 1} is preprocessing needed? | You need `{0,1}` or `{-1,1}` labels depending on the output of your model. If you have a Sigmoid output use `{0,1}`, while TanH outputs work with `{-1,1}`.
No label choice is inherently right or wrong, as long as it's compatible with your model architecture and gives you good results.
---
EDIT:
In case of logistic regression you must use `{0,1}`, that is because this class of models has a Sigmoid output. Sigmoid function is always bounded in `[0,1]` and it can't take values outside of that range. It could never reach `-1` and training won't work.
|
191 | 1 | 194 | null | 8 | 1166 | Can someone explain to me how to classify data like MNIST with an MLBP neural network if I use more than one output (e.g. 8)? I mean, if I just use one output I can easily classify the data, but if I use more than one, which output should I choose?
| Multi layer back propagation Neural network for classification | CC BY-SA 3.0 | null | 2014-05-22T13:36:24.120 | 2014-06-10T08:38:27.093 | null | null | 273 | [
"neural-network"
] | Suppose that you need to classify something in K classes, where K > 2. In this case the most often setup I use is one hot encoding. You will have K output columns, and in the training set you will set all values to 0, except the one which has the category index, which could have value 1. Thus, for each training data set instance you will have all outputs with values 0 or 1, all outputs sum to 1 for each instance.
This looks like a probability, which reminds me of a technique used often to connect some outputs which are modeled as probability. This is called softmax function, more details [on Wikipedia](http://en.wikipedia.org/wiki/Softmax_activation_function). This will allow you to put some constraints on the output values (it is basically a logistic function generalization) so that the output values will be modeled as probabilities.
Finally, with or without softmax you can use the output as a discriminant function to select the proper category.
Another final thought would be to avoid encoding your variables in a connected way. For example, you could use the binary representation of the category index. This would induce in the learner an artificial connection between some outputs, which is arbitrary. The one hot encoding has the advantage that it is neutral to how labels are indexed.
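To make the encoding and the readout concrete, here is a hedged numpy sketch (shapes and values are purely illustrative, not tied to MNIST):
```
import numpy as np

def one_hot(labels, num_classes):
    # Training targets: one column per class, 1 at the category index, 0 elsewhere
    out = np.zeros((len(labels), num_classes))
    out[np.arange(len(labels)), labels] = 1.0
    return out

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))   # numerically stable
    return e / e.sum(axis=1, keepdims=True)

labels = np.array([3, 0, 7])           # e.g. digit classes for 3 samples
targets = one_hot(labels, 10)

raw_outputs = np.random.randn(3, 10)   # pretend network outputs for the same samples
probs = softmax(raw_outputs)           # constrain outputs to behave like probabilities
predicted = probs.argmax(axis=1)       # pick the class with the highest "probability"
print(targets, predicted)
```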
| Neural network back propagation gradient descent calculus | We have $$\hat{y}_{\color{blue}k}=h_1W_{k1}^o + h_2W_{k2}^o$$
If we let $\hat{y} = (\hat{y}_1, \ldots, \hat{y}_K)^T$, $W_1^o=(W_{11}, \ldots, W_{K1})^T$, and $W_2^o=(W_{12}, \ldots, W_{K2})^T$
Then we have $$\hat{y}=h_1W_1^o+h_2W_2^o$$
I believe you are performing a regression, $$J(w) = \frac12 \|\hat{y}-y\|^2=\frac12\sum_{k=1}^K(\hat{y}_k-y_k)^2$$
It is possible to weight individual term as well depending on applications.
|
196 | 1 | 197 | null | 13 | 7379 | So we have potential for a machine learning application that fits fairly neatly into the traditional problem domain solved by classifiers, i.e., we have a set of attributes describing an item and a "bucket" that they end up in. However, rather than create models of probabilities like in Naive Bayes or similar classifiers, we want our output to be a set of roughly human-readable rules that can be reviewed and modified by an end user.
Association rule learning looks like the family of algorithms that solves this type of problem, but these algorithms seem to focus on identifying common combinations of features and don't include the concept of a final bucket that those features might point to. For example, our data set looks something like this:
```
Item A { 4-door, small, steel } => { sedan }
Item B { 2-door, big, steel } => { truck }
Item C { 2-door, small, steel } => { coupe }
```
I just want the rules that say "if it's big and a 2-door, it's a truck," not the rules that say "if it's a 4-door it's also small."
One workaround I can think of is to simply use association rule learning algorithms and ignore the rules that don't involve an end bucket, but that seems a bit hacky. Have I missed some family of algorithms out there? Or perhaps I'm approaching the problem incorrectly to begin with?
| Algorithm for generating classification rules | CC BY-SA 3.0 | null | 2014-05-22T21:47:26.980 | 2020-08-06T11:04:09.857 | 2014-05-23T03:27:20.630 | 84 | 275 | [
"machine-learning",
"classification"
] | C4.5, made by Quinlan, is able to produce rules for prediction. Check this [Wikipedia](http://en.wikipedia.org/wiki/C4.5_algorithm) page. I know that in [Weka](http://www.cs.waikato.ac.nz/~ml/weka/) its name is J48. I have no idea which implementations exist in R or Python. Anyway, from this kind of decision tree you should be able to infer rules for prediction.
Later edit
Also you might be interested in algorithms for directly inferring rules for classification. RIPPER is one, which again in Weka it received a different name JRip. See the original paper for RIPPER: [Fast Effective Rule Induction, W.W. Cohen 1995](http://www.google.ro/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=0CCYQFjAA&url=http://www.cs.utsa.edu/~bylander/cs6243/cohen95ripper.pdf&ei=-XJ-U-7pGoqtyAOej4Ag&usg=AFQjCNFqLnuJWi3gGXVCrugmv3NTRhHHLA&bvm=bv.67229260,d.bGQ&cad=rja)
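As a hedged illustration (scikit-learn's CART-style tree plus `export_text`, which is an assumption on my part, not a Weka/J48 or RIPPER equivalent), using the toy data from the question:
```
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

data = pd.DataFrame({
    "doors":    [4, 2, 2],
    "size":     ["small", "big", "small"],
    "material": ["steel", "steel", "steel"],
    "bucket":   ["sedan", "truck", "coupe"],
})
X = pd.get_dummies(data[["doors", "size", "material"]])   # binary features per attribute value
y = data["bucket"]                                        # the end bucket is the class label

clf = DecisionTreeClassifier().fit(X, y)
print(export_text(clf, feature_names=list(X.columns)))    # human-readable if/else rules
```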
| Algorithm for generating rules for classifying documents | It sounds like you have two issues. The first one is preprocessing and feature extraction. The second one is how to learn classification rules.
The second issue is the easier one to approach. There are a number of algorithms for learning classification rules. You could use a decision tree algorithm such as CART or C4.5, but there are also rule induction algorithms like the CN2 algorithm. Both these types of algorithms can learn the types of rules you mention; however, rule induction based systems can usually be supplemented with hand-crafted rules in a more straightforward way than decision tree based systems, while, unless my memory fails me, decision tree algorithms generally perform better on classification tasks.
The first issue is bit hairier. To recommend the types of changes you suggest you first need to extract the relevant features. There are pre-processors which perform part-of-speech tagging, syntactic parsing, named entity recognition etc. and if the citations follow a strict format, I guess a regular expression could perhaps solve the problem, but otherwise you have to first train a system to recognize and count the number of citations in a text (and the same for any other non-trivial feature). Then you can pass the output of this feature extraction system into the classification system. However, on reading your question again I'm unsure whether this problem might already be solved in your case?
|
205 | 1 | 208 | null | 12 | 1771 | Working on what could often be called "medium data" projects, I've been able to parallelize my code (mostly for modeling and prediction in Python) on a single system across anywhere from 4 to 32 cores. Now I'm looking at scaling up to clusters on EC2 (probably with StarCluster/IPython, but open to other suggestions as well), and have been puzzled by how to reconcile distributing work across cores on an instance vs. instances on a cluster.
Is it even practical to parallelize across instances as well as across cores on each instance? If so, can anyone give a quick rundown of the pros + cons of running many instances with few cores each vs. a few instances with many cores? Is there a rule of thumb for choosing the right ratio of instances to cores per instance?
Bandwidth and RAM are non-trivial concerns in my projects, but it's easy to spot when those are the bottlenecks and readjust. It's much harder, I'd imagine, to benchmark the right mix of cores to instances without repeated testing, and my projects vary too much for any single test to apply to all circumstances. Thanks in advance, and if I've just failed to google this one properly, feel free to point me to the right answer somewhere else!
| Instances vs. cores when using EC2 | CC BY-SA 3.0 | null | 2014-05-23T19:45:54.283 | 2017-02-19T09:12:49.270 | null | null | 250 | [
"parallel",
"clustering",
"aws"
] | When using IPython, you very nearly don't have to worry about it (at the expense of some loss of efficiency/greater communication overhead). The parallel IPython plugin in StarCluster will by default start one engine per physical core on each node (I believe this is configurable but not sure where). You just run whatever you want across all engines by using the DirectView api (map_sync, apply_sync, ...) or the %px magic commands. If you are already using IPython in parallel on one machine, using it on a cluster is no different.
Addressing some of your specific questions:
"how to reconcile distributing work across cores on an instance vs. instances on a cluster" - You get one engine per core (at least); work is automatically distributed across all cores and across all instances.
"Is it even practical to parallelize across instances as well as across cores on each instance?" - Yes :) If the code you are running is embarrassingly parallel (exact same algo on multiple data sets) then you can mostly ignore where a particular engine is running. If the core requires a lot of communication between engines, then of course you need to structure it so that engines primarily communicate with other engines on the same physical machine; but that kind of problem is not ideally suited for IPython, I think.
"If so, can anyone give a quick rundown of the pros + cons of running many instances with few cores each vs. a few instances with many cores? Is there a rule of thumb for choosing the right ratio of instances to cores per instance?" - Use the largest c3 instances for compute-bound, and the smallest for memory-bandwidth-bound problems; for message-passing-bound problems, also use the largest instances but try to partition the problem so that each partition runs on one physical machine and most message passing is within the same partition. Problems which would run significantly slower on N quadruple c3 instances than on 2N double c3 are rare (an artificial example may be running multiple simple filters on a large number of images, where you go through all images for each filter rather than all filters for the same image). Using largest instances is a good rule of thumb.
| Which Amazon EC2 instance for Deep Learning tasks? | [](https://i.stack.imgur.com/Pe9JX.png)
[](https://i.stack.imgur.com/DAuW6.png)
I think the differences and use cases are well explained there. As far as the workload goes, there are features which help you optimise it. According to the official [documentation](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/accelerated-computing-instances.html), you can try:
- For persistency,
sudo nvidia-smi -pm 1
- Disabling the autoboost feature
sudo nvidia-smi --auto-boost-default=0
- Set all GPU clock speeds to their maximum frequency.
sudo nvidia-smi -ac 2505,875
|
211 | 1 | 213 | null | 9 | 4593 | I'm new to this community and hopefully my question will fit in well here.
As part of my undergraduate data analytics course I have chosen to do a project on human activity recognition using smartphone data sets. As far as I'm concerned, this topic relates to Machine Learning and Support Vector Machines. I'm not very familiar with these technologies yet, so I will need some help.
I have decided to follow [this project idea](http://www.inf.ed.ac.uk/teaching/courses/dme/2014/datasets.html) (first project on the top)
The project goal is to determine what activity a person is engaging in (e.g., WALKING, WALKING_UPSTAIRS, WALKING_DOWNSTAIRS, SITTING, STANDING, LAYING) from data recorded by a smartphone (Samsung Galaxy S II) on the subject's waist. Using its embedded accelerometer and gyroscope, the data includes 3-axial linear acceleration and 3-axial angular velocity at a constant rate of 50Hz.
All the data set is given in one folder with some description and feature labels. The data is divided for 'test' and 'train' files in which data is represented in this format:
```
2.5717778e-001 -2.3285230e-002 -1.4653762e-002 -9.3840400e-001 -9.2009078e-001 -6.6768331e-001 -9.5250112e-001 -9.2524867e-001 -6.7430222e-001 -8.9408755e-001 -5.5457721e-001 -4.6622295e-001 7.1720847e-001 6.3550240e-001 7.8949666e-001 -8.7776423e-001 -9.9776606e-001 -9.9841381e-001 -9.3434525e-001 -9.7566897e-001 -9.4982365e-001 -8.3047780e-001 -1.6808416e-001 -3.7899553e-001 2.4621698e-001 5.2120364e-001 -4.8779311e-001 4.8228047e-001 -4.5462113e-002 2.1195505e-001 -1.3489443e-001 1.3085848e-001 -1.4176313e-002 -1.0597085e-001 7.3544013e-002 -1.7151642e-001 4.0062978e-002 7.6988933e-002 -4.9054573e-001 -7.0900265e-001
```
And that's only a very small sample of what the file contains.
I don't really know what this data represents or how it can be interpreted. Also, what tools will I need to use for analyzing, classifying and clustering the data?
Is there any way I can put this data into excel with labels included and for example use R or python to extract sample data and work on this?
Any hints/tips would be much appreciated.
| Human activity recognition using smartphone data set problem | CC BY-SA 4.0 | null | 2014-05-27T10:41:33.220 | 2020-08-17T03:25:03.437 | 2020-08-16T21:51:47.670 | 98307 | 295 | [
"bigdata",
"machine-learning",
"databases",
"clustering",
"data-mining"
] | The data set definitions are on the page here:
[Attribute Information at the bottom](http://archive.ics.uci.edu/ml/datasets/Human+Activity+Recognition+Using+Smartphones#)
or you can look inside the ZIP folder at the file named activity_labels, which has your column headings inside of it. Make sure you read the README carefully; it has some good info in it. You can easily bring a `.csv` file into R using the `read.csv` command.
For example, if you name your file `samsungdata`, you can open R and run this command:
```
data <- read.csv("directory/where/file/is/located/samsungdata.csv", header = TRUE)
```
Or if you are already inside of the working directory in R you can just run the following
```
data <- read.csv("samsungdata.csv", header = TRUE)
```
Where the name `data` can be changed to whatever you want to call your data set.
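Since the question also mentions Python, here is a rough pandas equivalent, assuming the files are whitespace-delimited text as distributed in the UCI archive (the paths below are illustrative):
```python
import pandas as pd

# the second column of features.txt holds the feature names
feature_names = pd.read_csv("UCI HAR Dataset/features.txt",
                            sep=r"\s+", header=None)[1]
X_train = pd.read_csv("UCI HAR Dataset/train/X_train.txt",
                      sep=r"\s+", header=None)
y_train = pd.read_csv("UCI HAR Dataset/train/y_train.txt",
                      header=None, names=["activity"])
X_train.columns = feature_names          # label the feature columns

print(X_train.shape)
print(y_train["activity"].value_counts())
```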
| Activity recognition in smart homes with different sources | I ended up using [multi input neural networks](https://keras.io/getting-started/functional-api-guide/), where each input is used for each source.
Also called [Data-fusion](https://en.wikipedia.org/wiki/Data_fusion).
|
231 | 1 | 287 | null | 10 | 6442 | I want to test the accuracy of a methodology. I ran it ~400 times, and I got a different classification for each run. I also have the ground truth, i.e., the real classification to test against.
For each classification I computed a confusion matrix. Now I want to aggregate these results in order to get the overall confusion matrix. How can I achieve it?
May I sum all confusion matrices in order to obtain the overall one?
| How to get an aggregate confusion matrix from n different classifications | CC BY-SA 3.0 | null | 2014-06-05T09:00:27.950 | 2014-06-11T09:39:34.373 | 2014-06-05T15:21:40.640 | 84 | 133 | [
"classification",
"confusion-matrix",
"accuracy"
] | I do not know of a standard answer to this, but I thought about it some time ago and I have some ideas to share.
When you have one confusion matrix, you have more or less a picture of how your classification model confuses (misclassifies) classes. When you repeat the classification tests you will end up with multiple confusion matrices. The question is how to get a meaningful aggregate confusion matrix. The answer depends on what "meaningful" means (pun intended), and I think there is not a single version of meaningful.
One way is to follow the rough idea of multiple testing. In general, you test something multiple times in order to get more accurate results. As a general principle, one can reason that averaging the results of the multiple tests reduces the variance of the estimates and, as a consequence, increases the precision of the estimates. You can proceed in this way, of course, by summing position by position and then dividing by the number of tests. You can go further and, instead of estimating only a value for each cell of the confusion matrix, also compute some confidence intervals, t-values and so on. This is OK from my point of view. But it tells only one side of the story.
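A minimal sketch of this first aggregation (assuming `matrices` holds one confusion matrix per run, all with the same class ordering):
```python
import numpy as np

matrices = [np.array([[50, 10], [5, 35]]),   # toy run 1
            np.array([[48, 12], [7, 33]])]   # toy run 2

total = np.sum(matrices, axis=0)             # summed counts over all runs
average = total / len(matrices)              # per-run average
per_class_rates = total / total.sum(axis=1, keepdims=True)  # row-normalized
print(total)
print(per_class_rates)
```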
The other side of the story which might be investigated is how stable the results are for the same instances. To exemplify that I will take an extreme example. Suppose you have a classification model for 3 classes, and suppose that these classes are in the same proportion. If your model is able to predict one class perfectly and the other 2 classes with random-like performance, you will end up with a correct classification ratio of roughly 0.33 + 0.166 + 0.166 = 0.66. This might seem good, but even if you take a look at a single confusion matrix you will not know that your performance on the last 2 classes varies wildly. Multiple tests can help. But would averaging the confusion matrices reveal this? I believe not. The averaging will give more or less the same result, and doing multiple tests will only decrease the variance of the estimation. However, it says nothing about the wild instability of the predictions.
So another way to compose the confusion matrices would involve a prediction density for each instance. One can build this density by counting, for each instance, the number of times each class was predicted. After normalization, you will have for each instance a prediction density rather than a single prediction label. You can see that a single prediction label is similar to a degenerate density which puts probability 1 on the predicted class and 0 on the other classes for each separate instance. Having these densities, one can build a confusion matrix by adding the probabilities from each instance and predicted class to the corresponding cell of the aggregated confusion matrix.
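A sketch of this density-based aggregation (toy arrays; `predictions[r, i]` is the class predicted for instance `i` in run `r`):
```python
import numpy as np

n_classes = 3
y_true = np.array([0, 1, 2, 1])
predictions = np.array([[0, 1, 2, 1],
                        [0, 2, 2, 1],
                        [0, 1, 2, 0]])

agg = np.zeros((n_classes, n_classes))
for i, true_class in enumerate(y_true):
    counts = np.bincount(predictions[:, i], minlength=n_classes)
    agg[true_class] += counts / counts.sum()   # add this instance's prediction density
print(agg)
```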
One can argue that this would give results similar to the previous method. However, I think that is only sometimes the case, typically when the model has low variance; in general the second method is less affected by how the samples for the tests are drawn, and is thus more stable and closer to reality.
The second method might also be altered in order to obtain a third method, where one assigns as the prediction for a given instance the label with the highest density.
I have not implemented these things, but I plan to study them further because I believe they might be worth spending some time on.
| Comparison of classifier confusion matrices | A few comments:
- I don't know this dataset but it seems to be a difficult one to classify since the performance is not much better than a random baseline (the random baseline in binary classification gives 50% accuracy, since it guesses right half the time).
- If I'm not mistaken the majority class (class 1) has 141 instances out of 252, i.e. 56% (btw the numbers are not easily readable in the matrices). This means that a classifier which automatically assigns class 1 would reach 56% accuracy. This is called the majority baseline, this is usually the minimal performance one wants to reach with a binary classifier. The LR and LDA classifiers are worse than this, so practically they don't really work.
- The k-NN classifier appears to give better results indeed, and importantly above 56% so it actually "learns" something useful.
- It's a bit strange that the first 2 classifiers predict class 0 more often than class 1. It looks as if the training set and test set don't have the same distribution.
- The k-NN classifier correctly predicts class 1 more often, and that's why it works better. k-NN is also much less sensitive to the data distribution: in case it differs between training and test set, this could explain the difference with the first 2 classifiers.
- However it's rarely meaningful for the $k$ in $k$-NN to be this high (125). Normally it should be a low value, like one digit only. I'm not sure what this means in this case.
- Suggestion: you could try some more robust classifiers like decision trees (or random forests) or SVM.
|
235 | 1 | 237 | null | 3 | 1572 | Data visualization is an important sub-field of data science, and Python programmers need to have toolkits available to them.
Is there a Python API to Tableau?
Are there any Python based data visualization toolkits?
| Are there any python based data visualization toolkits? | CC BY-SA 4.0 | null | 2014-06-09T08:34:29.337 | 2019-06-08T03:11:24.957 | 2019-06-08T03:11:24.957 | 29169 | 122 | [
"python",
"visualization"
] | There is a Tableau API and you can use it from Python, but maybe not in the sense that you think. There is a Data Extract API that you could use to import your data into Python and do your visualizations there, so I do not know if this is going to answer your question entirely.
As mentioned in the first comment, you can use Matplotlib from the [Matplotlib website](http://www.matplotlib.org), or you could install Canopy from Enthought, which has it available. There is also Pandas, which you could use for data analysis and some visualizations, and a package called `ggplot`, which is used in `R` a lot but is also made for Python; you can find it here: [ggplot for python](https://pypi.python.org/pypi/ggplot).
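For instance, the pandas/matplotlib route can be as short as this (made-up data):
```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame({"month": ["Jan", "Feb", "Mar"], "sales": [120, 90, 150]})
df.plot(kind="bar", x="month", y="sales", legend=False)  # pandas wraps matplotlib
plt.ylabel("sales")
plt.show()
```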
The Tableau data extract API and some information about it can be found [at this link](http://www.tableausoftware.com/new-features/data-engine-api-0). There are a few web sources that I found concerning it using duckduckgo [at this link](https://duckduckgo.com/?q=tableau%20PYTHON%20API&kp=1&kd=-1).
Here are some samples:
[Link 1](https://www.interworks.com/blogs/bbickell/2012/12/06/introducing-python-tableau-data-extract-api-csv-extract-example)
[Link 2](http://ryrobes.com/python/building-tableau-data-extract-files-with-python-in-tableau-8-sample-usage/)
[Link 3](http://nbviewer.ipython.org/github/Btibert3/tableau-r/blob/master/Python-R-Tableau-Predictive-Modeling.ipynb)
As far as an API like matplotlib goes, I cannot say for certain that one exists. Hopefully this gives some sort of reference to help answer your question.
Also, to help avoid closure flags and downvotes you should try to show some of what you have tried to do or find; this makes for a better question and helps to elicit responses.
| What kind of data visualization should I use? | First, I think you'll need to measure when you've made a typing mistake. For example, you might log each key press and then in an analysis after, look at when you press the backspace key. If you press it only once, you might consider the key you pressed to be incorrect and the one you type after to be the correct key.
This supplies you with a truth value. It would be difficult to measure anything if you don't know what would ideally happen.
In terms of visualizing this, I would opt for a confusion matrix. There are some [nice visuals provided by Seaborn](https://seaborn.pydata.org/generated/seaborn.heatmap.html), but it might look like [what's in this SO answer](https://stackoverflow.com/a/5824945/3234482). As you can see, each letter has a high value for itself, and maybe a couple mistakes for other letters. Looking at this plot, you might say "F" is often typed when "E" is desired. The y-axis would be the letter you intended to type, the x-axis might be the letter you actually typed. This could help you see which letters are frequently mistyped. Additionally, it would be intuitive to compute ratios off of this.
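A minimal sketch of that heatmap idea with Seaborn (toy counts for three keys):
```python
import seaborn as sns
import matplotlib.pyplot as plt

keys = ["e", "f", "g"]
counts = [[95, 4, 1],     # row: intended key, column: key actually typed
          [6, 90, 4],
          [1, 3, 96]]
sns.heatmap(counts, annot=True, fmt="d", xticklabels=keys, yticklabels=keys)
plt.xlabel("typed")
plt.ylabel("intended")
plt.show()
```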
If you're not interested in which keys are mistyped as other keys, you could easily do a bar chart of key frequencies. Or a bar chart where each x-tick is a letter with proportion typed (in)correctly.
|
265 | 1 | 285 | null | 42 | 45677 | I have a variety of NFL datasets that I think might make a good side-project, but I haven't done anything with them just yet.
Coming to this site made me think of machine learning algorithms and I wondering how good they might be at either predicting the outcome of football games or even the next play.
It seems to me that there would be some trends that could be identified - on 3rd down and 1, a team with a strong running back theoretically should have a tendency to run the ball in that situation.
Scoring might be more difficult to predict, but the winning team might be.
My question is whether these are good questions to throw at a machine learning algorithm. It could be that a thousand people have tried it before, but the nature of sports makes it an unreliable topic.
| Can machine learning algorithms predict sports scores or plays? | CC BY-SA 3.0 | null | 2014-06-10T10:58:58.447 | 2020-08-20T18:25:42.540 | 2015-03-02T12:33:11.007 | 553 | 434 | [
"machine-learning",
"sports"
] | There are a lot of good questions about Football (and sports, in general) that would be awesome to throw to an algorithm and see what comes out. The tricky part is to know what to throw to the algorithm.
A team with a good RB could just pass on 3rd-and-short just because the opponents would probably expect run, for instance. So, in order to actually produce some worthy results, I'd break the problem in smaller pieces and analyse them statistically while throwing them to the machines.
There are a few (good) websites that try to do the same, you should check'em out and use whatever they found to help you out:
- Football Outsiders
- Advanced Football Analytics
And if you truly want to explore Sports Data Analysis, you should definitely check the [Sloan Sports Conference](http://www.sloansportsconference.com/) videos. There's a lot of them spread on Youtube.
| Which Machine Learning algorithm should I use for a sports prediction study? | Welcome to the wonderful world of ML.
I'd use [XGBoost](https://xgboost.readthedocs.io/en/stable/install.html). It's simple to get started. It can be kind of a pain to install on windows, but [this might help](https://stackoverflow.com/a/39811079/10818367). As I recall, on linux it's a breeze.
It's what's called a "decision tree", so it takes all your inputs and learns a series of thresholds (if x>y and z<7, they'll win). This has several advantages, especially for a beginner in the field:
- it's very tolerant to poorly formatted data (non normalized)
- most of the hyperparameters are pretty intuitive
- it has a tendency to work fairly well out of the box.
It will be daunting; the first time you implement just about any algorithm is challenging. Just keep your head down and persevere.
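To give a sense of how little code is needed, here is a minimal sketch using XGBoost's scikit-learn interface (`X` and `y`, your features and win/loss labels, are assumed to exist, and the parameters are only illustrative):
```python
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split

# X, y: your feature matrix and binary win/loss labels (assumed to exist)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train)
print("accuracy:", model.score(X_test, y_test))
```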
If you do want to go with a NN (which is also an excellent choice), I recommend using `tf.keras`. There are excellent beginner tutorials by [this guy](https://www.youtube.com/watch?v=wQ8BIBpya2k). This is, arguably, a more useful library, but it can also be tough to get started with. If you watch a few tutorials, though, you'll be fine.
You will quickly find that the choice of model is often the easy part. It's the data preprocessing, training/validation, etc. that is a pain. So, if I were you, I would just pick a model and get started ASAP; your objective is to learn, not to make a perfect model.
Some other things you'll probably need in your tool belt:
- python in general
- pandas for storing and manipulating data
- numpy for messing around with data types
- matplotlib.pyplot for plotting
- sklearn for miscellaneous stuff (or more, if you look into it)
|
266 | 1 | 272 | null | 12 | 3010 | Being new to machine-learning in general, I'd like to start playing around and see what the possibilities are.
I'm curious as to what applications you might recommend that would offer the fastest time from installation to producing a meaningful result.
Also, any recommendations for good getting-started materials on the subject of machine-learning in general would be appreciated.
| What are some easy to learn machine-learning applications? | CC BY-SA 3.0 | null | 2014-06-10T11:05:47.273 | 2014-06-12T17:58:21.467 | null | null | 434 | [
"machine-learning"
] | I would recommend starting with a MOOC on machine learning, for example Andrew Ng's [course](https://www.coursera.org/course/ml) on Coursera.
You should also take a look at the [Orange](http://orange.biolab.si/) application. It has a graphical interface, and it is probably easier to understand some ML techniques using it.
| How to learn Machine Learning |
- Online Course: Andrew Ng, Machine Learning Course from Coursera.
- Book: Tom Mitchell, Machine Learning, McGraw-Hill, 1997.
|
369 | 1 | 465 | null | 9 | 3911 | What kind of error measures do RMSE and nDCG give while evaluating a recommender system, and how do I know when to use one over the other? If you could give an example of when to use each, that would be great as well!
| Difference between using RMSE and nDCG to evaluate Recommender Systems | CC BY-SA 3.0 | null | 2014-06-14T18:53:32.243 | 2014-10-09T02:35:24.533 | 2014-06-16T19:30:46.940 | 84 | 838 | [
"machine-learning",
"recommender-system",
"model-evaluations"
] | nDCG is used to evaluate a golden ranked list (typically human-judged) against your output ranked list. The greater the correlation between the two ranked lists, i.e. the more similar the ranks of the relevant items in the two lists, the closer the value of nDCG is to 1.
RMSE (Root Mean Squared Error) is typically used to evaluate regression problems where the output (a predicted scalar value) is compared with the true scalar value output for a given data point.
So, if you are simply recommending a score (such as recommending a movie rating), then use RMSE. Whereas, if you are recommending a list of items (such as a list of related movies), then use nDCG.
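A small sketch with scikit-learn's implementations of both metrics (toy numbers):
```python
import numpy as np
from sklearn.metrics import mean_squared_error, ndcg_score

# RMSE: compare predicted ratings with true ratings
true_ratings = np.array([4.0, 3.5, 5.0, 2.0])
pred_ratings = np.array([3.8, 3.0, 4.5, 2.5])
rmse = np.sqrt(mean_squared_error(true_ratings, pred_ratings))

# nDCG: compare ranking scores against graded relevance judgements
true_relevance = np.asarray([[3, 2, 3, 0, 1]])
ranking_scores = np.asarray([[2.1, 0.9, 2.5, 0.3, 1.2]])
ndcg = ndcg_score(true_relevance, ranking_scores)

print(f"RMSE = {rmse:.3f}, nDCG = {ndcg:.3f}")
```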
| How to calculate NDCG in recommendation system | IMHO,
The fundamental definition of [DCG](https://en.wikipedia.org/wiki/Discounted_cumulative_gain) is that it is a measure of ranking quality. This assumes that you have computed the utilities of each document/item and ranked them in a certain order.
With this definition in mind, if you have n-items with same utility (which is 0 in your case), computing NDCG to measure the ranking quality within this subset of items (since you are only looking at items 5, 4, 3, 2 and 1, all of which are not recommended), will yield you a NDCG score of 1 - since your ranking is perfect if you are only looking at these items.
NDCG is merely a way to quantify the quality of the ordering, i.e., the current order vs. the perfect order (items sorted w.r.t. their utilities). This is meaningless if you are looking ONLY at items with the same utility score.
I hope this answers your question.
|
371 | 1 | 374 | null | 12 | 9876 | I'm curious about natural language querying. Stanford has what looks to be a strong set of [software for processing natural language](http://nlp.stanford.edu/software/index.shtml). I've also seen the [Apache OpenNLP library](http://opennlp.apache.org/documentation/1.5.3/manual/opennlp.html), and the [General Architecture for Text Engineering](http://gate.ac.uk/science.html).
There are an incredible number of uses for natural language processing, and that makes the documentation of these projects difficult to absorb quickly.
Can you simplify things for me a bit and at a high level outline the tasks necessary for performing a basic translation of simple questions into SQL?
The first rectangle on my flow chart is a bit of a mystery.
![enter image description here](https://i.stack.imgur.com/wJPx9.png)
For example, I might want to know:
```
How many books were sold last month?
```
And I'd want that translated into
```
Select count(*)
from sales
where
item_type='book' and
sales_date >= '5/1/2014' and
sales_date <= '5/31/2014'
```
| How to process natural language queries? | CC BY-SA 3.0 | null | 2014-06-14T20:32:06.143 | 2021-05-16T12:13:06.787 | 2014-06-14T20:39:25.657 | 434 | 434 | [
"nlp"
] | Natural language querying poses very many intricacies which can be very difficult to generalize. From a high level, I would start with trying to think of things in terms of nouns and verbs.
So for the sentence: How many books were sold last month?
You would start by breaking the sentence down with a parser which will return a tree format similar to this:
![enter image description here](https://i.stack.imgur.com/ogoiY.png)
You can see that there is a subject books, a compound verbal phrase indicating the past action of sell, and then a noun phrase where you have the time focus of a month.
We can further break down the subject for modifiers: "how many" for books, and "last" for month.
Once you have broken the sentence down you need to map those elements to sql language e.g.: how many => count, books => book, sold => sales, month => sales_date (interval), and so on.
Finally, once you have the elements of the language you just need to come up with a set of rules for how different entities interact with each other, which leaves you with:
Select count(*)
from sales
where
item_type='book' and
sales_date >= '5/1/2014' and
sales_date <= '5/31/2014'
This is at a high level how I would begin, while almost every step I have mentioned is non-trivial and really the rabbit hole can be endless, this should give you many of the dots to connect.
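If it helps to see the first step concretely, here is a minimal sketch of getting part-of-speech tags and dependency relations with spaCy (any parser would do; this assumes the small English model has been installed with `python -m spacy download en_core_web_sm`):
```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("How many books were sold last month?")

for token in doc:
    # token text, part of speech, dependency relation, and its head word
    print(f"{token.text:<8} {token.pos_:<6} {token.dep_:<10} {token.head.text}")
```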
| Natural Language to SQL query | If you want to tackle the problem from another perspective, namely end-to-end learning, where you don't specify this large pipeline ahead of time, then all you care about is the mapping between sentences and their corresponding SQL queries.
Tutorials:
[How to talk to your database](https://blog.einstein.ai/how-to-talk-to-your-database)
Papers:
- Seq2SQL: Generating Structured Queries from Natural Language using Reinforcement Learning (Salesforce)
- Neural Enquirer: Learning to Query Tables in Natural Language
Dataset:
[A large annotated semantic parsing corpus for developing natural language interfaces.](https://github.com/salesforce/WikiSQL)
Github code:
- seq2sql
- SQLNet
Also, there are commercial solutions like [nlsql](https://www.nlsql.com/)
|
384 | 1 | 395 | null | 20 | 28189 | I have a binary classification problem:
- Approximately 1000 samples in training set
- 10 attributes, including binary, numeric and categorical
Which algorithm is the best choice for this type of problem?
By default I'm going to start with SVM (preliminary having nominal attributes values converted to binary features), as it is considered the best for relatively clean and not noisy data.
| Choose binary classification algorithm | CC BY-SA 3.0 | null | 2014-06-15T14:01:38.233 | 2015-04-12T16:12:39.063 | 2014-06-16T14:02:42.467 | 97 | 97 | [
"classification",
"binary",
"svm",
"random-forest",
"logistic-regression"
] | It's hard to say without knowing a little more about your dataset, and how separable your dataset is based on your feature vector, but I would probably suggest using extreme random forest over standard random forests because of your relatively small sample set.
Extreme random forests are pretty similar to standard random forests with the one exception that instead of optimizing splits on trees, extreme random forest makes splits at random. Initially this would seem like a negative, but it generally means that you have significantly better generalization and speed, though the AUC on your training set is likely to be a little worse.
Logistic regression is also a pretty solid bet for these kinds of tasks, though with your relatively low dimensionality and small sample size I would be worried about overfitting. You might want to check out using K-Nearest Neighbors since it often performs very well with low dimensionalities, but it doesn't usually handle categorical variables very well.
If I had to pick one without knowing more about the problem I would certainly place my bets on extreme random forest, as it's very likely to give you good generalization on this kind of dataset, and it also handles a mix of numerical and categorical data better than most other methods.
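A minimal sketch of an extremely randomized forest ("extra trees") in scikit-learn, assuming `X` and `y` hold your ~1000 samples with the categorical attributes already converted to binary features:
```python
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_score

# X, y: your encoded feature matrix and binary labels (assumed to exist)
clf = ExtraTreesClassifier(n_estimators=500, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(scores.mean(), scores.std())
```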
| Algorithm for Binary classification | The first thing that comes to my mind is to try different encodings. There are ways to deal with high-cardinality categorical data, such as label encoding or the well-known [target encoding](https://contrib.scikit-learn.org/category_encoders/targetencoder.html). Before anything else, I would recommend changing the encoding type.
But since your question is about which predictor to use with small and sparse data, I would still go with logistic regression, a decision tree, or an SVM. When data is small, all algorithms tend to work quite similarly.
Things like Random Forest might perform well since they do bootstrapping, which samples your data with replacement.
|
398 | 1 | 405 | null | 8 | 990 | I'm currently in the very early stages of preparing a new research-project (still at the funding-application stage), and expect that data-analysis and especially visualisation tools will play a role in this project.
In view of this I face the following dilemma: Should I learn Python to be able to use its extensive scientific libraries (Pandas, Numpy, Scipy, ...), or should I just dive into similar packages of a language I'm already acquainted with (Racket, or to a lesser extent Scala)?
(Ideally I would learn Python in parallel with using statistical libraries in Racket, but I'm not sure I'll have time for both)
I'm not looking for an answer to this dilemma, but rather for feedback on my different considerations:
My current position is as follows:
In favour of Python:
- Extensively used libraries
- Widely used (may be decisive in case of collaboration with others)
- A lot of online material to start learning it
- Conferences that are specifically dedicated to Scientific Computing with Python
- Learning Python won't be a waste of time anyway
In favour of a language I already know:
- It's a way to deepen my knowledge of one language rather than getting superficial knowledge of one more language (under the motto: you should at least know one language really well)
- It is feasible. Both Racket and Scala have good mathematics and statistics libraries
- I can start right away with learning what I need to know rather than first having to learn the basics
Two concrete questions:
- What am I forgetting?
- How big of a nuisance could the Python 2 vs 3 issue be?
| What to consider before learning a new language for data analysis | CC BY-SA 3.0 | null | 2014-06-16T07:32:29.137 | 2014-06-16T15:00:04.577 | null | null | 872 | [
"python",
"visualization"
] | I'm personally going to make a strong argument in favor of Python here. There are a large number of reasons for this, but I'm going to build on some of the points that other people have mentioned:
- Picking a single language: It's definitely possible to mix and match languages, picking d3 for your visualization needs, FORTRAN for your fast matrix multiplies, and python for all of your networking and scripting. You can do this down the line, but keeping your stack as simple as possible is a good move, especially early on.
- Picking something bigger than you: You never want to be pushing up against the barriers of the language you want to use. This is a huge issue when it comes to languages like Julia and FORTRAN, which simply don't offer the full functionality of languages like python or R.
- Pick Community: The one most difficult thing to find in any language is community. Python is the clear winner here. If you get stuck, you ask something on SO, and someone will answer in a matter of minutes, which is simply not the case for most other languages. If you're learning something in a vacuum you will simply learn much slower.
In terms of the minus points, I might actually push back on them.
Deepening your knowledge of one language is a decent idea, but knowing only one language, without having practice generalizing that knowledge to other languages, is a good way to shoot yourself in the foot. I have changed my entire favored development stack three times over as many years, moving from `MATLAB` to `Java` to `haskell` to `python`. Learning to transfer your knowledge to another language is far more valuable than just knowing one.
As far as feasibility, this is something you're going to see again and again in any programming career. Turing completeness means you could technically do everything with `HTML4` and `CSS3`, but you want to pick the right tool for the job. If you see the ideal tool and decide to leave it by the roadside you're going to find yourself slowed down wishing you had some of the tools you left behind.
A great example of that last point is trying to deploy `R` code. `R`'s networking capabilities are hugely lacking compared to `python`, and if you want to deploy a service, or use slightly off-the-beaten-path packages, the fact that `pip` has an order of magnitude more packages than `CRAN` is a huge help.
| Need some tips regarding starting out with the field's specific programming languages, with a heavy focus on data visualization | [R](https://www.r-project.org/) is a more compact, target oriented, package. Good if you want to focus on very specific tasks (generally scientific). [Python](https://www.python.org/), on the other hand, is a general purpose language.
That being said, and obviously this is a matter of opinion, if you are an experienced developer go for Python. You'll have far more choices in libraries and a far bigger potential to build big software.
Some examples of 2D scientific plotting libraries:
- Matplotlib
- Bokeh (targeted D3.js)
- Chaco
- ggplot
- Seaborn
- pyQtGraph (some significant 3D features)
Some examples of 3D scientific plotting libraries
- Vispy
- Mayavi
- VTK
- Glumpy
Some examples of libraries typically used in Data Science in Python:
- Pandas
- Numpy
- Scipy
- Scikit-learn
- Scikit-image
Also check the list for other relevant [Scikit packages](https://scikits.appspot.com/scikits).
As for starting software I would advise you to use any of the already prepared Python distributions that already come with a bunch of scientific libraries inside as well as software such as IDEs. Some examples are:
- WinPython
- Python XY
- Anaconda
- Canopy
Personally, I'm a user of WinPython because it is portable (and a former user of Python XY; both are great). In any case these distributions will greatly simplify the task of getting your scientific Python environment (so to speak) prepared. You just need to code. One IDE known to be especially good for scientists is [Spyder](https://github.com/spyder-ide/spyder/). These will also work:
- PyDev
- PyCharm
- WingIDE
- Komodo
- Python Tools for Visual Studio
As for data visualization tips, you'll see that the most common functions in the libraries mentioned above are also the most widely used. For instance, a library like Pandas lets you call plots directly from the object, so there is already an intuitive approach to data visualization. A library like scikit-learn (check the site) already shows examples followed by data visualization of the results. I wouldn't be too concerned about this point. You'll learn just by roaming a bit through the libraries' documentation ([example](http://scikit-learn.org/stable/modules/generated/sklearn.cluster.DBSCAN.html#sklearn.cluster.DBSCAN)).
|
410 | 1 | 414 | null | 114 | 121896 | I'm currently working on implementing Stochastic Gradient Descent, `SGD`, for neural nets using back-propagation, and while I understand its purpose I have some questions about how to choose values for the learning rate.
- Is the learning rate related to the shape of the error gradient, as it dictates the rate of descent?
- If so, how do you use this information to inform your decision about a value?
- If it's not, what sort of values should I choose, and how should I choose them?
- It seems like you would want small values to avoid overshooting, but how do you choose one such that you don't get stuck in local minima or take too long to descend?
- Does it make sense to have a constant learning rate, or should I use some metric to alter its value as I get nearer a minimum in the gradient?
In short: How do I choose the learning rate for SGD?
| Choosing a learning rate | CC BY-SA 3.0 | null | 2014-06-16T18:08:38.623 | 2020-01-31T16:28:25.547 | 2018-01-17T14:59:36.183 | 28175 | 890 | [
"machine-learning",
"neural-network",
"deep-learning",
"optimization",
"hyperparameter"
] |
- Is the learning rate related to the shape of the error gradient, as
it dictates the rate of descent?
In plain SGD, the answer is no. A global learning rate is used which is indifferent to the error gradient. However, the intuition you are getting at has inspired various modifications of the SGD update rule.
- If so, how do you use this information to inform your decision about a value?
Adagrad is the most widely known of these and scales a global learning rate $\eta$ on each dimension based on the $\ell_2$ norm of the history of the error gradient $g_t$ on each dimension: the effective rate for dimension $i$ at step $t$ is $\eta / \sqrt{\sum_{\tau=1}^{t} g_{\tau,i}^2}$, so coordinates with a long history of large gradients receive smaller updates.
Adadelta is another such training algorithm which uses both the error gradient history like adagrad and the weight update history and has the advantage of not having to set a learning rate at all.
- If it's not, what sort of values should I choose, and how should I choose them?
Setting learning rates for plain SGD in neural nets is usually a
process of starting with a sane value such as 0.01 and then doing cross-validation
to find an optimal value. Typical values range over a few orders of
magnitude from 0.0001 up to 1.
- It seems like you would want small values to avoid overshooting, but
how do you choose one such that you don't get stuck in local minima
or take too long to descend? Does it make sense to have a constant learning rate, or should I use some metric to alter its value as I get nearer a minimum in the gradient?
Usually, the value that's best is near the highest stable learning
rate and learning rate decay/annealing (either linear or
exponentially) is used over the course of training. The reason behind this is that early on there is a clear learning signal so aggressive updates encourage exploration while later on the smaller learning rates allow for more delicate exploitation of local error surface.
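A bare-bones sketch of that kind of annealing (`grad` is assumed to return the gradient of your loss at `w`, and the rates are only illustrative):
```python
import numpy as np

def sgd(w, grad, eta0=0.1, decay=1e-3, n_steps=10_000):
    for t in range(n_steps):
        eta_t = eta0 * np.exp(-decay * t)   # exponential learning-rate decay
        w = w - eta_t * grad(w)             # plain SGD step
    return w

# toy example: minimize f(w) = ||w||^2, whose gradient is 2w
w_opt = sgd(np.array([5.0, -3.0]), grad=lambda w: 2 * w)
print(w_opt)
```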
| Which learning rate should I choose? | I am afraid that besides the learning rate, there are a lot of hyperparameters you will need to choose values for, especially if you're using ADAM optimization, etc.
A principled order of importance for tuning is as follows
- Learning rate
- Momentum term, number of hidden units in each layer, batch size.
- Number of hidden layers, learning rate decay.
To tune a set of hyperparameters, you need to define a range that makes sense for each parameter. Given a number of different values you want to try according to your budget, you could choose a hyperparameter value from a random sampling.
For the learning rate specifically, you may want to try a wide range of values, e.g. from 0.0001 to 1. Rather than sampling random values uniformly from 0.0001 to 1, you can sample $x \in [-4, 0]$ and set the learning rate to $a = 10^{x}$, essentially following a logarithmic scale.
As far as the number of epochs goes, you should set an early stopping callback with `patience~=50`, depending on your "exploration" budget. This means you give up training with a certain learning rate value if there is no improvement for a defined number of epochs.
Parameter tuning for neural networks is a form of art, one could say. For this reason I suggest you look at basic methodologies for non-manual tuning, such as `GridSearch` and `RandomSearch` which are implemented in the sklearn package. Additionally, it may be worth looking at more advanced techniques such as bayesian optimisation with Gaussian processes and Tree Parzen Estimators. Good luck!
---
## Randomized Search for parameter tuning in Keras
- Define function that creates model instance
```
# Model instance (the imports below are what this sketch assumes;
# f1_m is a user-defined F1 metric function, not shown here)
from keras.models import Sequential
from keras.layers import Dense, Dropout, BatchNormalization
from keras.optimizers import Adadelta
from keras.wrappers.scikit_learn import KerasClassifier
from keras.callbacks import EarlyStopping
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import randint
import numpy as np

input_shape = X_train.shape[1]
def create_model(n_hidden=1, n_neurons=30, learning_rate=0.01, drop_rate=0.5, act_func='relu',
                 act_func_out='sigmoid', kernel_init='uniform', opt='Adadelta'):
model = Sequential()
model.add(Dense(n_neurons, input_shape=(input_shape,), activation=act_func,
kernel_initializer = kernel_init))
model.add(BatchNormalization())
model.add(Dropout(drop_rate))
    # Add as many hidden layers as specified in n_hidden
for layer in range(n_hidden):
        # Each hidden layer has n_neurons neurons
model.add(Dense(n_neurons, activation=act_func, kernel_initializer = kernel_init))
model.add(BatchNormalization())
model.add(Dropout(drop_rate))
model.add(Dense(1, activation=act_func_out, kernel_initializer = kernel_init))
opt= Adadelta(lr=learning_rate)
model.compile(loss='binary_crossentropy',optimizer=opt, metrics=[f1_m])
return model
```
- Define parameter search space
```
params = dict(n_hidden= randint(4, 32),
epochs=[50], #, 20, 30],
n_neurons= randint(512, 600),
act_func=['relu'],
act_func_out=['sigmoid'],
learning_rate= [0.01, 0.1, 0.3, 0.5],
opt = ['adam','Adadelta', 'Adagrad','Rmsprop'],
kernel_init = ['uniform','normal', 'glorot_uniform'],
batch_size=[256, 512, 1024, 2048],
drop_rate= [np.random.uniform(0.1, 0.4)])
```
- Wrap Keras model with sklearn API and instantiate random search
```
model = KerasClassifier(build_fn=create_model)
random_search = RandomizedSearchCV(model, params, n_iter=5, scoring='average_precision',
cv=5)
```
- Search for optimal hyperparameters
```
random_search_results = random_search.fit(X_train, y_train,
validation_data =(X_test, y_test),
callbacks=[EarlyStopping(patience=50)])
```
|
412 | 1 | 446 | null | 44 | 6139 |
# Motivation
I work with datasets that contain personally identifiable information (PII) and sometimes need to share part of a dataset with third parties, in a way that doesn't expose PII and subject my employer to liability. Our usual approach here is to withhold data entirely, or in some cases to reduce its resolution; e.g., replacing an exact street address with the corresponding county or census tract.
This means that certain types of analysis and processing must be done in-house, even when a third party has resources and expertise more suited to the task. Since the source data is not disclosed, the way we go about this analysis and processing lacks transparency. As a result, any third party's ability to perform QA/QC, adjust parameters or make refinements may be very limited.
# Anonymizing Confidential Data
One task involves identifying individuals by their names, in user-submitted data, while taking into account errors and inconsistencies. A private individual might be recorded in one place as "Dave" and in another as "David," commercial entities can have many different abbreviations, and there are always some typos. I've developed scripts based on a number of criteria that determine when two records with non-identical names represent the same individual, and assign them a common ID.
At this point we can make the dataset anonymous by withholding the names and replacing them with this personal ID number. But this means the recipient has almost no information about e.g. the strength of the match. We would prefer to be able to pass along as much information as possible without divulging identity.
# What Doesn't Work
For instance, it would be great to be able to encrypt strings while preserving edit distance. This way, third parties could do some of their own QA/QC, or choose to do further processing on their own, without ever accessing (or being able to potentially reverse-engineer) PII. Perhaps we match strings in-house with edit distance <= 2, and the recipient wants to look at the implications of tightening that tolerance to edit distance <= 1.
But the only method I am familiar with that does this is [ROT13](http://www.techrepublic.com/blog/it-security/cryptographys-running-gag-rot13/) (more generally, any [shift cipher](https://en.wikipedia.org/wiki/Caesar_cipher)), which hardly even counts as encryption; it's like writing the names upside down and saying, "Promise you won't flip the paper over?"
Another bad solution would be to abbreviate everything. "Ellen Roberts" becomes "ER" and so forth. This is a poor solution because in some cases the initials, in association with public data, will reveal a person's identity, and in other cases it's too ambiguous; "Benjamin Othello Ames" and "Bank of America" will have the same initials, but their names are otherwise dissimilar. So it doesn't do either of the things we want.
An inelegant alternative is to introduce additional fields to track certain attributes of the name, e.g.:
```
+-----+----+-------------------+-----------+--------+
| Row | ID | Name | WordChars | Origin |
+-----+----+-------------------+-----------+--------+
| 1 | 17 | "AMELIA BEDELIA" | (6, 7) | Eng |
+-----+----+-------------------+-----------+--------+
| 2 | 18 | "CHRISTOPH BAUER" | (9, 5) | Ger |
+-----+----+-------------------+-----------+--------+
| 3 | 18 | "C J BAUER" | (1, 1, 5) | Ger |
+-----+----+-------------------+-----------+--------+
| 4 | 19 | "FRANZ HELLER" | (5, 6) | Ger |
+-----+----+-------------------+-----------+--------+
```
I call this "inelegant" because it requires anticipating which qualities might be interesting and it's relatively coarse. If the names are removed, there's not much you can reasonably conclude about the strength of the match between rows 2 & 3, or about the distance between rows 2 & 4 (i.e., how close they are to matching).
# Conclusion
The goal is to transform strings in such a way that as many useful qualities of the original string are preserved as possible while obscuring the original string. Decryption should be impossible, or so impractical as to be effectively impossible, no matter the size of the data set. In particular, a method that preserves the edit distance between arbitrary strings would be very useful.
I've found a couple papers that might be relevant, but they're a bit over my head:
- Privacy Preserving String Comparisons Based on Levenshtein Distance
- An Empirical Comparison of Approaches to Approximate String Matching in Private Record Linkage
| How can I transform names in a confidential data set to make it anonymous, but preserve some of the characteristics of the names? | CC BY-SA 3.0 | null | 2014-06-16T19:48:31.797 | 2015-12-07T17:44:55.910 | 2015-12-07T17:44:55.910 | 322 | 322 | [
"data-cleaning",
"anonymization"
] | One of the references I mentioned in the OP led me to a potential solution that seems quite powerful, described in "Privacy-preserving record linkage using Bloom filters" ([doi:10.1186/1472-6947-9-41](http://www.biomedcentral.com/1472-6947/9/41)):
>
A new protocol for privacy-preserving record linkage with encrypted identifiers allowing for errors in identifiers has been developed. The protocol is based on Bloom filters on q-grams of identifiers.
The article goes into detail about the method, which I will summarize here to the best of my ability.
A Bloom filter is a fixed-length series of bits storing the results of a fixed set of independent hash functions, each computed on the same input value. The output of each hash function should be an index value from among the possible indexes in the filter; i.e., if you have a 0-indexed series of 10 bits, hash functions should return (or be mapped to) values from 0 to 9.
The filter starts with each bit set to 0. After hashing the input value with each function from the set of hash functions, each bit corresponding to an index value returned by any hash function is set to 1. If the same index is returned by more than one hash function, the bit at that index is only set once. You could consider the Bloom filter to be a superposition of the set of hashes onto the fixed range of bits.
The protocol described in the above-linked article divides strings into n-grams, which are in this case sets of characters. As an example, `"hello"` might yield the following set of 2-grams:
```
["_h", "he", "el", "ll", "lo", "o_"]
```
Padding the front and back with spaces seems to be generally optional when constructing n-grams; the examples given in the paper that proposes this method use such padding.
Each n-gram can be hashed to produce a Bloom filter, and this set of Bloom filters can be superimposed on itself (bitwise OR operation) to produce the Bloom filter for the string.
If the filter contains many more bits than there are hash functions or n-grams, arbitrary strings are relatively unlikely to produce exactly the same filter. However, the more n-grams two strings have in common, the more bits their filters will ultimately share. You can then compare any two filters `A, B` by means of their Dice coefficient:
>
$D_{A,B} = 2h / (a + b)$
Where `h` is the number of bits that are set to 1 in both filters, `a` is the number of bits set to 1 in only filter A, and `b` is the number of bits set to 1 in only filter B. If the strings are exactly the same, the Dice coefficient will be 1; the more they differ, the closer the coefficient will be to `0`.
Because the hash functions are mapping an indeterminate number of unique inputs to a small number of possible bit indexes, different inputs may produce the same filter, so the coefficient indicates only a probability that the strings are the same or similar. The number of different hash functions and the number of bits in the filter are important parameters for determining the likelihood of false positives - pairs of inputs that are much less similar than the Dice coefficient produced by this method predicts.
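To make the mechanics concrete, here is a small sketch (the filter length and number of hash functions are illustrative, not the paper's recommended parameters, and a set of bit indexes stands in for the bit array):
```python
import hashlib

N_BITS, N_HASHES = 1000, 15

def bigrams(name):
    padded = f"_{name.lower()}_"
    return {padded[i:i + 2] for i in range(len(padded) - 1)}

def bloom(name):
    bits = set()
    for gram in bigrams(name):
        for k in range(N_HASHES):
            digest = hashlib.sha1(f"{k}|{gram}".encode()).hexdigest()
            bits.add(int(digest, 16) % N_BITS)   # index of a bit set to 1
    return bits

def dice(a, b):
    return 2 * len(a & b) / (len(a) + len(b))

print(dice(bloom("CHRISTOPH BAUER"), bloom("C J BAUER")))    # fairly similar
print(dice(bloom("CHRISTOPH BAUER"), bloom("FRANZ HELLER"))) # dissimilar
```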
I found [this tutorial](http://billmill.org/bloomfilter-tutorial/) to be very helpful for understanding the Bloom filter.
There is some flexibility in the implementation of this method; see also [this 2010 paper](https://www.uni-due.de/~hq0215/documents/2010/Bachteler_2010_An_Empirical_Comparison_Of_Approaches_To_Approximate_String_Matching_In_Private_Record_Linkage.pdf) (also linked at the end of the question) for some indications of how performant it is in relation to other methods, and with various parameters.
| What are the best practices to anonymize user names in data? | I suspected you were using the names as identifiers. You shouldn't; they're not unique and they raise this privacy issue. Use instead their student numbers, which you can verify from their IDs, stored in hashed form. Use the student's last name as a salt, for good measure (form the string to be hashed by concatenating the ID number and the last name).
|
424 | 1 | 440 | null | 23 | 5223 | I recently saw a cool feature that [was once available](https://support.google.com/docs/answer/3543688?hl=en) in Google Sheets: you start by writing a few related keywords in consecutive cells, say: "blue", "green", "yellow", and it automatically generates similar keywords (in this case, other colors). See more examples in [this YouTube video](http://youtu.be/dlslNhfrQmw).
I would like to reproduce this in my own program. I'm thinking of using Freebase, and it would work like this intuitively:
- Retrieve the list of given words in Freebase;
- Find their "common denominator(s)" and construct a distance metric based on this;
- Rank other concepts based on their "distance" to the original keywords;
- Display the next closest concepts.
As I'm not familiar with this area, my questions are:
- Is there a better way to do this?
- What tools are available for each step?
| How to grow a list of related words based on initial keywords? | CC BY-SA 3.0 | null | 2014-06-17T06:05:39.653 | 2020-08-06T16:18:05.960 | 2014-06-19T05:48:43.540 | 322 | 906 | [
"nlp",
"text-mining",
"freebase"
] | The [word2vec algorithm](https://code.google.com/p/word2vec/) may be a good way to retrieve more elements for a list of similar words. It is an unsupervised "deep learning" algorithm that has previously been demonstrated with Wikipedia-based training data (helper scripts are provided on the Google code page).
There are currently [C](https://code.google.com/p/word2vec/) and [Python](http://radimrehurek.com/gensim/models/word2vec.html) implementations. This [tutorial](http://radimrehurek.com/2014/02/word2vec-tutorial) by [Radim Řehůřek](http://radimrehurek.com/), the author of the [Gensim topic modelling library](http://radimrehurek.com/gensim/), is an excellent place to start.
The ["single topic"](http://radimrehurek.com/2014/02/word2vec-tutorial#single) demonstration on the tutorial is a good example of retreiving similar words to a single term (try searching on 'red' or 'yellow'). It should be possible to extend this technique to find the words that have the greatest overall similarity to a set of input words.
| Selecting most relevant word from lists of candidate words | There are many ways you could approach this problem
- Word embeddings
If you have word embeddings at hand, you can look at the distance between the tags and the bucket and pick the one with the smallest distance.
- Frequentist approach
You could simply look at the frequency of a bucket/tag pair and choose this. Likely not the best model, but might already go a long way.
- Recommender system
Given a bucket, your goal is to recommend the best tag. You can use collaborative filtering or neural approaches to train a recommender. I feel this could work well especially if the data is sparse (i.e. lots of different tags, lots of buckets).
The caveat I would see with this approach is that you would technically always compare all tags, which only works if tag A is always better than tag B regardless of which tags are proposed to the user.
- Ranking problem
You could look at it as a ranking problem, I recommend reading [this blog](https://medium.com/@nikhilbd/intuitive-explanation-of-learning-to-rank-and-ranknet-lambdarank-and-lambdamart-fe1e17fac418) to have a better idea of how you can train such model.
- Classification problem
This becomes a classification problem if you turn your problem into the following: given a bucket, and two tags (A & B), return 0 if tag A is preferred, 1 if tag B is preferred. You can create your training data as every combination of two tags from your data, times 2 (swap A and B).
The caveat is that given N tags, you might need to do a round-robin or tournament approach to know which tag is the winner, due to the pairwise nature.
- Recurrent/Convolutional network
If you want to implicitly deal with the variable-length nature of the problem, you could pass your tags as a sequence. Since your tags have no particular order, this creates a different input for each permutation of the tags. During training, this provides more data points, and during inference, this could be used to create an ensemble (i.e. predict a tag for each permutation and do majority voting).
If you believe that it matters in which order the tags are presented to the user, then deal with the sequence in the order it is in your data.
Your LSTM/CNN would essentially learn to output a single score for each item, such that the item with the highest score is the desired one.
|
430 | 1 | 525 | null | 14 | 1612 | I'm trying to understand how all the "big data" components play together in a real world use case, e.g. hadoop, monogodb/nosql, storm, kafka, ... I know that this is quite a wide range of tools used for different types, but I'd like to get to know more about their interaction in applications, e.g. thinking machine learning for an app, webapp, online shop.
I have visitor/session data, transaction data, etc. and store it; but if I want to make recommendations on the fly, I can't run slow map/reduce jobs for that on some big database of logs I have. Where can I learn more about the infrastructure aspects? I think I can use most of the tools on their own, but plugging them into each other seems to be an art of its own.
Are there any public examples/use cases etc available? I understand that the individual pipelines strongly depend on the use case and the user, but just examples will probably be very useful to me.
| Looking for example infrastructure stacks/workflows/pipelines | CC BY-SA 3.0 | null | 2014-06-17T10:37:22.987 | 2014-06-23T13:36:51.493 | 2014-06-17T13:37:47.400 | 84 | 913 | [
"machine-learning",
"bigdata",
"efficiency",
"scalability",
"distributed"
] | In order to understand the variety of ways machine learning can be integrated into production applications, I think it is useful to look at open source projects and papers/blog posts from companies describing their infrastructure.
The common theme that these systems have is the separation of model training from model application. In production systems, model application needs to be fast, on the order of 100s of ms, but there is more freedom in how frequently fitted model parameters (or equivalent) need to be updated.
People use a wide range of solutions for model training and deployment:
- Build a model, then export and deploy it with PMML
AirBnB describes their model training in R/Python and deployment of PMML models via OpenScoring.
Pattern is a project related to Cascading that can consume PMML and deploy predictive models.
- Build a model in MapReduce and access values in a custom system
Conjecture is an open source project from Etsy that allows for model training with Scalding, an easier-to-use Scala wrapper around MapReduce, and deployment via PHP.
Kiji is an open source project from WibiData that allows for real-time model scoring (application) as well as functionality for persisting user data and training models on that data via Scalding.
- Use an online system that allows for continuously updating model parameters.
Google released a great paper about an online collaborative filtering approach they implemented to deal with recommendations in Google News.
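None of those specific systems are required to get the basic pattern, though. As a rough sketch (scikit-learn and joblib here are just stand-ins for your stack of choice, and the file name is arbitrary), the train-offline / score-online split can be as simple as persisting a fitted model and loading it once in the serving layer:
```
import joblib
import numpy as np
from sklearn.linear_model import LogisticRegression

# Offline (batch) side: train on historical data and persist the fitted model.
X_train, y_train = np.random.rand(1000, 10), np.random.randint(0, 2, 1000)
model = LogisticRegression().fit(X_train, y_train)
joblib.dump(model, "model.joblib")

# Online (serving) side: load once at startup, then score each request in milliseconds.
serving_model = joblib.load("model.joblib")
def score(feature_vector):
    return float(serving_model.predict_proba([feature_vector])[0, 1])

print(score(np.random.rand(10)))
```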
| software for workflow integrating network analysis, predictive analytics, and performance metrics | In Orange, you can do something like this:
[](https://i.stack.imgur.com/hkZNu.png)
This takes the network, which already contains the class you'd like to predict, then trains (or tests) the learner in Test & Score and evaluates it in Confusion Matrix. Then you can see misclassifications directly in the network graph.
There are a bunch of other learners and evaluation methods available. A big plus is also interactive data exploration (see how you can input wrongly classified data into Network Explorer?). However, there's no dashboard available yet. We make do with opening several windows side by side.
That's just my 2¢ on Orange. I suggest you at least try all of them and see which one works best for you. :)
|
437 | 1 | 444 | null | 5 | 157 | I think that the Bootstrap can be useful in my work, where we have a lot of variables whose distributions we don't know. So, simulations could help.
What are good sources to learn about Bootstrap/other useful simulation methods?
| What are good sources to learn about Bootstrap? | CC BY-SA 3.0 | null | 2014-06-17T18:13:46.230 | 2014-06-17T22:29:36.720 | null | null | 199 | [
"data-mining",
"statistics",
"education"
] | A classic book is by B. Efron who created the technique:
- Bradley Efron; Robert Tibshirani (1994). An Introduction to the Bootstrap. Chapman & Hall/CRC. ISBN 978-0-412-04231-7.
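As a quick taste of the technique before diving into the book, here is a minimal bootstrap sketch in Python (plain numpy, purely illustrative data): resample the data with replacement many times, recompute the statistic each time, and read a confidence interval off the resulting distribution.
```
import numpy as np

rng = np.random.RandomState(42)
data = rng.exponential(scale=2.0, size=200)   # a sample from some unknown-looking distribution

n_boot = 10000
boot_means = np.empty(n_boot)
for b in range(n_boot):
    resample = rng.choice(data, size=len(data), replace=True)   # sample with replacement
    boot_means[b] = resample.mean()

# 95% percentile confidence interval for the mean
low, high = np.percentile(boot_means, [2.5, 97.5])
print(data.mean(), low, high)
```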
| The affect of bootstrap on Isolation Forest | This is well explained on the [original paper](https://cs.nju.edu.cn/zhouzh/zhouzh.files/publication/icdm08b.pdf?q=isolation-forest) Section 3.
As in the supervised Random Forest, Isolation Forest makes use of sampling on both features and instances; the latter in this case helps alleviate 2 main problems:
- Swamping
Swamping refers to wrongly identifying normal instances as anomalies. When normal instances are too close to anomalies, the number of partitions required to separate anomalies increases – which makes it harder to distinguish anomalies from normal instances.
- Masking
Masking is the existence of too many anomalies concealing their own presence.
>
Contrary to existing methods where large sampling size is more desirable, isolation method works best when the sampling size is kept small. Large sampling size reduces iForest’s ability to isolate anomalies as normal instances can interfere with the isolation process and therefore reduces its ability to clearly isolate anomalies. Thus, sub-sampling provides a favourable environment for iForest to work well. Throughout this paper, sub-sampling is conducted by random selection of instances without replacement.
[](https://i.stack.imgur.com/6RZG4.png)
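For what it's worth, scikit-learn's implementation exposes the sub-sampling size and the optional bootstrap directly. A minimal sketch (the parameter names are scikit-learn's as I understand them, not from the paper, and the data is made up):
```
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(0)
X = np.vstack([rng.normal(0, 1, size=(1000, 2)),      # normal instances
               rng.uniform(-6, 6, size=(20, 2))])     # a few scattered anomalies

clf = IsolationForest(
    n_estimators=100,
    max_samples=256,   # small sub-sample per tree, as recommended in the paper
    bootstrap=False,   # False = sampling without replacement (the paper's setting)
    random_state=0,
).fit(X)

scores = clf.decision_function(X)   # lower = more anomalous
labels = clf.predict(X)             # -1 for anomalies, 1 for normal points
```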
|
454 | 1 | 620 | null | 16 | 2453 | I have a highly biased binary dataset - I have 1000x more examples of the negative class than the positive class. I would like to train a Tree Ensemble (like Extra Random Trees or a Random Forest) on this data but it's difficult to create training datasets that contain enough examples of the positive class.
What would be the implications of doing a stratified sampling approach to normalize the number of positive and negative examples? In other words, is it a bad idea to, for instance, artificially inflate (by resampling) the number of positive class examples in the training set?
| What are the implications for training a Tree Ensemble with highly biased datasets? | CC BY-SA 3.0 | null | 2014-06-18T15:48:19.497 | 2016-07-22T20:19:02.920 | 2015-11-22T16:25:06.530 | 13727 | 403 | [
"machine-learning",
"feature-selection",
"class-imbalance"
] | Yes, it's problematic. If you oversample the minority, you risk overfitting. If you undersample the majority, you risk missing aspects of the majority class. Stratified sampling, by the way, is equivalent to assigning non-uniform misclassification costs.
Alternatives:
(1) Independently sampling several subsets from the majority class and making multiple classifiers by combining each subset with all the minority class data, as suggested in the answer from @Debasis and described in this [EasyEnsemble paper](http://cse.seu.edu.cn/people/xyliu/publication/tsmcb09.pdf),
(2) [SMOTE (Synthetic Minority Oversampling Technique)](http://arxiv.org/pdf/1106.1813.pdf) or [SMOTEBoost, (combining SMOTE with boosting)](http://www3.nd.edu/~nchawla/papers/ECML03.pdf) to create synthetic instances of the minority class by interpolating between nearest neighbors in the feature space. SMOTE is implemented in R in [the DMwR package](http://cran.r-project.org/web/packages/DMwR/index.html).
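If you want to see the core idea of SMOTE without pulling in the R package, here is a toy Python sketch (not the reference implementation, just the interpolation trick): for each synthetic point, pick a minority sample, pick one of its nearest minority neighbours, and interpolate between them.
```
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote_like(X_minority, n_new, k=5, seed=0):
    rng = np.random.RandomState(seed)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_minority)
    _, idx = nn.kneighbors(X_minority)           # idx[:, 0] is the point itself
    synthetic = np.empty((n_new, X_minority.shape[1]))
    for s in range(n_new):
        i = rng.randint(len(X_minority))          # a random minority sample
        j = idx[i, rng.randint(1, k + 1)]         # one of its k nearest minority neighbours
        gap = rng.rand()
        synthetic[s] = X_minority[i] + gap * (X_minority[j] - X_minority[i])
    return synthetic

X_min = np.random.RandomState(1).normal(size=(30, 4))
X_synthetic = smote_like(X_min, n_new=100)
```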
| Why don't tree ensembles require one-hot-encoding? | The encoding leads to a question of representation and the way that the algorithms cope with the representation.
Let's consider 3 methods of representing n categorical values of a feature:
- A single feature with n numeric values.
- One-hot encoding (n Boolean features, exactly one of them must be on).
- Log n Boolean features, representing the n values.
Note that we can represent the same values with each of these methods. The one-hot encoding is less efficient, requiring n bits instead of log n bits.
More than that, if we are not aware that the n features in the one-hot encoding are exclusive, our [VC dimension](https://en.wikipedia.org/wiki/VC_dimension) and our hypothesis set are larger.
So, one might wonder why use one hot encoding in the first place?
The problem is that with the single-feature representation and the log representation we might make wrong deductions.
With a single-feature representation the algorithm might assume order. Usually the encoding is arbitrary, and the value 3 is semantically as far from 4 as it is from 8. However, the algorithm might treat the feature as a numeric feature and come up with rules like "f < 4". Here you might claim that if the algorithm found such a rule, it might be beneficial, even if not intended. While that might be true, a small data set, noise and other reasons for a data set that misrepresents the underlying distribution might lead to false rules.
The same can happen with the logarithmic representation (e.g., rules like "third bit is on"). Here we are likely to get more complex rules, all unintended and sometimes misleading.
So, in an ideal world identical representations should lead to identical results. However, in some cases the less efficient representation can lead to worse results, while in other cases the badly deduced rules can lead to worse results.
In general, if the values are indeed very distinct in behaviour, the algorithm probably won't deduce such a rule and you will benefit from the more efficient representation. Many times it is hard to analyze this beforehand, so what you did, trying both representations, is a good way to choose the proper one.
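To make the three representations concrete, a small illustrative sketch in Python/pandas (the data is made up):
```
import numpy as np
import pandas as pd

colors = pd.Series(["red", "green", "blue", "green", "red"], dtype="category")

# 1) A single numeric feature: arbitrary integer codes (implies a meaningless order/distance)
numeric_codes = colors.cat.codes.to_numpy()

# 2) One-hot encoding: n Boolean columns, exactly one is on per row
one_hot = pd.get_dummies(colors)

# 3) log2(n) bits: binary representation of the integer code
n_bits = int(np.ceil(np.log2(len(colors.cat.categories))))
log_bits = np.array([[(c >> b) & 1 for b in range(n_bits)] for c in numeric_codes])

print(numeric_codes)
print(one_hot)
print(log_bits)
```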
|
455 | 1 | 464 | null | 9 | 2914 | Which freely available datasets can I use to train a text classifier?
We are trying to enhance our users' engagement by recommending the most related content to each user, so we thought that if we classified our content based on a predefined bag of words, we could recommend engaging content by getting the user's feedback on a random number of posts that were already classified.
We can use this info to recommend pulses labeled with those classes to the user. But we found that if we used a predefined bag of words not related to our content, the feature vector would be full of zeros, and the categories might not be relevant to our content. So for those reasons we tried another solution, which is clustering our content rather than classifying it.
Thanks :)
| Suggest text classifier training datasets | CC BY-SA 3.0 | null | 2014-06-18T16:21:12.203 | 2016-07-05T08:40:00.757 | 2015-05-29T08:59:43.343 | 553 | 960 | [
"machine-learning",
"classification",
"dataset",
"clustering",
"text-mining"
] | Some standard datasets for text classification are the 20-News group, Reuters (with 8 and 52 classes) and WebKb. You can find all of them [here](http://web.ist.utl.pt/~acardoso/datasets/).
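As a starting point, the 20 Newsgroups set is also bundled with scikit-learn, so a baseline classifier is only a few lines (a rough sketch; the first call downloads the data):
```
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train = fetch_20newsgroups(subset="train", remove=("headers", "footers", "quotes"))
test = fetch_20newsgroups(subset="test", remove=("headers", "footers", "quotes"))

model = make_pipeline(TfidfVectorizer(stop_words="english"), MultinomialNB())
model.fit(train.data, train.target)
print(model.score(test.data, test.target))
```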
| Build train data set for natural language text classification? | It would help to do some analysis of the scripts to identify aspects that distinguish the various categories. Once you do this manually for some examples, you could consider writing some rules based on the observations. The rest of the examples can be labeled using the rules. For a model-based approach, if you label a small set of examples (~50), then a simple model (Naive Bayes, etc.) can potentially be trained on these.
|
458 | 1 | 459 | null | 17 | 10358 | [K-means](http://en.wikipedia.org/wiki/K-means_clustering) is a well known algorithm for clustering, but there is also an online variation of such algorithm (online K-means). What are the pros and cons of these approaches, and when should each be preferred?
| K-means vs. online K-means | CC BY-SA 3.0 | null | 2014-06-18T19:48:54.883 | 2017-04-26T16:24:21.560 | 2017-04-26T16:24:21.560 | 31513 | 84 | [
"clustering",
"algorithms",
"k-means"
] | Online k-means (more commonly known as [sequential k-means](https://stackoverflow.com/questions/3698532/online-k-means-clustering)) and traditional k-means are very similar. The difference is that online k-means allows you to update the model as new data is received.
Online k-means should be used when you expect the data to be received one by one (or maybe in chunks). This allows you to update your model as you get more information about it. The drawback of this method is that it is dependent on the order in which the data is received ([ref](http://www.cs.princeton.edu/courses/archive/fall08/cos436/Duda/C/sk_means.htm)).
| Online k-means explanation | The original MacQueen k-means publication (the first to use the name "kmeans") is an online algorithm.
>
MacQueen, J. B. (1967). "Some Methods for classification and Analysis of Multivariate Observations". Proceedings of 5th Berkeley Symposium on Mathematical Statistics and Probability 1. University of California Press. pp. 281–297
After assigning each point, the mean is incrementally updated.
As far as I can tell, it was also meant to be a single pass over the data only, although it can be trivially repeated multiple times to reassign points until convergence.
MacQueen usually takes fewer iterations than Lloyds to converge if your data is shuffled. On ordered data, it can have problems. On the downside, it requires more computation for each object, so each iteration takes slightly longer.
When you implement a parallel version of k-means, make sure to study the update formulas in MacQueens publication. They're useful.
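The incremental update itself is tiny. Here is a toy numpy sketch of a MacQueen-style single pass (my own simplified version, so do check it against the formulas in the publication):
```
import numpy as np

def online_kmeans_pass(X, k, seed=0):
    rng = np.random.RandomState(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    counts = np.ones(k)   # the k seed points count as the first members of their clusters
    for x in X:           # single pass, one point at a time
        j = np.argmin(((centroids - x) ** 2).sum(axis=1))   # nearest centroid
        counts[j] += 1
        centroids[j] += (x - centroids[j]) / counts[j]       # incremental mean update
    return centroids

X = np.random.RandomState(1).rand(500, 2)
print(online_kmeans_pass(X, k=3))
```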
|
488 | 1 | 489 | null | 12 | 15256 | I thought that generalized linear model (GLM) would be considered a statistical model, but a friend told me that some papers classify it as a machine learning technique. Which one is true (or more precise)? Any explanation would be appreciated.
| Is GLM a statistical or machine learning model? | CC BY-SA 3.0 | null | 2014-06-19T18:02:24.650 | 2016-12-05T11:43:00.267 | 2015-07-08T11:37:50.907 | 21 | 1021 | [
"machine-learning",
"statistics",
"glm"
] | A GLM is absolutely a statistical model, but statistical models and machine learning techniques are not mutually exclusive. In general, statistics is more concerned with inferring parameters, whereas in machine learning, prediction is the ultimate goal.
| What are Machine learning model characteristics? | I'm a little torn on helping on this question because I think that you're being given good advice above about modifying your question and using this site in a better way. But at the same time, I hate when questions are closed so quickly on here because the people with those votes just do a terrible job with that privilege (a privilege that I have but rarely use because nothing should be closed here). So, I'm going to choose to help here but please use the feedback you're being given when posting here in the future.
When I interview most data scientists, I am looking for understanding of concepts and rationale. With this particular question, I don't think they are looking for deep detail; a smart scientist starts by getting a high view into the project. So I think that with this question, they want to see how you walk through the analysis. I would reply with the following, roughly in this order:
- What is the business case the algorithm is trying to solve?
- Is this algorithm predictive or is it doing categorizations?
- How many factors are in the complete dataset? How many factors are actually used?
- Is it a neural network or does it use "traditional approaches" like regression, decision trees, etc, etc?
- Can you show me a confusion matrix for the results? What is the accuracy? What is the recall? What is the precision?
- Can you show me an ROC curve?
I think that at this point, once you are given the information and have time to analyze it, you will be in a much better position to make statements about a particular model. Good luck!
|
492 | 1 | 2367 | null | 27 | 13852 | I'm looking to use google's word2vec implementation to build a named entity recognition system. I've heard that recursive neural nets with back propagation through structure are well suited for named entity recognition tasks, but I've been unable to find a decent implementation or a decent tutorial for that type of model. Because I'm working with an atypical corpus, standard NER tools in NLTK and similar have performed very poorly, and it looks like I'll have to train my own system.
In short, what resources are available for this kind of problem? Is there a standard recursive neural net implementation available?
| Word2Vec for Named Entity Recognition | CC BY-SA 3.0 | null | 2014-06-19T19:29:57.797 | 2020-08-05T08:41:02.810 | 2017-05-19T16:11:58.100 | 21 | 684 | [
"machine-learning",
"python",
"neural-network",
"nlp"
] | Instead of "recursive neural nets with back propagation" you might consider the approach used by Frantzi et al. at the National Centre for Text Mining (NaCTeM) at the University of Manchester for Termine (see: [this](http://www.nactem.ac.uk/index.php) and [this](http://personalpages.manchester.ac.uk/staff/sophia.ananiadou/IJODL2000.pdf)). Instead of deep neural nets, they "combine linguistic and statistical information".
| Semantic networks: word2vec? | There are a few models that are trained to analyse a sentence and classify each token (or recognise dependencies between words).
- Part of speech tagging (POS) models assign to each word its function (noun, verb, ...) - have a look at this link
- Dependency parsing (DP) models will recognize which words go together (in this case Angela and Merkel for instance) - check this out
- Named entity recognition (NER) models will for instance say that "Angela Merkel" is a person, "Germany" is a country ... - another link
|
497 | 1 | 506 | null | 23 | 712 | I am trying to find a formula, method, or model to use to analyze the likelihood that a specific event influenced some longitudinal data. I am having difficulty figuring out what to search for on Google.
Here is an example scenario:
Imagine you own a business that has an average of 100 walk-in customers every day. One day, you decide you want to increase the number of walk-in customers arriving at your store each day, so you pull a crazy stunt outside your store to get attention. Over the next week, you see on average 125 customers a day.
Over the next few months, you again decide that you want to get some more business, and perhaps sustain it a bit longer, so you try some other random things to get more customers in your store. Unfortunately, you are not the best marketer, and some of your tactics have little or no effect, and others even have a negative impact.
What methodology could I use to determine the probability that any one individual event positively or negatively impacted the number of walk-in customers? I am fully aware that correlation does not necessarily equal causation, but what methods could I use to determine the likely increase or decrease in your business's daily walk-in clients following a specific event?
I am not interested in analyzing whether or not there is a correlation between your attempts to increase the number of walk-in customers, but rather whether or not any one single event, independent of all others, was impactful.
I realize that this example is rather contrived and simplistic, so I will also give you a brief description of the actual data that I am using:
I am attempting to determine the impact that a particular marketing agency has on their client's website when they publish new content, perform social media campaigns, etc. For any one specific agency, they may have anywhere from 1 to 500 clients. Each client has websites ranging in size from 5 pages to well over 1 million. Over the course of the past 5 years, each agency has annotated all of their work for each client, including the type of work that was done, the number of webpages on a website that were influenced, the number of hours spent, etc.
Using the above data, which I have assembled into a data warehouse (placed into a bunch of star/snowflake schemas), I need to determine how likely it was that any one piece of work (any one event in time) had an impact on the traffic hitting any/all pages influenced by a specific piece of work. I have created models for 40 different types of content that are found on a website that describes the typical traffic pattern a page with said content type might experience from launch date until present. Normalized relative to the appropriate model, I need to determine the highest and lowest number of increased or decreased visitors a specific page received as the result of a specific piece of work.
While I have experience with basic data analysis (linear and multiple regression, correlation, etc), I am at a loss for how to approach solving this problem. Whereas in the past I have typically analyzed data with multiple measurements for a given axis (for example temperature vs thirst vs animal, determining the impact that increased temperature has on thirst across animals), I feel that above, I am attempting to analyze the impact of a single event at some point in time for a non-linear, but predictable (or at least model-able), longitudinal dataset. I am stumped :(
Any help, tips, pointers, recommendations, or directions would be extremely helpful and I would be eternally grateful!
| What statistical model should I use to analyze the likelihood that a single event influenced longitudinal data | CC BY-SA 3.0 | null | 2014-06-20T03:18:59.477 | 2019-02-15T11:30:40.717 | 2014-10-22T12:07:33.977 | 134 | 1047 | [
"machine-learning",
"data-mining",
"statistics"
] | For the record, I think this is the type of question that's perfect for the data science Stack Exchange. I hope we get a bunch of real world examples of data problems and several perspectives on how best to solve them.
I would encourage you not to use p-values as they can be pretty misleading ([1](http://andrewgelman.com/2013/03/12/misunderstanding-the-p-value/), [2](http://occamstypewriter.org/boboh/2008/08/19/why_p_values_are_evil/)). My approach hinges on you being able to summarize traffic on a given page before and after some intervention. What you care about is the difference in the rate before and after the intervention. That is, how does the number of hits per day change? Below, I explain a first stab approach with some simulated example data. I will then explain one potential pitfall (and what I would do about it).
First, let's think about one page before and after an intervention. Pretend the intervention increases hits per day by roughly 15%:
```
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
def simulate_data(true_diff=0):
#First choose a number of days between [1, 1000] before the intervention
num_before = np.random.randint(1, 1001)
#Next choose a number of days between [1, 1000] after the intervention
num_after = np.random.randint(1, 1001)
#Next choose a rate for before the intervention. How many views per day on average?
rate_before = np.random.randint(50, 151)
#The intervention causes a `true_diff` increase on average (but is also random)
rate_after = np.random.normal(1 + true_diff, .1) * rate_before
#Simulate viewers per day:
vpd_before = np.random.poisson(rate_before, size=num_before)
vpd_after = np.random.poisson(rate_after, size=num_after)
return vpd_before, vpd_after
vpd_before, vpd_after = simulate_data(.15)
plt.hist(vpd_before, histtype="step", bins=20, normed=True, lw=2)
plt.hist(vpd_after, histtype="step", bins=20, normed=True, lw=2)
plt.legend(("before", "after"))
plt.title("Views per day before and after intervention")
plt.xlabel("Views per day")
plt.ylabel("Frequency")
plt.show()
```
![Distribution of hits per day before and after the intervention](https://i.stack.imgur.com/FJJqD.png)
We can clearly see that the intervention increased the number of hits per day, on average. But in order to quantify the difference in rates, we should use one company's intervention for multiple pages. Since the underlying rate will be different for each page, we should compute the percent change in rate (again, the rate here is hits per day).
Now, let's pretend we have data for `n = 100` pages, each of which received an intervention from the same company. To get the percent difference we take (mean(hits per day after) - mean(hits per day before)) / mean(hits per day before):
```
n = 100
pct_diff = np.zeros(n)
for i in range(n):
vpd_before, vpd_after = simulate_data(.15)
# % difference. Note: this is the thing we want to infer
pct_diff[i] = (vpd_after.mean() - vpd_before.mean()) / vpd_before.mean()
plt.hist(pct_diff)
plt.title("Distribution of percent change")
plt.xlabel("Percent change")
plt.ylabel("Frequency")
plt.show()
```
![Distribution of percent change](https://i.stack.imgur.com/CAitf.png)
Now we have the distribution of our parameter of interest! We can query this result in different ways. For example, we might want to know the mode, or (approximation of) the most likely value for this percent change:
```
def mode_continuous(x, num_bins=None):
if num_bins is None:
counts, bins = np.histogram(x)
else:
counts, bins = np.histogram(x, bins=num_bins)
ndx = np.argmax(counts)
return bins[ndx:(ndx+1)].mean()
mode_continuous(pct_diff, 20)
```
When I ran this I got 0.126, which is not bad, considering our true percent change is 0.15 (i.e., 15%). We can also see the proportion of positive changes, which approximates the probability that a given company's intervention improves hits per day:
```
(pct_diff > 0).mean()
```
Here, my result is 0.93, so we could say there's a pretty good chance that this company is effective.
Finally, a potential pitfall: Each page probably has some underlying trend that you should probably account for. That is, even without the intervention, hits per day may increase. To account for this, I would estimate a simple linear regression where the outcome variable is hits per day and the independent variable is day (start at day=0 and simply increment for all the days in your sample). Then subtract the estimate, y_hat, from each number of hits per day to de-trend your data. Then you can do the above procedure and be confident that a positive percent difference is not due to the underlying trend. Of course, the trend may not be linear, so use discretion! Good luck!
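For that de-trending step, something as simple as this would do (a sketch reusing the simulated page from above, with `vpd` standing in for one page's daily hits over the whole period):
```
vpd = np.concatenate([vpd_before, vpd_after])   # daily hits for one page, before + after
days = np.arange(len(vpd))
slope, intercept = np.polyfit(days, vpd, deg=1)   # simple linear trend
y_hat = intercept + slope * days
vpd_detrended = vpd - y_hat                       # run the before/after comparison on this
```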
| Is there a machine learning model suited well for longitudinal data? | Assuming we are not talking about a time series and also assuming unseen data you want to make a prediction on could include individuals not currently present in your data set, your best bet is to restructure your data first.
What you want to do is predict a daily outcome Y from X1...Xn predictors, which I understand to be measurements taken. A normal approach here would be to fit a RandomForest or boosting model which, yes, would be based on a logistic regressor.
However you point out that simply assuming each case is independent is incorrect because outcomes are highly dependent on the individual measured. If this is the case then we need to add the attributes describing the individual as additional predictors.
So this:
```
id | day | measurement1 | measurement2 | ... | outcome
A | Mon | 1 | 0 | 1 | 1
B | Mon | 0 | 1 | 0 | 0
```
becomes this:
```
id | age | gender | day | measurement1 | measurement2 | ... | outcome
A | 34 | male | Mon | 1 | 0 | 1 | 1
B | 28 | female | Mon | 0 | 1 | 0 | 0
```
By including the attributes of each individual we can use each daily measurement as a single case in training the model because we assume that the correlation between the intraindividual outcomes can be explained by the attributes (i.e. individuals with similar age, gender, other attributes that are domain appropriate should have the same outcome bias).
If you do not have any attributes about the individuals besides their measurements then you can also safely ignore those because your model will have to predict an outcome on unseen data without knowing anything about the individual. That the prediction could be improved because we know individuals bias the outcome does not matter because the data simply isn't there.
You have to understand that prediction tasks are different from other statistical work; the only thing we care about is the properly validated performance of the prediction model. If you can get a model that is good enough by ignoring individuals then you are a-okay, and if your model sucks you need more data.
If on the other hand you only want to predict outcomes for individuals ALREADY IN YOUR TRAINING SET the problem becomes even easier to solve. Simply add the individual identifier as a predictor variable.
To sum it up, unless you have a time series, you should be okay to use any ML classification model like RandomForest or boosting models, even if they are based on normal logistic regression. However you might have to restructure your data a bit.
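A rough sketch of that restructuring in pandas/scikit-learn (the column names and values are of course made up):
```
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

measurements = pd.DataFrame({           # one row per individual per day
    "id": ["A", "A", "B", "B"],
    "day": ["Mon", "Tue", "Mon", "Tue"],
    "measurement1": [1, 0, 0, 1],
    "outcome": [1, 1, 0, 0],
})
individuals = pd.DataFrame({            # one row per individual
    "id": ["A", "B"],
    "age": [34, 28],
    "gender": ["male", "female"],
})

data = measurements.merge(individuals, on="id")          # attach individual attributes
X = pd.get_dummies(data.drop(columns=["id", "outcome"])) # one-hot encode categoricals
y = data["outcome"]

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
```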
|
518 | 1 | 583 | null | -6 | 233 | Please, could someone recommend a paper or blog post that describes the online k-means algorithm.
| Online k-means explanation | CC BY-SA 3.0 | 0 | 2014-06-21T10:55:41.700 | 2017-08-14T13:29:11.063 | 2017-08-14T13:29:11.063 | 8432 | 960 | [
"machine-learning",
"clustering"
] | The original MacQueen k-means publication (the first to use the name "kmeans") is an online algorithm.
>
MacQueen, J. B. (1967). "Some Methods for classification and Analysis of Multivariate Observations". Proceedings of 5th Berkeley Symposium on Mathematical Statistics and Probability 1. University of California Press. pp. 281–297
After assigning each point, the mean is incrementally updated.
As far as I can tell, it was also meant to be a single pass over the data only, although it can be trivially repeated multiple times to reassign points until convergence.
MacQueen usually takes fewer iterations than Lloyds to converge if your data is shuffled. On ordered data, it can have problems. On the downside, it requires more computation for each object, so each iteration takes slightly longer.
When you implement a parallel version of k-means, make sure to study the update formulas in MacQueens publication. They're useful.
| K-means vs. online K-means | Online k-means (more commonly known as [sequential k-means](https://stackoverflow.com/questions/3698532/online-k-means-clustering)) and traditional k-means are very similar. The difference is that online k-means allows you to update the model as new data is received.
Online k-means should be used when you expect the data to be received one by one (or maybe in chunks). This allows you to update your model as you get more information about it. The drawback of this method is that it is dependent on the order in which the data is received ([ref](http://www.cs.princeton.edu/courses/archive/fall08/cos436/Duda/C/sk_means.htm)).
|
530 | 1 | 532 | null | 5 | 1222 | There is a general recommendation that algorithms in ensemble learning combinations should be different in nature. Is there a classification table, a scale or some rules that allow one to evaluate how far apart the algorithms are from each other? What are the best combinations?
| How to select algorithms for ensemble methods? | CC BY-SA 3.0 | null | 2014-06-23T04:39:26.623 | 2014-06-24T15:44:52.540 | null | null | 454 | [
"machine-learning"
] | In general in an ensemble you try to combine the opinions of multiple classifiers. The idea is like asking a bunch of experts on the same thing. You get multiple opinions and you later have to combine their answers (e.g. by a voting scheme). For this trick to work you want the classifiers to be different from each other, that is you don't want to ask the same "expert" twice for the same thing.
In practice, the classifiers do not have to be different in the sense of a different algorithm. What you can do is train the same algorithm with different subset of the data or a different subset of features (or both). If you use different training sets you end up with different models and different "independent" classifiers.
There is no golden rule on what works best in general. You have to try to see if there is an improvement for your specific problem.
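A small sketch of that idea in Python/scikit-learn (same base algorithm, different random subsets of rows and features, combined by majority vote; scikit-learn's BaggingClassifier does essentially the same thing out of the box):
```
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_diverse_ensemble(X, y, n_models=10, row_frac=0.7, col_frac=0.6, seed=0):
    rng = np.random.RandomState(seed)
    models = []
    for _ in range(n_models):
        rows = rng.choice(len(X), size=int(row_frac * len(X)), replace=True)
        cols = rng.choice(X.shape[1], size=max(1, int(col_frac * X.shape[1])), replace=False)
        clf = DecisionTreeClassifier(random_state=rng.randint(10**6))
        clf.fit(X[np.ix_(rows, cols)], y[rows])
        models.append((clf, cols))
    return models

def vote(models, X):
    preds = np.array([clf.predict(X[:, cols]) for clf, cols in models]).astype(int)
    # majority vote across the ensemble (assumes integer class labels)
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, preds)

X = np.random.RandomState(1).rand(200, 8)
y = (X[:, 0] + X[:, 3] > 1).astype(int)
models = train_diverse_ensemble(X, y)
print((vote(models, X) == y).mean())
```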
| Questions on ensemble technique in machine learning | >
Instead, model 2 may have a better overall performance on all the data
points, but it has worse performance on the very set of points where
model 1 is better. The idea is to combine these two models where they
perform the best. This is why creating out-of-sample predictions have
a higher chance of capturing distinct regions where each model
performs the best.
It's not about training on all the data or not. Both models trained on all the data, but each of them is better than the other at different points. If I and my older brother are trying to guess the exact year of a song, I will do better on 90s songs and he on 80s songs - it's not a perfect analogy but you get the point - imagine my brain just can't process 80s songs, and his can't process 90s songs. The best is to deploy us both, knowing we each have learnt different regions of the input space better.
>
Simply, for a given input data point, all we need to do is to pass it
through the M base-learners and get M number of predictions, and send
those M predictions through the meta-learner as inputs
k-fold is still just one learner. But you're training multiple times to choose parameters that minimize error in the left-out fold. This is like training only me on all the songs, showing me k-1 folds of data, and I calibrate my internal model the best I can... but I'll still never be very good at those 80s songs. I'm just one base learner whose functional form (my brain) isn't fit for those songs. If we could bring the second learner along, that would improve things.
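In scikit-learn terms, the out-of-sample (out-of-fold) predictions the quote talks about can be produced with `cross_val_predict`, and the meta-learner is then trained on them. A rough sketch with made-up data:
```
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

X = np.random.RandomState(0).rand(300, 10)
y = (X[:, 0] + X[:, 1] > 1).astype(int)

base_learners = [RandomForestClassifier(n_estimators=100, random_state=0),
                 SVC(probability=True, random_state=0)]

# Out-of-fold probabilities: each base learner predicts points it was not trained on.
meta_features = np.column_stack([
    cross_val_predict(m, X, y, cv=5, method="predict_proba")[:, 1] for m in base_learners
])

meta_learner = LogisticRegression().fit(meta_features, y)

# For new data, the base learners are refit on all of X before producing meta-features.
for m in base_learners:
    m.fit(X, y)
```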
|
531 | 1 | 533 | null | 16 | 6262 | I have a dataset with the following specifications:
- Training dataset with 193,176 samples with 2,821 positives
- Test Dataset with 82,887 samples with 673 positives
- There are 10 features.
I want to perform a binary classification (0 or 1). The issue I am facing is that the data is very unbalanced. After normalization and scaling the data along with some feature engineering and using a couple of different algorithms, these are the best results I could achieve:
```
mean square error : 0.00804710026904
Confusion matrix : [[82214 667]
[ 0 6]]
```
i.e only 6 correct positive hits. This is using logistic regression. Here are the various things I tried with this:
- Different algorithms like RandomForest, DecisionTree, SVM
- Changing parameters value to call the function
- Some intuition based feature engineering to include compounded features
Now, my questions are:
- What can I do to improve the number of positive hits ?
- How can one determine if there is an overfit in such a case ? ( I have tried plotting etc. )
- At what point could one conclude if maybe this is the best possible fit I could have? ( which seems sad considering only 6 hits out of 673 )
- Is there a way I could make the positive sample instances weigh more so the pattern recognition improves leading to more hits ?
- Which graphical plots could help detect outliers or some intuition about which pattern would fit the best?
I am using the scikit-learn library with Python and all implementations are library functions.
edit:
Here are the results with a few other algorithms:
Random Forest Classifier(n_estimators=100)
```
[[82211 667]
[ 3 6]]
```
Decision Trees:
```
[[78611 635]
[ 3603 38]]
```
| Binary classification model for unbalanced data | CC BY-SA 4.0 | null | 2014-06-23T07:03:15.643 | 2019-06-28T10:01:52.693 | 2019-05-07T04:22:04.987 | 1330 | 793 | [
"machine-learning",
"python",
"classification",
"logistic-regression"
] |
- Since you are doing binary classification, have you tried adjusting the classification threshold? Since your algorithm seems rather insensitive, I would try lowering it and check if there is an improvement.
- You can always use Learning Curves, or a plot of one model parameter vs. Training and Validation error to determine whether your model is overfitting. It seems it is under fitting in your case, but that's just intuition.
- Well, ultimately it depends on your dataset, and the different models you have tried. At this point, and without further testing, there can not be a definite answer.
- Without claiming to be an expert on the topic, there are a number of different techniques you may follow (hint: first link on google), but in my opinion you should first make sure you choose your cost function carefully, so that it represents what you are actually looking for.
- Not sure what you mean by pattern intuition, can you elaborate?
By the way, what were your results with the different algorithms you tried? Were they any different?
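To make the threshold suggestion in the first point concrete, here is a quick sketch of lowering the decision threshold instead of relying on the default 0.5 used by `predict` (the probability values below are hypothetical; in practice you would use `clf.predict_proba(X_test)[:, 1]`):
```
import numpy as np
from sklearn.metrics import confusion_matrix

probs = np.array([0.02, 0.40, 0.75, 0.10, 0.30])   # stand-in for predict_proba output
y_test = np.array([0, 1, 1, 0, 1])

for threshold in (0.5, 0.25, 0.1):
    preds = (probs >= threshold).astype(int)
    print(threshold)
    print(confusion_matrix(y_test, preds))
```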
| Binary Classification with Imbalanced Target | If you don't have any way to obtain negative instances, the standard option is [one-class classification](https://en.wikipedia.org/wiki/One-class_classification):
>
one-class classification (OCC), also known as unary classification or class-modelling, tries to identify objects of a specific class amongst all objects, by primarily learning from a training set containing only the objects of that class.
I think the most common method is [One-Class SVM](https://scikit-learn.org/stable/modules/generated/sklearn.svm.OneClassSVM.html), there might be others as well.
|
559 | 1 | 561 | null | 17 | 3127 | I have a hobby project which I am contemplating committing to as a way of increasing my so far limited experience of machine learning. I have taken and completed the Coursera MOOC on the topic. My question is with regards to the feasibility of the project.
The task is the following:
Neighboring cats are from time to time visiting my garden, which I dislike since they tend to defecate on my lawn. I would like to have a warning system that alerts me when there's a cat present so that I may go chase it off using my super soaker. For simplicity's sake, say that I only care about a cat with black and white coloring.
I have setup a raspberry pi with camera module that can capture video and/or pictures of a part of the garden.
Sample image:
![Sample garden image](https://i.stack.imgur.com/cNqus.jpg)
My first idea was to train a classifier to identify cat or cat-like objects, but after realizing that I will be unable to obtain a large enough number of positive samples, I have abandoned that in favor of anomaly detection.
I estimate that if I captured a photo every second of the day, I would end up with maybe five photos containing cats (out of about 60,000 with sunlight) per day.
Is this feasible using anomaly detection? If so, what features would you suggest? My ideas so far would be to simply count the number of pixels that have certain colors; do some kind of blob detection/image segmenting (which I do not know how to do, and would thus like to avoid) and perform the same color analysis on them.
| Detecting cats visually by means of anomaly detection | CC BY-SA 3.0 | null | 2014-06-24T12:28:10.990 | 2014-06-25T03:30:19.787 | null | null | 1147 | [
"machine-learning"
] | You could simplify your problem significantly by using a motion/change detection approach. For example, you could compare each image/frame with one from an early time (e.g., a minute earlier), then only consider pixels that have changed since the earlier time. You could then extract the rectangular region of change and use that as the basis for your classification or anomaly detection.
Taking this type of approach can significantly simplify your classifier and reduce your false target rate because you can ignore anything that is not roughly the size of a cat (e.g., a person or bird). You would then use the extracted change regions that were not filtered out to form the training set for your classifier (or anomaly detector).
Just be sure to get your false target rate sufficiently low before mounting a laser turret to your feline intrusion detection system.
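A rough OpenCV sketch of that change-detection front end (assuming OpenCV 4.x return signatures; the threshold and area bounds are made-up values you would tune for your camera):
```
import cv2

def candidate_regions(frame, reference, min_area=2000, max_area=40000):
    """Return bounding boxes of cat-sized changes between frame and an earlier reference image."""
    diff = cv2.absdiff(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY))
    blur = cv2.GaussianBlur(diff, (5, 5), 0)
    _, mask = cv2.threshold(blur, 25, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        area = cv2.contourArea(c)
        if min_area <= area <= max_area:       # ignore bird- or person-sized blobs
            boxes.append(cv2.boundingRect(c))  # (x, y, w, h) crop to classify later
    return boxes
```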
| Data Preprocessing, how separate background from image to detect animals? | You need to do some Background Subtraction on the images. If you have the Background image without the animal, you can simply subtract it from the current image to get just the animal.
Once you have just the animal, you can apply SIFT or CNNs or whatever.
This is called frame differencing.
[](https://i.stack.imgur.com/fvyrk.png)
If you don't have the background image, you can try methods like [this](http://docs.opencv.org/3.3.0/db/d5c/tutorial_py_bg_subtraction.html) provided by opencv
Basically what you are looking for is background subtraction/foreground detection.
Hope this helps.
image source: [http://docs.opencv.org/3.3.0/d1/dc5/tutorial_background_subtraction.html](http://docs.opencv.org/3.3.0/d1/dc5/tutorial_background_subtraction.html)
|
566 | 1 | 2456 | null | 6 | 1940 | Many times [Named Entity Recognition](http://en.wikipedia.org/wiki/Named-entity_recognition) (NER) doesn't tag consecutive NNPs as one NE. I think editing the NER to use RegexpTagger also can improve the NER.
For example, consider the following input:
>
"Barack Obama is a great person."
And the output:
```
Tree('S', [Tree('PERSON', [('Barack', 'NNP')]), Tree('ORGANIZATION', [('Obama', 'NNP')]),
('is', 'VBZ'), ('a', 'DT'), ('great', 'JJ'), ('person', 'NN'), ('.', '.')])
```
where as for the input:
>
'Former Vice President Dick Cheney told conservative radio host Laura Ingraham that he "was honored" to be compared to Darth Vader while in office.'
the output is:
```
Tree('S', [('Former', 'JJ'), ('Vice', 'NNP'), ('President', 'NNP'),
Tree('NE', [('Dick', 'NNP'), ('Cheney', 'NNP')]), ('told', 'VBD'), ('conservative', 'JJ'),
('radio', 'NN'), ('host', 'NN'), Tree('NE', [('Laura', 'NNP'), ('Ingraham', 'NNP')]),
('that', 'IN'), ('he', 'PRP'), ('``', '``'), ('was', 'VBD'), ('honored', 'VBN'),
("''", "''"), ('to', 'TO'), ('be', 'VB'), ('compared', 'VBN'), ('to', 'TO'),
Tree('NE', [('Darth', 'NNP'), ('Vader', 'NNP')]), ('while', 'IN'), ('in', 'IN'),
('office', 'NN'), ('.', '.')])
```
Here `Vice/NNP, President/NNP, (Dick/NNP, Cheney/NNP)` is correctly extracted. So, I think if `nltk.ne_chunk` is used first, and then if two consecutive trees are NNP, there are higher chances that both refer to one entity.
I have been playing with NLTK toolkit, and I came across this problem a lot, but couldn't find a satisfying answer. Any suggestion will be really appreciated. I'm looking for flaws in my approach.
| Named Entity Recognition: NLTK using Regular Expression | CC BY-SA 3.0 | null | 2014-06-24T17:06:10.310 | 2021-03-30T22:42:29.823 | 2021-03-30T22:42:29.823 | 29169 | 1165 | [
"nlp",
"named-entity-recognition"
] | You have a great idea going, and it might work for your specific project. However there are a few considerations you should take into account:
- In your first sentence, Obama is incorrectly classified as an organization, instead of a person. This is because the training model used by NLTK probably does not have enough data to recognize Obama as a PERSON. So, one way would be to update this model by training a new model with a lot of labeled training data. Generating labeled training data is one of the most expensive tasks in NLP - because of all the man hours it takes to tag sentences with the correct part of speech as well as semantic role.
- In sentence 2, there are 2 concepts - "Former Vice President", and "Dick Cheney". You can use co-reference to identify the relation between the 2 NNPs. Both the NNP are refering to the same entity, and the same entity could be referenced as - "former vice president" as well as "Dick Cheney". Co-reference is often used to identify the Named entity that pronouns refer to. e.g. "Dick Cheney is the former vice president of USA. He is a Republican". Here the pronoun "he" refers to "Dick Cheney", and it should be identified by a co-reference identification tool.
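If you still want the "merge consecutive NNPs" behaviour on top of NLTK's output, a small RegexpParser sketch along the lines of your idea (assuming the standard NLTK data packages are downloaded; note it will also merge "Vice President" into the chunk):
```
import nltk

grammar = "NE: {<NNP>+}"          # one or more consecutive proper nouns form a candidate entity
chunker = nltk.RegexpParser(grammar)

sentence = ("Former Vice President Dick Cheney told conservative radio host "
            "Laura Ingraham that he was honored to be compared to Darth Vader while in office.")
tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
tree = chunker.parse(tagged)

for subtree in tree.subtrees(filter=lambda t: t.label() == "NE"):
    print(" ".join(word for word, tag in subtree.leaves()))
```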
| Named entity recognition (NER) features | The features for a token in a NER algorithm are usually binary, i.e. the feature exists or it does not. For example, a token (say the word 'hello') is all lower case. Therefore, that is a feature for that word.
You could name the feature 'IS_ALL_LOWERCASE'.
Now, for POS tags, let's take the word 'make'. It is a verb, and hence the feature "IS_VERB" is a feature for that word.
A gazetteer can be used to generate features. The presence (or absence) of a word in the gazetteer is a valid feature. Example: the word 'John' is present in the gazetteer of person names, so "IS_PERSON_NAME" can be a feature.
|
586 | 1 | 910 | null | 7 | 2398 | I'm new to the world of text mining and have been reading up on annotators at places like the [UIMA website](http://uima.apache.org/). I'm encountering many new terms like named entity recognition, tokenizer, lemmatizer, gazetteer, etc. Coming from a layman background, this is all very confusing so can anyone tell me or link to resources that can explain what the main categories of annotators are and what they do?
| What are the main types of NLP annotators? | CC BY-SA 3.0 | null | 2014-06-25T17:37:23.380 | 2015-10-12T07:20:26.220 | null | null | 1192 | [
"nlp",
"text-mining"
] | Here are the basic Natural Language Processing capabilities (or annotators) that are usually necessary to extract language units from textual data for sake of search and other applications:
[Sentence breaker](http://en.wikipedia.org/wiki/Sentence_boundary_disambiguation) - to split text (usually, text paragraphs) to sentences. Even in English it can be hard for some cases like "Mr. and Mrs. Brown stay in room no. 20."
[Tokenizer](http://en.wikipedia.org/wiki/Tokenization) - to split text or sentences to words or word-level units, including punctuation. This task is not trivial for languages with no spaces and no stable understanding of word boundaries (e.g. Chinese, Japanese)
[Part-of-speech Tagger](http://en.wikipedia.org/wiki/POS_tagger) - to guess part of speech of each word in the context of sentence; usually each word is assigned a so-called POS-tag from a tagset developed in advance to serve your final task (for example, parsing).
[Lemmatizer](http://en.wikipedia.org/wiki/Lemmatization) - to convert a given word into its canonical form ([lemma](http://en.wikipedia.org/wiki/Lemma_(morphology))). Usually you need to know the word's POS-tag. For example, word "heating" as gerund must be converted to "heat", but as noun it must be left unchanged.
[Parser](http://en.wikipedia.org/wiki/Parser) - to perform syntactic analysis of the sentence and build a syntactic tree or graph. There're two main ways to represent syntactic structure of sentence: via [constituency or dependency](http://en.wikipedia.org/wiki/Dependency_grammar#Dependency_vs._constituency).
[Summarizer](http://en.wikipedia.org/wiki/Automatic_summarization) - to generate a short summary of the text by selecting a set of top informative sentences of the document, representing its main idea. However, this can also be done in a more intelligent manner than just selecting sentences from the existing ones.
[Named Entity Recognition](http://en.wikipedia.org/wiki/Named-entity_recognition) - to extract so-called named entities from the text. Named entities are the chunks of words from text, which refer to an entity of certain type. The types may include: geographic locations (countries, cities, rivers, ...), person names, organization names etc. Before going into NER task you must understand what do you want to get and, possible, predefine a taxonomy of named entity types to resolve.
[Coreference Resolution](http://en.wikipedia.org/wiki/Coreference_resolution) - to group named entities (or, depending on your task, any other text units) into clusters corresponding to a single real object/meaning. For example, "B. Gates", "William Gates", "Founder of Microsoft" etc. in one text may mean the same person, referenced by using different expressions.
There are many other interesting NLP applications/annotators (see the [NLP tasks category](http://en.wikipedia.org/wiki/Category:Tasks_of_natural_language_processing)): sentiment analysis, machine translation, etc. There are many books on this; the classical one is "Speech and Language Processing" by Daniel Jurafsky and James H. Martin, but it can be too detailed for you.
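Several of these annotators can be tried out in a few lines with NLTK, just to get a feel for them (a sketch; it assumes you have downloaded the relevant NLTK data packages):
```
import nltk
from nltk.stem import WordNetLemmatizer

text = "Mr. and Mrs. Brown stay in room no. 20. William Gates founded Microsoft."

sentences = nltk.sent_tokenize(text)                        # sentence breaker
tokens = nltk.word_tokenize(sentences[1])                   # tokenizer
tagged = nltk.pos_tag(tokens)                               # part-of-speech tagger
print(WordNetLemmatizer().lemmatize("heating", pos="v"))    # lemmatizer -> 'heat'
print(nltk.ne_chunk(tagged))                                # named entity recognition
```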
| NLP methods specific to a language? | This question is quite open, but nonetheless, here are some:
- lemmatization/stemming only makes sense in languages where there is a lemma/stem in the word. Some languages like Chinese have no morphological variations (apart from some arguable cases like the explicit plural 们), and therefore lemmatization and stemming are not applied in Chinese.
- Word-based vocabularies are used to represent text in many NLP systems. However, in agglutinative and polysynthetic languages, using word-level vocabularies is crazy, because you can put together a lot of affixes and form a new word, therefore, a prior segmentation of the words is needed.
- In some languages like Chinese and Japanese, there are no spaces between words. Therefore, in order to apply almost any NLP, you need a preprocessing step to segment text into words.
|
595 | 1 | 611 | null | 2 | 2175 | I'm new to machine learning, but I have an interesting problem. I have a large sample of people and visited sites. Some people have indicated gender, age, and other parameters. Now I want to restore these parameters to each user.
Which way do I look for? Which algorithm is suitable to solve this problem? I'm familiar with Neural Networks (supervised learning), but it seems they don't fit.
| How to use neural networks with large and variable number of inputs? | CC BY-SA 3.0 | null | 2014-06-26T12:25:55.663 | 2014-06-27T19:18:11.433 | 2014-06-26T16:25:31.680 | 84 | 1207 | [
"machine-learning",
"data-mining",
"algorithms",
"neural-network"
] | I had almost the same problem: 'restoring' age, gender, location for social network users. But I used users' ego-networks, not visited-sites statistics. And I faced two almost independent tasks:
- 'Restoring' or 'predicting' data. You can use a bunch of different techniques to complete this task, but my vote is for the simplest ones (KISS, yes). E.g., in my case, for age prediction, the mean of ego-network users' ages gave satisfactory results (for about 70% of users the error was less than +/-3 years, which in my case was enough). It's just an idea, but you could try to use a weighted average for age prediction, defining the weight as a similarity measure between the visited-site sets of the current user and others.
- Evaluating prediction quality. Algorithm from task-1 will produce prediction almost in all cases. And second task is to determine, if prediction is reliable. E.g., in case of ego network and age prediction: can we trust in prediction, if a user has only one 'friend' in his ego network? This task is more about machine-learning: it's a binary classification problem. You need to compose features set, form training and test samples from your data with both right and wrong predictions. Creating appropriate classifier will help you to filter out unpredictable users. But you need to determine, what are your features set. I used a number of network metrics, and summary statistics on feature of interest distribution among ego-network.
This approach wouldn't populate all the gaps, but only predictable ones.
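The weighted-average idea from the first point, translated to your visited-sites setting, could look roughly like this (toy code with Jaccard similarity as the weight; the sites and ages are made up):
```
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a or b) else 0.0

def predict_age(target_sites, known_users):
    """known_users: list of (visited_sites, age) for users whose age is known."""
    weights = [jaccard(target_sites, sites) for sites, _ in known_users]
    total = sum(weights)
    if total == 0:
        return None   # nothing similar enough -> treat as unpredictable (task 2)
    return sum(w * age for w, (_, age) in zip(weights, known_users)) / total

known = [({"news.com", "sports.com"}, 25), ({"kids.com", "games.com"}, 12)]
print(predict_age({"sports.com", "games.com"}, known))
```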
| Neural network with flexible number of inputs? | Yes, this is possible by feeding the audio as a sequence into a [Recurrent Neural Network (RNN)](https://deeplearning4j.konduit.ai/getting-started/tutorials/recurrent-networks). You can train a RNN against a target that is correct at the end of a sequence, or even to predict another sequence offset from the input.
Do note however that there is [a bit to learn about options that go into the construction and training of a RNN](https://iamtrask.github.io/2015/11/15/anyone-can-code-lstm/), that you will not already have studied whilst looking at simpler layered feed-forward networks. Modern RNNs make use of layer designs which include memory gates - the two most popular architectures are LSTM and GRU, and these add more trainable parameters into each layer as the memory gates need to learn weights in addition to the weights between and within the layer.
RNNs are used extensively to predict from audio sequences that have already been processed in MFCC or similar feature sets, because they can handle sequenced data as input and/or output, and this is a desirable feature when dealing with variable length data such as [spoken word](https://arxiv.org/abs/1402.1128), music etc.
Some other things worth noting:
- RNNs can work well for sequences of data that are variable length, and where there is a well-defined dimension over which the sequences evolve. But they are less well adapted for variable-sized sets of features where there is no clear order or sequence.
- RNNs can get state-of-the-art results for signal processing, NLP and related tasks, but only when there is a very large amount of training data. Other, simpler, models can work just as well or better if there is less data.
- For the specific problem of generating MFCCs from raw audio samples: Whilst it should be possible to create a RNN that predicts MFCC features from raw audio, this might take some effort and experimentation to get right, and could take a lot of processing power to make an RNN powerful enough to cope with very long sequences at normal audio sample rates. Whilst creating MFCC from raw audio using the standard approach starting with FFT will be a lot simpler, and is guaranteed to be accurate.
|
608 | 1 | 612 | null | 9 | 623 | I have just learned about regularisation as an approach to control over-fitting, and I would like to incorporate the idea into a simple implementation of backpropagation and [Multilayer perceptron](http://en.wikipedia.org/wiki/Multilayer_perceptron) (MLP) that I put together.
Currently to avoid over-fitting, I cross-validate and keep the network with best score so far on the validation set. This works OK, but adding regularisation would benefit me in that correct choice of the regularisation algorithm and parameter would make my network converge on a non-overfit model more systematically.
The formula I have for the update term (from Coursera ML course) is stated as a batch update e.g. for each weight, after summing all the applicable deltas for the whole training set from error propagation, an adjustment of `lambda * current_weight` is added as well before the combined delta is subtracted at the end of the batch, where `lambda` is the regularisation parameter.
My implementation of backpropagation uses per-item weight updates. I am concerned that I cannot just copy the batch approach, although it looks OK intuitively to me. Does a smaller regularisation term per item work just as well?
For instance `lambda * current_weight / N` where N is size of training set - at first glance this looks reasonable. I could not find anything on the subject though, and I wonder if that is because regularisation does not work as well with a per-item update, or even goes under a different name or altered formula.
| Any differences in regularisation in MLP between batch and individual updates? | CC BY-SA 3.0 | null | 2014-06-26T22:58:32.380 | 2014-06-27T12:28:25.467 | 2014-06-27T09:52:53.277 | 836 | 836 | [
"neural-network"
] | Regularization is relevant in per-item learning as well. I would suggest starting with a basic validation approach for finding out lambda, whether you are doing batch or per-item learning. This is the easiest and safest approach. Try manually with a number of different values, e.g. 0.001, 0.003, 0.01, 0.03, 0.1, etc., and see how your validation set behaves. Later on you may automate this process by introducing a linear or local search method.
As a side note, I believe the value of lambda should be considered in relation to the updates of the parameter vector, rather than the training set size. For batch training you have one parameter update per dataset pass, while for online one update per sample (regardless of the training set size).
I recently stumbled upon this [Crossvalidated Question](https://stats.stackexchange.com/questions/64224/regularization-and-feature-scaling-in-online-learning), which seems quite similar to yours. There is a link to a paper about [a new SGD algorithm](http://leon.bottou.org/publications/pdf/jmlr-2009.pdf), with some relevant content. It might be useful to take a look (especially pages 1742-1743).
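In code, the per-item (stochastic) update with L2 regularisation is just the batch rule applied at every sample, with the weight-decay term included in each step. A bare-bones numpy sketch for a single linear unit with squared error (how to scale `lam` relative to the batch setting is exactly the tuning question discussed above):
```
import numpy as np

def sgd_step(w, x, y, lr=0.01, lam=0.001):
    # Per-item update: gradient of the squared error plus the per-step weight-decay term lam * w.
    error = np.dot(w, x) - y
    grad = error * x + lam * w
    return w - lr * grad

rng = np.random.RandomState(0)
w = np.zeros(3)
for _ in range(1000):
    x = rng.rand(3)
    y = 2 * x[0] - x[1]   # toy target
    w = sgd_step(w, x, y)
print(w)
```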
| Batch Normalization vs Other Normalization Techniques | Cosine normalization is a result of the fact that we bound the dot product, and hence decrease the variance, when we use cosine similarity or centered cosine similarity instead of the plain dot product in neural networks (the dot product being the usual building block of a NN layer).
The main benefit of cosine normalization is that it bounds the pre-activation of a neuron within a narrower range, thus lowering the variance of neurons. It also does not depend on any statistics over batch or mini-batch examples, and performs the same computation in forward propagation at training and inference time. In convolutional networks, it normalizes the neurons over their receptive fields rather than over the same layer or batch.
Have a look at this [paper](https://arxiv.org/pdf/1702.05870.pdf) showing an empirical comparison between the normalisations you mentioned. C.N. comes out on top.
|
613 | 1 | 619 | null | 2 | 859 | I understand Hadoop MapReduce and its features but I am confused about R MapReduce.
One difference I have read is that R works mostly in memory (RAM), so to perform parallel processing on larger data R is integrated with Hadoop.
## My doubt is:
- R can do all stats, math and data science related stuff, but why R MapReduce?
- Is there any new task I can achieve by using R MapReduce instead of Hadoop MapReduce? If yes, please specify.
- We can achieve the task by using R with Hadoop (directly) but what is the importance of MapReduce in R and how it is different from normal MapReduce?
| Difference Between Hadoop Mapreduce(Java) and RHadoop mapreduce | CC BY-SA 3.0 | 0 | 2014-06-27T12:03:53.357 | 2014-06-27T19:39:00.247 | 2014-06-27T19:39:00.247 | 322 | 1235 | [
"machine-learning",
"r",
"apache-hadoop",
"map-reduce"
] | [rhadoop](https://github.com/RevolutionAnalytics/RHadoop) (the part you are interested in is now called [rmr2](https://github.com/RevolutionAnalytics/rmr2)) is simply a client API for MapReduce written in R. You invoke MapReduce using R package APIs, and send an R function to the workers, where it is executed by an R interpreter locally. But it is otherwise exactly the same MapReduce.
You can call anything you like in R this way, but no R functions are themselves parallelized to use MapReduce in this way. The point is simply that you can invoke M/R from R. I don't think it somehow lets you do anything more magical than that.
| Are Hadoop and Python SciPy used for the same? | I think you're quite confused.
Hadoop is a collection of software that contains a distributed file system called HDFS. Essentially, HDFS is a way to store data across a cluster. You can access file stores as you would in a local file store (with some modification) and modify things via the Java API. Furthermore, ON TOP OF the file system there exists a MapReduce engine that allows for distributed workflows.
Python on the other hand is a generic programming language that can be made to do a myriad of tasks, such as building a web application, generating reports and even performing analytics.
SciPy is a package that can be used in conjunction with Python (and often numpy) as a way to perform common scientific tasks.
Truthfully, they focus on different paradigms. If you have LARGE DATA (ie terabytes worth of it), it might be worth wild to setup a hadoop cluster (ie multiple servers and racks) and use Java MapReduce, Hive, Pig or Spark (of which there is a python version) to do analytics.
If your data is small or you only have one computer, then it probably makes sense to just use python instead of adding the overhead of setting up hadoop.
Edit: Made correction via comment.
|
634 | 1 | 635 | null | 10 | 156 | I'm working on a fraud detection system. In this field, new frauds appear regularly, so new features have to be added to the model on an ongoing basis.
I wonder what is the best way to handle it (from the development process perspective)? Just adding a new feature into the feature vector and re-training the classifier seems to be a naive approach, because too much time will be spent for re-learning of the old features.
I'm thinking along the way of training a classifier for each feature (or a couple of related features), and then combining the results of those classifiers with an overall classifier. Are there any drawbacks of this approach? How can I choose an algorithm for the overall classifier?
| Handling a regularly increasing feature set | CC BY-SA 3.0 | null | 2014-06-30T09:43:01.940 | 2014-07-11T14:27:01.603 | 2014-07-09T00:19:42.423 | 322 | 1271 | [
"machine-learning",
"bigdata"
] | In an ideal world, you retain all of your historical data, and do indeed run a new model with the new feature extracted retroactively from historical data. I'd argue that the computing resource spent on this is quite useful actually. Is it really a problem?
Yes, it's a widely accepted technique to build an ensemble of classifiers and combine their results. You can build a new model in parallel just on the new features and average in its prediction. This should add value, but you will never capture interaction between the new and old features this way, since they will never appear together in a classifier. A sketch of the idea follows below.
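A minimal sketch of that parallel-model idea (my own illustration, with arbitrary data and models), averaging class probabilities from a model trained on the old features and one trained only on the new features:
```
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=12, random_state=0)
X_old, X_new = X[:, :10], X[:, 10:]   # pretend the last 2 columns are newly added features

old_model = RandomForestClassifier(random_state=0).fit(X_old, y)  # the existing model
new_model = RandomForestClassifier(random_state=0).fit(X_new, y)  # trained only on new features

# Combine by averaging class probabilities; old/new feature interactions are never learned.
p_combined = (old_model.predict_proba(X_old) + new_model.predict_proba(X_new)) / 2
y_pred = p_combined.argmax(axis=1)
print("training-set agreement of the ensemble:", (y_pred == y).mean())
```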
| How handle the add of a new feature to the dataset? | What you ask for is known as [Transfer Learning](https://en.wikipedia.org/wiki/Transfer_learning) in the Machine Learning framework, so you might want to look more into that direction.
An interesting publication regarding Transfer Learning in Decision Trees is [this](https://ieeexplore.ieee.org/document/4371047/).
|
640 | 1 | 642 | null | 5 | 224 | I'm currently working on a project that would benefit from personalized predictions. Given an input document, a set of output documents, and a history of user behavior, I'd like to predict which of the output documents are clicked.
In short, I'm wondering what the typical approach to this kind of personalization problem is. Are models trained per user, or does a single global model take in summary statistics of past user behavior to help inform that decision? Per user models won't be accurate until the user has been active for a while, while most global models have to take in a fixed length feature vector (meaning we more or less have to compress a stream of past events into a smaller number of summary statistics).
| Large Scale Personalization - Per User vs Global Models | CC BY-SA 3.0 | null | 2014-06-30T20:51:58.640 | 2014-06-30T23:10:53.397 | null | null | 684 | [
"classification"
] | The answer to this question is going to vary pretty wildly depending on the size and nature of your data. At a high level, you could think of it as a special case of multilevel models; you have the option of estimating a model with complete pooling (i.e., a universal model that doesn't distinguish between users), models with no pooling (a separate model for each user), and partially pooled models (a mixture of the two). You should really read Andrew Gelman on this topic if you're interested.
You can also think of this as a learning-to-rank problem that either tries to produce point-wise estimates using a single function or instead tries to optimize on some list-wise loss function (e.g., NDCG).
As with most machine learning problems, it all depends on what kind of data you have, the quality of it, the sparseness of it, and what kinds of features you are able to extract from it. If you have reason to believe that each and every user is going to be pretty unique in their behavior, you might want to build a per-user model, but that's going to be unwieldy fast -- and what do you do when you are faced with a new user?
 | Best practices for serving user-specific large models in a web application? | No idea about standard approaches, but one option you have is: instead of fine-tuning the whole model, fine-tune only a part of it. For instance, you may fine-tune only the last few layers. This way, you can keep the common part of the model loaded, load just the small fine-tuned part, and combine them to perform inference.
This would reduce both storage space and decompression time, at the cost of more complex code logic.
Of course, you should first determine what are the minimum fine-tuned parts of the model that let you get the desired output quality.
|
653 | 1 | 654 | null | 4 | 314 | I am trying to find which classification methods, that do not use a training phase, are available.
The scenario is gene expression based classification, in which you have a matrix of gene expression of m genes (features) and n samples (observations).
A signature for each class is also provided (that is a list of the features to consider to define to which class belongs a sample).
An application (non-training) is the [Nearest Template Prediction](http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0015543) method. In this case, the cosine distance is computed between each sample and each signature (on the common set of features). Then each sample is assigned to the nearest class (the class whose signature yields the smallest distance). No already-classified samples are needed in this case.
A different application (training) is the [kNN](http://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm) method, in which we have a set of already labeled samples. Then, each new sample is labeled depending on how are labeled the k nearest samples.
Are there any other non-training methods?
Thanks
| Which non-training classification methods are available? | CC BY-SA 3.0 | null | 2014-07-02T13:40:27.000 | 2015-04-12T16:08:13.467 | null | null | 133 | [
"classification"
] | What you are asking about is [Instance-Based Learning](http://en.wikipedia.org/wiki/Instance-based_learning). k-Nearest Neighbors (kNN) appears to be the most popular of these methods and is applicable to a wide variety of problem domains. Another general type of instance-based learning is [Analogical Modeling](http://en.wikipedia.org/wiki/Analogical_modeling), which uses instances as exemplars for comparison with new data.
You referred to kNN as an application that uses training but that is not correct (the Wikipedia entry you linked is somewhat misleading in that regard). Yes, there are "training examples" (labeled instances) but the classifier doesn't learn/train from these data. Rather, they are only used whenever you actually want to classify a new instance, which is why it is considered a "lazy" learner.
Note that the Nearest Template Prediction method you mention effectively is a form of kNN with `k=1` and cosine distance as the distance measure.
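A minimal sketch of that equivalence (my own illustration with made-up expression data): assign each sample to the class whose signature has the highest cosine similarity, i.e. the smallest cosine distance:
```
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(size=(5, 50))       # 5 samples x 50 genes (common feature set)
signatures = rng.normal(size=(3, 50))    # one signature per class

def cosine_sim(a, b):
    return (a @ b.T) / (np.linalg.norm(a, axis=1, keepdims=True)
                        * np.linalg.norm(b, axis=1))

# Nearest template = 1-nearest-neighbour among the class signatures under cosine distance.
labels = cosine_sim(samples, signatures).argmax(axis=1)
print("predicted class per sample:", labels)
```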
| Classification for 'not something' Neural Networks | Your problem resembles the learning task of 'one-class classification', otherwise known as anomaly detection. Essentially, you have a set of images all belonging to one class (galaxies), and you want a model to tell you if a new image is a galaxy or not, without providing any counter-examples to train with.
You should try a recently developed method called Deep Support Vector Data Description. In this method, a transformation is learned by a neural network to map the inputs to points contained in a small hypersphere in latent space. Then, examples falling inside the hypersphere can be considered 'galaxies', and outside can be considered 'not galaxies'.
You can read more about it [here](http://proceedings.mlr.press/v80/ruff18a.html) (quite technical), or just try to use the authors' PyTorch implementation [here](https://github.com/lukasruff/Deep-SVDD-PyTorch).
|
657 | 1 | 665 | null | 4 | 835 | Suppose I want to use CART as a classification tree (I want a categorical response). I have the training set, and I split it using observation labels.
Now, to build the decision tree (classification tree), how are the features selected to decide which label applies to the testing observations?
Supposing we are working on a gene expression matrix, in which each element is a real number, is that done using the features that are more distant between the classes?
| How are selected the features for a decision tree in CART? | CC BY-SA 3.0 | null | 2014-07-02T16:41:47.467 | 2014-07-03T17:52:00.963 | null | null | 133 | [
"classification"
] | At each split point, CART will choose the feature which "best" splits the observations. What qualifies as best varies, but generally the split is done so that the subsequent nodes are more homogenous/pure with respect to the target. There are different ways of measuring homogeneity, for example Gini, Entropy, Chi-square. If you are using software, it may allow you to choose the measure of homogenity that the tree algorithm will use.
Distance is not a factor with trees - what matters is whether the value is greater than or less than the split point, not the distance from the split point.
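A minimal sketch of what "best" can mean (my own illustration): score candidate split points on one feature by the weighted Gini impurity of the two child nodes and keep the lowest:
```
import numpy as np

def gini(y):
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(x, y):
    best = (None, np.inf)
    for t in np.unique(x)[:-1]:                 # candidate thresholds on this feature
        left, right = y[x <= t], y[x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if score < best[1]:
            best = (t, score)
    return best

x = np.array([2.1, 3.5, 1.0, 4.2, 3.9, 0.5])    # one gene's expression values
y = np.array([0, 1, 0, 1, 1, 0])                # class labels
print("best threshold and weighted Gini:", best_split(x, y))
```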
| Correct Way of Displaying Features in Decision Tree | If you pass `header=None`, `pandas.read_csv` assumes that the first row contains data and names the columns '0' to '12'.
Instead you should pass `header=0` to specify that the column names are in the first row or equivalently skip the header argument.
You can then still continue with `X = balance_data.values[:, 1:12]`, because calling `values` returns a `numpy` array without the column names.
Alternatively, you could also select your feature columns like so:
```
feature_names = ['A','AAAA',....]
X = balance_data[feature_names].values
```
You can then pass the same list of `feature_names` to graphviz.
Also note that you don't have to pass a `numpy` array to `scikit-learn`'s functions. It can handle `pandas` DataFrames as well, so `values` is optional.
|
670 | 1 | 672 | null | 3 | 112 | I was building a model that predicts user churn for a website, where I have data on all users, both past and present.
I can build a model that only uses those users that have left, but then I'm leaving 2/3 of the total user population unused.
Is there a good way to incorporate data from these users into a model from a conceptual standpoint?
| Dealing with events that have not yet happened when building a model | CC BY-SA 3.0 | null | 2014-07-04T02:31:42.080 | 2016-08-11T09:33:39.037 | 2014-07-04T02:38:12.523 | 1334 | 1334 | [
"data-mining"
] | This setting is common in reliability, health care, and mortality. The statistical analysis method is called [Survival Analysis](http://en.wikipedia.org/wiki/Survival_analysis). All users are coded according to their start date (or week or month). You use the empirical data to estimate the survival function, which is the probability that the time of defection is later than some specified time t.
Your baseline model will estimate survival function for all users. Then you can do more sophisticated modeling to estimate what factors or behaviors might predict defection (churn), given your baseline survival function. Basically, any model that is predictive will yield a survival probability that is significantly lower than the baseline.
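A minimal sketch of estimating that baseline survival function (my own illustration), assuming the `lifelines` library; `tenure` is each user's observed membership length and `churned` flags whether they actually defected (1) or are still active, i.e. censored (0):
```
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(0)
tenure = rng.exponential(scale=12, size=500)   # months observed per user (made up)
churned = rng.integers(0, 2, size=500)         # 1 = defected, 0 = still a member (censored)

kmf = KaplanMeierFitter()
kmf.fit(durations=tenure, event_observed=churned)
print(kmf.survival_function_.head())           # P(defection time > t) for increasing t
```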
---
There's another approach, which involves attempting to identify precursor event patterns or user behavior patterns that foreshadow defection. Any given event/behavior pattern might occur for users that defect, or for users that stay. For this analysis, you may need to censor your data to only include users that have been members for some minimum period of time. The minimum time period can be estimated using your estimate of the survival function, or even simple histogram analysis of the distribution of membership period for users who have defected.
| Poor performance shown on Rare event modeling | If you are willing to use the caret package in R and use random forests, you can use the method in the following blog post for downsampling with unbalanced datasets: [http://appliedpredictivemodeling.com/blog/2013/12/8/28rmc2lv96h8fw8700zm4nl50busep](http://appliedpredictivemodeling.com/blog/2013/12/8/28rmc2lv96h8fw8700zm4nl50busep)
Basically, you just add a single line to your train call. Here is the relevant part:
```
> rfDownsampled <- train(Class ~ ., data = training,
+ method = "rf",
+ ntree = 1500,
+ tuneLength = 5,
+ metric = "ROC",
+ trControl = ctrl,
+ ## Tell randomForest to sample by strata. Here,
+ ## that means within each class
+ strata = training$Class,
+ ## Now specify that the number of samples selected
+ ## within each class should be the same
+ sampsize = rep(nmin, 2))
```
I have had some success with this approach in your type of situation.
For some more context, here is an in-depth post about experiments with unbalanced datasets: [http://www.win-vector.com/blog/2015/02/does-balancing-classes-improve-classifier-performance/](http://www.win-vector.com/blog/2015/02/does-balancing-classes-improve-classifier-performance/)
|
671 | 1 | 729 | null | 8 | 206 | I have a linearly increasing time series dataset of a sensor, with value ranges between 50 and 150. I've implemented a [Simple Linear Regression](http://en.wikipedia.org/wiki/Simple_linear_regression) algorithm to fit a regression line on such data, and I'm predicting the date when the series would reach 120.
All works fine when the series move upwards. But, there are cases in which the sensor reaches around 110 or 115, and it is reset; in such cases the values would start over again at, say, 50 or 60.
This is where I start facing issues with the regression line, as it starts moving downwards and predicting an old date. I think I should consider only the subset of data collected since the previous reset. However, I'm trying to understand if there are any algorithms available that handle this case.
I'm new to data science, would appreciate any pointers to move further.
Edit: nfmcclure's suggestions applied
Before applying the suggestions
![enter image description here](https://i.stack.imgur.com/ZsyyQ.png)
Below is the snapshot of what I've got after splitting the dataset where the reset occurs, and the slope of two set.
![enter image description here](https://i.stack.imgur.com/OEQCw.png)
finding the mean of the two slopes and drawing the line from the mean.
![enter image description here](https://i.stack.imgur.com/i2qv5.png)
Is this OK?
| Linearly increasing data with manual reset | CC BY-SA 3.0 | null | 2014-07-04T05:12:44.707 | 2019-01-01T19:50:02.190 | 2014-08-27T07:15:52.587 | 870 | 870 | [
"machine-learning",
"statistics",
"time-series"
] | I thought this was an interesting problem, so I wrote a sample data set and a linear slope estimator in R. I hope it helps you with your problem. I'm going to make some assumptions, the biggest is that you want to estimate a constant slope, given by some segments in your data. Another assumption to separate the blocks of linear data is that the natural 'reset' will be found by comparing consecutive differences and finding ones that are X-standard deviations below the mean. (I chose 4 sd's, but this can be changed)
Here is a plot of the data, and the code to generating it is at the bottom.
![Sample Data](https://i.stack.imgur.com/2dC1w.png)
For starters, we find the breaks and fit each set of y-values and record the slopes.
```
# Find the differences between adjacent points
diffs = y_data[-1] - y_data[-length(y_data)]
# Find the break points (here I use 4 s.d.'s)
break_points = c(0,which(diffs < (mean(diffs) - 4*sd(diffs))),length(y_data))
# Create the lists of y-values
y_lists = sapply(1:(length(break_points)-1),function(x){
y_data[(break_points[x]+1):(break_points[x+1])]
})
# Create the lists of x-values
x_lists = lapply(y_lists,function(x) 1:length(x))
#Find all the slopes for the lists of points
slopes = unlist(lapply(1:length(y_lists), function(x) lm(y_lists[[x]] ~ x_lists[[x]])$coefficients[2]))
```
Here are the slopes:
(3.309110, 4.419178, 3.292029, 4.531126, 3.675178, 4.294389)
And we can just take the mean to find the expected slope (3.920168).
---
Edit: Predicting when series reaches 120
I realized I didn't finish predicting when the series reaches 120. If we estimate the slope to be m and we see a reset at time t to a value x (x<120), we can predict how much longer it would take to reach 120 by some simple algebra.
![enter image description here](https://i.stack.imgur.com/DixZv.gif)
Here, t is the time it would take to reach 120 after a reset, x is what it resets to, and m is the estimated slope. I'm not going to even touch the subject of units here, but it's good practice to work them out and make sure everything makes sense.
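That algebra presumably amounts to solving $x + m \cdot t = 120$ for $t$; a tiny Python sketch (my own, with made-up numbers) of using the pooled slope for the prediction:
```
# Illustration only (not from the original answer): time to reach the 120 cutoff after a reset.
m = 3.920168          # estimated slope (mean of the per-segment slopes above)
x = 55                # value the sensor reset to (assumed)
t = (120 - x) / m     # solve x + m*t = 120 for t
print(t)              # about 16.6 time steps until the series reaches 120
```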
---
Edit: Creating The Sample Data
The sample data will consist of 100 points, random noise with a slope of 4 (Hopefully we will estimate this). When the y-values reach a cutoff, they reset to 50. The cutoff is randomly chosen between 115 and 120 for each reset. Here is the R code to create the data set.
```
# Create Sample Data
set.seed(1001)
x_data = 1:100 # x-data
y_data = rep(0,length(x_data)) # Initialize y-data
y_data[1] = 50
reset_level = sample(115:120,1) # Select initial cutoff
for (i in x_data[-1]){ # Loop through rest of x-data
if(y_data[i-1]>reset_level){ # check if y-value is above cutoff
y_data[i] = 50 # Reset if it is and
reset_level = sample(115:120,1) # rechoose cutoff
}else {
y_data[i] = y_data[i-1] + 4 + (10*runif(1)-5) # Or just increment y with random noise
}
}
plot(x_data,y_data) # Plot data
```
| Scaling continuous data to discrete range | Update Accordingly: Your question was not clear before, therefore, sorry for the irrelevant solution.
To achieve that, I don't think there is a public library function, however, you can build your solution using some beautiful "ready" functions.
Two solutions come into my mind:
- First one's time complexity is O(N*M)
N is your prediction (list `a` in your case) size, and M is your dictionary (list `b` in your case) size.
```
import numpy as np
def findClosest(dictionary, value):
idx = (np.abs(dictionary - value)).argmin()
return dictionary[idx]
#[findClosest(b, elem) for elem in a]
print([findClosest(b, elem) for elem in a])
```
This just subtracts your prediction value from the values in your dictionary and takes the absolute value of the differences. Then, in the resulting array, it looks for the location of the smallest value.
- Second one's time complexity is O(N*log(M))
N and M denote the same thing as the first solution.
```
from bisect import bisect_left
def findClosestBinary(myList, myNumber):
pos = bisect_left(myList, myNumber)
if pos == 0:
return myList[0]
if pos == len(myList):
return myList[-1]
before = myList[pos - 1]
after = myList[pos]
if after - myNumber < myNumber - before:
return after
else:
return before
#[findClosestBinary(b, elem) for elem in a]
print([findClosestBinary(b, elem) for elem in a])
```
Note: To save time I didn't implement myself but took the `findClosestBinary()` function from [here](https://stackoverflow.com/a/12141511/4799206).
This one is a better algorithmic approach in terms of time complexity. This does the same thing but uses a binary search to efficiently find the closest value in the dictionary. However, it assumes your dictionary (list b) is sorted. Since your dictionary is a predefined list, you can improve the performance by providing a sorted array.
However, if your dictionary that you will map predictions to is not a very big one, then you can just use the first one. In the case of the dictionary being small, these two functions will behave the same in terms of time.
|
677 | 1 | 5834 | null | 7 | 4752 | I'm trying to use the sklearn_pandas module to extend the work I do in pandas and dip a toe into machine learning but I'm struggling with an error I don't really understand how to fix.
I was working through the following dataset on [Kaggle](https://www.kaggle.com/c/data-science-london-scikit-learn/data).
It's essentially an unheadered table (1000 rows, 40 features) with floating point values.
```
import pandas as pd
from sklearn import neighbors
from sklearn_pandas import DataFrameMapper, cross_val_score
path_train ="../kaggle/scikitlearn/train.csv"
path_labels ="../kaggle/scikitlearn/trainLabels.csv"
path_test = "../kaggle/scikitlearn/test.csv"
train = pd.read_csv(path_train, header=None)
labels = pd.read_csv(path_labels, header=None)
test = pd.read_csv(path_test, header=None)
mapper_train = DataFrameMapper([(list(train.columns),neighbors.KNeighborsClassifier(n_neighbors=3))])
mapper_train
```
Output:
```
DataFrameMapper(features=[([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39], KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski',
n_neighbors=3, p=2, weights='uniform'))])
```
So far so good. But then I try the fit
```
mapper_train.fit_transform(train, labels)
```
Output:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-6-e3897d6db1b5> in <module>()
----> 1 mapper_train.fit_transform(train, labels)
//anaconda/lib/python2.7/site-packages/sklearn/base.pyc in fit_transform(self, X, y, **fit_params)
409 else:
410 # fit method of arity 2 (supervised transformation)
--> 411 return self.fit(X, y, **fit_params).transform(X)
412
413
//anaconda/lib/python2.7/site-packages/sklearn_pandas/__init__.pyc in fit(self, X, y)
116 for columns, transformer in self.features:
117 if transformer is not None:
--> 118 transformer.fit(self._get_col_subset(X, columns))
119 return self
120
TypeError: fit() takes exactly 3 arguments (2 given)`
```
What am I doing wrong? While the data in this case is all the same, I'm planning to work up a workflow for mixtures of categorical, nominal and floating point features, and sklearn_pandas seemed to be a logical fit.
| Struggling to integrate sklearn and pandas in simple Kaggle task | CC BY-SA 3.0 | null | 2014-07-05T15:01:43.940 | 2015-05-27T03:16:56.403 | null | null | 974 | [
"python",
"pandas",
"scikit-learn"
] | Here is an example of how to get pandas and sklearn to play nice
say you have 2 columns that are both strings and you wish to vectorize - but you have no idea which vectorization params will result in the best downstream performance.
create the vectorizer
```
to_vect = Pipeline([('vect',CountVectorizer(min_df =1,max_df=.9,ngram_range=(1,2),max_features=1000)),
('tfidf', TfidfTransformer())])
```
create the DataFrameMapper obj.
```
full_mapper = DataFrameMapper([
('col_name1', to_vect),
('col_name2',to_vect)
])
```
this is the full pipeline
```
full_pipeline = Pipeline([('mapper',full_mapper),('clf', SGDClassifier(n_iter=15, warm_start=True))])
```
define the params you want the scan to consider
```
full_params = {'clf__alpha': [1e-2,1e-3,1e-4],
'clf__loss':['modified_huber','hinge'],
'clf__penalty':['l2','l1'],
'mapper__features':[[('col_name1',deepcopy(to_vect)),
('col_name2',deepcopy(to_vect))],
[('col_name1',deepcopy(to_vect).set_params(vect__analyzer= 'char_wb')),
('col_name2',deepcopy(to_vect))]]}
```
That's it! Note however that `mapper__features` is a single item in this dictionary - so use a for loop or itertools.product to generate a FLAT list of all to_vect options you wish to consider - but that is a separate task outside the scope of the question.
Go on to create the optimal classifier or whatever else your pipeline ends with
```
gs_clf = GridSearchCV(full_pipeline, full_params, n_jobs=-1)
```
| Quick start using python and sklearn kmeans? | This has been answered in other places e.g. [here](https://stackoverflow.com/a/14110321).
- You could run Principal Component Analysis (or other dimensionality reduction techniques) and plot the cluster for the first two principal components.
- You could plot the results for two variables at a time.
- You could encode third or fourth variables using standard visualization techniques like color coding, symbols or facetting.
- There are ways to visualize the quality of the fit, e.g. silhouette analysis or the elbow test for determining the number of clusters.
- Have a quick look at this link
|
678 | 1 | 689 | null | 39 | 10907 | When I say "document", I have in mind web pages like Wikipedia articles and news stories. I prefer answers giving either vanilla lexical distance metrics or state-of-the-art semantic distance metrics, with stronger preference for the latter.
| What are some standard ways of computing the distance between documents? | CC BY-SA 3.0 | null | 2014-07-05T16:10:21.580 | 2020-08-06T13:01:07.800 | 2015-11-06T09:00:24.573 | 13727 | 1097 | [
"machine-learning",
"data-mining",
"nlp",
"text-mining",
"similarity"
] | There's a number of different ways of going about this depending on exactly how much semantic information you want to retain and how easy your documents are to tokenize (html documents would probably be pretty difficult to tokenize, but you could conceivably do something with tags and context.)
Some of them have been mentioned by ffriend, and the paragraph vectors by user1133029 is a really solid one, but I just figured I would go into some more depth about plusses and minuses of different approaches.
- Cosine Distance - Tried a true, cosine distance is probably the most common distance metric used generically across multiple domains. With that said, there's very little information in cosine distance that can actually be mapped back to anything semantic, which seems to be non-ideal for this situation.
- Levenshtein Distance - Also known as edit distance, this is usually just used on the individual token level (words, bigrams, etc...). In general I wouldn't recommend this metric as it not only discards any semantic information, but also tends to treat very different word alterations very similarly, but it is an extremely common metric for this kind of thing
- LSA - Is a part of a large arsenal of techniques when it comes to evaluating document similarity called topic modeling. LSA has gone out of fashion pretty recently, and in my experience, it's not quite the strongest topic modeling approach, but it is relatively straightforward to implement and has a few open source implementations
- LDA - Is also a technique used for topic modeling, but it's different from LSA in that it actually learns internal representations that tend to be more smooth and intuitive. In general, the results you get from LDA are better for modeling document similarity than LSA, but not quite as good for learning how to discriminate strongly between topics.
- Pachinko Allocation - Is a really neat extension on top of LDA. In general, this is just a significantly improved version of LDA, with the only downside being that it takes a bit longer to train and open-source implementations are a little harder to come by
- word2vec - Google has been working on a series of techniques for intelligently reducing words and documents to more reasonable vectors than the sparse vectors yielded by techniques such as Count Vectorizers and TF-IDF. Word2vec is great because it has a number of open source implementations. Once you have the vector, any other similarity metric (like cosine distance) can be used on top of it with significantly more efficacy.
- doc2vec - Also known as paragraph vectors, this is the latest and greatest in a series of papers by Google, looking into dense vector representations of documents. The gensim library in python has an implementation of word2vec that is straightforward enough that it can pretty reasonably be leveraged to build doc2vec, but make sure to keep the license in mind if you want to go down this route
Hope that helps, let me know if you've got any questions.
 | How to measure the similarity between two text documents? | In general, there are two ways of finding document-document similarity.
## TF-IDF approach
- Make a text corpus containing all the words of the documents. You have to use tokenisation and stop word removal; the NLTK library provides both.
- Convert the documents into tf-idf vectors.
- Compute the cosine similarity between them (or between them and any new document) as the similarity measure; a small sketch follows below.
You can use libraries like NLTK, scikit-learn or Gensim for the tf-idf implementation. Gensim provides many additional functionalities.
See: [https://www2.cs.duke.edu/courses/spring14/compsci290/assignments/lab02.html](https://www2.cs.duke.edu/courses/spring14/compsci290/assignments/lab02.html)
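A minimal sketch of those three steps with scikit-learn (my own illustration; the toy documents are made up):
```
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["the cat sat on the mat",
        "a cat lay on a mat",
        "stock markets fell sharply today"]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)  # tokenise + tf-idf vectors
print(cosine_similarity(tfidf))   # pairwise document similarity matrix
```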
## Word Embedding
Google's Doc2Vec, which is available in the Gensim library, can be used for document similarity. Additionally, the Doc2Vec model itself can compute the similarity score (no cosine or anything else is needed here). You just need to vectorise the docs by tokenizing them (use NLTK), build a Doc2Vec model using Gensim, and find the similarity using Gensim's built-in methods, such as model.n_similarity for the similarity between two documents.
## Other
Additionally, since your aim is to cluster documents, you can try topic modelling using LDA (Latent Dirichlet Allocation) or LSI (Latent Semantic Indexing).
|
679 | 1 | 699 | null | 8 | 199 | I made a similar question asking about distance between "documents" (Wikipedia articles, news stories, etc.). I made this a separate question because search queries are considerably smaller than documents and are considerably noisier. I hence don't know (and doubt) if the same distance metrics would be used here.
Either vanilla lexical distance metrics or state-of-the-art semantic distance metrics are preferred, with stronger preference for the latter.
| What are some standard ways of computing the distance between individual search queries? | CC BY-SA 3.0 | null | 2014-07-05T16:20:17.963 | 2014-08-07T07:07:03.713 | null | null | 1097 | [
"machine-learning",
"nlp",
"search"
] | From my experience, only some classes of queries can be classified on lexical features (due to the ambiguity of natural language). Instead, you can try to use boolean search results (sites or segments of sites, not documents, without ranking) as features for classification (instead of words). This approach works well for classes where there is a lot of lexical ambiguity in a query but also a lot of good sites relevant to the query (e.g. movies, music, commercial queries and so on).
Also, for offline classification you can do LSI on query-site matrix. See "Introduction to Information Retrieval" book for details.
| What are some standard ways of computing the distance between documents? | There's a number of different ways of going about this depending on exactly how much semantic information you want to retain and how easy your documents are to tokenize (html documents would probably be pretty difficult to tokenize, but you could conceivably do something with tags and context.)
Some of them have been mentioned by ffriend, and the paragraph vectors by user1133029 is a really solid one, but I just figured I would go into some more depth about plusses and minuses of different approaches.
- Cosine Distance - Tried a true, cosine distance is probably the most common distance metric used generically across multiple domains. With that said, there's very little information in cosine distance that can actually be mapped back to anything semantic, which seems to be non-ideal for this situation.
- Levenshtein Distance - Also known as edit distance, this is usually just used on the individual token level (words, bigrams, etc...). In general I wouldn't recommend this metric as it not only discards any semantic information, but also tends to treat very different word alterations very similarly, but it is an extremely common metric for this kind of thing
- LSA - Is a part of a large arsenal of techniques when it comes to evaluating document similarity called topic modeling. LSA has gone out of fashion pretty recently, and in my experience, it's not quite the strongest topic modeling approach, but it is relatively straightforward to implement and has a few open source implementations
- LDA - Is also a technique used for topic modeling, but it's different from LSA in that it actually learns internal representations that tend to be more smooth and intuitive. In general, the results you get from LDA are better for modeling document similarity than LSA, but not quite as good for learning how to discriminate strongly between topics.
- Pachinko Allocation - Is a really neat extension on top of LDA. In general, this is just a significantly improved version of LDA, with the only downside being that it takes a bit longer to train and open-source implementations are a little harder to come by
- word2vec - Google has been working on a series of techniques for intelligently reducing words and documents to more reasonable vectors than the sparse vectors yielded by techniques such as Count Vectorizers and TF-IDF. Word2vec is great because it has a number of open source implementations. Once you have the vector, any other similarity metric (like cosine distance) can be used on top of it with significantly more efficacy.
- doc2vec - Also known as paragraph vectors, this is the latest and greatest in a series of papers by Google, looking into dense vector representations of documents. The gensim library in python has an implementation of word2vec that is straightforward enough that it can pretty reasonably be leveraged to build doc2vec, but make sure to keep the license in mind if you want to go down this route
Hope that helps, let me know if you've got any questions.
|
694 | 1 | 695 | null | 150 | 125671 | I'm using Neural Networks to solve different Machine learning problems. I'm using Python and [pybrain](http://pybrain.org/) but this library is almost discontinued. Are there other good alternatives in Python?
| Best python library for neural networks | CC BY-SA 3.0 | null | 2014-07-07T19:17:04.973 | 2018-12-11T23:39:52.667 | 2017-05-29T17:43:26.890 | 8432 | 989 | [
"machine-learning",
"python",
"neural-network"
] | UPDATE: the landscape has changed quite a bit since I answered this question in July '14, and some new players have entered the space. In particular, I would recommend checking out:
- TensorFlow
- Blocks
- Lasagne
- Keras
- Deepy
- Nolearn
- NeuPy
They each have their strengths and weaknesses, so give them all a go and see which best suits your use case. Although I would have recommended using PyLearn2 a year ago, the community is no longer active so I would recommend looking elsewhere. My original response to the answer is included below but is largely irrelevant at this point.
---
[PyLearn2](http://deeplearning.net/software/pylearn2/) is generally considered the library of choice for neural networks and deep learning in python. It's designed for easy scientific experimentation rather than ease of use, so the learning curve is rather steep, but if you take your time and follow the tutorials I think you'll be happy with the functionality it provides. Everything from standard Multilayer Perceptrons to Restricted Boltzmann Machines to Convolutional Nets to Autoencoders is provided. There's great GPU support and everything is built on top of Theano, so performance is typically quite good. The source for PyLearn2 is available [on github](https://github.com/lisa-lab/pylearn2).
Be aware that PyLearn2 has the opposite problem of PyBrain at the moment -- rather than being abandoned, PyLearn2 is under active development and is subject to frequent changes.
| Best Julia library for neural networks | [Mocha.jl](https://github.com/pluskid/Mocha.jl) - Mocha is a Deep Learning framework for Julia, inspired by the C++ framework Caffe.
Project with good [documentation](http://mochajl.readthedocs.org/en/latest/) and examples.
Can be run on CPU and GPU backend.
|
700 | 1 | 900 | null | 7 | 2560 | I have a set of datapoints from the unit interval (i.e. 1-dimensional dataset with numerical values). I receive some additional datapoints online, and moreover the value of some datapoints might change dynamically. I'm looking for an ideal clustering algorithm which can handle these issues efficiently.
I know [sequential k-means clustering](https://www.cs.princeton.edu/courses/archive/fall08/cos436/Duda/C/sk_means.htm) copes with the addition of new instances, and I suppose with minor modification it can work with dynamic instance values (i.e. first taking the modified instance from the respective cluster, then updating the mean of the cluster and finally giving the modified instance as an input to the algorithm just as the addition of an unseen instance).
My concern with using the k-means algorithm is the requirement of supplying the number of clusters as an input. I know that it beats other clustering algorithms (GAs, MSTs, hierarchical methods etc.) in time & space complexity. Honestly I'm not sure, but maybe I can get away with using one of the aforementioned algorithms. Even though my datasets are relatively large, the existence of a single dimension makes me wonder.
More specifically a typical test case of mine would contain about 10K-200K 1-dimensional datapoints. I would like to complete the clustering preferably under a second. The dynamic changes in the value points are assumed to be smooth, i.e. relatively small. Thus being able to use existing solutions (i.e. being able to continue clustering on the existing one when a value is changed or new one is added) is highly preferred.
So all in all:
>
Can you think of an algorithm which will provide a sweet spot between computational efficiency and the accuracy of clusters wrt. the problem defined above?
Are there some nice heuristics for the k-means algorithm to automatically compute the value of K beforehand?
| Efficient dynamic clustering | CC BY-SA 3.0 | null | 2014-07-08T07:29:34.167 | 2018-05-21T20:48:53.453 | 2020-06-16T11:08:43.077 | -1 | 1386 | [
"machine-learning",
"algorithms",
"clustering",
"k-means",
"hierarchical-data-format"
] | I think hierarchical clustering would be more time efficient in your case (with a single dimension).
Depending on your task, you may implement something like this:
Having N datapoints di with their 1-dimension value xi:
- Sort datapoints based on their xi value.
- Calculate distances between adjacent datapoints (N-1 distances). Each distance must be assigned a pair of original datapoints (di, dj).
- Sort distances in descending order to generate list of datapoint pairs (di, dj), starting from the closest one.
- Iteratively unite datapoints (di, dj) into clusters, starting from beginning of the list (the closest pair). (Depending on current state of di and dj, uniting them means: (a) creating new cluster for two unclustered datapoints, (b) adding a datapoint to existing cluster and (c) uniting two clusters.)
- Stop uniting, if the distance is over some threshold.
- Create singleton clusters for datapoints which did not get into clusters.
This algorithm implements [single linkage](http://en.wikipedia.org/wiki/Single-linkage_clustering) clustering. It can be tuned easily to implement average linkage. [Complete linkage](http://en.wikipedia.org/wiki/Complete_linkage_clustering) will be less efficient, but maybe easier ones will give good results depending on your data and task.
I believe for 200K datapoints it should take under a second, if you use proper data structures for the above operations.
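A minimal sketch of the procedure above (my own illustration): sort the values, then cut wherever the gap between neighbours exceeds the threshold:
```
import numpy as np

def cluster_1d(values, threshold):
    order = np.argsort(values)
    x = np.asarray(values)[order]
    labels = np.empty(len(x), dtype=int)
    labels_sorted = np.concatenate(([0], np.cumsum(np.diff(x) > threshold)))
    labels[order] = labels_sorted          # map cluster ids back to the original order
    return labels

data = np.array([0.10, 0.12, 0.13, 0.45, 0.47, 0.90])
print(cluster_1d(data, threshold=0.1))    # -> [0 0 0 1 1 2]
```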
| Dynamic clustering for text documents | It sounds as if you don't need clustering.
But rather you are trying to detect near duplicates.
The difference is that clustering tries to organize everything with a focus on the larger, overall structure. But much of your data probably isn't duplicate. Clustering is difficult and slow. Near duplicates is much easier, and much faster (e.g., with MinHash or similarity search)
|
701 | 1 | 702 | null | 4 | 1246 | I have generated a dataset of pairwise distances as follows:
```
id_1 id_2 dist_12
id_2 id_3 dist_23
```
I want to cluster this data so as to identify the pattern. I have been looking at Spectral clustering and DBSCAN, but I haven't been able to come to a conclusion and have been ambiguous on how to make use of the existing implementations of these algorithms. I have been looking at Python and Java implementations so far.
Could anyone point me to a tutorial or demo on how to make use of these clustering algorithms to handle the situation in hand?
| Clustering pair-wise distance dataset | CC BY-SA 3.0 | null | 2014-07-08T07:37:57.123 | 2016-03-02T15:50:44.287 | 2016-03-02T15:50:44.287 | 13727 | 645 | [
"data-mining",
"clustering",
"dbscan"
] | In the scikit-learn implementation of Spectral clustering and DBSCAN you do not need to precompute the distances, you should input the sample coordinates for all `id_1` ... `id_n`. Here is a simplification of the [documented example comparison of clustering algorithms](http://scikit-learn.org/stable/auto_examples/cluster/plot_cluster_comparison.html):
```
import numpy as np
from sklearn import cluster
from sklearn.preprocessing import StandardScaler
## Prepare the data
X = np.random.rand(1500, 2)
# When reading from a file of the form: `id_n coord_x coord_y`
# you will need this call instead:
# X = np.loadtxt('coords.csv', usecols=(1, 2))
X = StandardScaler().fit_transform(X)
## Instantiate the algorithms
spectral = cluster.SpectralClustering(n_clusters=2,
eigen_solver='arpack',
affinity="nearest_neighbors")
dbscan = cluster.DBSCAN(eps=.2)
## Use the algorithms
spectral_labels = spectral.fit_predict(X)
dbscan_labels = dbscan.fit_predict(X)
```
 | Clustering based on distance between points | I think for HAC (Hierarchical Agglomerative Clustering) it's always helpful to obtain the linkage matrix first, which can give you some insight into how the clusters are formed iteratively. Besides that, `scipy` also provides a `dendrogram` method for you to visualize the cluster formation, which can help you avoid treating the clustering process as a "black box".
```
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage
# generate the linkage matrix
X = locations_in_RI[['Latitude', 'Longitude']].values
Z = linkage(X,
method='complete', # dissimilarity metric: max distance across all pairs of
# records between two clusters
metric='euclidean'
) # you can peek into the Z matrix to see how clusters are
# merged at each iteration of the algorithm
# calculate full dendrogram and visualize it
plt.figure(figsize=(30, 10))
dendrogram(Z)
plt.show()
# retrive clusters with `max_d`
from scipy.cluster.hierarchy import fcluster
max_d = 25 # I assume that your `Latitude` and `Longitude` columns are both in
# units of miles
clusters = fcluster(Z, max_d, criterion='distance')
```
The `clusters` is an array of cluster ids, which is what you want.
There is a very helpful (yet kinda long) [post](https://joernhees.de/blog/2015/08/26/scipy-hierarchical-clustering-and-dendrogram-tutorial/#Retrieve-the-Clusters) on HAC worth reading.
|
711 | 1 | 712 | null | 71 | 10739 | This question is in response to a comment I saw on another question.
The comment was regarding the Machine Learning course syllabus on Coursera, and along the lines of "SVMs are not used so much nowadays".
I have only just finished the relevant lectures myself, and my understanding of SVMs is that they are a robust and efficient learning algorithm for classification, and that when using a kernel, they have a "niche" covering number of features perhaps 10 to 1000 and number of training samples perhaps 100 to 10,000. The limit on training samples is because the core algorithm revolves around optimising results generated from a square matrix with dimensions based on number of training samples, not number of original features.
So does the comment I saw refer some real change since the course was made, and if so, what is that change: A new algorithm that covers SVM's "sweet spot" just as well, better CPUs meaning SVM's computational advantages are not worth as much? Or is it perhaps opinion or personal experience of the commenter?
I tried a search for e.g. "are support vector machines out of fashion" and found nothing to imply they were being dropped in favour of anything else.
And Wikipedia has this: [http://en.wikipedia.org/wiki/Support_vector_machine#Issues](http://en.wikipedia.org/wiki/Support_vector_machine#Issues) . . . the main sticking point appears to be difficulty of interpreting the model. Which makes SVM fine for a black-box predicting engine, but not so good for generating insights. I don't see that as a major issue, just another minor thing to take into account when picking the right tool for the job (along with nature of the training data and learning task etc).
| Are Support Vector Machines still considered "state of the art" in their niche? | CC BY-SA 3.0 | null | 2014-07-09T12:22:22.400 | 2020-08-17T07:11:34.777 | 2020-08-17T07:11:34.777 | 836 | 836 | [
"data-mining",
"svm",
"state-of-the-art"
] | SVM is a powerful classifier. It has some nice advantages (which I guess were responsible for its popularity)... These are:
- Efficiency: Only the support vectors play a role in determining the classification boundary. All other points from the training set needn't be stored in memory.
- The so-called power of kernels: With appropriate kernels you can transform feature space into a higher dimension so that it becomes linearly separable. The notion of kernels work with arbitrary objects on which you can define some notion of similarity with the help of inner products... and hence SVMs can classify arbitrary objects such as trees, graphs etc.
There are some significant disadvantages as well.
- Parameter sensitivity: The performance is highly sensitive to the choice of the regularization parameter C, which allows some variance in the model.
- Extra parameter for the Gaussian kernel: The radius of the Gaussian kernel can have a significant impact on classifier accuracy. Typically a grid search has to be conducted to find optimal parameters; LibSVM has support for grid search.
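A minimal sketch of such a grid search (my own illustration, using scikit-learn rather than LibSVM directly):
```
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
param_grid = {"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1, 1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5).fit(X, y)
print(search.best_params_, search.best_score_)
```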
SVMs generally belong to the class of "Sparse Kernel Machines". The sparse vectors in the case of SVM are the support vectors which are chosen from the maximum margin criterion. Other sparse vector machines such as the Relevance Vector Machine (RVM) perform better than SVM. The following figure shows a comparative performance of the two. In the figure, the x-axis shows one dimensional data from two classes y={0,1}. The mixture model is defined as P(x|y=0)=Unif(0,1) and P(x|y=1)=Unif(.5,1.5) (Unif denotes uniform distribution). 1000 points were sampled from this mixture and an SVM and an RVM were used to estimate the posterior. The problem of SVM is that the predicted values are far off from the true log odds.
![RVM vs. SVM](https://i.stack.imgur.com/zNYbt.png)
A very effective classifier, which is very popular nowadays, is the Random Forest. The main advantages are:
- Only one parameter to tune (i.e. the number of trees in the forest)
- Not utterly parameter sensitive
- Can easily be extended to multiple classes
- Is based on probabilistic principles (maximizing mutual information gain with the help of decision trees)
| Mathematical formulation of Support Vector Machines? | Your understandings are right.
>
deriving the margin to be $\frac{2}{|w|}$
we know that $w \cdot x +b = 1$
If we move from point z in $w \cdot x +b = 1$ to the $w \cdot x +b = 0$ we land in a point $\lambda$. This line that we have passed or this margin between the two lines $w \cdot x +b = 1$ and $w \cdot x +b = 0$ is the margin between them which we call $\gamma$
For calculating the margin, we know that we have moved from z, in opposite direction of w to point $\lambda$. Hence this margin $\gamma$ would be equal to $z - margin \cdot \frac{w}{|w|} = z - \gamma \cdot \frac{w}{|w|} =$ (we have moved in the opposite direction of w, we just want the direction so we normalize w to be a unit vector $\frac{w}{|w|}$)
Since this $\lambda$ point lies in the decision boundary we know that it should suit in line $w \cdot x + b = 0$
Hence we substitute it into this line in place of x:
$$w \cdot x + b = 0$$
$$w \cdot (z - \gamma \cdot \frac{w}{|w|}) + b = 0$$
$$w \cdot z + b - w \cdot \gamma \cdot \frac{w}{|w|} = 0$$
$$w \cdot z + b = w \cdot \gamma \cdot \frac{w}{|w|}$$
we know that $w \cdot z +b = 1$ (z is the point on $w \cdot x +b = 1)$
$$1 = w \cdot \gamma \cdot \frac{w}{|w|}$$
$$\gamma= \frac{|w|}{w \cdot w} $$
we also know that $w \cdot w = |w|^2$, hence:
$$\gamma= \frac{1}{|w|}$$
Why is in your formula 2 instead of 1? because I have calculated the margin between the middle line and the upper, not the whole margin.
>
How can $y_i(w^Tx+b)\ge1\;\;\forall\;x_i$?
We want to classify the points in the +1 part as +1 and the points in the -1 part as -1. Since $(w^Tx_i+b)$ is the predicted value and $y_i$ is the actual value for each point, if a point is classified correctly then the predicted and actual values have the same sign, so their product $y_i(w^Tx+b)$ should be positive (the term >= 0 is substituted by >= 1 because it is a stronger condition).
The transpose is there in order to be able to calculate the dot product. I just wanted to show the logic of the dot product, hence I didn't write the transpose.
---
For calculating the total distance between lines $w \cdot x + b = -1$ and $w \cdot x + b = 1$:
Either you can multiply the calculated margin by 2 Or if you want to directly find it, you can consider a point $\alpha$ in line $w \cdot x + b = -1$. then we know that the distance between these two lines is twice the value of $\gamma$, hence if we want to move from the point z to $\alpha$, the total margin (passed length) would be:
$$z - 2 \cdot \gamma \cdot \frac{w}{|w|}$$ then we can calculate the margin from here.
derived from ML course of UCSD by Prof. Sanjoy Dasgupta
|
716 | 1 | 718 | null | 22 | 6961 | I know that there is no a clear answer for this question, but let's suppose that I have a huge neural network, with a lot of data and I want to add a new feature in input. The "best" way would be to test the network with the new feature and see the results, but is there a method to test if the feature IS UNLIKELY helpful? Like [correlation measures](http://www3.nd.edu/%7Emclark19/learn/CorrelationComparison.pdf) etc?
| How to choose the features for a neural network? | CC BY-SA 4.0 | null | 2014-07-10T10:07:13.523 | 2021-03-11T19:26:46.897 | 2020-08-05T11:06:38.627 | 98307 | 989 | [
"machine-learning",
"neural-network",
"feature-selection",
"feature-extraction"
] | A very strong correlation between the new feature and an existing feature is a fairly good sign that the new feature provides little new information. A low correlation between the new feature and existing features is likely preferable.
A strong linear correlation between the new feature and the predicted variable is a good sign that the new feature will be valuable, but the absence of a high correlation is not necessarily a sign of a poor feature, because neural networks are not restricted to linear combinations of variables.
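A minimal sketch of these two checks (my own illustration) using plain correlation coefficients; `X` is the existing feature matrix, `new` the candidate feature and `y` the target:
```
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                       # existing features
new = 0.8 * X[:, 0] + 0.2 * rng.normal(size=500)    # candidate feature (here: nearly redundant)
y = X @ rng.normal(size=5) + rng.normal(size=500)   # target

corr_with_existing = [np.corrcoef(new, X[:, j])[0, 1] for j in range(X.shape[1])]
print("max |corr| with existing features:", np.max(np.abs(corr_with_existing)))
print("corr with target:", np.corrcoef(new, y)[0, 1])
```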
If the new feature was manually constructed from a combination of existing features, consider leaving it out. The beauty of neural networks is that little feature engineering and preprocessing is required -- features are instead learned by intermediate layers. Whenever possible, prefer learning features to engineering them.
| How to determine feature importance in a neural network? | Don't remove a feature to find out its importance, but instead randomize or shuffle it.
Run the training 10 times, randomize a different feature column each time and then compare the performance. There is no need to tune hyper-parameters when done this way.
Here's the theory behind my suggestion: [feature importance](https://christophm.github.io/interpretable-ml-book/feature-importance.html)
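A minimal sketch of the shuffling idea (my own illustration, using a random forest as a stand-in for the network, and the common variant that shuffles a validation column without retraining):
```
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
baseline = model.score(X_te, y_te)

rng = np.random.default_rng(0)
for j in range(X.shape[1]):
    X_shuffled = X_te.copy()
    rng.shuffle(X_shuffled[:, j])          # destroy the information in feature j only
    print(f"feature {j}: importance ~ {baseline - model.score(X_shuffled, y_te):.3f}")
```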
|
717 | 1 | 723 | null | 6 | 101 | I'm using an experimental design to test the robustness of different classification methods, and now I'm searching for the correct definition of such design.
I'm creating different subsets of the full dataset by cutting away some samples. Each subset is created independently with respect to the others. Then, I run each classification method on every subset. Finally, I estimate the accuracy of each method as how many classifications on subsets are in agreement with the classification on the full dataset. For example:
```
Classification-full 1 2 3 2 1 1 2
Classification-subset1 1 2 2 3 1
Classification-subset2 2 3 1 1 2
...
Accuracy 1 1 1 1 0.5 1 1
```
Is there a correct name to this methodology? I thought it can fall under [bootstrapping](http://en.wikipedia.org/wiki/Bootstrapping_(statistics)) but I'm not sure about this.
| How to define a custom resampling methodology | CC BY-SA 3.0 | null | 2014-07-10T11:55:49.637 | 2014-07-11T11:32:13.550 | 2014-07-10T18:04:59.523 | 84 | 133 | [
"classification",
"definitions",
"accuracy",
"sampling"
] | Random subsampling seems appropriate, bootstrapping is a bit more generic, but also correct.
Here are some references and synonyms: [http://www.frank-dieterle.com/phd/2_4_3.html](http://www.frank-dieterle.com/phd/2_4_3.html)
 | How to perform upsampling (and NOT interpolation) process theoretically modelled? | If I understood your question right, you want a mathematical expression for the I/O (input/output) relationship of a signal expander (the name of the block that expands, i.e. upsamples, an input signal $x[n]$ without interpolation filtering).
Below is a block diagram of signal expander by a factor of $L$:
$$ x[n] \longrightarrow \boxed{ \uparrow L } \longrightarrow x_e[n] \tag{1}$$
An expression for the expanded sequence $x_e[n]$ can be written as :
$$ x_e[n] = \begin{cases} { x[\frac{n}{L}] ~~~,~~~ n=m\cdot L ~~~,~~~ m=...,-1,0,1,... \\ ~~~ 0 ~~~~~~,~~ ~ \text{otherwise} } \end{cases} \tag{2} $$
An identical expression is also the following :
$$x_e[n] = \sum_{k=-\infty}^{\infty} x[k] \delta[n-L\cdot k] \tag{3}$$
where $\delta[n]$ is a unit sample (discrete-time impulse).
With $L=3$, an input $x[n]=[1,2,3,4]$ becomes output $x_e[n] =[1,0,0,2,0,0,3,0,0,4,0,0]$.
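A small NumPy sketch (my own addition) of equations (2)/(3), reproducing the $L=3$ example:
```
import numpy as np

def expand(x, L):
    xe = np.zeros(len(x) * L, dtype=x.dtype)
    xe[::L] = x          # keep x[n/L] at multiples of L, zeros elsewhere
    return xe

x = np.array([1, 2, 3, 4])
print(expand(x, 3))      # -> [1 0 0 2 0 0 3 0 0 4 0 0]
```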
|
730 | 1 | 1065 | null | 13 | 1888 | As far as I know the development of algorithms to solve the Frequent Pattern Mining (FPM) problem, the road of improvements have some main checkpoints. Firstly, the [Apriori](http://en.wikipedia.org/wiki/Apriori_algorithm) algorithm was proposed in 1993, by [Agrawal et al.](http://dl.acm.org/citation.cfm?id=170072), along with the formalization of the problem. The algorithm was able to strip-off some sets from the `2^n - 1` sets (powerset) by using a lattice to maintain the data. A drawback of the approach was the need to re-read the database to compute the frequency of each set expanded.
Later, on year 1997, [Zaki et al.](http://www.computer.org/csdl/trans/tk/2000/03/k0372-abs.html) proposed the algorithm [Eclat](http://en.wikibooks.org/wiki/Data_Mining_Algorithms_In_R/Frequent_Pattern_Mining/The_Eclat_Algorithm), which inserted the resulting frequency of each set inside the lattice. This was done by adding, at each node of the lattice, the set of transaction-ids that had the items from root to the referred node. The main contribution is that one does not have to re-read the entire dataset to know the frequency of each set, but the memory required to keep such data structure built may exceed the size of the dataset itself.
In 2000, [Han et al.](http://dl.acm.org/citation.cfm?doid=335191.335372) proposed an algorithm named [FPGrowth](http://en.wikibooks.org/wiki/Data_Mining_Algorithms_In_R/Frequent_Pattern_Mining/The_FP-Growth_Algorithm), along with a prefix-tree data structure named FPTree. The algorithm was able to provide significant data compression, while also granting that only frequent itemsets would be yielded (without candidate itemset generation). This was done mainly by sorting the items of each transaction in decreasing order, so that the most frequent items are the ones with the least repetitions in the tree data structure. Since the frequency only descends while traversing the tree in-depth, the algorithm is able to strip-off non-frequent itemsets.
Edit:
As far as I know, this may be considered a state-of-the-art algorithm, but I'd like to know about other proposed solutions. What other algorithms for FPM are considered "state-of-the-art"? What is the intuition/main-contribution of such algorithms?
Is the FPGrowth algorithm still considered "state of the art" in frequent pattern mining? If not, what algorithm(s) may extract frequent itemsets from large datasets more efficiently?
| Is FPGrowth still considered "state of the art" in frequent pattern mining? | CC BY-SA 3.0 | null | 2014-07-12T17:25:52.907 | 2014-08-30T18:36:07.490 | 2014-07-13T03:05:46.660 | 84 | 84 | [
"bigdata",
"data-mining",
"efficiency",
"state-of-the-art"
] | State of the art as in: used in practice or worked on in theory?
APRIORI is used everywhere, except in developing new frequent itemset algorithms. It's easy to implement, and easy to reuse in very different domains. You'll find hundreds of APRIORI implementations of varying quality. And it's easy to get APRIORI wrong, actually.
FPgrowth is much harder to implement, but also much more interesting. So from an academic point of view, everybody tries to improve FPgrowth - getting work based on APRIORI accepted will be very hard by now.
If you have a good implementation, every algorithm has its good and its bad situations in my opinion. A good APRIORI implementation will only need to scan the database k times to find all frequent itemsets of length k. In particular, if your data fits into main memory this is cheap. What can kill APRIORI is too many frequent 2-itemsets (in particular when you don't use a Trie and similar acceleration techniques, etc.). It works best on large data with a low number of frequent itemsets.
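To make the "scan the database k times" point concrete, here is a deliberately naive level-wise Apriori sketch in Python (an illustration of the idea only, not one of the optimized implementations discussed here):
```
from itertools import combinations

def apriori(transactions, min_support):
    """Level-wise Apriori: one pass over the data per candidate length k."""
    transactions = [frozenset(t) for t in transactions]
    n = len(transactions)
    # Pass 1: frequent single items
    counts = {}
    for t in transactions:
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    frequent = {s: c for s, c in counts.items() if c / n >= min_support}
    result = dict(frequent)
    k = 2
    while frequent:
        # Candidate generation with the Apriori pruning rule:
        # keep only candidates whose (k-1)-subsets are all frequent.
        items = sorted({i for s in frequent for i in s})
        candidates = [frozenset(c) for c in combinations(items, k)
                      if all(frozenset(sub) in frequent for sub in combinations(c, k - 1))]
        # Pass k: a single scan counts all candidates of length k
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        frequent = {s: cnt for s, cnt in counts.items() if cnt / n >= min_support}
        result.update(frequent)
        k += 1
    return result

transactions = [["a", "b", "c"], ["a", "b"], ["a", "c"], ["b", "c"], ["a", "b", "c"]]
print(apriori(transactions, min_support=0.6))
```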
Eclat works on columns, but it needs to read each column much more often. There is some work on diffsets to reduce this work. If your data does not fit into main memory, Eclat probably suffers more than Apriori. By going depth first, it will also be able to return a first interesting result much earlier than Apriori, and you can use these results to adjust parameters; so you need fewer iterations to find good parameters. But by design, it cannot exploit pruning as neatly as Apriori did.
FPGrowth compresses the data set into the tree. This works best when you have lots of duplicate records. You could probably reap quite some gains for Apriori and Eclat too if you can presort your data and merge duplicates into weighted vectors. FPGrowth does this at an extreme level. The drawback is that the implementation is much harder; and once this tree does not fit into memory anymore, it becomes a mess to handle.
As for performance results and benchmarks - don't trust them. There are so many things to implement incorrectly. Try 10 different implementations, and you get 10 very different performance results. In particular for APRIORI, I have the impression that most implementations are broken in the sense of missing some of the main contributions of APRIORI... and of those that have these parts right, the quality of optimizations varies a lot.
There are actually even papers on how to implement these algorithms efficiently:
>
Efficient Implementations of Apriori and Eclat. Christian Borgelt. Workshop of Frequent Item Set Mining Implementations (FIMI 2003, Melbourne, FL, USA).
You may also want to read these surveys on this domain:
-
Goethals, Bart. "Survey on frequent pattern mining." Univ. of Helsinki (2003).
-
Ferenc Bodon, A Survey on Frequent Itemset Mining, Technical Report, Budapest University of Technology and Economics, 2006.
-
Frequent Item Set Mining. Christian Borgelt. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 2(6):437-456. 2012.
| Is sequential pattern mining possible with machine learning? | [Seq2Pat: Sequence-to-Pattern Generation Library](https://github.com/fidelity/seq2pat) might be relevant to your case.
The library is written in Cython to take advantage of a fast C++ backend with a high-level Python interface. It supports constraint-based frequent sequential pattern mining.
Here is an example that shows how to mine a sequence database while respecting an average constraint for the prices of the patterns found.
```
# Example to show how to find frequent sequential patterns
# from a given sequence database subject to constraints
from sequential.seq2pat import Seq2Pat, Attribute
# Seq2Pat over 3 sequences
seq2pat = Seq2Pat(sequences=[["A", "A", "B", "A", "D"],
                             ["C", "B", "A"],
                             ["C", "A", "C", "D"]])
# Price attribute corresponding to each item
price = Attribute(values=[[5, 5, 3, 8, 2],
                          [1, 3, 3],
                          [4, 5, 2, 1]])
# Average price constraint
seq2pat.add_constraint(3 <= price.average() <= 4)
# Patterns that occur at least twice (A-D)
patterns = seq2pat.get_patterns(min_frequency=2)
```
Notice that sequences can be of different lengths, and you can add/drop other Attributes and Constraints. The sequences can be any string, as in the example, or integers.
The underlying algorithm uses [Multi-valued Decision Diagrams](http://www.andrew.cmu.edu/user/vanhoeve/mdd/), and in particular, the [state-of-the-art algorithm from AAAI 2019](https://ojs.aaai.org//index.php/AAAI/article/view/3962).
Another source that might be relevant is [SPMF](http://www.philippe-fournier-viger.com/spmf/)
Hope this helps!
Disclaimer: I am a member of the research collaboration between Fidelity & CMU on the [Seq2Pat Library](https://github.com/fidelity/seq2pat).
|
731 | 1 | 732 | null | 59 | 26154 | When I started with artificial neural networks (NN) I thought I'd have to fight overfitting as the main problem. But in practice I can't even get my NN to pass the 20% error rate barrier. I can't even beat my score on random forest!
I'm seeking some very general or not so general advice on what should one do to make a NN start capturing trends in data.
For implementing the NN I use a Theano Stacked Auto Encoder with [the code from the tutorial](https://github.com/lisa-lab/DeepLearningTutorials/blob/master/code/SdA.py) that works great (less than 5% error rate) for classifying the MNIST dataset. It is a multilayer perceptron with a softmax layer on top, with each hidden layer being pre-trained as an autoencoder (fully described in the [tutorial](http://deeplearning.net/tutorial/deeplearning.pdf), chapter 8). There are ~50 input features and ~10 output classes. The NN has sigmoid neurons and all data are normalized to [0,1]. I tried lots of different configurations: number of hidden layers and neurons in them (100->100->100, 60->60->60, 60->30->15, etc.), different learning and pre-train rates, etc.
And the best thing I can get is a 20% error rate on the validation set and a 40% error rate on the test set.
On the other hand, when I try to use Random Forest (from scikit-learn) I easily get a 12% error rate on the validation set and 25%(!) on the test set.
How can it be that my deep NN with pre-training behaves so badly? What should I try?
| How to fight underfitting in a deep neural net | CC BY-SA 3.0 | null | 2014-07-13T09:04:39.703 | 2017-05-01T07:46:22.633 | 2017-05-01T07:46:22.633 | 8820 | 2471 | [
"neural-network",
"deep-learning"
] | The problem with deep networks is that they have lots of hyperparameters to tune and a very small solution space. Thus, finding good ones is more of an art than an engineering task. I would start with the working example from the tutorial and play around with its parameters to see how the results change - this gives good intuition (though not a formal explanation) about the dependencies between parameters and results (both final and intermediate).
I also found the following papers very useful:
- Visually Debugging Restricted Boltzmann Machine Training with a 3D Example
- A Practical Guide to Training Restricted Boltzmann Machines
They both describe RBMs, but contain some insights on deep networks in general. For example, one of the key points is that networks need to be debugged layer-wise - if the previous layer doesn't provide a good representation of the features, further layers have almost no chance to fix it.
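A minimal sketch of that layer-wise debugging idea (written with Keras purely as an illustration, since your code uses Theano; the layer sizes, activations and the random stand-in data are assumptions): pre-train each layer as an autoencoder and check its reconstruction error before stacking the next one.
```
# Illustration only: pre-train one layer at a time and inspect reconstruction error.
import numpy as np
from tensorflow import keras

def pretrain_layer(inputs, n_hidden, epochs=20):
    """Train a one-hidden-layer autoencoder and report how well it reconstructs."""
    n_in = inputs.shape[1]
    encoder = keras.layers.Dense(n_hidden, activation="sigmoid")
    decoder = keras.layers.Dense(n_in, activation="sigmoid")
    ae = keras.Sequential([keras.Input(shape=(n_in,)), encoder, decoder])
    ae.compile(optimizer="adam", loss="mse")
    ae.fit(inputs, inputs, epochs=epochs, verbose=0)
    print("reconstruction MSE:", ae.evaluate(inputs, inputs, verbose=0))
    # The encoder's output becomes the input of the next layer to pre-train
    return encoder(inputs).numpy()

X = np.random.rand(1000, 50).astype("float32")  # stand-in for the real ~50-feature data
h1 = pretrain_layer(X, 60)   # debug layer 1 first
h2 = pretrain_layer(h1, 30)  # move on only once layer 1 reconstructs reasonably well
```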
 | Neural Networks overfitting | Neural networks basically act as a high-memory machine learning algorithm. So for a given dataset the chance of the model aligning almost perfectly with all the training data is high, as it most likely just ends up memorizing every data point you give it.
Overfitting occurs precisely because of this: when a new, more extensive data set is introduced, there is no way it can adjust its fit to the new data; graphically, it ends up missing more of the values than it is supposed to fit.
In conclusion, it does not work well in the case where the scoring population is significantly different from the training sample.
|
736 | 1 | 740 | null | 10 | 3533 | I have a dataset which contains ~100,000 samples of 50 classes. I have been using SVM with an RBF kernel to train and predict new data. The problem though is the dataset is skewed towards different classes.
For example, Class 1 - 30 (~3% each), Class 31 - 45 (~0.6% each), Class 46 - 50 (~0.2% each)
I see that the model tends to very rarely predict the classes which occur less frequently in the training set, even though the test set has the same class distribution as the training set.
I am aware that there are techniques such as 'undersampling' where the majority class is scaled down to the size of the minority class. However, is this applicable here where there are so many different classes? Are there other methods to help handle this case?
| Skewed multi-class data | CC BY-SA 3.0 | null | 2014-07-14T13:53:28.437 | 2015-10-01T20:31:46.680 | null | null | 802 | [
"classification",
"svm"
] | I would suggest using libsvm, which already has adjustable class weights implemented. Rather than replicating the training samples, one modifies the C parameter for the different classes in the SVM optimization. For example, if your data has 2 classes and the first class is only 10% of the data, you would choose class weights of 10 and 1 for classes 1 and 2 respectively. Therefore, margin violations of the first class would cost 10 times more than margin violations for the second class, and per-class accuracies would be more balanced.
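A short sketch of the same idea with scikit-learn's libsvm wrapper (my addition for illustration; `X_train`/`y_train` stand for your data), using the `class_weight` argument to set per-class C weights:
```
# class_weight="balanced" makes margin violations of rare classes cost more
# (weights inversely proportional to class frequencies); an explicit dict of
# class label -> weight also works if you prefer manual control.
from sklearn.svm import SVC

clf = SVC(kernel="rbf", class_weight="balanced")
# clf.fit(X_train, y_train)
# predictions = clf.predict(X_test)
```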
 | Can this be a case of multi-class skewness? | Reporting the accuracy alone does not mean anything in classification problems. The first thing you must do is to calculate your baseline, that is, what is the percentage of your majority class? For the bar plot above it is difficult to measure; can you tell us the percentages, instead of counts? In this way we can assess your results better.
Also, have you plotted a confusion matrix? In this way you can see where your model is getting things wrong most often and try to infer why this is happening.
And yes, since you have too many classes to predict and most of them have low representativeness, this will be difficult to overcome. Maybe you can try things such as oversampling and undersampling techniques considering a one-vs-all approach. This is just an idea; I haven't yet encountered a problem with so many classes to predict.
|
744 | 1 | 773 | null | 80 | 62979 | It looks like the cosine similarity of two features is just their dot product scaled by the product of their magnitudes. When does cosine similarity make a better distance metric than the dot product? I.e. do the dot product and cosine similarity have different strengths or weaknesses in different situations?
| Cosine similarity versus dot product as distance metrics | CC BY-SA 3.0 | null | 2014-07-15T21:30:11.600 | 2020-09-04T15:43:15.887 | null | null | 2507 | [
"classification"
] | Think geometrically. Cosine similarity only cares about angle difference, while dot product cares about angle and magnitude. If you normalize your data to have the same magnitude, the two are indistinguishable. Sometimes it is desirable to ignore the magnitude, hence cosine similarity is nice, but if magnitude plays a role, dot product would be better as a similarity measure. Note that neither of them is a "distance metric".
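A small numerical sketch of that point (illustrative vectors only):
```
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 4.0, 6.0])   # same direction as a, twice the magnitude

dot = a @ b                                              # 28.0 -> sensitive to magnitude
cos = dot / (np.linalg.norm(a) * np.linalg.norm(b))      # ~1.0 -> angle only

# After normalizing to unit length, dot product and cosine similarity coincide
a_u, b_u = a / np.linalg.norm(a), b / np.linalg.norm(b)
print(dot, cos, a_u @ b_u)
```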
 | Correctly interpreting Cosine Angular Distance Similarity & Euclidean Distance Similarity | If you look at the definitions of the two distances, cosine distance is the normalized dot product of the two vectors and Euclidean is the square root of the sum of the squared elements of the difference vector.
The cosine distance between M and J is smaller than between M and G because the normalization factor of M's vector still includes the numbers for which J didn't have any ratings. Even if you make J's vector more similar, like you did, the remaining numbers of M (2 and 5) get you the number you get.
The number for M and G is this high because they both have non-zeroes for all the books. Even though they seem quite different, the normalization factors in the cosine are more "neutralized" by the non-zeroes for corresponding entries in the dot product. Maths don't lie.
The books J didn't rate will be ignored if you make their numbers zero in the computation of the normalization factor for M. Maybe the fault in your thinking is that the books J didn't rate should be 0 while they shouldn't be any number.
Finally, for recommendation systems, I would like to refer to matrix factorization.
|
750 | 1 | 769 | null | 22 | 12101 | I am using the OpenCV letter_recog.cpp example to experiment with random trees and other classifiers. This example has implementations of six classifiers - random trees, boosting, MLP, kNN, naive Bayes and SVM. The UCI letter recognition dataset with 20000 instances and 16 features is used, which I split in half for training and testing. I have experience with SVM so I quickly got its recognition error down to 3.3%. After some experimentation what I got was:
UCI letter recognition:
- RTrees - 5.3%
- Boost - 13%
- MLP - 7.9%
- kNN(k=3) - 6.5%
- Bayes - 11.5%
- SVM - 3.3%
Parameters used:
- RTrees - max_num_of_trees_in_the_forrest=200, max_depth=20,
min_sample_count=1
- Boost - boost_type=REAL, weak_count=200, weight_trim_rate=0.95,
max_depth=7
- MLP - method=BACKPROP, param=0.001, max_iter=300 (default values - too
slow to experiment)
- kNN(k=3) - k=3
- Bayes - none
- SVM - RBF kernel, C=10, gamma=0.01
After that I used the same parameters and tested on the Digits and MNIST datasets by extracting gradient features first (vector size 200 elements):
Digits:
- RTrees - 5.1%
- Boost - 23.4%
- MLP - 4.3%
- kNN(k=3) - 7.3%
- Bayes - 17.7%
- SVM - 4.2%
MNIST:
- RTrees - 1.4%
- Boost - out of memory
- MLP - 1.0%
- kNN(k=3) - 1.2%
- Bayes - 34.33%
- SVM - 0.6%
I am new to all classifiers except SVM and kNN; for these two I can say the results seem fine. What about the others? I expected more from random trees, and on MNIST kNN gives better accuracy - any ideas how to get it higher? Boost and Bayes give very low accuracy. In the end I'd like to use these classifiers to build a multiple classifier system. Any advice?
| How to increase accuracy of classifiers? | CC BY-SA 3.0 | null | 2014-07-16T09:49:15.933 | 2020-08-06T09:38:11.873 | 2014-07-16T15:09:44.907 | 1387 | 1387 | [
"machine-learning",
"classification",
"svm",
"accuracy",
"random-forest"
] | Dimensionality Reduction
Another important procedure is to compare the error rates on the training and test datasets to see if you are overfitting (due to the "curse of dimensionality"). E.g., if your error rate on the test dataset is much larger than the error on the training dataset, this would be one indicator.
In this case, you could try dimensionality reduction techniques, such as PCA or LDA.
If you are interested, I have written about PCA, LDA and some other techniques [here](http://sebastianraschka.com/index.html#machine_learning) and in my GitHub repo [here](https://github.com/rasbt/pattern_classification).
Cross validation
Also, you may want to take a look at cross-validation techniques in order to evaluate the performance of your classifiers in a more objective manner.
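If you are willing to prototype in Python/scikit-learn (an assumption on my part, since your setup is OpenCV/C++), a rough sketch combining both suggestions could look like this:
```
# PCA for dimensionality reduction + 5-fold cross-validation of an RBF SVM;
# the number of components and the SVM parameters are placeholders to tune.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
pipe = make_pipeline(StandardScaler(), PCA(n_components=30), SVC(kernel="rbf", C=10))
scores = cross_val_score(pipe, X, y, cv=5)
print(scores.mean(), scores.std())
```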
| Increase accuracy of classification problem | I'm not familiar with scikit but I'm assuming that TfidfVectorizer represents bag of words features right? By this I mean that it treats all the instructions in an instance as a set, i.e. doesn't take into account their sequential order.
I'm also not familiar with compilers but I'm guessing that the order of the instructions could be a relevant indication? I.e. a compiler may generate particular sequences of instructions.
Based on these remarks I would try to represent instances with [n-grams](https://en.wikipedia.org/wiki/N-gram) of instructions rather than individual instructions. Then you can still use some kind of bag-of-ngrams representation, possibly with TFIDF, but I would start with simple binary or frequency features. A simple feature selection step with something like information gain might be useful.
[edit] N-grams take order into account locally. In a bag of words model, words (or instructions in your case) are considered individually of each other: for instance the sequence `push, push, mov` is the same as `push, mov, push`. With bigrams this sequence would be represented as `(push,push), (push,mov)` whereas the second one is `(push,mov), (mov,push)`. This means two things:
- Higher level of detail about the instance, which can help the model capture the relevant indications
 - More features, so a higher risk of overfitting (the model taking some random details as indication, which leads to errors on the test set).
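A minimal sketch of that bag-of-ngrams representation with scikit-learn (assuming each instance can be given as a whitespace-separated string of instructions):
```
from sklearn.feature_extraction.text import CountVectorizer

docs = ["push push mov add", "push mov push add"]        # toy instruction sequences
vec = CountVectorizer(ngram_range=(1, 2), binary=True)   # unigrams + bigrams, binary features
X = vec.fit_transform(docs)
print(sorted(vec.vocabulary_))   # the two sequences now yield different bigram features
```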
|
758 | 1 | 759 | null | 63 | 10063 | I am working on a data science project using Python.
The project has several stages.
Each stage comprises taking a data set, using Python scripts, auxiliary data, configuration and parameters, and creating another data set.
I store the code in git, so that part is covered.
I would like to hear about:
- Tools for data version control.
- Tools enabling to reproduce stages and experiments.
- Protocol and suggested directory structure for such a project.
- Automated build/run tools.
| Tools and protocol for reproducible data science using Python | CC BY-SA 3.0 | null | 2014-07-16T20:09:08.640 | 2018-10-18T03:48:09.637 | 2015-08-21T16:37:14.087 | 4647 | 895 | [
"python",
"tools",
"version-control"
] | The topic of reproducible research (RR) is very popular today and, consequently, is huge, but I hope that my answer will be comprehensive enough as an answer and will provide enough information for further research, should you decide to do so.
While Python-specific tools for RR certainly exist out there, I think it makes more sense to focus on more universal tools (you never know for sure what programming languages and computing environments you will be working with in the future). Having said that, let's take a look what tools are available per your list.
1) Tools for data version control. Unless you plan to work with (very) big data, I guess, it would make sense to use the same `git`, which you use for source code version control. The infrastructure is already there. Even if your files are binary and big, this advice might be helpful: [https://stackoverflow.com/questions/540535/managing-large-binary-files-with-git](https://stackoverflow.com/questions/540535/managing-large-binary-files-with-git).
2) Tools for managing RR workflows and experiments. Here's a list of most popular tools in this category, to the best of my knowledge (in the descending order of popularity):
- Taverna Workflow Management System (http://www.taverna.org.uk) - very solid, if a little too complex, set of tools. The major tool is a Java-based desktop software. However, it is compatible with online workflow repository portal myExperiment (http://www.myexperiment.org), where user can store and share their RR workflows. Web-based RR portal, fully compatible with Taverna is called Taverna Online, but it is being developed and maintained by totally different organization in Russia (referred there to as OnlineHPC: http://onlinehpc.com).
- The Kepler Project (https://kepler-project.org)
- VisTrails (http://vistrails.org)
- Madagascar (http://www.reproducibility.org)
EXAMPLE. Here's an interesting article on scientific workflows with an example of the real workflow design and data analysis, based on using Kepler and myExperiment projects: [http://f1000research.com/articles/3-110/v1](http://f1000research.com/articles/3-110/v1).
There are many RR tools that implement the literate programming paradigm, exemplified by the `LaTeX` software family. Tools that help in report generation and presentation are also a large category, where `Sweave` and `knitr` are probably the most well-known ones. `Sweave` is a tool focused on R, but it can be integrated with Python-based projects, albeit with some additional effort ([https://stackoverflow.com/questions/2161152/sweave-for-python](https://stackoverflow.com/questions/2161152/sweave-for-python)). I think that `knitr` might be a better option, as it's modern, has extensive support by popular tools (such as `RStudio`) and is language-neutral ([http://yihui.name/knitr/demo/engines](http://yihui.name/knitr/demo/engines)).
3) Protocol and suggested directory structure. If I understood correctly what you implied by using the term protocol (workflow), generally I think that a standard RR data analysis workflow consists of the following sequential phases: data collection => data preparation (cleaning, transformation, merging, sampling) => data analysis => presentation of results (generating reports and/or presentations). Nevertheless, every workflow is project-specific and, thus, some specific tasks might require adding additional steps.
For sample directory structure, you may take a look at documentation for R package `ProjectTemplate` ([http://projecttemplate.net](http://projecttemplate.net)), as an attempt to automate data analysis workflows and projects:
![enter image description here](https://i.stack.imgur.com/0B2vo.png)
4) Automated build/run tools. Since my answer is focused on universal (language-neutral) RR tools, the most popular tools is `make`. Read the following article for some reasons to use `make` as the preferred RR workflow automation tool: [http://bost.ocks.org/mike/make](http://bost.ocks.org/mike/make). Certainly, there are other similar tools, which either improve some aspects of `make`, or add some additional features. For example: `ant` (officially, Apache Ant: [http://ant.apache.org](http://ant.apache.org)), `Maven` ("next generation `ant`": [http://maven.apache.org](http://maven.apache.org)), `rake` ([https://github.com/ruby/rake](https://github.com/ruby/rake)), `Makepp` ([http://makepp.sourceforge.net](http://makepp.sourceforge.net)). For a comprehensive list of such tools, see Wikipedia: [http://en.wikipedia.org/wiki/List_of_build_automation_software](http://en.wikipedia.org/wiki/List_of_build_automation_software).
| Python for data analytics | You're looking for this answer: [https://www.quora.com/Why-is-Python-a-language-of-choice-for-data-scientists](https://www.quora.com/Why-is-Python-a-language-of-choice-for-data-scientists)
|
761 | 1 | 764 | null | 66 | 100737 | What is the right approach and clustering algorithm for geolocation clustering?
I'm using the following code to cluster geolocation coordinates:
```
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.vq import kmeans2, whiten
coordinates= np.array([
[lat, long],
[lat, long],
...
[lat, long]
])
x, y = kmeans2(whiten(coordinates), 3, iter = 20)
plt.scatter(coordinates[:,0], coordinates[:,1], c=y);
plt.show()
```
Is it right to use K-means for geolocation clustering, as it uses Euclidean distance, and not [Haversine formula](https://en.wikipedia.org/wiki/Haversine_formula) as a distance function?
| Clustering geo location coordinates (lat,long pairs) | CC BY-SA 3.0 | null | 2014-07-17T09:50:41.437 | 2023-01-12T18:50:07.947 | 2017-05-12T08:39:00.347 | 31513 | 2533 | [
"machine-learning",
"python",
"clustering",
"k-means",
"geospatial"
] | K-means should be right in this case. Since k-means tries to group based solely on Euclidean distance between objects, you will get back clusters of locations that are close to each other.
To find the optimal number of clusters you can try making an 'elbow' plot of the within group sum of square distance. [This](http://nbviewer.ipython.org/github/nborwankar/LearnDataScience/blob/master/notebooks/D3.%20K-Means%20Clustering%20Analysis.ipynb) may be helpful
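A small sketch of such an elbow plot with scikit-learn (using random points as a stand-in for your `coordinates` array):
```
import matplotlib.pyplot as plt
import numpy as np
from sklearn.cluster import KMeans

coordinates = np.random.rand(200, 2) * 10   # replace with your (lat, long) array

ks = range(1, 11)
inertias = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(coordinates).inertia_
            for k in ks]

plt.plot(ks, inertias, marker="o")   # look for the "elbow" where the curve flattens
plt.xlabel("number of clusters k")
plt.ylabel("within-cluster sum of squares")
plt.show()
```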
| Clustering with geolocation (lat/long pairs) attributes | Don't treat clustering algorithms as black boxes. If you don't understand the question, don't expect to understand the answer.
So before dumping the data and hoping that magically a desired result comes out, understand what you are doing...
- Standardizing latitude/longitude is a horrible idea. These values are angles on a sphere. Linearly scaling these values breaks everything that these values mean. There are many valid transformations - even rotations can be good to get a desirable Mercator projection, for example. But standardizing them, I cannot imagine what this would be good for.
- Mixing variables with different meaning rarely works well. It's not just the problem of scale. Scaling often helps as a heuristic to prevent one variable dominating another. It also has the nice property that it doesn't matter if your data were feet or yards. But the need to do so usually means that there is something wrong with your approach at a deeper level: that you apparently are trying really hard to compare apples and oranges... You'll get out some result. It's probably even interesting. But once you try to explain or act on it, you're back to square one: what does it mean if you scale your data this way, and why is that better than the infinitely many alternative ways, infinitely many of which lead to other results?
|
762 | 1 | 766 | null | 11 | 5818 | t-SNE, as in [1], works by progressively reducing the Kullback-Leibler (KL) divergence, until a certain condition is met.
The creators of t-SNE suggest using KL divergence as a performance criterion for the visualizations:
>
you can compare the Kullback-Leibler divergences that t-SNE reports. It is perfectly fine to run t-SNE ten times, and select the solution with the lowest KL divergence [2]
I tried two implementations of t-SNE:
- python: sklearn.manifold.TSNE().
- R: tsne, from library(tsne).
Both these implementations, when verbosity is set, print the error (Kullback-Leibler divergence) for each iteration. However, they don't allow the user to get this information, which looks a bit strange to me.
For example, the code:
```
import numpy as np
from sklearn.manifold import TSNE
X = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
model = TSNE(n_components=2, verbose=2, n_iter=200)
t = model.fit_transform(X)
```
produces:
```
[t-SNE] Computing pairwise distances...
[t-SNE] Computed conditional probabilities for sample 4 / 4
[t-SNE] Mean sigma: 1125899906842624.000000
[t-SNE] Iteration 10: error = 6.7213750, gradient norm = 0.0012028
[t-SNE] Iteration 20: error = 6.7192064, gradient norm = 0.0012062
[t-SNE] Iteration 30: error = 6.7178683, gradient norm = 0.0012114
...
[t-SNE] Error after 200 iterations: 0.270186
```
Now, as far as I understand, 0.270186 should be the KL divergence. However I cannot get this information from either model or t (which is a simple `numpy.ndarray`).
To solve this problem I could:
- Calculate the KL divergence by myself,
- Do something nasty in python for capturing and parsing TSNE() function's output [3].
However:
- would be quite stupid to re-calculate KL divergence, when TSNE() has already computed it,
- would be a bit unusual in terms of code.
Do you have any other suggestion? Is there a standard way to get this information using this library?
I mentioned I tried R's tsne library, but I'd prefer the answers to focus on the python sklearn implementation.
---
References
[1] [http://nbviewer.ipython.org/urls/gist.githubusercontent.com/AlexanderFabisch/1a0c648de22eff4a2a3e/raw/59d5bc5ed8f8bfd9ff1f7faa749d1b095aa97d5a/t-SNE.ipynb](http://nbviewer.ipython.org/urls/gist.githubusercontent.com/AlexanderFabisch/1a0c648de22eff4a2a3e/raw/59d5bc5ed8f8bfd9ff1f7faa749d1b095aa97d5a/t-SNE.ipynb)
[2] [http://homepage.tudelft.nl/19j49/t-SNE.html](http://homepage.tudelft.nl/19j49/t-SNE.html)
[3] [https://stackoverflow.com/questions/16571150/how-to-capture-stdout-output-from-a-python-function-call](https://stackoverflow.com/questions/16571150/how-to-capture-stdout-output-from-a-python-function-call)
| t-SNE Python implementation: Kullback-Leibler divergence | CC BY-SA 4.0 | null | 2014-07-17T10:04:29.797 | 2020-08-02T14:02:51.077 | 2020-08-02T14:02:51.077 | 98307 | 131 | [
"machine-learning",
"python"
] | The TSNE source in scikit-learn is in pure Python. The `fit_transform()` method actually calls a private `_fit()` function which then calls a private `_tsne()` function. That `_tsne()` function has a local variable `error` which is printed out at the end of the fit. Seems like you could pretty easily change one or two lines of source code to have that value returned to `fit_transform()`.
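Note also that newer scikit-learn releases expose this value directly on the fitted estimator as a `kl_divergence_` attribute (worth verifying against your installed version before patching the source):
```
import numpy as np
from sklearn.manifold import TSNE

X = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
model = TSNE(n_components=2, perplexity=2)   # perplexity lowered for this tiny example
t = model.fit_transform(X)
print(model.kl_divergence_)                  # final KL divergence of the embedding
```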
| Calculating KL Divergence in Python | First of all, `sklearn.metrics.mutual_info_score` implements mutual information for evaluating clustering results, not pure Kullback-Leibler divergence!
>
This is equal to the Kullback-Leibler divergence of the joint distribution with the product distribution of the marginals.
KL divergence (and any other such measure) expects the input data to have a sum of 1. Otherwise, they are not proper probability distributions. If your data does not have a sum of 1, most likely it is usually not proper to use KL divergence! (In some cases, it may be admissible to have a sum of less than 1, e.g. in the case of missing data.)
Also note that it is common to use base 2 logarithms. This only yields a constant scaling factor in difference, but base 2 logarithms are easier to interpret and have a more intuitive scale (0 to 1 instead of 0 to log2=0.69314..., measuring the information in bits instead of nats).
```
> sklearn.metrics.mutual_info_score([0,1],[1,0])
0.69314718055994529
```
As we can clearly see, the MI result of sklearn is scaled using natural logarithms instead of log2. This is an unfortunate choice, as explained above.
Kullback-Leibler divergence is fragile, unfortunately. On the above example it is not well-defined: `KL([0,1],[1,0])` causes a division by zero and tends to infinity. It is also asymmetric.
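If you do want a proper KL divergence on normalized distributions, a small sketch with scipy (which rescales the inputs and accepts a log base):
```
from scipy.stats import entropy

p = [0.7, 0.2, 0.1]
q = [0.5, 0.3, 0.2]
print(entropy(p, q, base=2))   # KL(p || q) in bits; inputs are rescaled to sum to 1
```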
|
777 | 1 | 782 | null | 2 | 153 | New to the Data Science forum, and first poster here!
This may be kind of a specific question (hopefully not too much so), but one I'd imagine others might be interested in.
I'm looking for a way to basically query GitHub with something like this:
```
Give me a collection of all of the public repositories that have more than 10 stars, at
least two forks, and more than three committers.
```
The result could take any viable form: a JSON data dump, a URL to the web page, etc. It more than likely will consist of information from 10,000 repos or something large.
Is this sort of thing possible using the API or some other pre-built way, or am I going to have to build out my own custom solution where I try to scrape every page? If so, how feasible is this and how might I approach it?
| Getting GitHub repository information by different criteria | CC BY-SA 3.0 | null | 2014-07-18T22:29:05.017 | 2014-07-19T03:42:34.433 | null | null | 2599 | [
"bigdata",
"data-mining",
"python",
"dataset"
] | My limited understanding, based on briefly browsing the GitHub API documentation, is that currently there is NO single API request that supports all your listed criteria at once. However, I think that you could use the following sequence in order to achieve the goal from your example (at least, I would use this approach):
1) Request information on all public repositories (API returns summary representations only): [https://developer.github.com/v3/repos/#list-all-public-repositories](https://developer.github.com/v3/repos/#list-all-public-repositories);
2) Loop through the list of all public repositories retrieved in step 1, requesting individual resources, and save it as new (detailed) list (this returns detailed representations, in other words, all attributes): [https://developer.github.com/v3/repos/#get](https://developer.github.com/v3/repos/#get);
3) Loop through the detailed list of all repositories, filtering corresponding fields by your criteria. For your example request, you'd be interested in the following attributes of the parent object: stargazers_count, forks_count. In order to filter the repositories by number of committers, you could use a separate API: [https://developer.github.com/v3/repos/#list-contributors](https://developer.github.com/v3/repos/#list-contributors).
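A rough sketch of this sequence in Python with the requests library (endpoint paths and field names reflect my reading of the documentation, so treat them as assumptions, and mind the API rate limits):
```
import requests

API = "https://api.github.com"

# Step 1: summary list of public repositories (paginated via the `since` parameter)
summaries = requests.get(API + "/repositories", params={"since": 0}).json()

selected = []
for summary in summaries:
    # Step 2: detailed representation of each repository
    repo = requests.get(API + "/repos/" + summary["full_name"]).json()
    # Step 3: filter by the example criteria (stars, forks, committers)
    if repo.get("stargazers_count", 0) > 10 and repo.get("forks_count", 0) >= 2:
        contributors = requests.get(API + "/repos/" + summary["full_name"] + "/contributors").json()
        if len(contributors) > 3:
            selected.append(repo["full_name"])

print(selected)
```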
Updates or comments from people more familiar with GitHub API are welcome!
| Why are there over 600 open pull-requests in Scikit-learn github repo? | >
mean, reviewing a pull-request is not that big of a deal compared to writing commits and also this normally has higher priority than writing code, so why I see new commits in the project, but PRs are still not reviewed?
I assume that it's not that simple if they want to maintain the stability and the homogeneity of their code base. If they were just quickly accepting PRs the whole software would become an unmanageable mess in no time, causing problems for a large community of users. It's almost unavoidable for any significant software project to become harder and slower to maintain and develop when it reaches a certain size, especially if many users depend on it.
Also it's maintained by a community of volunteers so there might not be that many contributors who are both knowledgeable enough about the code and have enough free time to study all the PRs coming regularly.
|
778 | 1 | 781 | null | 14 | 17018 | I read in this post [Is the R language suitable for Big Data](https://datascience.stackexchange.com/questions/41/is-the-r-language-suitable-for-big-data) that big data constitutes `5TB`, and while it does a good job of providing information about the feasibility of working with this type of data in `R` it provides very little information about `Python`. I was wondering if `Python` can work with this much data as well.
| Is Python suitable for big data | CC BY-SA 3.0 | null | 2014-07-18T22:34:48.080 | 2022-08-18T09:59:32.430 | null | null | 890 | [
"bigdata",
"python"
] | To clarify, I feel like the original question referenced by the OP probably isn't the best fit for a SO-type format, but I will certainly represent `python` in this particular case.
Let me just start by saying that regardless of your data size, `python` shouldn't be your limiting factor. In fact, there are just a couple main issues that you're going to run into dealing with large datasets:
- Reading data into memory - This is by far the most common issue faced in the world of big data. Basically, you can't read in more data than you have memory (RAM) for. The best way to fix this is by making atomic operations on your data instead of trying to read everything in at once (see the short sketch after this list).
- Storing data - This is actually just another form of the earlier issue; by the time you get up to about 1TB, you start having to look elsewhere for storage. AWS S3 is the most common resource, and python has the fantastic boto library to facilitate dealing with large pieces of data.
- Network latency - Moving data around between different services is going to be your bottleneck. There's not a huge amount you can do to fix this, other than trying to pick co-located resources and plugging into the wall.
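As a small sketch of the "atomic operations" point from the first bullet above (file name and column are placeholders), you can stream a large CSV in chunks with pandas instead of loading it all at once:
```
import pandas as pd

total = 0
for chunk in pd.read_csv("huge_file.csv", chunksize=1_000_000):
    total += chunk["some_column"].sum()   # aggregate chunk by chunk, never all in RAM
print(total)
```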
| Is Python a viable language to do statistical analysis in? | Python is more "general purpose" while R has a clear(er) focus on statistics. However, most (if not all) things you can do in R can be done in Python as well. The difference is that you need to use additional packages in Python for some things you can do in base R.
Some examples:
- Data frames are base R while you need to use Pandas in Python.
- Linear models (lm) are base R while you need to use statsmodels or scikit in Python. There are important conceptional differences to be considered.
- For some rather basic mathematical operations you would need to use numpy.
Overall this leads to some additional effort (and knowledge) needed to work fluently in Python. I personally often feel more comfortable working with base R since I feel like being "closer to the data" in (base) R.
However, in other cases, e.g. when I use boosting or neural nets, Python seems to have an advantage over R. Many algorithms are developed in `C++` (e.g. [Keras](https://github.com/jjallaire/deep-learning-with-r-notebooks), [LightGBM](https://lightgbm.readthedocs.io/en/latest/)) and adapted to Python and (often later to) R. At least when you work with Windows, this often works better with Python. You can use things like Tensorflow/Keras, LightGBM, Catboost in R, but it sometimes can be daunting to get the additional package running in R (especially with GPU support).
Many packages/methods are available for R and Python, such as GLMnet ([for R](https://web.stanford.edu/%7Ehastie/glmnet/glmnet_alpha.html) / [for Python](https://web.stanford.edu/%7Ehastie/glmnet_python/)). You can also see based on the Labs of "[Introduction to Statistical Learning](http://faculty.marshall.usc.edu/gareth-james/ISL/index.html)" - which are available [for R](http://faculty.marshall.usc.edu/gareth-james/ISL/code.html) and [for Python](https://github.com/JWarmenhoven/ISLR-python) as well - that there is not so much of a difference between the two languages in terms of what you can do. The difference is more like how things are done.
Finally, since Python is more "general purpose" than R (at least in my view), there are [interesting and funny things](https://realpython.com/what-can-i-do-with-python/) you can do with Python (beyond statistics) which you cannot do with R (at least it is harder).
|
784 | 1 | 821 | null | 6 | 1057 | I have a timeseries with hourly gas consumption. I want to use [ARMA](http://en.wikipedia.org/wiki/Autoregressive%E2%80%93moving-average_model)/[ARIMA](http://en.wikipedia.org/wiki/Autoregressive_integrated_moving_average) to forecast the consumption on the next hour, basing on the previous. Why should I analyze/find the seasonality (with [Seasonal and Trend decomposition using Loess](https://www.otexts.org/fpp/6/5) (STL)?)?
![enter image description here](https://i.stack.imgur.com/hYyH8.png)
| Why should I care about seasonal data when I forecast? | CC BY-SA 3.0 | null | 2014-07-19T18:31:36.573 | 2015-07-20T21:51:46.383 | 2014-07-20T19:34:08.117 | 84 | 989 | [
"machine-learning",
"time-series"
] | "Because its there".
The data has a seasonal pattern. So you model it. The data has a trend. So you model it. Maybe the data is correlated with the number of sunspots. So you model that. Eventually you hope to get nothing left to model than uncorrelated random noise.
But I think you've screwed up your STL computation here. Your residuals are clearly not serially uncorrelated. I rather suspect you've not told the function that your "seasonality" is a 24-hour cycle rather than an annual one. But hey you haven't given us any code or data so we don't really have a clue what you've done, do we? What do you think "seasonality" even means here? Do you have any idea?
Your data seems to have three peaks every 24 hours. Really? Is this 'gas'='gasoline'='petrol' or gas in some heating/electric generating system? Either way, if you know a priori there's an 8 hour cycle, or an 8 hour cycle on top of a 24 hour cycle on top of what looks like a very high frequency one or two hour cycle, you put that in your model.
Actually you don't even say what your x-axis is, so maybe it's days and then I'd fit a daily cycle, a weekly cycle, and then an annual cycle. But given how it all changes at time=85 or so, I'd not expect a model to do well on both sides of that.
With statistics (which is what this is, sorry to disappoint you but you're not a data scientist yet) you don't just robotically go "And.. Now.. I.. Fit.. An... STL model....". You look at your data, try and get some understanding, then propose a model, fit it, test it, and use the parameters to make inferences about the data. Fitting cyclic seasonal patterns is part of that.
 | Use forecast weather data or actual weather data for prediction? | Using historical weather data implicitly means that you trust meteorologists and weather forecasters to improve their model over time and you leave them full responsibility for it; however, once your own model is deployed, a bias in the forecast may create a bias in your model response.
Using weather forecasts instead should give better results because your model will directly capture potential bias in the forecasts; however if the weather forecasters update their forecasting model, and if you miss this update, your model response may suffer from it.
I wouldn't use both historical weather data and weather forecasts in the same model; I would consider to build two models, one with historical data and one with weather forecasts, then go for historical data if the improvement of using weather forecasts is not significant.
|
786 | 1 | 789 | null | 10 | 1255 | I am trying to find stock data to practice with, is there a good resource for this? I found [this](ftp://emi.nasdaq.com/ITCH/) but it only has the current year.
I already have a way of parsing the protocol, but would like to have some more data to compare with. It doesn't have to be in the same format, as long as it has price, trades, and date statistics.
| NASDAQ Trade Data | CC BY-SA 4.0 | null | 2014-07-19T20:46:52.740 | 2020-08-16T18:02:33.567 | 2020-08-16T18:02:33.567 | 98307 | 2567 | [
"data-mining",
"dataset"
] | You can pull stock data very easily in Python and R (probably other languages as well) with the following packages:
In python with [ystockquote](https://pypi.python.org/pypi/ystockquote)
[This](http://nbviewer.ipython.org/github/twiecki/financial-analysis-python-tutorial/blob/master/1.%20Pandas%20Basics.ipynb) is also a really nice tutorial in iPython which shows you how to pull the stock data and play with it
In R with [quantmod](http://www.quantmod.com/)
HTH.
| looking for public dataset for stock market | You can have a look at the kaggle stock dataset.
[https://www.kaggle.com/borismarjanovic/price-volume-data-for-all-us-stocks-etfs](https://www.kaggle.com/borismarjanovic/price-volume-data-for-all-us-stocks-etfs)
These questions are normally asked on the Open Data Stack Exchange.
[https://opendata.stackexchange.com/](https://opendata.stackexchange.com/)
|
806 | 1 | 807 | null | 97 | 101137 | I was starting to look into the area under the curve (AUC) and am a little confused about its usefulness. When it was first explained to me, AUC seemed to be a great measure of performance, but in my research I've found that some claim its advantage is mostly marginal in that it is best for catching 'lucky' models with high standard accuracy measurements and low AUC.
So should I avoid relying on AUC for validating models or would a combination be best?
| Advantages of AUC vs standard accuracy | CC BY-SA 4.0 | null | 2014-07-22T03:43:20.327 | 2023-05-04T17:16:21.923 | 2023-05-04T17:16:21.923 | 43000 | 2653 | [
"machine-learning",
"accuracy"
] | Really great question, and one that I find most people don't really understand on an intuitive level. `AUC` is in fact often preferred over accuracy for binary classification for a number of different reasons. First though, let's talk about exactly what `AUC` is. Honestly, for being one of the most widely used efficacy metrics, it's surprisingly obtuse to figure out exactly how `AUC` works.
`AUC` stands for `Area Under the Curve`, which curve you ask? Well, that would be the `ROC` curve. `ROC` stands for [Receiver Operating Characteristic](http://en.wikipedia.org/wiki/Receiver_operating_characteristic), which is actually slightly non-intuitive. The implicit goal of `AUC` is to deal with situations where you have a very skewed sample distribution, and don't want to overfit to a single class.
A great example is in spam detection. Generally, spam datasets are STRONGLY biased towards ham, or not-spam. If your data set is 90% ham, you can get a pretty damn good accuracy by just saying that every single email is ham, which is obviously something that indicates a non-ideal classifier. Let's start with a couple of metrics that are a little more useful for us, specifically the true positive rate (`TPR`) and the false positive rate (`FPR`):
![ROC axes](https://i.stack.imgur.com/hNxTl.png)
Now in this graph, `TPR` is specifically the ratio of true positives to all positives, and `FPR` is the ratio of false positives to all negatives. (Keep in mind, this is only for binary classification.) On a graph like this, it should be pretty straightforward to figure out that a prediction of all 0's or all 1's will result in the points of `(0,0)` and `(1,1)` respectively. If you draw a line through these points you get something like this:
![Kind of like a triangle](https://i.stack.imgur.com/B1WT1.png)
Which looks basically like a diagonal line (it is), and by some easy geometry, you can see that the `AUC` of such a model would be `0.5` (height and base are both 1). Similarly, if you predict a random assortment of 0's and 1's, let's say 90% 1's, you could get the point `(0.9, 0.9)`, which again falls along that diagonal line.
Now comes the interesting part. What if we weren't only predicting 0's and 1's? What if instead, we wanted to say that, theoretically we were going to set a cutoff, above which every result was a 1, and below which every result were a 0. This would mean that at the extremes you get the original situation where you have all 0's and all 1's (at a cutoff of 0 and 1 respectively), but also a series of intermediate states that fall within the `1x1` graph that contains your `ROC`. In practice you get something like this:
![Courtesy of Wikipedia](https://i.stack.imgur.com/13McM.png)
So basically, what you're actually getting when you use `AUC` instead of accuracy is something that will strongly discourage people from going for models that are representative, but not discriminative, as this will only select for models that achieve false positive and true positive rates that are significantly above random chance, which is not guaranteed for accuracy.
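For reference, a short sketch of computing the curve and its area with scikit-learn (assuming you have true labels and predicted scores, e.g. from `predict_proba`):
```
from sklearn.metrics import roc_auc_score, roc_curve

y_true = [0, 0, 1, 1, 0, 1]
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7]    # toy predicted probabilities for class 1

fpr, tpr, thresholds = roc_curve(y_true, y_score)   # one (FPR, TPR) point per cutoff
print(roc_auc_score(y_true, y_score))               # area under that ROC curve
```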
 | When do I have to use aucPR instead of auROC? (and vice versa) | Yes, you are correct that the dominant difference between the area under the curve of a receiver operating characteristic curve ([ROC-AUC](https://en.wikipedia.org/wiki/Receiver_operating_characteristic)) and the area under the curve of a Precision-Recall curve ([PR-AUC](http://scikit-learn.org/stable/auto_examples/model_selection/plot_precision_recall.html)) lies in its tractability for unbalanced classes. They are very similar and have been shown to contain essentially the same information; however, PR curves are slightly more finicky, but a well-drawn curve gives a more complete picture. The issue with PR-AUC is that it's difficult to interpolate between points in the PR curve and thus numerical integration to achieve an area under the curve becomes more difficult.
[Check out this discussion of the differences and similarities.](http://pages.cs.wisc.edu/~jdavis/davisgoadrichcamera2.pdf)
Quoting Davis' 2006 abstract:
>
Receiver Operator Characteristic (ROC)
curves are commonly used to present results
for binary decision problems in machine
learning. However, when dealing
with highly skewed datasets, Precision-Recall
(PR) curves give a more informative picture
of an algorithm’s performance. We show that
a deep connection exists between ROC space
and PR space, such that a curve dominates
in ROC space if and only if it dominates
in PR space. A corollary is the notion of
an achievable PR curve, which has properties
much like the convex hull in ROC space;
we show an efficient algorithm for computing
this curve. Finally, we also note differences
in the two types of curves are significant for
algorithm design. For example, in PR space
it is incorrect to linearly interpolate between
points. Furthermore, algorithms that optimize
the area under the ROC curve are not
guaranteed to optimize the area under the
PR curve.
[This was also discussed on Kaggle recently.](https://www.kaggle.com/forums/f/15/kaggle-forum/t/7517/precision-recall-auc-vs-roc-auc-for-class-imbalance-problems/41179)
[There is also some useful discussion on Cross Validated.](https://stats.stackexchange.com/questions/7207/roc-vs-precision-and-recall-curves)
|
808 | 1 | 809 | null | 11 | 2297 | I want to become a data scientist. I studied applied statistics (actuarial science), so I have a great statistical background (regression, stochastic processes, time series, just to mention a few). But now, I am going to do a master's degree in Computer Science focused on Intelligent Systems.
Here is my study plan:
- Machine learning
- Advanced machine learning
- Data mining
- Fuzzy logic
- Recommendation Systems
- Distributed Data Systems
- Cloud Computing
- Knowledge discovery
- Business Intelligence
- Information retrieval
- Text mining
At the end, with all my statistical and computer science knowledge, can I call myself a data scientist, or am I wrong?
Thanks for the answers.
| Statistics + Computer Science = Data Science? | CC BY-SA 3.0 | null | 2014-07-22T08:39:33.810 | 2020-08-01T13:12:43.027 | 2016-02-25T13:36:09.703 | 11097 | 1117 | [
"machine-learning",
"statistics",
"career"
] | I think that you're on the right track toward becoming an expert data scientist. Recently I have answered related question [here on Data Science StackExchange](https://datascience.stackexchange.com/a/742/2452) (pay attention to the definition I mention there, as it essentially answers your question by itself, as well as to aspects of practicing software engineering and applying knowledge to solving real-world problems). I hope that you will find all that useful. Good luck in your career!
| Is Data Science the Same as Data Mining? | [@statsRus](https://datascience.stackexchange.com/users/36/statsrus) starts to lay the groundwork for your answer in another question [What characterises the difference between data science and statistics?](https://datascience.meta.stackexchange.com/q/86/98307):
>
Data collection: web scraping and online surveys
Data manipulation: recoding messy data and extracting meaning from linguistic and social network data
Data scale: working with extremely large data sets
Data mining: finding patterns in large, complex data sets, with an emphasis on algorithmic techniques
Data communication: helping turn "machine-readable" data into "human-readable" information via visualization
## Definition
[data-mining](/questions/tagged/data-mining) can be seen as one item (or set of skills and applications) in the toolkit of the data scientist. I like how he separates the definition of mining from collection in a sort of trade-specific jargon.
However, I think that data-mining would be synonymous with data-collection in a US-English colloquial definition.
As to where to go to become proficient? I think that question is too broad as it is currently stated and would receive answers that are primarily opinion based. Perhaps if you could refine your question, it might be easier to see what you are asking.
|
810 | 1 | 811 | null | 59 | 15562 | My 'machine learning' task is to separate benign Internet traffic from malicious traffic. In the real world scenario, most (say 90% or more) of Internet traffic is benign. Thus I felt that I should choose a similar data setup for training my models as well. But I came across a research paper or two (in my area of work) which have used a "class balancing" data approach to training the models, implying an equal number of instances of benign and malicious traffic.
In general, if I am building machine learning models, should I go for a dataset which is representative of the real world problem, or is a balanced dataset better suited for building the models (since certain classifiers do not behave well with class imbalance, or due to other reasons not known to me)?
Can someone shed more light on the pros and cons of both the choices and how to decide which one to go choose?
| Should I go for a 'balanced' dataset or a 'representative' dataset? | CC BY-SA 3.0 | null | 2014-07-22T12:29:10.050 | 2021-02-20T23:23:35.410 | 2016-11-29T10:52:50.630 | 26596 | 2661 | [
"machine-learning",
"dataset",
"class-imbalance"
] | I would say the answer depends on your use case. Based on my experience:
- If you're trying to build a representative model -- one that describes the data rather than necessarily predicts -- then I would suggest using a representative sample of your data.
- If you want to build a predictive model, particularly one that performs well by measure of AUC or rank-order and plan to use a basic ML framework (i.e. Decision Tree, SVM, Naive Bayes, etc), then I would suggest you feed the framework a balanced dataset. Much of the literature on class imbalance finds that random undersampling (downsampling the majority class to the size of the minority class) can drive performance gains (see the small sketch after this list).
- If you're building a predictive model, but are using a more advanced framework (i.e. something that determines sampling parameters via wrapper or a modification of a bagging framework that samples to class equivalence), then I would suggest again feeding the representative sample and letting the algorithm take care of balancing the data for training.
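As mentioned in the second bullet, a minimal sketch of random undersampling (plain numpy, binary labels assumed; dedicated libraries exist for the multi-class case):
```
import numpy as np

def undersample(X, y, seed=0):
    """Randomly downsample the majority class to the size of the minority class."""
    rng = np.random.default_rng(seed)
    idx_pos = np.flatnonzero(y == 1)
    idx_neg = np.flatnonzero(y == 0)
    minority, majority = sorted([idx_pos, idx_neg], key=len)
    keep = rng.choice(majority, size=len(minority), replace=False)
    idx = np.concatenate([minority, keep])
    return X[idx], y[idx]
```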
 | When should we consider a dataset as imbalanced? | I think subsampling (downsampling) is a popular method to control class imbalance at the base level, meaning it fixes the root of the problem. So for all of your examples, randomly selecting 1,000 of the majority class each time would work. You could even play around with making 10 models (10 folds of 1,000 majority vs the 1,000 minority) so you will use your whole data set. You can use this method, but again you're kind of throwing away 9,000 samples unless you try some ensemble methods. Easy fix, but tough to get an optimal model based on your data.
The degree to which you need to control for the class imbalance is based largely on your goal. If you care about pure classification, then imbalance would affect the 50% probability cut off for most techniques, so I would consider downsampling. If you only care about the order of the classifications (want positives generally more higher than negatives) and use a measure such as AUC, the class imbalance will only bias your probabilities, but the relative order should be decently stable for most techniques.
Logistic regression is nice for class imbalance because as long as you have >500 of the minority class, the estimates of the parameters will be accurate enough and the only impact will be on the intercept, which can be corrected for if that is something you might want. Logistic regression models the probabilities rather than just classes, so you can do more manual adjustments to suit your needs.
A lot of classification techniques also have a class weight argument that will help you focus on the minority class more. It will penalize a misclassification of a true minority class, so your overall accuracy will suffer a little bit, but you will start seeing more minority classes correctly classified.
|
842 | 1 | 843 | null | 28 | 42811 | I don't know if this is a right place to ask this question, but a community dedicated to Data Science should be the most appropriate place in my opinion.
I have just started with Data Science and Machine learning. I am looking for long term project ideas which I can work on for like 8 months.
A mix of Data Science and Machine learning would be great.
A project big enough to help me understand the core concepts and also implement them at the same time would be very beneficial.
| Data Science Project Ideas | CC BY-SA 3.0 | null | 2014-07-25T18:36:31.340 | 2020-08-20T18:57:12.080 | 2014-07-27T03:35:06.853 | 1352 | 2725 | [
"machine-learning",
"bigdata",
"dataset"
] | I would try to analyze and solve one or more of the problems published on [Kaggle Competitions](https://www.kaggle.com/competitions). Note that the competitions are grouped by their expected complexity, from `101` (bottom of the list) to `Research` and `Featured` (top of the list). A color-coded vertical band is a visual guideline for grouping. You can assess time you could spend on a project by adjusting the expected length of corresponding competition, based on your skills and experience.
A number of data science project ideas can be found by browsing [Coursolve](https://www.coursolve.org/browse-needs?query=Data%20Science) webpage.
If you have skills and desire to work on a real data science project, focused on social impacts, visit [DataKind](http://www.datakind.org/projects) projects page. More projects with social impacts focus can be found at [Data Science for Social Good](http://dssg.io/projects) webpage.
Science Project Ideas page at [My NASA Data](http://mynasadata.larc.nasa.gov/804-2) site looks like another place to visit for inspiration.
If you would like to use open data, this long list of applications on `Data.gov` can provide you with some interesting [data science](http://www.data.gov/applications) project ideas.
| ML project ideas for dataset | If this is your first ML project, you should try to predict one feature with the other features; i.e., see if highest bidder can be predicted based on cover image, synopsis, country of origin, etc. For this you can use a basic NN as a baseline, and keep adding layers until it's powerful enough. To choose the feature to be predicted, you can make a correlation matrix or use random forests to make sure there are some relationships within the data and you're not simply trying to predict an uncorrelated variable.
A little bit more difficult would be to predict the precise pricing a provider would give a movie based on the price it's given other movies (of course, without looking at the other competitors' pricing).
The hardest, I think, would be to use the cover image as input to a CNN in order to predict the genre of movie, country of origin, etc.
|
853 | 1 | 858 | null | 13 | 2818 | I have implemented a NER system using the CRF algorithm with my handcrafted features, which gave quite good results. The thing is that I used lots of different features, including POS tags and lemmas.
Now I want to build the same NER for a different language. The problem here is that I can't use POS tags and lemmas. I started reading articles about deep learning and unsupervised feature learning.
My question is:
Is it possible to use methods for unsupervised feature learning with CRF algorithm? Did anyone try this and got any good result? Is there any article or tutorial about this matter?
I still don't completely understand this way of feature creation, so I don't want to spend too much time on something that won't work. So any information would be really helpful. Creating a whole NER system based on deep learning is a bit too much for now.
| Unsupervised feature learning for NER | CC BY-SA 3.0 | null | 2014-07-28T07:19:49.877 | 2018-04-19T02:02:47.127 | 2017-06-30T14:45:37.653 | 31513 | 2750 | [
"nlp",
"text-mining",
"feature-extraction"
] | Yes, it is entirely possible to combine unsupervised learning with the CRF model. In particular, I would recommend that you explore the possibility of using [word2vec](https://code.google.com/p/word2vec/) features as inputs to your CRF.
Word2vec trains a model to distinguish between words that are appropriate for a given context and words that are randomly selected. Selected weights of the model can then be interpreted as a dense vector representation of a given word.
These dense vectors have the appealing property that words that are semantically or syntactically similar have similar vector representations. Basic vector arithmetic even reveals some interesting learned relationships between words.
For example, vector("Paris") - vector("France") + vector("Italy") yields a vector that is quite similar to vector("Rome").
At a high level, you can think of word2vec representations as being similar to LDA or LSA representations, in the sense that you can convert a sparse input vector into a dense output vector that contains word similarity information.
For that matter, LDA and LSA are also valid options for unsupervised feature learning -- both attempt to represent words as combinations of "topics" and output dense word representations.
For English text Google distributes word2vec models pretrained on a huge 100 billion word Google News dataset, but for other languages you'll have to train your own model.
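As a rough sketch of how this can look in practice (assuming the gensim library, version 4.x, a made-up toy corpus, and a feature-dictionary format of the kind that CRF toolkits such as sklearn-crfsuite accept):
```
from gensim.models import Word2Vec

# Stand-in for your own tokenized sentences in the target language.
sentences = [["john", "lives", "in", "berlin"],
             ["acme", "corp", "hired", "john"]]

w2v = Word2Vec(sentences, vector_size=50, window=5, min_count=1, epochs=50)

def word_features(token):
    # Expose each dimension of the dense vector as a real-valued CRF feature;
    # unknown tokens fall back to a zero vector.
    vec = w2v.wv[token] if token in w2v.wv else [0.0] * 50
    return {f"w2v_{i}": float(v) for i, v in enumerate(vec)}

print(word_features("john"))
```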
| NER with Unsupervised Learning? | >
If we treated NER as a classification/prediction problem, how would we handle name entities that weren't in training corpus?
The goal of a NER Tagger is to learn patterns in language that can be used to classify words (or more generally, tokens), given a pre-specified set of classes. These classes are defined before training and remain fixed. Classes such as: `PERSON`, `DATETIME`, `ORGANIZATION`, ... you name it.
A good NER Tagger will learn the structure of a language and recognize that `"Fyonair is from Fuabalada land."` follows some linguistic rules and regularities, and that from these regularities (learned autonomously during training) the classifier can attribute `Fyonair` class `PERSON` and to `Fuabalada` the class `LOCATION`.
---
>
How would our model identify it if it wasn't included in billions of corpus and tokens?
In fact, Deep Learning models tend to work better than others with very large datasets (the so called "big data"). On small datasets they are not extremely useful.
---
>
Can unsupervised learning achieve this task?
NER tagging is a supervised task. You need a training set of labeled examples to train a model for that. However, there is some unsupervised work one can do to slightly improve the performance of models. There is this useful paragraph that I took from [Geron's book](https://www.oreilly.com/library/view/hands-on-machine-learning/9781492032632/):
>
Suppose you want to tackle a complex task for which you don't have much labeled training data [...] If you can gather plenty of unlabeled training data, you can try to use it to train an unsupervised model, such as an autoencoder or a generative adversarial network [...] Then you can reuse the lower layers of the autoencoder or the lower layers of the GAN's discriminator, add the output layer for your task on top, and fine tune the final network using supervised learning (i.e. the label training examples).
It is this technique that Geoffrey Hinton and his team used in 2006 and which led to the revival of neural network and the success of Deep Learning.
[ p. 349, 2nd edition. ]
(Best book on Machine Learning ever, IMHO.)
This unsupervised pretraining is the only way to use unsupervised models for NER that I can think of.
Good luck with your task!
|
865 | 1 | 875 | null | 1 | 1712 | I need to build a parse tree for some source code (in Python or any programming language that is described by a CFG).
So, I have source code in some programming language and the BNF of this language.
Can anybody give some advice on how I can build a parse tree in this case?
Preferably, with tools for Python.
| How to build parse tree with BNF | CC BY-SA 3.0 | null | 2014-07-30T10:24:54.180 | 2014-07-30T23:40:54.700 | null | null | 988 | [
"python",
"parsing"
] | I suggest you use [ANTLR](http://www.antlr.org/), which is a very powerful parser generator. It has a good GUI for entering your BNF. It has a [Python target](https://theantlrguy.atlassian.net/wiki/display/ANTLR4/Python+Target) capability.
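If the source language happens to be Python itself, a quick baseline that needs no grammar file at all is the standard library's `ast` module (shown below as a sketch; ANTLR remains the general answer for arbitrary BNF grammars):
```
import ast

# Parse a small Python snippet and print its syntax tree.
source = "def add(a, b):\n    return a + b\n"
tree = ast.parse(source)
print(ast.dump(tree, indent=2))  # the indent argument needs Python 3.9+
```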
| how each tree in random forest structured/built? | Great questions! I'll do my best to answer them:
- Once you have selected feature 2 as the root and features 1 and 5 as the candidates for the first split, you need to determine whether node 1 should be the left or right node. To make this decision, you will split the data based on the values of feature 2. Any data points with a feature 2 value less than or equal to the threshold value for node 1 will be assigned to the left node, and any data points with feature 2 value greater than the threshold value will be assigned to the right node. Once the data is split, you can calculate the Gini index for each child node using the remaining features.
- Node 5 is not automatically assigned to the left or right node after node 1 is determined. Instead, you will select a new random subset of the remaining 3 features (3, 4, 5) to create child nodes for node 1. You will repeat the same process as before:
- Calculate the Gini index for each candidate feature.
- Select the feature with the highest Gini index as the next node.
- Split the data based on the threshold value of that node.
Then, you can calculate the Gini index for each child node using the remaining features.
You will continue this process recursively until you reach a stopping criterion, such as a `maximum depth of the tree` or a `minimum number of samples per leaf node`. Each decision tree in the random forest will be trained on a bootstrapped sample of the training data. The final prediction will be the mode (for classification) or mean (for regression) of the predictions from all the trees in the forest.
|
866 | 1 | 881 | null | 12 | 1028 | I am currently working with a large set of health insurance claims data that includes some laboratory and pharmacy claims. The most consistent information in the data set, however, is made up of diagnosis (ICD-9CM) and procedure codes (CPT, HCSPCS, ICD-9CM).
My goals are to:
- Identify the most influential precursor conditions (comorbidities) for a medical condition like chronic kidney disease;
- Identify the likelihood (or probability) that a patient will develop a medical condition based on the conditions they have had in the past;
- Do the same as 1 and 2, but with procedures and/or diagnoses.
- Preferably, the results would be interpretable by a doctor
I have looked at things like the [Heritage Health Prize Milestone papers](https://www.heritagehealthprize.com/c/hhp/details/milestone-winners) and have learned a lot from them, but they are focused on predicting hospitalizations.
So here are my questions: What methods do you think work well for problems like this? And, what resources would be most useful for learning about data science applications and methods relevant to healthcare and clinical medicine?
EDIT #2 to add plaintext table:
CKD is the target condition, "chronic kidney disease", ".any" denotes that they have acquired that condition at any time, ".isbefore.ckd" means they had that condition before their first diagnosis of CKD. The other abbreviations correspond with other conditions identified by ICD-9CM code groupings. This grouping occurs in SQL during the import process. Each variable, with the exception of patient_age, is binary.
| Predicting next medical condition from past conditions in claims data | CC BY-SA 3.0 | null | 2014-07-30T11:45:08.313 | 2017-10-02T13:27:02.367 | 2017-10-02T13:27:02.367 | 2781 | 2781 | [
"machine-learning",
"r"
] | I've never worked with medical data, but from general reasoning I'd say that relations between variables in healthcare are pretty complicated. Different models, such as random forests, regression, etc. could capture only part of relations and ignore others. In such circumstances it makes sense to use general statistical exploration and modelling.
For example, the very first thing I would do is finding out correlations between possible precursor conditions and diagnoses. E.g. in what percent of cases chronic kidney disease was preceded by long flu? If it is high, it [doesn't always mean causality](http://en.wikipedia.org/wiki/Correlation_does_not_imply_causation), but gives pretty good food for thought and helps to better understand relations between different conditions.
Another important step is data visualisation. Does CKD happen in males more often than in females? What about their place of residence? What is the distribution of CKD cases by age? It's hard to grasp a large dataset as a set of numbers; plotting them out makes it much easier.
When you have an idea of what's going on, perform [hypothesis testing](http://en.wikipedia.org/wiki/Statistical_hypothesis_testing) to check your assumption. If you reject null hypothesis (basic assumption) in favour of alternative one, congratulations, you've made "something real".
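To make the hypothesis-testing step concrete, here is a sketch of a chi-squared test of association on a 2x2 table (the counts below are invented; Python/scipy is shown here, though `chisq.test` in R does the same):
```
from scipy.stats import chi2_contingency

#                 CKD    no CKD
table = [[120,     880],   # had long flu
         [300,    8700]]   # no long flu
chi2, p_value, dof, expected = chi2_contingency(table)
print(p_value)  # a small p-value flags an association worth a closer look
```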
Finally, when you have a good understanding of your data, try to create complete model. It may be something general like [PGM](https://www.coursera.org/course/pgm) (e.g. manually-crafted Bayesian network), or something more specific like linear regression or [SVM](http://en.wikipedia.org/wiki/Support_vector_machine), or anything. But in any way you will already know how this model corresponds to your data and how you can measure its efficiency.
---
As a good starting resource for learning statistical approach I would recommend [Intro to Statistics](https://www.udacity.com/course/st101) course by Sebastian Thrun. While it's pretty basic and doesn't include advanced topics, it describes most important concepts and gives systematic understanding of probability theory and statistics.
| Which algorithm can i use for predicting length of stay in coming year based on historical claims data? | If you want to do prediction using 2011 features, the answer is yes, you can do that.
However, as you don't want to use these features, the answer might be no.
Without using 2011 features, your dataset will have only 2 samples(2009 and 2010) under the assumption that every `memberID` is different. Prediction from two samples is neither reliable nor feasible.
|
869 | 1 | 871 | null | 33 | 25187 | So, I'm just starting to learn how a neural network can operate to recognize patterns and categorize inputs, and I've seen how an artificial neural network can parse image data and categorize the images ([demo with convnetjs](http://cs.stanford.edu/people/karpathy/convnetjs/demo/mnist.html)), and the key there is to downsample the image and each pixel stimulates one input neuron into the network.
However, I'm trying to wrap my head around if this is possible to be done with string inputs? The use-case I've got is a "recommendation engine" for movies a user has watched. Movies have lots of string data (title, plot, tags), and I could imagine "downsampling" the text down to a few key words that describe that movie, but even if I parse out the top five words that describe this movie, I think I'd need input neurons for every english word in order to compare a set of movies? I could limit the input neurons just to the words used in the set, but then could it grow/learn by adding new movies (user watches a new movie, with new words)? Most of the libraries I've seen don't allow adding new neurons after the system has been trained?
Is there a standard way to map string/word/character data to inputs into a neural network? Or is a neural network really not the right tool for the job of parsing string data like this (what's a better tool for pattern-matching in string data)?
| Neural Network parse string data? | CC BY-SA 3.0 | null | 2014-07-30T16:27:45.177 | 2017-08-26T15:57:49.753 | null | null | 2790 | [
"neural-network"
] | Using a neural network for prediction on natural language data can be a tricky task, but there are tried and true methods for making it possible.
In the Natural Language Processing (NLP) field, text is often represented using the bag of words model. In other words, you have a vector of length n, where n is the number of words in your vocabulary, and each word corresponds to an element in the vector. In order to convert text to numeric data, you simply count the number of occurrences of each word and place that value at the index of the vector that corresponds to the word. [Wikipedia does an excellent job of describing this conversion process.](https://en.wikipedia.org/wiki/Bag-of-words_model) Because the length of the vector is fixed, it's difficult to deal with new words that don't map to an index, but there are ways to help mitigate this problem (look up [feature hashing](http://en.wikipedia.org/wiki/Feature_hashing)).
This method of representation has many disadvantages -- it does not preserve the relationship between adjacent words, and results in very sparse vectors. Looking at [n-grams](http://en.wikipedia.org/wiki/N-gram) helps to fix the problem of preserving word relationships, but for now let's focus on the second problem, sparsity.
It's difficult to deal directly with these sparse vectors (many linear algebra libraries do a poor job of handling sparse inputs), so often the next step is dimensionality reduction. For that we can refer to the field of [topic modeling](http://en.wikipedia.org/wiki/Topic_model): Techniques like [Latent Dirichlet Allocation](http://en.wikipedia.org/wiki/Latent_Dirichlet_allocation) (LDA) and [Latent Semantic Analysis](http://en.wikipedia.org/wiki/Latent_semantic_analysis) (LSA) allow the compression of these sparse vectors into dense vectors by representing a document as a combination of topics. You can fix the number of topics used, and in doing so fix the size of the output vector produced by LDA or LSA. This dimensionality reduction process drastically reduces the size of the input vector while attempting to lose a minimal amount of information.
Finally, after all of these conversions, you can feed the outputs of the topic modeling process into the inputs of your neural network.
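A minimal sketch of that pipeline (scikit-learn assumed; the documents and topic count are placeholders):
```
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["a gritty crime drama set in boston",
        "a romantic comedy about two chefs",
        "an animated family adventure with talking cars"]

bow = CountVectorizer().fit_transform(docs)           # sparse bag of words
lda = LatentDirichletAllocation(n_components=2, random_state=0)
dense = lda.fit_transform(bow)                        # dense document-topic vectors
print(dense.shape)  # (3, 2): fixed-size inputs suitable for a neural network
```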
| How can I use a string as input in a neural network? | There is no other way, I guess. Since each one is of a different type (categorical variable), you can only one-hot encode them.
|
877 | 1 | 878 | null | 8 | 214 | Well this looks like the most suited place for this question.
Every website collects data about its users, some just for usability and personalization, but the majority, like social networks, track every move on the web; some free apps on your phone scan text messages, call history and so on.
All this data siphoning is just for selling your profile for advertisers?
| What is the use of user data collection besides serving ads? | CC BY-SA 3.0 | null | 2014-07-31T18:52:56.307 | 2020-08-16T21:51:11.787 | null | null | 2798 | [
"data-mining"
] | A couple of days ago developers from one product company asked me how they can understand why new users were leaving their website. My first question to them was what these users' profiles looked like and how they were different from those who stayed.
Advertising is only the tip of the iceberg. User profiles (either filled in by users themselves or computed from users' behaviour) hold information about:
- user categories, i.e. what kind of people tend to use your website/product
- paying client portraits, i.e. who is more likely to use your paid services
- UX component performance, e.g. how long it takes people to find the button they need
- action performance comparison, e.g. what was more efficient - lower price for a weekend or propose gifts with each buy, etc.
So it's more about improving product and making better user experience rather than selling this data to advertisers.
| Is our data "Big Data" (Startup) | Yes, this is a how-long-is-a-piece-of-string question. I think it's good to beware of over-engineering, while also making sure you engineer for where you think you'll be in a year.
First I'd suggest you distinguish between processing and storage. Storm is a (stream) processing framework; NoSQL databases are a storage paradigm. These are not alternatives. The Hadoop ecosystem has HBase for NoSQL; I suspect Azure has some kind of stream processing story.
The bigger difference in your two alternatives is consuming a cloud provider's ecosystem vs Hadoop. The upside to Azure, or AWS, or GCE, is that these services optimize for integrating with each other, with billing, machine management, etc. The downside is being locked in to the cloud provider; you can't run Azure stuff anywhere but Azure. Hadoop takes more work to integrate since it's really a confederation of sometimes loosely-related projects. You're investing in both a distribution, and a place to run that distribution. But, you get a lot less lock-in, and probably more easy access to talent, and a broader choice of tools.
The Azure road is also a "big data" solution in that it has a lot of the scalability properties you want for big data, and the complexity as well. It does not strike me as an easier route. Do you need to invest in distributed/cloud anything at this scale? given your IoT-themed use case, I believe you will need to soon, if not now, so yes. You're not talking about gigabytes, but many terabytes in just the first year.
I'd give a fresh team 6-12 months to fully productionize something based on either of these platforms. That can certainly be staged as a POC, followed by more elaborate engineering.
|
887 | 1 | 889 | null | 1 | 100 | I am currently on a project that will build a model (train and test) on Client-side Web data, but evaluate this model on Server-side Web data. Unfortunately building the model on Server-side data is not an option, nor is it an option to evaluate this model on Client-side data.
This model will be based on metrics collected on specific visitors. This is a real time system that will be calculating a likelihood based on metrics collected while visitors browse the website.
I am looking for approaches to ensure the highest possible accuracy on the model evaluation.
So far I have the following ideas,
- Clean the Server-side data by removing webpages that are never seen Client-side.
- Collect additional Server-side data to make the Server-side data more closely resemble Client-side data.
- Collect data on the Client and send this data to the Server. This is possible and may be the best solution, but is currently undesirable.
- Build one or more models that estimate Client-side Visitor metrics from Server-side Visitor metrics and use these estimates in the Likelihood model.
Any other thoughts on evaluating over one Population while training (and testing) on another Population?
| Modelling on one Population and Evaluating on another Population | CC BY-SA 3.0 | null | 2014-08-02T00:07:09.267 | 2014-08-02T04:54:14.757 | null | null | 776 | [
"data-cleaning",
"model-evaluations"
] | The key question is whether the users you are getting client-side data from are from the same population of users you would get server-side data from. If that is true, then you aren't really training on one population and applying to another. The main difference is that the client-side data happened in the past (by necessity, unless you are constantly refitting your model) and the server-side data will come in the future.
Let's reformulate the question in terms of models rather than web clients and servers.
You are fitting a model on one dataset and applying it to another. That is the classic use of predictive modeling/machine learning. Models use features from the data to make estimates of some parameter or parameters. Once you have a fitted (and tested) model, all that you need is the same set of features to feed into the model to get your estimates.
Just make sure to model on a set of features (aka variables) that are available on the client-side and server-side. If that isn't possible, ask that question separately.
| How can I measure if a population has the same distribution as other? | What I'm gonna say might seem too simple, but I assume, it might not be bad to fit a multivariate distribution (like Gaussian) to each distribution and then figure out what the mean and covariance matrix are. Mean might not depict so much information but the variance and correlation that are exposed in the covariance matrix might be helpful.
For the case of determining if a variable is useful or not, analyzing its correlation with other parameters might turn out to be useful.
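For a per-variable check, a two-sample Kolmogorov-Smirnov test is a simple option; a sketch with scipy (an assumption about available tooling, using simulated data):
```
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
pop_a = rng.normal(0.0, 1.0, size=1000)   # a metric observed in population A
pop_b = rng.normal(0.1, 1.0, size=1000)   # the same metric in population B

stat, p_value = ks_2samp(pop_a, pop_b)
print(stat, p_value)  # a small p-value suggests the two distributions differ
```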
(These are my own thoughts)
|
896 | 1 | 932 | null | 4 | 1975 | I am trying to implement the Brown Clustering Algorithm.
Paper details: "Class-Based n-gram Models of Natural Language" by Brown et al
The algorithm is supposed to run in `O(|V|k^2)`, where `|V|` is the size of the vocabulary and k is the number of clusters. I am unable to implement it this efficiently. In fact, the best I can manage is `O(|V|k^3)`, which is too slow. My current implementation for the main part of the algorithm is as follows:
```
for w = number of clusters + 1 to |V|
{
word = next most frequent word in the corpus
assign word to a new cluster
initialize MaxQuality to 0
initialize ArgMax vector to (0,0)
for i = 0 to number of clusters - 1
{
for j = i to number of clusters
{
Quality = Mutual Information if we merge cluster i and cluster j
if Quality > MaxQuality
{
MaxQuality = Quality
ArgMax = (i,j)
}
}
}
}
```
I compute quality as follows:
```
1. Before entering the second loop compute the pre-merge quality i.e. quality before doing any merges.
2. Every time a cluster-pair merge step is considered:
i. assign quality := pre-merge quality
ii. quality = quality - any terms in the mutual information equation that contain cluster i or cluster j (pre-merge)
iii. quality = quality + any terms in the mutual information equation that contain (cluster i U cluster j) (post-merge)
```
In my implementation, the first loop has approx |V| iterations, the second and third loops approx k iterations each. Computing quality at each step requires approx a further k iterations. In total it runs in `O(|V|k^3)` time.
How do you get it to run in `O(|V|k^2)`?
| How to implement Brown Clustering Algorithm in O(|V|k^2) | CC BY-SA 3.0 | null | 2014-08-03T16:38:38.853 | 2017-10-04T12:03:07.733 | 2017-10-04T12:03:07.733 | 11097 | 2817 | [
"nlp",
"efficiency",
"clustering"
] | I have managed to resolve this. There is an excellent and thorough explanation of the optimization steps in the following thesis: [Semi-Supervised Learning for Natural Language by Percy Liang](http://cs.stanford.edu/~pliang/papers/meng-thesis.pdf).
My mistake was trying to update the quality for all potential cluster pairs. Instead, you should initialize a table with the quality changes of doing each merge. Use this table to find the best merge, and then update the relevant terms that make up the table entries.
| How do I obtain the weight and variance of a k-means cluster? | It is valid to use k-means to initialize the EM for Mixture of Gaussian modeling. As you said, the mean of each component will be the average of all samples belonging to the same cluster (it depends on the clustering algorithm used; sometimes the centroid is not the average of the cluster but one of the samples). For the weight you can use the following: the weight of cluster x = the number of samples belonging to cluster x divided by the total number of samples. Thus, the cluster with the highest number of samples is the cluster with the highest weight. For the variance: just find the variance of all samples belonging to the same cluster.
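A small sketch of those calculations (numpy/scikit-learn assumed, with simulated data):
```
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 2, (50, 2))])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
for k in range(2):
    members = X[km.labels_ == k]
    weight = len(members) / len(X)             # cluster weight
    print(k, weight, members.mean(axis=0), members.var(axis=0))
```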
|
902 | 1 | 908 | null | 4 | 1903 | Are there any general rules that one can use to infer what can be learned/generalized from a particular data set? Suppose the dataset was taken from a sample of people. Can these rules be stated as functions of the sample or total population?
I understand the above may be vague, so a case scenario: Users participate in a search task, where the data are their queries, clicked results, and the HTML content (text only) of those results. Each of these are tagged with their user and timestamp. A user may generate a few pages - for a simple fact-finding task - or hundreds of pages - for a longer-term search task, like for class report.
Edit: In addition to generalizing about a population, given a sample, I'm interested in generalizing about an individual's overall search behavior, given a time slice. Theory and paper references are a plus!
| When is there enough data for generalization? | CC BY-SA 3.0 | null | 2014-08-04T19:10:57.187 | 2014-08-07T00:20:05.817 | 2014-08-04T19:23:09.483 | 1097 | 1097 | [
"machine-learning",
"data-mining",
"statistics",
"search"
] | It is my understanding that random sampling is a mandatory condition for making any generalization statements. IMHO, other parameters, such as sample size, just affect probability level (confidence) of generalization. Furthermore, clarifying the @ffriend's comment, I believe that you have to calculate needed sample size, based on desired values of confidence interval, effect size, statistical power and number of predictors (this is based on Cohen's work - see References section at the following link). For multiple regression, you can use the following calculator: [http://www.danielsoper.com/statcalc3/calc.aspx?id=1](http://www.danielsoper.com/statcalc3/calc.aspx?id=1).
More information on how to select, calculate and interpret effect sizes can be found in the following nice and comprehensive paper, which is freely available: [http://jpepsy.oxfordjournals.org/content/34/9/917.full](http://jpepsy.oxfordjournals.org/content/34/9/917.full).
If you're using `R` (and even, if you don't), you may find the following Web page on confidence intervals and R interesting and useful: [http://osc.centerforopenscience.org/static/CIs_in_r.html](http://osc.centerforopenscience.org/static/CIs_in_r.html).
Finally, the following comprehensive guide to survey sampling can be helpful, even if you're not using survey research designs. In my opinion, it contains a wealth of useful information on sampling methods, sampling size determination (including calculator) and much more: [http://home.ubalt.edu/ntsbarsh/stat-data/Surveys.htm](http://home.ubalt.edu/ntsbarsh/stat-data/Surveys.htm).
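As a small illustration of the kind of calculation those calculators perform, here is a sketch with statsmodels (the effect size, alpha and power values are only example assumptions):
```
from statsmodels.stats.power import TTestIndPower

# Required sample size per group for a two-sample t-test under these assumptions.
n = TTestIndPower().solve_power(effect_size=0.3, alpha=0.05, power=0.8)
print(round(n))
```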
| Robustness vs Generalization | Check [this paper](https://arxiv.org/pdf/1804.00504.pdf). Its introduction gives a very good definition of both:
>
The classic approach towards the assessment of any machine learning model
revolves around the evaluation of its generalizability i.e. its performance on unseen test scenarios.
>
Evaluating such models on
an available non-overlapping test set is popular, yet significantly limited in its
ability to explore the model’s resilience to outliers and noisy data / labels (i.e.
robustness).
For generalizability, unseen data does not have to be noisy or contain more outliers compared to original data. You can simply split your original data set into 3: training, validation and test; use training and validation for the model development and keep test data unseen for a final check after cross validation. This will check your model's generalizability. Test set created in this way won't be more noise or have more outliers compared to the other two.
|
917 | 1 | 918 | null | 12 | 2873 | I am attempting to solve a set of equations which has 40 independent variables (x1, ..., x40) and one dependent variable (y). The total number of equations (number of rows) is ~300, and I want to solve for the set of 40 coefficients that minimizes the total sum-of-square error between y and the predicted value.
My problem is that the matrix is very sparse and I do not know the best way to solve the system of equations with sparse data. An example of the dataset is shown below:
```
y x1 x2 x3 x4 x5 x6 ... x40
87169 14 0 1 0 0 2 ... 0
46449 0 0 4 0 1 4 ... 12
846449 0 0 0 0 0 3 ... 0
....
```
I am currently using a Genetic Algorithm to solve this and the results are coming out
with roughly a factor of two difference between observed and expected.
Can anyone suggest different methods or techniques which are capable of solving a set of equations with sparse data.
| Solving a system of equations with sparse data | CC BY-SA 3.0 | null | 2014-08-05T20:45:01.383 | 2016-10-18T15:44:16.657 | 2016-10-18T15:44:16.657 | 20343 | 802 | [
"machine-learning",
"regression",
"algorithms",
"genetic"
] | If I understand you correctly, this is the case of multiple linear regression with sparse data (sparse regression). Assuming that, I hope you will find the following resources useful.
1) NCSU lecture slides on sparse regression with overview of algorithms, notes, formulas, graphics and references to literature: [http://www.stat.ncsu.edu/people/zhou/courses/st810/notes/lect23sparse.pdf](http://www.stat.ncsu.edu/people/zhou/courses/st810/notes/lect23sparse.pdf)
2) `R` ecosystem offers many packages, useful for sparse regression analysis, including:
- Matrix (http://cran.r-project.org/web/packages/Matrix)
- SparseM (http://cran.r-project.org/web/packages/SparseM)
- MatrixModels (http://cran.r-project.org/web/packages/MatrixModels)
- glmnet (http://cran.r-project.org/web/packages/glmnet)
- flare (http://cran.r-project.org/web/packages/flare)
3) A blog post with an example of sparse regression solution, based on `SparseM`: [http://aleph-nought.blogspot.com/2012/03/multiple-linear-regression-with-sparse.html](http://aleph-nought.blogspot.com/2012/03/multiple-linear-regression-with-sparse.html)
4) A blog post on using sparse matrices in R, which includes a primer on using `glmnet`: [http://www.johnmyleswhite.com/notebook/2011/10/31/using-sparse-matrices-in-r](http://www.johnmyleswhite.com/notebook/2011/10/31/using-sparse-matrices-in-r)
5) More examples and some discussion on the topic can be found on StackOverflow: [https://stackoverflow.com/questions/3169371/large-scale-regression-in-r-with-a-sparse-feature-matrix](https://stackoverflow.com/questions/3169371/large-scale-regression-in-r-with-a-sparse-feature-matrix)
UPDATE (based on your comment):
If you're trying to solve an LP problem with constraints, you may find this theoretical paper useful: [http://web.stanford.edu/group/SOL/papers/gmsw84.pdf](http://web.stanford.edu/group/SOL/papers/gmsw84.pdf).
Also, check R package limSolve: [http://cran.r-project.org/web/packages/limSolve](http://cran.r-project.org/web/packages/limSolve). And, in general, check packages in CRAN Task View "Optimization and Mathematical Programming": [http://cran.r-project.org/web/views/Optimization.html](http://cran.r-project.org/web/views/Optimization.html).
Finally, check the book "Using R for Numerical Analysis in Science and Engineering" (by Victor A. Bloomfield). It has a section on solving systems of equations, represented by sparse matrices (section 5.7, pages 99-104), which includes examples, based on some of the above-mentioned packages: [http://books.google.com/books?id=9ph_AwAAQBAJ&pg=PA99&lpg=PA99&dq=r+limsolve+sparse+matrix&source=bl&ots=PHDE8nXljQ&sig=sPi4n5Wk0M02ywkubq7R7KD_b04&hl=en&sa=X&ei=FZjiU-ioIcjmsATGkYDAAg&ved=0CDUQ6AEwAw#v=onepage&q=r%20limsolve%20sparse%20matrix&f=false](http://books.google.com/books?id=9ph_AwAAQBAJ&pg=PA99&lpg=PA99&dq=r+limsolve+sparse+matrix&source=bl&ots=PHDE8nXljQ&sig=sPi4n5Wk0M02ywkubq7R7KD_b04&hl=en&sa=X&ei=FZjiU-ioIcjmsATGkYDAAg&ved=0CDUQ6AEwAw#v=onepage&q=r%20limsolve%20sparse%20matrix&f=false).
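If you end up working outside R, the same kind of sparse least-squares fit can be sketched in Python with scipy (synthetic data, purely for illustration):
```
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
A = sparse_random(300, 40, density=0.1, random_state=0, format="csr")
coef_true = rng.normal(size=40)
y = A @ coef_true

coef_est = lsqr(A, y)[0]   # the first element of the returned tuple is the solution
print(np.abs(coef_est - coef_true).max())
```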
| Sparse Matrix - Effect and Solution | The idea is really simple, just look at some online resources like [https://en.m.wikipedia.org/wiki/Sparse_matrix](https://en.m.wikipedia.org/wiki/Sparse_matrix)
The implementation is also really simple. For pandas this page might help you [https://pandas.pydata.org/pandas-docs/stable/sparse.html](https://pandas.pydata.org/pandas-docs/stable/sparse.html)
|
919 | 1 | 939 | null | 9 | 2286 | Data set looks like:
- 25000 observations
- up to 15 predictors of different types: numeric, multi-class categorical, binary
- target variable is binary
Which cross validation method is typical for this type of problems?
By default I'm using K-Fold. How many folds is enough in this case? (One of the models I use is random forest, which is time consuming...)
| Which cross-validation type best suits to binary classification problem | CC BY-SA 3.0 | null | 2014-08-06T08:41:44.967 | 2017-02-25T19:47:29.907 | null | null | 97 | [
"classification",
"cross-validation"
] | You will have best results if you care to build the folds so that each variable (and most importantly the target variable) is approximately identically distributed in each fold. This is called, when applied to the target variable, stratified k-fold. One approach is to cluster the inputs and make sure each fold contains the same number of instances from each cluster proportional to their size.
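A minimal sketch of stratified k-fold with scikit-learn (synthetic data standing in for your 25000 x 15 set):
```
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=25000, n_features=15,
                           weights=[0.8, 0.2], random_state=0)

# Stratification keeps the binary target's class ratio roughly equal per fold.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(RandomForestClassifier(n_estimators=100), X, y, cv=cv)
print(scores.mean())
```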
| Which machine learning algorithms are more suitable for binary classification? | If you want to be highly literal, logistic regression is excellent for binary classes but completely inappropriate for $3+$ classes. No worries: there is multinomial logistic regression, the theory of which mimics binary logistic regression (one might consider logistic regression to be a special case of multinomial logistic regression). Depending on the sophistication of my audience, I might be comfortable referring to "logistic regression" and leaving it to them to realize that I mean "multinomial" logistic regression when there are $3+$ categories and "binary" logistic regression when there are $2$ categories.
Random forest can do the binary case but also the multiclass case. Ditto for k-nearest neighbors, support vector machines, and neural networks.
I cannot think of a model for binary classes that lacks a multiclass analogue.
|
922 | 1 | 925 | null | 3 | 60 | I have a set of documents and I want to classify them as true or false.
My question is: should I take all the words in the documents and classify them based on the similarity of words across these documents, or should I take only some words that I am interested in and compare those with the documents? Which one is more efficient for classifying documents and can work with SVM?
| Can I classify set of documents using classifying method using limited number of concepts ? | CC BY-SA 3.0 | null | 2014-08-06T09:08:08.113 | 2014-08-06T13:15:51.460 | null | null | 2850 | [
"machine-learning",
"classification",
"text-mining"
] | Both methods work. However, if you retain all words in documents you would essentially be working with high dimensional vectors (each term representing one dimension). Consequently, a classifier, e.g. SVM, would take more time to converge.
It is thus a standard practice to reduce the term-space dimensionality by pre-processing steps such as stop-word removal, stemming, Principal Component Analysis (PCA) etc.
One approach could be to analyze the document corpora by a topic modelling technique such as LDA and then retaining only those words which are representative of the topics, i.e. those which have high membership values in a single topic class.
Another approach (inspired by information retrieval) could be to retain the top K tf-idf terms from each document.
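A short sketch of the tf-idf route (scikit-learn assumed; a capped vocabulary is used here as a simpler stand-in for the per-document top-K idea, and the documents and labels are invented):
```
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

docs = ["the claim is supported by the cited study",
        "this statement contradicts the official report"]
labels = [1, 0]  # true / false

tfidf = TfidfVectorizer(stop_words="english", max_features=1000)
X = tfidf.fit_transform(docs)
clf = LinearSVC().fit(X, labels)
print(clf.predict(tfidf.transform(["the cited study supports it"])))
```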
| How to create a Document Categorization Classifier for different contexts of Documents | This depends on whether you have enough training data for all languages. If yes, doing language ID and language-specific models might be a good choice, especially if there is a BERT-like model available for each language.
An alternative would be to do the language ID and then machine-translate the input into English and only train an English classifier. You can use, e.g., the high-quality [pre-trained Marian models](https://github.com/Helsinki-NLP/Opus-MT) recently published by the University of Helsinki.
Otherwise, I would use [pre-trained multilingual representations](https://huggingface.co/transformers/multilingual.html) (probably [XLM-R](https://huggingface.co/transformers/model_doc/xlmroberta.html) that is much better than Multilingual BERT) to get representation and train a single classifier for all languages. The multilingual representations seem to have even some [zero-shot abilities](https://www.aclweb.org/anthology/P19-1493), i.e., the classifiers seem to generalize even for languages that are no in the training data.
|
927 | 1 | 929 | null | 2 | 2862 | I'm working on a dataset with lots of NA values using sklearn and `pandas.DataFrame`. I implemented different imputation strategies for different columns of the DataFrame based on column names. For example, NAs in predictor `'var1'` I impute with 0's and in `'var2'` with the mean.
When I try to cross validate my model using `train_test_split` it returns me a `nparray` which does not have column names. How can I impute missing values in this nparray?
P.S. I do not impute missing values in the original data set before splitting on purpose so I keep test and validation sets separately.
| how to impute missing values on numpy array created by train_test_split from pandas.DataFrame? | CC BY-SA 4.0 | null | 2014-08-06T15:07:07.457 | 2020-07-31T14:25:42.320 | 2020-07-31T14:25:42.320 | 98307 | 2854 | [
"pandas",
"cross-validation",
"scikit-learn"
] | Can you just cast your `np.array` from `train_test_split` back into a `pandas.DataFrame` so you can carry out your same strategy? This is very similar to what I do when dealing with pandas and scikit. For example,
```
# assuming `df` is your original DataFrame with named columns
train_arr, test_arr = train_test_split(df.values)
train_df = pd.DataFrame(train_arr, columns=df.columns)
test_df = pd.DataFrame(test_arr, columns=df.columns)
```
| ValueError when trying to split DataFrame into train/test | You are trying to unpack the `data` variable into two separate pieces, each of which should contain another two outputs (an `x` and `y`) variable. However, `data` is simply a single output, which is a pandas dataframe for which unpacking doesn't work. Based on the code you provided it seems you are trying to split your data into a training and test dataset. This does not work this way if you have the data stored as a single dataframe. You will have to split the manually yourself into a feature array and an array of values you are trying to predict, which you can then split into a training and test dataset using the [train_test_split](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) function from `scikit-learn`.
|
937 | 1 | 998 | null | 59 | 121439 | I am working on a problem with too many features and training my models takes way too long. I implemented a forward selection algorithm to choose features.
However, I was wondering does scikit-learn have a forward selection/stepwise regression algorithm?
| Does scikit-learn have a forward selection/stepwise regression algorithm? | CC BY-SA 4.0 | null | 2014-08-07T15:33:43.793 | 2021-08-15T21:08:31.770 | 2021-08-15T02:07:30.567 | 29169 | 2854 | [
"feature-selection",
"scikit-learn"
] | No, scikit-learn does not seem to have a forward selection algorithm. However, it does provide recursive feature elimination, which is a greedy feature elimination algorithm similar to sequential backward selection. See the [documentation here](http://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.RFE.html)
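A minimal sketch of recursive feature elimination (synthetic data; the feature counts are placeholders):
```
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=30, random_state=0)

rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=10)
rfe.fit(X, y)
print(rfe.support_)   # boolean mask of the features that were kept
```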
| Scikit-learn pipeline with scaling, dimensionality reduction, average prediction of multiple regression models, and grid search cross validation | You might be looking for [sklearn.ensemble.VotingRegressor](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.VotingRegressor.html), which takes the mean of two regression models.
Here is an example to get you started:
```
from sklearn.datasets import make_regression
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor, VotingRegressor
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
# Make fake data
X, y = make_regression(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y,random_state=42)
pipe = Pipeline([('scl', StandardScaler()),
('pca', PCA()),
('vr', VotingRegressor([('gbr', GradientBoostingRegressor()), ('rfr', RandomForestRegressor())]))
])
search_space = [{'vr__gbr__learning_rate': [.07, .1, .15]}]
gs_cv = GridSearchCV(estimator=pipe,
param_grid=search_space,
n_jobs=-1)
gs_cv.fit(X_train, y_train)
gs_cv.predict(X_test)
```
|
945 | 1 | 947 | null | 5 | 2037 | We have a classification algorithm to categorize Java exceptions in Production.
This algorithm is based on hierarchical human-defined rules, so when a bunch of text forming an exception comes up, it determines what kind of exception it is (development, availability, configuration, etc.) and the responsible component (the innermost component responsible for the exception). In Java an exception can have several causing exceptions, and the whole chain must be analyzed.
For example, given the following example exception:
```
com.myapp.CustomException: Error printing ...
... (stack)
Caused by: com.foo.webservice.RemoteException: Unable to communicate ...
... (stack)
Caused by: com.acme.PrintException: PrintServer002: Timeout ....
... (stack)
```
First of all, our algorithm splits the whole stack into three isolated exceptions. Afterwards it starts analyzing these exceptions, starting from the innermost one. In this case, it determines that this exception (the second "caused by") is of type `Availability` and that the responsible component is a "print server". This is because there is a rule matching the word `Timeout` that is associated with the `Availability` type. There is also a rule that matches `com.acme.PrintException` and determines that the responsible component is a print server. As all the information needed is determined using only the innermost exception, the upper exceptions are ignored, but this is not always the case.
As you can see this kind of approach is very complex (and chaotic), as a human has to create new rules as new exceptions appear. Besides, the new rules have to be compatible with the current ones, because a new rule for classifying a new exception must not change the classification of any of the already classified exceptions.
We are thinking about using Machine Learning to automate this process. Obviously, I am not asking for a solution here as I know the complexity but I'd really appreciate some advice to achieve our goal.
| Classifying Java exceptions | CC BY-SA 3.0 | null | 2014-08-08T10:01:25.400 | 2022-07-08T14:46:48.740 | null | null | 2878 | [
"machine-learning",
"classification",
"algorithms"
] | First of all, some basics of classification (and in general any supervised ML tasks), just to make sure we have same set of concepts in mind.
Any supervised ML algorithm consists of at least 2 components:
- Dataset to train and test on.
- Algorithm(s) to handle these data.
The training dataset consists of a set of pairs `(x, y)`, where `x` is a vector of features and `y` is the predicted variable. The predicted variable is just what you want to know, i.e. in your case it is the exception type. Features are more tricky. You cannot just throw raw text into an algorithm, you need to extract meaningful parts of it and organize them as feature vectors first. You've already mentioned a couple of useful features - exception class name (e.g. `com.acme.PrintException`) and contained words ("Timeout"). All you need is to translate your raw exceptions (and human-categorized exception types) into a suitable dataset, e.g.:
```
ex_class contains_timeout ... | ex_type
-----------------------------------------------------------
[com.acme.PrintException, 1 , ...] | Availability
[java.lang.Exception , 0 , ...] | Network
...
```
This representation is already much better for ML algorithms. But which one to take?
Taking into account nature of the task and your current approach natural choice is to use decision trees. This class of algorithms will compute optimal decision criteria for all your exception types and print out resulting tree. This is especially useful, because you will have possibility to manually inspect how decision is made and see how much it corresponds to your manually-crafted rules.
There's, however, possibility that some exceptions with exactly the same features will belong to different exception types. In this case probabilistic approach may work well. Despite its name, Naive Bayes classifier works pretty well in most cases. There's one issue with NB and our dataset representation, though: dataset contains categorical variables, and Naive Bayes can work with numerical attributes only*. Standard way to overcome this problem is to use [dummy variables](http://en.wikipedia.org/wiki/Dummy_variable_%28statistics%29). In short, dummy variables are binary variables that simply indicate whether specific category presents or not. For example, single variable `ex_class` with values `{com.acme.PrintException, java.lang.Exception, ...}`, etc. may be split into several variables `ex_class_printexception`, `ex_class_exception`, etc. with values `{0, 1}`:
```
ex_class_printexception ex_class_exception contains_timeout | ex_type
-----------------------------------------------------------------------
[1, , 0 , 1 ] | Availability
[0, , 1 , 0 ] | Network
```
One last algorithm to try is Support Vector Machines (SVM). It neither provides helpful visualisation, nor is probabilistic, but often gives superior results.
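To tie the dataset representation and the decision-tree suggestion together, here is a toy sketch (pandas/scikit-learn assumed; the rows mirror the tables above and are invented):
```
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

data = pd.DataFrame({
    "ex_class": ["com.acme.PrintException", "java.lang.Exception",
                 "com.acme.PrintException"],
    "contains_timeout": [1, 0, 1],
    "ex_type": ["Availability", "Network", "Availability"],
})

X = pd.get_dummies(data[["ex_class", "contains_timeout"]])  # dummy variables
y = data["ex_type"]

tree = DecisionTreeClassifier().fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))     # inspect the learned rules
```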
---
* - in fact, neither Bayes theorem, nor Naive Bayes itself state anything about variable type, but most software packages that come to mind rely on numerical features.
| Error Analysis for misclassification of text documents | It seems to me that both of your questions could be answered by storing the retrieved neighbours on your test set and giving them a thorough analysis. Assuming you are using a unigram + tf-idf text representation and a cosine similarity distance metric for your K-NN retrieval, it would be trivial once you have a classified document to display the K neighbours and analyze their common unigrams and their respective tf-idf weights in order to see what influenced the classification. Moreover, doing it on your misclassified documents could help you understand which features caused the error.
I'd be interested to know if there is a more systematized approach to those issues.
|
955 | 1 | 988 | null | 8 | 827 | Does anyone know a library for performing coreference resolution on German texts?
As far as I know, OpenNLP and Stanford NLP are not able to perform coreference resolution for German Texts.
The only tool that I know is [CorZu](http://www.cl.uzh.ch/static/news.php?om=view&nid=163) which is a python library.
| Coreference Resolution for German Texts | CC BY-SA 4.0 | null | 2014-08-11T12:25:47.700 | 2019-06-07T17:10:38.600 | 2019-06-07T17:10:38.600 | 29169 | 979 | [
"machine-learning",
"nlp"
] | Here are a couple of tools that may be worth a look:
- Bart, an open source tool that has been used for several languages, including German. Available from the website
- Sucre is a tool developed at the University of Stuttgart. I don't know if it's available easily. You can see this paper about it.
| How should I treat these non-English documents in the NLP task? | You can use these tips:
>
Should I exclude them for the corpus and from training the model?
You can do this if you have enough data. But I think excluding 500 docs from 30K docs won't make a big difference in training. The model's generalisation power won't be compromised.
>
should I manually translate them (Requesting natives from each
language to translate it for me) and include them in the final corpus?
You should do this only when you need the 500 docs as they hold significant information. I would not recommend this method personally.
>
should I use Google translate/DeepL to translate these non-English
documents into English and then include them in the final corpus?
That is something you can do. Using Google Translate could be a help if the structure of the sentences in the docs is simple and sober. You can get good translations without the need of any natives of other countries.
I think you should opt for this method.
Conclusion:
- Translate the docs which hold importance using Google Translate.
- Else, omit them if you can spare the 500 docs. They will not affect the model's performance significantly.
Tip:
I don't think that you can paste the 500 docs in the Google Translate console. This will be time consuming if the docs belong to different languages. Hence, try to omit these documents. Otherwise, you will require a mini application which could use some translation API to transform the docs.
|
962 | 1 | 966 | null | 30 | 23334 | I know the difference between clustering and classification in machine learning, but I don't understand the difference between text classification and topic modeling for documents. Can I use topic modeling over documents to identify a topic? Can I use classification methods to classify the text inside these documents?
| What is difference between text classification and topic models? | CC BY-SA 4.0 | null | 2014-08-12T03:50:52.303 | 2021-02-09T01:33:51.537 | 2021-02-09T01:33:51.537 | 29169 | 2916 | [
"classification",
"text-mining",
"topic-model"
] | Text Classification
I give you a bunch of documents, each of which has a label attached. I ask you to learn why you think the contents of the documents have been given these labels based on their words. Then I give you new documents and ask what you think the label for each one should be. The labels have meaning to me, not to you necessarily.
Topic Modeling
I give you a bunch of documents, without labels. I ask you to explain why the documents have the words they do by identifying some topics that each is "about". You tell me the topics, by telling me how much of each is in each document, and I decide what the topics "mean" if anything.
You'd have to clarify what you mean by "identify one topic" or "classify the text".
| Which are the worse machine learning models for text classifications? | First, the question is too broad because there are many different kinds of text classification tasks. For example one wouldn't use the same approach for say spam detection and author profiling (e.g. predict the gender of the author), two tasks which are technically text classification but have little in common (and there are many others).
Second, even with a more specific kind of problem, the question of the type of model is misleading because a lot of what makes a ML system perform better than another in text classification is due to other things: the type and amount of training data of course, but also crucially the features being used. There are many options in terms of representing text as features, and these different options usually have a massive impact on performance. I even think that most of the time the choice of a type of classification model does not matter as much as the design of the features.
Finally I'm actually going to answer the question but probably not in the way OP expects: the worst model in any classification task is exactly like the best model, but it swaps the answers in order to have as many wrong predictions as possible (e.g. class 1 -> class 2, class 2 -> class 3, .., class N -> class 1).
Since it's a lot of work to implement the best classifier just to obtain the worst one, a close to worst one can be done with a minority baseline classifier: just predict every instance as the least frequent class in the training data.
I hope a few of the things I said will be helpful, even though it's probably not what OP wished for! :)
|
979 | 1 | 980 | null | 17 | 12372 | I have the problem of clustering a huge amount of sentences into groups by their meanings. This is similar to the problem where you have lots of sentences and want to group them by their meanings.
What algorithms are suggested to do this? I don't know number of clusters in advance (and as more data is coming clusters can change as well), what features are normally used to represent each sentence?
I'm trying now the simplest features with just list of words and distance between sentences defined as:
$|(A \cup B) \setminus (A \cap B)| \;/\; |A \cup B|$
(A and B are corresponding sets of words in sentence A and B)
Does it make sense at all?
I'm trying to apply [Mean-Shift](http://scikit-learn.org/stable/auto_examples/cluster/plot_mean_shift.html#example-cluster-plot-mean-shift-py) algorithm from scikit library to this distance, as it does not require number of clusters in advance.
If anyone will advise better methods/approaches for the problem - it will be very much appreciated as I'm still new to the topic.
| Algorithms for text clustering | CC BY-SA 4.0 | null | 2014-08-15T13:10:20.937 | 2021-06-20T19:21:33.163 | 2021-03-13T23:09:28.130 | 29169 | 2958 | [
"clustering",
"text-mining",
"algorithms",
"scikit-learn"
] | Check the [Stanford NLP Group](http://www-nlp.stanford.edu/software)'s open source software, in particular, [Stanford Classifier](http://www-nlp.stanford.edu/software/classifier.shtml). The software is written in `Java`, which will likely delight you, but also has bindings for some other languages. Note, the licensing - if you plan to use their code in commercial products, you have to acquire commercial license.
Another interesting set of open source libraries, IMHO suitable for this task and much more, is [parallel framework for machine learning GraphLab](http://select.cs.cmu.edu/code/graphlab), which includes [clustering library](http://select.cs.cmu.edu/code/graphlab/clustering.html), implementing various clustering algorithms. It is especially suitable for very large volume of data (like you have), as it implements `MapReduce` model and, thus, supports multicore and multiprocessor parallel processing.
You most likely are aware of the following, but I will mention it just in case. [Natural Language Toolkit (NLTK)](http://www.nltk.org) for `Python` contains modules for clustering/classifying/categorizing text. Check the relevant chapter in the [NLTK Book](http://www.nltk.org/book/ch06.html).
UPDATE:
Speaking of algorithms, it seems that you've tried most of the ones from `scikit-learn`, such as illustrated in [this](http://scikit-learn.org/stable/auto_examples/applications/topics_extraction_with_nmf.html) topic extraction example. However, you may find useful other libraries, which implement a wide variety of clustering algorithms, including Non-Negative Matrix Factorization (NMF). One of such libraries is [Python Matrix Factorization (PyMF)](https://code.google.com/p/pymf) ([source code](https://github.com/nils-werner/pymf)). Another, even more interesting, library, also Python-based, is [NIMFA](http://nimfa.biolab.si), which implements various NMF algorithms. Here's a [research paper](http://jmlr.org/papers/volume13/zitnik12a/zitnik12a.pdf), describing `NIMFA`. [Here's](http://nimfa.biolab.si/nimfa.examples.documents.html) an example from its documentation, which presents the solution for very similar text processing problem of topic clustering.
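To illustrate the NMF idea without committing to a particular library, here is a sketch using scikit-learn's implementation as a stand-in for PyMF/NIMFA (the sentences and topic count are invented):
```
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = ["the cat sat on the mat",
             "a dog chased the cat",
             "stock prices fell sharply today",
             "markets rallied after the report"]

X = TfidfVectorizer(stop_words="english").fit_transform(sentences)

nmf = NMF(n_components=2, random_state=0)
W = nmf.fit_transform(X)          # sentence-topic weights
print(W.argmax(axis=1))           # a crude cluster assignment per sentence
```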
| Using Clustering in text processing | I don't know if you have ever read SenseCluster by Ted Pedersen: [http://senseclusters.sourceforge.net/](http://senseclusters.sourceforge.net/). It is a very good reference for sense clustering.
Also, when you analyze words, remember that "computer", "computers", "computing", ... all represent one concept, and so should be only one feature. This is very important for a correct analysis.
As for the clustering algorithm, you could use [hierarchical clustering](http://en.wikipedia.org/wiki/Hierarchical_clustering). At each step of the algorithm, you merge the two most similar texts according to their features (using a measure of dissimilarity, for example Euclidean distance). With that measure of dissimilarity, you can also find a good number of clusters, and so a good clustering for your texts and articles.
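A minimal SciPy sketch of that idea, using a made-up dissimilarity matrix in place of real text features (the numbers are invented):

```
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Toy symmetric dissimilarity matrix for 4 texts (zero diagonal)
D = np.array([[0.0, 0.2, 0.9, 0.8],
              [0.2, 0.0, 0.8, 0.9],
              [0.9, 0.8, 0.0, 0.1],
              [0.8, 0.9, 0.1, 0.0]])

# Average-linkage agglomerative clustering on the condensed distance matrix
Z = linkage(squareform(D), method="average")

# Cutting the tree at a dissimilarity threshold avoids fixing k in advance
print(fcluster(Z, t=0.5, criterion="distance"))  # e.g. [1 1 2 2]
```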
Good luck :)
|
992 | 1 | 999 | null | 10 | 492 | I've been analyzing a data set of ~400k records and 9 variables. The dependent variable is binary. I've fitted a logistic regression, a regression tree, a random forest, and a gradient boosted tree. All of them give virtually identical goodness-of-fit numbers when I validate them on another data set.
Why is this so? I'm guessing that it's because my observation-to-variable ratio is so high. If this is correct, at what observation-to-variable ratio would different models start to give different results?
| Why might several types of models give almost identical results? | CC BY-SA 3.0 | null | 2014-08-18T14:56:13.800 | 2014-08-20T11:06:54.447 | 2014-08-20T11:06:54.447 | 97 | 1241 | [
"data-mining",
"classification",
"binary"
] | This result means that whatever method you use, you are able to get reasonably close to the optimal decision rule (aka [Bayes rule](http://en.wikipedia.org/wiki/Admissible_decision_rule#Bayes_rules_and_generalized_Bayes_rules)). The underlying reasons have been explained in Hastie, Tibshirani and Friedman's ["Elements of Statistical Learning"](http://statweb.stanford.edu/~tibs/ElemStatLearn/). They demonstrated how the different methods perform by comparing Figs. 2.1, 2.2, 2.3, 5.11 (in my first edition -- in the section on multidimensional splines), 12.2, 12.3 (support vector machines), and probably some others. If you have not read that book, you need to drop everything RIGHT NOW and read it. (I mean, it isn't worth losing your job, but it is worth missing a homework or two if you are a student.)
I don't think that the observation-to-variable ratio is the explanation. In light of my rationale offered above, it is the relatively simple form of the boundary separating your classes in the multidimensional space that all of the methods you tried have been able to identify.
| Multiple models have extreme differences during evaluation | A few thoughts:
- The first thing I would check is whether the other models overfit. You could check this by comparing the performance between the training set and the test set (see the short sketch after this list).
- Also there's something a bit strange about k-NN always predicting the majority class. This would happen only if any instance is always closer to more majority instances than minority instances. In this case there's something wrong with either the features or the distance measure.
- 100k instances looks like a large dataset but with only 6 features it's possible that the data contains many duplicates and/or near-duplicates which don't bring any information for the model. In general it's possible that the features are simply not good indicators, although in this case the decision tree models would fail as well.
- The better performance of the tree models points to something discontinuous in the features (by the way, you didn't mention whether they are numerical or categorical). Decision trees and especially random forests can handle discontinuity, but models like logistic regression might have trouble with it.
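On the first point, a minimal sketch of the train-versus-test comparison (synthetic data; no claim about the poster's actual setup):

```
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: 6 features, binary target
X, y = make_classification(n_samples=5000, n_features=6, n_informative=4,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
# A large gap between these two numbers is the classic sign of overfitting
print("train accuracy:", model.score(X_tr, y_tr))
print("test accuracy :", model.score(X_te, y_te))
```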
|
1002 | 1 | 1006 | null | 10 | 1420 | Caveat: I am a complete beginner when it comes to machine learning, but eager to learn.
I have a large dataset and I'm trying to find patterns in it. There may or may not be correlations across the data, either with known variables, or with variables that are contained in the data but which I haven't yet realised are actually variables / relevant.
I'm guessing this would be a familiar problem in the world of data analysis, so I have a few questions:
- The 'silver bullet' would be to throw all this data into a stats / data analysis program and have it crunch the data, looking for known / unknown patterns and trying to find relations. Is SPSS suitable, or are there other applications which may be better suited?
- Should I learn a language like R, and figure out how to manually process the data? Wouldn't this compromise finding relations, as I would have to manually specify what to analyse and how?
- How would a professional data miner approach this problem and what steps would s/he take?
| What initial steps should I use to make sense of large data sets, and what tools should I use? | CC BY-SA 3.0 | null | 2014-08-19T17:50:52.583 | 2020-08-16T18:02:41.913 | 2016-07-17T14:45:20.330 | 9420 | 2861 | [
"machine-learning",
"data-mining",
"tools",
"beginner"
] | I will try to answer your questions, but before I'd like to note that using term "large dataset" is misleading, as "large" is a relative concept. You have to provide more details. If you're dealing with bid data, then this fact will most likely affect selection of preferred tools, approaches and algorithms for your data analysis. I hope that the following thoughts of mine on data analysis address your sub-questions. Please note that the numbering of my points does not match the numbering of your sub-questions. However, I believe that it better reflects general data analysis workflow, at least, how I understand it.
- Firstly, I think that you need to have at least some kind of conceptual model in mind (or, better, on paper). This model should guide you in your exploratory data analysis (EDA). The presence of a dependent variable (DV) in the model means that in your machine learning (ML) phase later in the analysis you will deal with so-called supervised ML, as opposed to unsupervised ML in the absence of an identified DV.
- Secondly, EDA is a crucial part. IMHO, EDA should include multiple iterations of producing descriptive statistics and data visualization, as you refine your understanding of the data. Not only will this phase give you valuable insights about your datasets, it will also feed your next important phase - data cleaning and transformation. Just throwing your raw data into a statistical software package won't give much - for any valid statistical analysis, data should be clean, correct and consistent. This is often the most time- and effort-consuming, but absolutely necessary, part. For more details on this topic, read this nice paper (by Hadley Wickham) and this (by Edwin de Jonge and Mark van der Loo).
- Now, as you're hopefully done with EDA as well as data cleaning and transformation, you're ready to start some more statistically involved phases. One such phase is exploratory factor analysis (EFA), which will allow you to extract the underlying structure of your data. For datasets with a large number of variables, a positive side effect of EFA is dimensionality reduction. And, while in that sense EFA is similar to principal components analysis (PCA) and other dimensionality reduction approaches, I think that EFA is more important as it allows you to refine your conceptual model of the phenomena that your data "describe", thus making sense of your datasets. Of course, in addition to EFA, you can/should perform regression analysis as well as apply machine learning techniques, based on your findings in previous phases (a short sketch follows this list).
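The list above distinguishes EFA from PCA; purely as a rough illustration of the dimensionality-reduction side, here is a PCA sketch on a built-in toy dataset (not the asker's data):

```
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Standardize first so that scale differences don't dominate the components
X = StandardScaler().fit_transform(load_iris().data)
pca = PCA(n_components=2).fit(X)

# Fraction of total variance captured by each retained component
print(pca.explained_variance_ratio_)
```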
Finally, a note on software tools. In my opinion, the current state of statistical software packages is at such a point that practically all major software packages have comparable offerings feature-wise. If you study or work in an organization that has certain policies and preferences in terms of software tools, then you are constrained by them. However, if that is not the case, I would heartily recommend open source statistical software, based on your comfort with its specific programming language, learning curve and your career perspectives. My current platform of choice is R Project, which offers mature, powerful, flexible, extensive and open statistical software, along with an amazing ecosystem of packages, experts and enthusiasts. Other nice choices include Python, Julia and specific open source software for processing big data, such as Hadoop, Spark, NoSQL databases, WEKA. For more examples of open source software for data mining, which include general and specific statistical and ML software, see this section of a [Wikipedia page](http://en.wikipedia.org/wiki/Data_mining#Free_open-source_data_mining_software_and_applications).
UPDATE: Forgot to mention [Rattle](http://rattle.togaware.com), which is also a very popular open source R-oriented GUI software for data mining.
| Best way to visualize huge amount of data | I have three suggestions that may help.
- Reduce the point size
- Make the points highly transparent
- Downsample the points
Since you do not provide any sample data, I will use some random data to illustrate.
```
## The purpose of S1 is to intermix the two classes at random
S1 = sample(3000000)
x = c(rnorm(2000000, 0, 1), rnorm(1000000, 3,1))[S1]
y = c(rnorm(2000000, 0, 1), rnorm(1000000, 3,1))[S1]
z = c(rep(1,2000000), rep(2,1000000))[S1]
plot(x,y, pch=20, col=rainbow(3)[z])
```
[](https://i.stack.imgur.com/NaivE.png)
The base plot without any adjustments is not very nice. Let's apply suggestions 1 and 2.
```
plot(x,y, pch=20, cex=0.4, col=rainbow(3, alpha=0.01)[z])
```
[](https://i.stack.imgur.com/eQM4Z.png)
Reducing the point size and making the points highly transparent helps some. This gives a better idea of the overlap between the two distributions.
If we downsample, we don't need quite as much transparency.
```
## The purpose of S2 is to downsample the data
S2 = sample(3000000, 100000)
plot(x[S2],y[S2], pch=20, cex=0.4, col=rainbow(3, alpha=0.1)[z[S2]])
```
[](https://i.stack.imgur.com/gKlZi.png)
This gives a different view that provides a similar, but not identical understanding of the two distributions.
These are not magic, but I think that they are helpful.
|
1024 | 1 | 1030 | null | 3 | 194 | I've fit a GLM (Poisson) to a data set where one of the variables is categorical for the year a customer bought a product from my company, ranging from 1999 to 2012. There's a linear trend of the coefficients for the values of the variable as the year of sale increases.
Is there any problem with trying to improve predictions for 2013 and maybe 2014 by extrapolating to get the coefficients for those years?
| Extrapolating GLM coefficients for year a product was sold into future years? | CC BY-SA 3.0 | null | 2014-08-23T13:47:01.907 | 2014-08-26T11:36:01.420 | 2014-08-26T11:36:01.420 | 21 | 1241 | [
"statistics",
"glm",
"regression"
] | I believe that this is a case for applying time series analysis, in particular time series forecasting ([http://en.wikipedia.org/wiki/Time_series](http://en.wikipedia.org/wiki/Time_series)). Consider the following resources on time series regression:
- http://www.wiley.com/WileyCDA/WileyTitle/productCd-0471363553.html
- http://www.stats.uwo.ca/faculty/aim/tsar/tsar.pdf (especially section 4.6)
- http://arxiv.org/abs/0802.0219 (Bayesian approach)
| Predict the date an item will be sold using machine learning | A machine learning problem can be separated into a few modular parts. Of course these are all massive in reach and possibility. However, every problem you encounter should be thought of in this way at first. You can then skip things where you feel necessary.
- Data pre-processing and feature extraction
- The model
- Post-processing
## Data pre-processing and feature extraction
This and feature extraction are the two most important parts of a machine learning technique. That's right, NOT THE MODEL. If you have good features then even a very simple model will get amazing results.
Data pre-processing takes your raw data and remolds it to be better suited to machine learning algorithms. This means pulling out important statistics from your data or converting your data into other formats so that it is more representative. For example if you are using a technique which is sensitive to range, then you should normalize all your features. If you are using text data you should build word vectors. There are countless ways pre-processing and feature extraction can be implemented.
Then, you want to use feature selection. Not all of the information you extracted from your data will be useful. There are machine learning algorithms such as PCA, LDA, cross-correlation, etc., which will select the features that are the most representative and ignore the rest.
In your case
First, let's consider the data pre-processing. You notice that type might not be an integer value. This may cause problems when using most machine learning algorithms. You will want to bin these different types and map them onto numbers.
Feature selection: besides using the techniques I outlined above, you should also notice that productID is a useless feature. It should for sure NOT be included in your model. It will just confuse the model and sway it.
As a general rule of thumb, the suggested amount of data for shallow machine learning models is $10 \times \#features$, so you are limited by the size of your dataset. Also, make sure the outputs are quite well distributed. If you have some skew in your dataset, like a lot of examples where the item was sold right away, then the model will learn this tendency. You do not want this.
## The Model
Now that you have your feature-space, which might be entirely different from the original columns you posted, it is time to choose a model. You are trying to estimate the time of sale for an algorithm. Thus, this can be done in two different ways. Either as a classifier problem or as a regression problem.
The classifier problem would separate the different times of sales into distinct bins. For example
- Class 1: [0 - 5] days
- Class 2: [5 - 10] days
- etc...
Of course the more classes you will choose to have then the harder it will be to train the model. That means the resolution of your results is limited by the amount of the data you have available to you.
The other option is to use a regression algorithm. This will learn the tendency of your curve in higher dimensional space and then estimates where along that line a new example would fall. Think of it in 1-dimension. I give you a bunch of heights $x$ and running speeds $y$. The model will learn a function $y(x)$. Then if I give you just a height, you will be able to estimate the running speed of the individual. You will be doing the same thing but with more variables.
There are really a ton of methods that can do this. You can look through some literature reviews on the subject to get a hold of all of them. But, I warn you there is A LOT. I usually start with kernel-Support Vector Regression (k-SVR).
To test your model. Separate your dataset into three parts (training, validating, testing) if you have sufficient data. Otherwise two parts (training, testing) is also fine. Only, train your model on the training set and then evaluate it using the example it has not seen yet which are reserved in the testing set.
## Post-processing
This is the step where you can further model your output $y(x)$. In your case that might not be needed.
|
1028 | 1 | 2315 | null | 37 | 55112 | I have been reading around about Random Forests but I cannot really find a definitive answer about the problem of overfitting. According to Breiman's original paper, they should not overfit when increasing the number of trees in the forest, but there seems to be no consensus about this. This is causing me quite some confusion about the issue.
Maybe someone more expert than me can give me a more concrete answer or point me in the right direction to better understand the problem.
| Do Random Forest overfit? | CC BY-SA 3.0 | null | 2014-08-23T16:54:06.380 | 2021-02-08T02:11:50.497 | null | null | 3054 | [
"machine-learning",
"random-forest"
] | Every ML algorithm with high complexity can overfit. However, the OP is asking whether an RF will not overfit when increasing the number of trees in the forest.
In general, ensemble methods reduce the prediction variance to almost nothing, improving the accuracy of the ensemble. If we define the variance of the expected generalization error of an individual randomized model as:
![](https://i.stack.imgur.com/ZUORL.gif)
From [here](http://arxiv.org/abs/1407.7502), the variance of the expected generalization error of an ensemble corresponds to:
![](https://i.stack.imgur.com/5Zf9e.gif)
where `ρ(x)` is the Pearson correlation coefficient between the predictions of two randomized models trained on the same data from two independent seeds. If we increase the number of DTs in the RF (larger `M`), the variance of the ensemble decreases as long as `ρ(x) < 1`. Therefore, the variance of an ensemble is strictly smaller than the variance of an individual model.
In a nutshell, increasing the number of individual randomized models in an ensemble will never increase the generalization error.
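A quick empirical sketch of this point (synthetic data; the exact numbers will vary):

```
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Held-out accuracy typically plateaus rather than degrading as trees are added
for n in (5, 50, 500):
    rf = RandomForestClassifier(n_estimators=n, random_state=0).fit(X_tr, y_tr)
    print(n, "trees -> test accuracy:", round(rf.score(X_te, y_te), 3))
```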
| How to avoid overfitting in random forest? | Relative to other models, Random Forests are less likely to overfit but it is still something that you want to make an explicit effort to avoid. Tuning model parameters is definitely one element of avoiding overfitting but it isn't the only one. In fact I would say that your training features are more likely to lead to overfitting than model parameters, especially with a Random Forests. So I think the key is really having a reliable method to evaluate your model to check for overfitting more than anything else, which brings us to your second question.
As alluded to above, running cross validation will allow you to avoid overfitting. Choosing your best model based on CV results will lead to a model that hasn't overfit, which isn't necessarily the case for something like out-of-bag error. The easiest way to run CV in R is with the `caret` package. A simple example is below:
```
> library(caret)
>
> data(iris)
>
> tr <- trainControl(method = "cv", number = 5)
>
> train(Species ~ .,data=iris,method="rf",trControl= tr)
Random Forest
150 samples
4 predictor
3 classes: 'setosa', 'versicolor', 'virginica'
No pre-processing
Resampling: Cross-Validated (5 fold)
Summary of sample sizes: 120, 120, 120, 120, 120
Resampling results across tuning parameters:
mtry Accuracy Kappa Accuracy SD Kappa SD
2 0.96 0.94 0.04346135 0.06519202
3 0.96 0.94 0.04346135 0.06519202
4 0.96 0.94 0.04346135 0.06519202
Accuracy was used to select the optimal model using the largest value.
The final value used for the model was mtry = 2.
```
|
1050 | 1 | 11966 | null | 10 | 894 | General description of the problem
I have a graph where some vertices are labeled with a type with 3 or 4 possible values. For the other vertices, the type is unknown.
My goal is to use the graph to predict the type for vertices that are unlabeled.
Possible framework
I suspect this fits into the general framework of label propagation problems, based on my reading of the literature (e.g., see [this paper](http://lvk.cs.msu.su/~bruzz/articles/classification/zhu02learning.pdf) and [this paper](http://www.csc.ncsu.edu/faculty/samatova/practical-graph-mining-with-R/slides/pdf/Frequent_Subgraph_Mining.pdf))
Another method that is mentioned often is `Frequent Subgraph Mining`, which includes algorithms like `SUBDUE`,`SLEUTH`, and `gSpan`.
Found in R
The only label propagation implementation I managed to find in `R` is `label.propagation.community()` from the `igraph` library.
However, as the name suggests, it is mostly used to find communities, not for classifying unlabeled vertices.
There also seems to be several references to a `subgraphMining` library (here for example), but it looks like it is missing from CRAN.
Question
Do you know of a library or framework for the task described?
| Libraries for (label propagation algorithms/frequent subgraph mining) for graphs in R | CC BY-SA 3.0 | null | 2014-08-27T13:01:14.643 | 2016-05-27T18:36:35.830 | 2015-10-25T05:44:40.023 | 609 | 3108 | [
"classification",
"r",
"graphs"
] | This is an old post, but there is a subgraph package and accompanying book/documentation for doing this in R:
[https://www.csc.ncsu.edu/faculty/samatova/practical-graph-mining-with-R/PracticalGraphMiningWithR.html](https://www.csc.ncsu.edu/faculty/samatova/practical-graph-mining-with-R/PracticalGraphMiningWithR.html)
Although I personally don't get the connection between subgraph mining and label propagation in this case. SVD++ might be closer to what you're looking for (supported by GraphX of Spark, which I think also supports label propagation).
[http://spark.apache.org/graphx/](http://spark.apache.org/graphx/)
| Graph & Network Mining: clustering/community detection/ classification | Well ... Some points. Networked data is modeled with graphs. When you have different attributes you have [Property Graph](https://markorodriguez.com/2011/02/08/property-graph-algorithms/).
For clustering, you can extract the topology of the subgraph based on desired attributes and then use any [Modularity](https://en.wikipedia.org/wiki/Modularity_(networks))-based algorithm (most recommended is [Blondel algorithm](https://perso.uclouvain.be/vincent.blondel/research/louvain.html)). In Blondel algorithm you don't need to know the number of communities in advance.
Have a look at [Network Science book by Barabasi](http://networksciencebook.com/) to get more insight to networks.
For classification you may extract features from graphs and use common classification algorithms or use graph kernels and feed it to kernel methods such as SVM. Follow [this](https://datascience.stackexchange.com/a/27074/8878).
Page rank is one of the methods for ranking but you have simpler choices according to your problem. See [Centrality](https://en.wikipedia.org/wiki/Centrality) measures from the book above. There you can also see details of different ranking algorithms.
If you need more info you may drop a comment here.
Hope it helped. Good Luck!
|
1092 | 1 | 1097 | null | 14 | 2054 | Are there any machine learning libraries for Ruby that are relatively complete (including a wide variety of algorithms for supervised and unsupervised learning), robustly tested, and well-documented? I love Python's [scikit-learn](http://scikit-learn.org/) for its incredible documentation, but a client would prefer to write the code in Ruby since that's what they're familiar with.
Ideally I am looking for a library or set of libraries which, like `scikit` and `numpy`, can implement a wide variety of data structures like sparse matrices, as well as learners.
Some examples of things we'll need to do are binary classification using SVMs, and implementing bag of words models which we hope to concatenate with arbitrary numeric data, as described in [this StackOverflow post](https://stackoverflow.com/q/20106940/1435804).
| Machine learning libraries for Ruby | CC BY-SA 4.0 | null | 2014-09-08T21:25:26.183 | 2018-12-29T02:25:42.790 | 2018-12-29T02:25:42.790 | 134 | 2487 | [
"machine-learning"
] | I'll go ahead and post an answer for now; if someone has something better I'll accept theirs.
At this point the most powerful option appears to be accessing WEKA using jRuby. We spent yesterday scouring the 'net, and this combination was even used by a [talk at RailsConf 2012](http://www.confreaks.com/videos/867-railsconf2012-practical-machine-learning-and-rails), so I would guess if there were a comparable pure ruby package, they would have used it.
Note that if you know exactly what you need, there are plenty of individual libraries that either [wrap standalone packages like libsvm](https://github.com/febeling/rb-libsvm) or [re-implement some individual algorithms like Naive Bayes in pure Ruby](https://github.com/alexandru/stuff-classifier) and will spare you from using jRuby.
But for a general-purpose library, WEKA and jRuby seem to be the best bet at this time.
| Scalable open source machine learning library written in python | Is there a specific reason beside the fact that you would like to contribute? I am asking because there is always [pyspark](https://github.com/apache/spark/tree/master/python/pyspark/mllib) that you can use, the Spark python API that exposes the Spark programming model to Python.
For deep learning specifically, there are a lot of frameworks built on top of [Theano](https://github.com/Theano/Theano) -which is a python library for mathematical expressions involving multi-dimensional arrays-, like Lasagne, so they are able to use GPU for intense training. Getting an EC2 instance with GPU on AWS is always an option.
|
1094 | 1 | 4897 | null | 2 | 249 | Problem
For my machine learning task, I create a set of predictors.
Predictors come in "bundles" - multi-dimensional measurements (3 or 4 - dimensional in my case).
The hole "bundle" makes sense only if it has been measured, and taken all together.
The problem is, different 'bundles' of predictors can be measured only for small part of the sample, and those parts don't necessary intersect for different 'bundles'.
As parts are small, imputing leads to considerable decrease in accuracy(catastrophical to be more accurate)
Possible solutions
I could create dummy variables that would mark whether the measurement has taken place for each variable. The problem is, when random forests draws random variables, it does so individually.
So there are two basic ways to solve this problem:
1) Combine each "bundle" into one predictor. That is possible, but it seems information will be lost.
2) Make random forest draw variables not individually, but by obligatory "bundles".
Problem for random forest
As random forest draws variables randomly, it takes features that are useless (or much less useful) without other from their "bundle". I have a feeling that leads to a loss of accuracy.
Example
For example I have variables `a`,`a_measure`, `b`,`b_measure`.
The problem is, variables `a_measure` make sense only if variable `a` is present, same for `b`. So I either have to combine `a`and `a_measure` into one variable, or make random forest draw both, in case at least one of them is drawn.
Question
What are the best practice solutions for problems when different sets of predictors are measured for small parts of overall population, and these sets of predictors come in obligatory "bundles"?
Thank you!
| Creating obligatory combinations of variables for drawing by random forest | CC BY-SA 3.0 | null | 2014-09-09T06:33:00.730 | 2015-04-17T06:30:43.227 | 2014-09-09T06:45:25.510 | 3108 | 3108 | [
"machine-learning",
"r",
"random-forest"
] | You may want to consider gradient boosted trees rather than random forests. They're also an ensemble tree-based method, but since this method doesn't sample dimensions, it won't run into the problem of not having a useful predictor available to split on at any particular time.
Different implementations of GBDT have different ways of handling missing values, which will make a big difference in your case; I believe the R implementation does ternary splits, which is likely to work fine.
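As one concrete illustration (not necessarily the implementation the answer has in mind), scikit-learn's histogram-based gradient boosting handles missing values natively, so "not measured" can simply be encoded as NaN:

```
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier

rng = np.random.default_rng(0)
a = rng.normal(size=200)
# a_measure is only observed for ~30% of rows; the rest are NaN
a_measure = np.where(rng.random(200) < 0.3, rng.normal(size=200), np.nan)
X = np.column_stack([a, a_measure])
y = (a > 0).astype(int)  # toy target, purely for illustration

# Samples with missing values are routed down a learned branch at each split
clf = HistGradientBoostingClassifier(random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```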
| Features selection/combination for random forest | The [Random Forest](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html) model in sklearn has a `feature_importances_` attribute to tell you which features are most important. Here is a [helpful example](http://blog.datadive.net/selecting-good-features-part-iii-random-forests/).
There are a few other algorithms for selecting the best features that generalize to other models such as sequential backward selection and sequential forward selection. In the case of sequential forward selection, you begin by finding the single feature that provides you with the best accuracy. Then, you find the next feature in combination with the first that gives you the best accuracy. This pattern continues until you find $k$ features, where $k$ is the number of features you want to use. Sequential backward selection is just the opposite, where you start with all of the features and remove those which inhibit your accuracy the most. You can find more information on these algorithms [here](https://rasbt.github.io/mlxtend/user_guide/feature_selection/SequentialFeatureSelector/#example-2-toggling-between-sfs-sbs-sffs-and-sbfs).
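A minimal sketch of reading those importances (toy dataset, not the asker's features):

```
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
rf = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# Higher values mean the feature contributed more to the forest's splits
for name, score in zip(data.feature_names, rf.feature_importances_):
    print(f"{name}: {score:.3f}")
```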
|