H: Analyzing A/B test results which are not normally distributed, using independent t-test
I have a set of results from an A/B test (one control group, one feature group) which do not fit a Normal Distribution.
In fact the distribution resembles more closely the Landau Distribution.
I believe the independent t-test requires that the samples be at least approximately normally distributed, which discourages me from using the t-test as a valid method of significance testing.
But my question is:
At what point can one say that the t-test is not a good method of significance testing?
Or put another way, how can one qualify how reliable the p-values of a t-test are, given only the data set?
AI: The distribution of your data doesn't need to be normal; it's the sampling distribution of the mean that has to be nearly normal. If your sample size is big enough, then the sampling distribution of means from the Landau distribution should be nearly normal, due to the Central Limit Theorem.
This means you should be able to safely use the t-test with your data.
Example
Let's consider this example: suppose we have a population with a lognormal distribution with mu=0 and sd=0.5 (it looks a bit similar to the Landau distribution).
We sample 30 observations from this distribution 1000 times, each time calculating the mean of the sample.
The histogram of those sample means looks quite normal, doesn't it? If we increase the sample size, it's even more apparent.
R code
# Population density: lognormal with meanlog = 0, sdlog = 0.5
x = seq(0, 4, 0.05)
y = dlnorm(x, meanlog = 0, sdlog = 0.5)
plot(x, y, type = 'l', bty = 'n')
# Sampling distribution of the mean for samples of size n = 30
n = 30
m = 1000
set.seed(0)
samp = rep(NA, m)
for (i in 1:m) {
  samp[i] = mean(rlnorm(n, meanlog = 0, sdlog = 0.5))
}
hist(samp, col = 'orange', probability = TRUE, breaks = 25, main = 'sample size = 30')
x = seq(0.5, 1.5, 0.01)
lines(x, dnorm(x, mean = mean(samp), sd = sd(samp)))
# Same thing with sample size n = 300
n = 300
samp = rep(NA, m)
for (i in 1:m) {
  samp[i] = mean(rlnorm(n, meanlog = 0, sdlog = 0.5))
}
hist(samp, col = 'orange', probability = TRUE, breaks = 25, main = 'sample size = 300')
x = seq(1, 1.25, 0.005)
lines(x, dnorm(x, mean = mean(samp), sd = sd(samp))) |
H: Solving a system of equations with sparse data
I am attempting to solve a set of equations which has 40 independent variables (x1, ..., x40) and one dependent variable (y). The total number of equations (number of rows) is ~300, and I want to solve for the set of 40 coefficients that minimizes the total sum-of-square error between y and the predicted value.
My problem is that the matrix is very sparse and I do not know the best way to solve the system of equations with sparse data. An example of the dataset is shown below:
y x1 x2 x3 x4 x5 x6 ... x40
87169 14 0 1 0 0 2 ... 0
46449 0 0 4 0 1 4 ... 12
846449 0 0 0 0 0 3 ... 0
....
I am currently using a Genetic Algorithm to solve this, and the results are coming out with roughly a factor of two difference between observed and expected.
Can anyone suggest different methods or techniques capable of solving a set of equations with sparse data?
AI: If I understand you correctly, this is the case of multiple linear regression with sparse data (sparse regression). Assuming that, I hope you will find the following resources useful.
1) NCSU lecture slides on sparse regression with overview of algorithms, notes, formulas, graphics and references to literature: http://www.stat.ncsu.edu/people/zhou/courses/st810/notes/lect23sparse.pdf
2) R ecosystem offers many packages, useful for sparse regression analysis, including:
Matrix (http://cran.r-project.org/web/packages/Matrix)
SparseM (http://cran.r-project.org/web/packages/SparseM)
MatrixModels (http://cran.r-project.org/web/packages/MatrixModels)
glmnet (http://cran.r-project.org/web/packages/glmnet)
flare (http://cran.r-project.org/web/packages/flare)
3) A blog post with an example of sparse regression solution, based on SparseM: http://aleph-nought.blogspot.com/2012/03/multiple-linear-regression-with-sparse.html
4) A blog post on using sparse matrices in R, which includes a primer on using glmnet: http://www.johnmyleswhite.com/notebook/2011/10/31/using-sparse-matrices-in-r
5) More examples and some discussion on the topic can be found on StackOverflow: https://stackoverflow.com/questions/3169371/large-scale-regression-in-r-with-a-sparse-feature-matrix
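If you prefer to work outside the R ecosystem listed above, the same sparse least-squares problem can also be solved directly with SciPy; here is a minimal sketch, with randomly generated data standing in for your ~300 x 40 sparse matrix:
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import lsqr
# Random stand-in for the ~300 x 40 sparse design matrix X and observations y
rng = np.random.default_rng(0)
X = sparse.random(300, 40, density=0.05, format="csr", random_state=0)
true_coef = rng.normal(size=40)
y = X @ true_coef
# lsqr minimizes ||X b - y||^2 without ever densifying X
coef = lsqr(X, y)[0]
print(np.linalg.norm(X @ coef - y))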
UPDATE (based on your comment):
If you're trying to solve an LP problem with constraints, you may find this theoretical paper useful: http://web.stanford.edu/group/SOL/papers/gmsw84.pdf.
Also, check R package limSolve: http://cran.r-project.org/web/packages/limSolve. And, in general, check packages in CRAN Task View "Optimization and Mathematical Programming": http://cran.r-project.org/web/views/Optimization.html.
Finally, check the book "Using R for Numerical Analysis in Science and Engineering" (by Victor A. Bloomfield). It has a section on solving systems of equations, represented by sparse matrices (section 5.7, pages 99-104), which includes examples, based on some of the above-mentioned packages: http://books.google.com/books?id=9ph_AwAAQBAJ&pg=PA99&lpg=PA99&dq=r+limsolve+sparse+matrix&source=bl&ots=PHDE8nXljQ&sig=sPi4n5Wk0M02ywkubq7R7KD_b04&hl=en&sa=X&ei=FZjiU-ioIcjmsATGkYDAAg&ved=0CDUQ6AEwAw#v=onepage&q=r%20limsolve%20sparse%20matrix&f=false. |
H: Avoid iterations while calculating average model accuracy
I am fitting a model in R with the following procedure:
use the createFolds method to create k folds from the data set
loop through the folds, repeating the following on each iteration:
train the model on k-1 folds
predict the outcomes for the i-th fold
calculate prediction accuracy
average the accuracy
Does R have a function that makes folds itself, repeats model tuning/predictions and gives the average accuracy back?
AI: Yes, you can do all this using the Caret (http://caret.r-forge.r-project.org/training.html) package in R. For example,
library(caret)
## 'training' is assumed to be a data frame with a factor outcome column 'Class'
fitControl <- trainControl(## 10-fold CV
                           method = "repeatedcv",
                           number = 10,
                           ## repeated ten times
                           repeats = 10)
gbmFit1 <- train(Class ~ ., data = training,
                 method = "gbm",
                 trControl = fitControl,
                 ## This last option is actually one
                 ## for gbm() that passes through
                 verbose = FALSE)
gbmFit1
which will give the output
Stochastic Gradient Boosting
157 samples
60 predictors
2 classes: 'M', 'R'
No pre-processing
Resampling: Cross-Validated (10 fold, repeated 10 times)
Summary of sample sizes: 142, 142, 140, 142, 142, 141, ...
Resampling results across tuning parameters:
interaction.depth n.trees Accuracy Kappa Accuracy SD Kappa SD
1 50 0.8 0.5 0.1 0.2
1 100 0.8 0.6 0.1 0.2
1 200 0.8 0.6 0.09 0.2
2 50 0.8 0.6 0.1 0.2
2 100 0.8 0.6 0.09 0.2
2 200 0.8 0.6 0.1 0.2
3 50 0.8 0.6 0.09 0.2
3 100 0.8 0.6 0.09 0.2
3 200 0.8 0.6 0.08 0.2
Tuning parameter 'shrinkage' was held constant at a value of 0.1
Accuracy was used to select the optimal model using the largest value.
The final values used for the model were n.trees = 150, interaction.depth = 3
and shrinkage = 0.1.
caret offers many other options as well, so it should be able to suit your needs. |
H: Can I classify a set of documents using a classification method with a limited number of concepts?
I have a set of documents and I want to classify them as true or false.
My question is: do I have to take all the words in the documents and classify based on the similarity of words across these documents, or can I take only some words that I am interested in and compare those with the documents? Which one is more efficient for classifying documents and can work with SVM?
AI: Both methods work. However, if you retain all words in documents you would essentially be working with high dimensional vectors (each term representing one dimension). Consequently, a classifier, e.g. SVM, would take more time to converge.
It is thus a standard practice to reduce the term-space dimensionality by pre-processing steps such as stop-word removal, stemming, Principal Component Analysis (PCA) etc.
One approach could be to analyze the document corpora by a topic modelling technique such as LDA and then retaining only those words which are representative of the topics, i.e. those which have high membership values in a single topic class.
Another approach (inspired by information retrieval) could be to retain the top K tf-idf terms from each document.
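A minimal scikit-learn sketch of that second idea (the two example documents are made up):
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
docs = ["the printer timed out again and again",
        "the invoice was processed without any errors"]
vec = TfidfVectorizer(stop_words="english")
tfidf = vec.fit_transform(docs)              # documents x terms, sparse
terms = np.array(vec.get_feature_names_out())
K = 3
for row in tfidf.toarray():
    print(terms[np.argsort(row)[::-1][:K]])  # K highest-weighted terms per document |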
H: how to impute missing values on numpy array created by train_test_split from pandas.DataFrame?
I'm working on a dataset with lots of NA values, using sklearn and pandas.DataFrame. I implemented different imputation strategies for different columns of the DataFrame based on the column names. For example, NAs in predictor 'var1' I impute with 0's and in 'var2' with the mean.
When I try to cross validate my model using train_test_split it returns me a nparray which does not have column names. How can I impute missing values in this nparray?
P.S. I do not impute missing values in the original data set before splitting on purpose so I keep test and validation sets separately.
AI: Can you just cast your np.array from train_test_split back into a pandas.DataFrame, so you can carry out your same strategy? This is very similar to what I do when dealing with pandas and scikit. For example,
X_train, X_test, y_train, y_test = train_test_split(X, y)
X_train = pd.DataFrame(X_train, columns=X.columns)  # column names restored from the original DataFrame X
X_test = pd.DataFrame(X_test, columns=X.columns) |
H: How to implement Brown Clustering Algorithm in O(|V|k^2)
I am trying to implement the Brown Clustering Algorithm.
Paper details: "Class-Based n-gram Models of Natural Language" by Brown et al
The algorithm is supposed to run in O(|V|k^2), where |V| is the size of the vocabulary and k is the number of clusters. I am unable to implement it this efficiently. In fact, the best I can manage is O(|V|k^3), which is too slow. My current implementation for the main part of the algorithm is as follows:
for w = number of clusters + 1 to |V|
{
word = next most frequent word in the corpus
assign word to a new cluster
initialize MaxQuality to 0
initialize ArgMax vector to (0,0)
for i = 0 to number of clusters - 1
{
for j = i to number of clusters
{
Quality = Mutual Information if we merge cluster i and cluster j
if Quality > MaxQuality
{
MaxQuality = Quality
ArgMax = (i,j)
}
}
}
}
I compute quality as follows:
1. Before entering the second loop compute the pre-merge quality i.e. quality before doing any merges.
2. Every time a cluster-pair merge step is considered:
i. assign quality := pre-merge quality
ii. quality = quality - any terms in the mutual information equation that contain cluster i or cluster j (pre-merge)
iii. quality = quality + any terms in the mutual information equation that contain (cluster i U cluster j) (post-merge)
In my implementation, the first loop has approx |V| iterations, and the second and third loops approx k iterations each. Computing the quality at each step requires approx a further k iterations. In total it runs in O(|V|k^3) time.
How do you get it to run in O(|V|k^2)?
AI: I have managed to resolve this. There is an excellent and thorough explanation of the optimization steps in the following thesis: Semi-Supervised Learning for Natural Language by Percy Liang.
My mistake was trying to update the quality for all potential cluster pairs. Instead, you should initialize a table with the quality changes of doing each merge. Use this table to find the best merge, and then update the relevant terms that make up the table entries. |
H: SUMMARIST: Automated Text Summarization
There is a text summarization project called SUMMARIST. Apparently it is able to perform abstractive text summarization. I want to give it a try but unfortunately the demo links on the website do not work. Does anybody have any information regarding this? How can I test this tool?
http://www.isi.edu/natural-language/projects/SUMMARIST.html
Regards,
PasMod
AI: It dates back to 1998, so it has most likely been abandoned, or "acquired" by Microsoft, as the creator currently works there and has done so since publishing that research.
See https://www.microsoft.com/en-us/research/wp-content/uploads/2016/07/ists97.pdf
and http://research.microsoft.com/en-us/people/cyl for the author. Maybe you could try to contact him. |
H: Which cross-validation type best suits to binary classification problem
Data set looks like:
25000 observations
up to 15 predictors of different types: numeric, multi-class categorical, binary
target variable is binary
Which cross validation method is typical for this type of problem?
By default I'm using K-Fold. How many folds is enough in this case? (One of the models I use is random forest, which is time consuming...)
AI: You will have the best results if you take care to build the folds so that each variable (and most importantly the target variable) is approximately identically distributed in each fold. This is called, when applied to the target variable, stratified k-fold. One approach is to cluster the inputs and make sure each fold contains a number of instances from each cluster proportional to the cluster's size.
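For illustration, here is what stratified k-fold looks like in scikit-learn (synthetic data stands in for your 25000-row set, and 5 folds keep the random forest runtime manageable):
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
# Synthetic stand-in for the 25000 x 15 data set with a binary target
X, y = make_classification(n_samples=25000, n_features=15, weights=[0.7, 0.3], random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)   # one accuracy score per fold
print(scores.mean(), scores.std()) |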
H: Classifying Java exceptions
We have a classification algorithm to categorize Java exceptions in Production.
This algorithm is based on hierarchical, human-defined rules, so when a bunch of text forming an exception comes up, it determines what kind of exception it is (development, availability, configuration, etc.) and the responsible component (the innermost component responsible for the exception). In Java an exception can have several causing exceptions, and the whole chain must be analyzed.
For example, given the following example exception:
com.myapp.CustomException: Error printing ...
... (stack)
Caused by: com.foo.webservice.RemoteException: Unable to communicate ...
... (stack)
Caused by: com.acme.PrintException: PrintServer002: Timeout ....
... (stack)
First of all, our algorithm splits the whole stack into three isolated exceptions. Afterwards it starts analyzing these exceptions, starting from the innermost one. In this case, it determines that this exception (the second "Caused by") is of type Availability and that the responsible component is a "print server". This is because there is a rule matching text that contains the word Timeout, associated with the Availability type. There is also a rule that matches com.acme.PrintException and determines that the responsible component is a print server. As all the information needed is determined using only the innermost exception, the upper exceptions are ignored, but this is not always the case.
As you can see, this kind of approach is very complex (and chaotic), as a human has to create new rules as new exceptions appear. Besides, the new rules have to be compatible with the current ones, because a new rule for classifying a new exception must not change the classification of any of the already classified exceptions.
We are thinking about using Machine Learning to automate this process. Obviously, I am not asking for a solution here as I know the complexity but I'd really appreciate some advice to achieve our goal.
AI: First of all, some basics of classification (and in general any supervised ML tasks), just to make sure we have same set of concepts in mind.
Any supervised ML algorithm consists of at least 2 components:
Dataset to train and test on.
Algorithm(s) to handle these data.
The training dataset consists of a set of pairs (x, y), where x is a vector of features and y is the predicted variable. The predicted variable is just what you want to know, i.e. in your case it is the exception type. Features are more tricky. You cannot just throw raw text into an algorithm; you need to extract meaningful parts of it and organize them as feature vectors first. You've already mentioned a couple of useful features - the exception class name (e.g. com.acme.PrintException) and contained words ("Timeout"). All you need is to translate your raw exceptions (and human-categorized exception types) into a suitable dataset, e.g.:
ex_class contains_timeout ... | ex_type
-----------------------------------------------------------
[com.acme.PrintException, 1 , ...] | Availability
[java.lang.Exception , 0 , ...] | Network
...
This representation is already much better for ML algorithms. But which one to take?
Taking into account nature of the task and your current approach natural choice is to use decision trees. This class of algorithms will compute optimal decision criteria for all your exception types and print out resulting tree. This is especially useful, because you will have possibility to manually inspect how decision is made and see how much it corresponds to your manually-crafted rules.
There's, however, a possibility that some exceptions with exactly the same features will belong to different exception types. In this case a probabilistic approach may work well. Despite its name, the Naive Bayes classifier works pretty well in most cases. There's one issue with NB and our dataset representation, though: the dataset contains categorical variables, and Naive Bayes can work with numerical attributes only*. The standard way to overcome this problem is to use dummy variables. In short, dummy variables are binary variables that simply indicate whether a specific category is present or not. For example, the single variable ex_class with values {com.acme.PrintException, java.lang.Exception, ...} may be split into several variables ex_class_printexception, ex_class_exception, etc. with values {0, 1}:
ex_class_printexception ex_class_exception contains_timeout | ex_type
---------------------------------------------------------------------
[1, 0, 1] | Availability
[0, 1, 0] | Network
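To make that concrete, here is a minimal scikit-learn sketch (the exception features and labels are made up) that builds dummy variables and fits a decision tree on them:
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
# Made-up, human-labelled training exceptions
df = pd.DataFrame({
    "ex_class": ["com.acme.PrintException", "java.lang.Exception",
                 "com.foo.webservice.RemoteException"],
    "contains_timeout": [1, 0, 0],
    "ex_type": ["Availability", "Development", "Network"],
})
X = pd.get_dummies(df[["ex_class", "contains_timeout"]])   # dummy variables for ex_class
y = df["ex_type"]
tree = DecisionTreeClassifier().fit(X, y)
print(tree.predict(X))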
One last algorithm to try is Support Vector Machines (SVM). It neither provides helpful visualisation, nor is probabilistic, but often gives superior results.
* - in fact, neither Bayes theorem, nor Naive Bayes itself state anything about variable type, but most software packages that come to mind rely on numerical features. |
H: What is difference between text classification and topic models?
I know the difference between clustering and classification in machine learning, but I don't understand the difference between text classification and topic modeling for documents. Can I use topic modeling over documents to identify a topic? Can I use classification methods to classify the text inside these documents?
AI: Text Classification
I give you a bunch of documents, each of which has a label attached. I ask you to learn why you think the contents of the documents have been given these labels based on their words. Then I give you new documents and ask what you think the label for each one should be. The labels have meaning to me, not to you necessarily.
Topic Modeling
I give you a bunch of documents, without labels. I ask you to explain why the documents have the words they do by identifying some topics that each is "about". You tell me the topics, by telling me how much of each is in each document, and I decide what the topics "mean" if anything.
You'd have to clarify what you mean by "identify one topic" or "classify the text". |
H: Algorithms for text clustering
I have a problem of clustering huge amount of sentences into groups by their meanings. This is similar to a problem when you have lots of sentences and want to group them by their meanings.
What algorithms are suggested to do this? I don't know the number of clusters in advance (and as more data comes in, the clusters can change as well). What features are normally used to represent each sentence?
I'm now trying the simplest features, with just a list of words and the distance between sentences defined as:
$|(A \cup B) \setminus (A \cap B)| \, / \, |A \cup B|$
(A and B are the corresponding sets of words in sentence A and sentence B)
Does it make sense at all?
I'm trying to apply Mean-Shift algorithm from scikit library to this distance, as it does not require number of clusters in advance.
If anyone will advise better methods/approaches for the problem - it will be very much appreciated as I'm still new to the topic.
AI: Check the Stanford NLP Group's open source software, in particular, Stanford Classifier. The software is written in Java, which will likely delight you, but also has bindings for some other languages. Note, the licensing - if you plan to use their code in commercial products, you have to acquire commercial license.
Another interesting set of open source libraries, IMHO suitable for this task and much more, is parallel framework for machine learning GraphLab, which includes clustering library, implementing various clustering algorithms. It is especially suitable for very large volume of data (like you have), as it implements MapReduce model and, thus, supports multicore and multiprocessor parallel processing.
You most likely are aware of the following, but I will mention it just in case. Natural Language Toolkit (NLTK) for Python contains modules for clustering/classifying/categorizing text. Check the relevant chapter in the NLTK Book.
UPDATE:
Speaking of algorithms, it seems that you've tried most of the ones from scikit-learn, such as illustrated in this topic extraction example. However, you may find other libraries useful which implement a wide variety of clustering algorithms, including Non-Negative Matrix Factorization (NMF). One such library is Python Matrix Factorization (PyMF) (source code). Another, even more interesting, library, also Python-based, is NIMFA, which implements various NMF algorithms. Here's a research paper describing NIMFA. Here's an example from its documentation, which presents the solution for a very similar text processing problem of topic clustering.
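As a rough illustration of the NMF route (not taken from those libraries' examples), scikit-learn alone is enough for a small sketch; the sentences below are made up:
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer
sentences = ["cats purr and sleep all day",
             "my cat sleeps on the sofa",
             "stock prices fell sharply today",
             "the market dropped after the report"]
tfidf = TfidfVectorizer(stop_words="english").fit_transform(sentences)
nmf = NMF(n_components=2, random_state=0, max_iter=500)
weights = nmf.fit_transform(tfidf)     # sentence-by-topic weight matrix
print(weights.argmax(axis=1))          # hard cluster label per sentence |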
H: Coreference Resolution for German Texts
Does anyone know a library for performing coreference resolution on German texts?
As far as I know, OpenNLP and Stanford NLP are not able to perform coreference resolution for German texts.
The only tool that I know of is CorZu, which is a Python library.
AI: Here are a couple of tools that may be worth a look:
Bart, an open source tool that has been used for several languages, including German. It is available from the website.
Sucre is a tool developed at the University of Stuttgart. I don't know if it's easily available. You can see this paper about it. |
H: Does scikit-learn have a forward selection/stepwise regression algorithm?
I am working on a problem with too many features and training my models takes way too long. I implemented a forward selection algorithm to choose features.
However, I was wondering does scikit-learn have a forward selection/stepwise regression algorithm?
AI: No, scikit-learn does not seem to have a forward selection algorithm. However, it does provide recursive feature elimination, which is a greedy feature elimination algorithm similar to sequential backward selection. See the documentation here.
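For example, a minimal RFE sketch on synthetic data:
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
X, y = make_classification(n_samples=500, n_features=40, n_informative=8, random_state=0)
# Recursively drop the weakest features until only 10 remain
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=10)
selector.fit(X, y)
print(selector.support_)    # boolean mask of the retained features |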
H: Why might several types of models give almost identical results?
I've been analyzing a data set of ~400k records and 9 variables. The dependent variable is binary. I've fitted a logistic regression, a regression tree, a random forest, and a gradient boosted tree. All of them give virtually identical goodness-of-fit numbers when I validate them on another data set.
Why is this so? I'm guessing that it's because my observations to variable ratio is so high. If this is correct, at what observation to variable ratio will different models start to give different results?
AI: This results means that whatever method you use, you are able to get reasonably close to the optimal decision rule (aka Bayes rule). The underlying reasons have been explained in Hastie, Tibshirani and Friedman's "Elements of Statistical Learning". They demonstrated how the different methods perform by comparing Figs. 2.1, 2.2, 2.3, 5.11 (in my first edition -- in section on multidimensional splines), 12.2, 12.3 (support vector machines), and probably some others. If you have not read that book, you need to drop everything RIGHT NOW and read it up. (I mean, it isn't worth losing your job, but it is worth missing a homework or two if you are a student.)
I don't think that observations to variable ratio is the explanation. In light of my rationale offered above, it is the relatively simple form of the boundary separating your classes in the multidimensional space that all of the methods you tried have been able to identify. |
H: What initial steps should I use to make sense of large data sets, and what tools should I use?
Caveat: I am a complete beginner when it comes to machine learning, but eager to learn.
I have a large dataset and I'm trying to find patterns in it. There may or may not be correlation across the data, either with known variables, or with variables that are contained in the data but which I haven't yet realised are actually variables / relevant.
I'm guessing this would be a familiar problem in the world of data analysis, so I have a few questions:
The 'silver bullet' would be to throw all this data into a stats / data analysis program and have it crunch the data, looking for known / unknown patterns and trying to find relations. Is SPSS suitable, or are there other applications which may be better suited?
Should I learn a language like R, and figure out how to manually process the data? Wouldn't this compromise finding relations, as I would have to manually specify what and how to analyse the data?
How would a professional data miner approach this problem and what steps would s/he take?
AI: I will try to answer your questions, but before that I'd like to note that using the term "large dataset" is misleading, as "large" is a relative concept. You have to provide more details. If you're dealing with big data, then this fact will most likely affect the selection of preferred tools, approaches and algorithms for your data analysis. I hope that the following thoughts of mine on data analysis address your sub-questions. Please note that the numbering of my points does not match the numbering of your sub-questions. However, I believe that it better reflects the general data analysis workflow, at least as I understand it.
Firstly, I think that you need to have at least some kind of conceptual model in mind (or, better, on paper). This model should guide you in your exploratory data analysis (EDA). A presence of a dependent variable (DV) in the model means that in your machine learning (ML) phase later in the analysis you will deal with so called supervised ML, as opposed to unsupervised ML in the absence of an identified DV.
Secondly, EDA is a crucial part. IMHO, EDA should include multiple iterations of producing descriptive statistics and data visualization, as you refine your understanding of the data. Not only will this phase give you valuable insights about your datasets, but it will feed your next important phase - data cleaning and transformation. Just throwing your raw data into a statistical software package won't give much - for any valid statistical analysis, data should be clean, correct and consistent. This is often the most time- and effort-consuming, but absolutely necessary, part. For more details on this topic, read this nice paper (by Hadley Wickham) and this one (by Edwin de Jonge and Mark van der Loo).
Now, as you're hopefully done with EDA as well as data cleaning and transformation, you're ready to start some more statistically-involved phases. One such phase is exploratory factor analysis (EFA), which will allow you to extract the underlying structure of your data. For datasets with a large number of variables, the positive side effect of EFA is dimensionality reduction. And, while in that sense EFA is similar to principal components analysis (PCA) and other dimensionality reduction approaches, I think that EFA is more important, as it allows you to refine your conceptual model of the phenomena that your data "describe", thus making sense of your datasets. Of course, in addition to EFA, you can/should perform regression analysis as well as apply machine learning techniques, based on your findings in previous phases.
Finally, a note on software tools. In my opinion, the current state of statistical software packages is at such a point that practically all major software packages have comparable offerings feature-wise. If you study or work in an organization that has certain policies and preferences in terms of software tools, then you are constrained by them. However, if that is not the case, I would heartily recommend open source statistical software, based on your comfort with its specific programming language, learning curve and your career perspectives. My current platform of choice is the R Project, which offers mature, powerful, flexible, extensive and open statistical software, along with an amazing ecosystem of packages, experts and enthusiasts. Other nice choices include Python, Julia and specific open source software for processing big data, such as Hadoop, Spark, NoSQL databases, WEKA. For more examples of open source software for data mining, which include general and specific statistical and ML software, see this section of a Wikipedia page.
UPDATE: Forgot to mention Rattle, which is also a very popular open source R-oriented GUI software for data mining. |
H: Looking for Web scraping tool for unstructured data
I want to scrape some data from a website.
I have used import.io but am still not very satisfied with it. Can any of you suggest a better tool for getting unstructured data from the web?
AI: Try BeautifulSoup - http://www.crummy.com/software/BeautifulSoup/
From the website "Beautiful Soup is a Python library designed for quick turnaround projects like screen-scraping."
I have not personally used it, but it often comes up as a nice library for scraping. Here's a blog post on using it to scrape Craigslist: http://www.gregreda.com/2014/07/27/scraping-craigslist-for-tickets/
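A minimal usage sketch (example.com is just a placeholder URL):
import requests
from bs4 import BeautifulSoup
html = requests.get("http://example.com").text
soup = BeautifulSoup(html, "html.parser")
# Pull out every link's target and text as a small structured result
for link in soup.find_all("a"):
    print(link.get("href"), link.get_text(strip=True)) |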
H: Extrapolating GLM coefficients for year a product was sold into future years?
I've fit a GLM (Poisson) to a data set where one of the variables is categorical for the year a customer bought a product from my company, ranging from 1999 to 2012. There's a linear trend of the coefficients for the values of the variable as the year of sale increases.
Is there any problem with trying to improve predictions for 2013 and maybe 2014 by extrapolating to get the coefficients for those years?
AI: I believe that this is a case for applying time series analysis, in particular time series forecasting (http://en.wikipedia.org/wiki/Time_series). Consider the following resources on time series regression:
http://www.wiley.com/WileyCDA/WileyTitle/productCd-0471363553.html
http://www.stats.uwo.ca/faculty/aim/tsar/tsar.pdf (especially section 4.6)
http://arxiv.org/abs/0802.0219 (Bayesian approach) |
H: Using Heuristic Methods for AB Testing
I've just started reading about AB testing, as it pertains to optimizing website design. I find it interesting that most of the methods assume that changes to the layout and appearance are independent of each other. I understand that the most common method of optimization is the 'multi-armed bandit' procedure. While I grasp the concept of it, it seems to ignore the fact that changes (changes to the website in this case) are not independent to each other.
For example, if company is testing the placement and color of the logo on the website, they find the optimal color first then the optimal placement. Not that I'm some expert on human psychology, but shouldn't these be related? Can the multi-armed bandit method be efficiently used in this case or more complicated cases?
My first instinct is to say no. On that note, why haven't people used heuristic algorithms to optimize over complicated AB testing sample spaces? For an example, I thought someone might have used a genetic algorithm to optimize a website layout, but I can find no examples of something like this out there. This leads me to believe that I'm missing something important in my understanding of AB testing as it applies to website optimization.
Why isn't heuristic optimization used on more complicated websites?
AI: If I understand your question correctly, there are two reasons why a genetic algorithm might not be a good idea for optimizing website features:
1) Feedback data comes in too slowly, say once a day, so a genetic algorithm might take a while to converge.
2) In the process of testing, a genetic algorithm will probably come up with combinations that are 'strange', and that might not be a risk the company wants to take. |
H: How much data space is used by all scientific articles?
I was wondering if there is any research or study that has calculated the volume of space used by all scientific articles. They could be in PDF, txt, compressed, or any other format. Is there even a way to measure it?
Can someone point me towards realizing this study?
Regards and thanks.
AI: Perhaps you are looking to quantify the amount of filespace used by a specific subset of data that we will label as "academic publications."
Well, to estimate, you could find stats on how many publications are housed at all the leading libraries (JSTOR, EBSCO, AcademicHost, etc) and then get the mean average size of each. Multiply that by the number of articles and whamo, you've got yourself an estimate.
Here's the problem, though: PDF files store the text from string s differently (in size) than, say, a text document stores that same string. Likewise, a compressed JPEG will store an amount of information i differently than a non-compressed JPEG. So you see we could have two of the same articles containing the same information i but taking up different amounts of memory m.
Are you looking to get a wordcount on the amount of scientific literature?
Are you looking to get an approximation of file system space used to store all academically published content in the world? |
H: SAP HANA vs Exasol
I am interested in knowing the differences in functionality between SAP HANA and Exasol. Since this is a bit of an open ended question let me be clear. I am not interested in people debating which is "better" or faster. I am only interested in what each was designed to do so please keep your opinions out of it. I suspect it is a bit like comparing HANA to Oracle Exalytics where there is some overlap but the functionality goals are different.
AI: There's not an enormous difference between what you can do with the two databases, it's more a question of the focus and the way the functionality is implemented and that's where it becomes difficult to explain without using words like "better" and "faster" (and for sure words like "cheaper")
EXASOL was designed for speed and ease of use with analytical processing and is designed to run on clusters of commodity hardware. SAP HANA is more complex, aims to do more than "just" analytical processing, and runs only on a range of "approved" hardware.
What type of differences did you have in mind? |
H: Machine learning libraries for Ruby
Are there any machine learning libraries for Ruby that are relatively complete (including a wide variety of algorithms for supervised and unsupervised learning), robustly tested, and well-documented? I love Python's scikit-learn for its incredible documentation, but a client would prefer to write the code in Ruby since that's what they're familiar with.
Ideally I am looking for a library or set of libraries which, like scikit and numpy, can implement a wide variety of data structures like sparse matrices, as well as learners.
Some examples of things we'll need to do are binary classification using SVMs, and implementing bag of words models which we hope to concatenate with arbitrary numeric data, as described in this StackOverflow post.
AI: I'll go ahead and post an answer for now; if someone has something better I'll accept theirs.
At this point the most powerful option appears to be accessing WEKA using jRuby. We spent yesterday scouring the 'net, and this combination was even used by a talk at RailsConf 2012, so I would guess if there were a comparable pure ruby package, they would have used it.
Note that if you know exactly what you need, there are plenty of individual libraries that either wrap standalone packages like libsvm or re-implement some individual algorithms like Naive Bayes in pure Ruby and will spare you from using jRuby.
But for a general-purpose library, WEKA and jRuby seem to be the best bet at this time. |
H: Improve CoreNLP POS tagger and NER tagger?
The CoreNLP parts of speech tagger and name entity recognition tagger are pretty good out of the box, but I'd like to improve the accuracy further so that the overall program runs better. To explain more about accuracy -- there are situations in which the POS/NER is wrongly tagged. For instance:
"Oversaw car manufacturing" gets tagged as NNP-NN-NN
Rather than VB* or something similar, since it's a verb-like phrase (I'm not a linguist, so take this with a grain of salt).
So what's the best way to accomplish accuracy improvement?
Are there better models out there for POS/NER that can be incorporated into CoreNLP?
Should I switch to other NLP tools?
Or create training models with exception rules?
AI: Your best bet is to train your own models on the kind of data you're going to be working with. |
H: Quick guide into training highly imbalanced data sets
I have a classification problem with approximately 1000 positive and 10000 negative samples in training set. So this data set is quite unbalanced. Plain random forest is just trying to mark all test samples as a majority class.
Some good answers about sub-sampling and weighted random forest are given here: What are the implications for training a Tree Ensemble with highly biased datasets?
Which classification methods besides RF can handle the problem in the best way?
AI: Max Kuhn covers this well in Ch16 of Applied Predictive Modeling.
As mentioned in the linked thread, imbalanced data is essentially a cost sensitive training problem. Thus any cost sensitive approach is applicable to imbalanced data.
There are a large number of such approaches, though not all are implemented in R: C50 and weighted SVMs are options, as is JOUSBoost. RUSBoost, I think, is only available as Matlab code.
I don't use Weka, but believe it has a large number of cost sensitive classifiers.
Handling imbalanced datasets: A review: Sotiris Kotsiantis, Dimitris Kanellopoulos, Panayiotis Pintelas'
On the Class Imbalance Problem: Xinjian Guo, Yilong Yin, Cailing Dong, Gongping Yang, Guangtong Zhou
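As a concrete illustration of the cost-sensitive idea (not taken from the references above), scikit-learn exposes per-class weights on several classifiers; synthetic data stands in for the 1000/10000 split:
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
# Roughly 10:1 imbalance, like the question's training set
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
# 'balanced' reweights errors inversely to class frequency
rf = RandomForestClassifier(class_weight="balanced", random_state=0).fit(X, y)
svm = SVC(class_weight="balanced").fit(X, y)
print((rf.predict(X) == 1).sum(), (svm.predict(X) == 1).sum()) |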
H: Combine multiple classifiers to build a multi-modal classifier
Suppose I am interested in classifying a set of instances composed by different content types, e.g.:
a piece of text
an image
as relevant or non-relevant for a specific class C.
In my classification process I perform the following steps:
Given a sample, I subdivide it in text and image
A first SVM binary classifier (SVM-text), trained only on text, classifies the text as relevant/non-relevant for the class C
A second SVM binary classifier (SVM-image), trained only on images, classifies the image as relevant/non-relevant for the class C
Both SVM-text and SVM-image produce an estimate of the probability of the analyzed content (text or image) of being relevant for the class C. Given this, I am able to state whether the text is relevant for C and the image is relevant for C.
However, these estimates are valid for segments of the original sample (either the text or the image), while it is not clear how to obtain a general opinion on the whole original sample (text+image). How can I combine conveniently the opinions of the two classifiers, so as to obtain a classification for the whole original sample?
AI: Basically, you can do one of two things:
Combine features from both classifiers. I.e., instead of SVM-text and SVM-image you may train single SVM that uses both - textual and visual features.
Use ensemble learning. If you already have probabilities from separate classifiers, you can simply use them as weights and compute weighted average. For more sophisticated cases there are Bayesian combiners (each classifier has its prior), boosting algorithms (e.g. see AdaBoost) and others.
Note that ensembles were initially created for combining different learners, not different sets of features. In the latter case ensembles have an advantage mostly when different kinds of features just can't be combined in a single vector efficiently. But in general, combining features is simpler and more straightforward.
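A tiny sketch of the weighted-average combiner (the probabilities and weights below are made up; the weights could come from each model's validation accuracy, for instance):
import numpy as np
# Per-sample P(relevant | text) and P(relevant | image) from the two SVMs
p_text = np.array([0.9, 0.2, 0.6])
p_image = np.array([0.7, 0.4, 0.3])
w_text, w_image = 0.6, 0.4                        # classifier weights, summing to 1
p_combined = w_text * p_text + w_image * p_image
print(p_combined, (p_combined >= 0.5).astype(int)) |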
H: Difference between tf-idf and tf with Random Forests
I am working on a text classification problem using Random Forest as classifiers, and a bag-of-words approach.
I am using the basic implementation of Random Forests (the one present in scikit-learn), which creates a binary condition on a single variable at each split. Given this, is there a difference between using simple tf (term frequency) features, where each word has an associated weight that represents the number of occurrences in the document, and tf-idf (term frequency * inverse document frequency) features, where the term frequency is also multiplied by a value that represents the ratio between the total number of documents and the number of documents containing the word?
In my opinion, there should not be any difference between these two approaches, because the only difference is a scaling factor on each feature, but since the split is done at the level of single features this should not make a difference.
Am I right in my reasoning?
AI: Decision trees (and hence Random Forests) are insensitive to monotone transformations of input features.
Since multiplying by the same factor is a monotone transformation, I'd assume that for Random Forests there indeed is no difference.
However, you eventually may consider using other classifiers that do not have this property, so it may still make sense to use the entire TF * IDF. |
H: How to do SVD and PCA with big data?
I have a large set of data (about 8GB). I would like to use machine learning to analyze it. So, I think that I should use SVD then PCA to reduce the data dimensionality for efficiency. However, MATLAB and Octave cannot load such a large dataset.
What tools I can use to do SVD with such a large amount of data?
AI: First of all, dimensionality reduction is used when you have many covariated dimensions and want to reduce problem size by rotating data points into new orthogonal basis and taking only axes with largest variance. With 8 variables (columns) your space is already low-dimensional, reducing number of variables further is unlikely to solve technical issues with memory size, but may affect dataset quality a lot. In your concrete case it's more promising to take a look at online learning methods. Roughly speaking, instead of working with the whole dataset, these methods take a little part of them (often referred to as "mini-batches") at a time and build a model incrementally. (I personally like to interpret word "online" as a reference to some infinitely long source of data from Internet like a Twitter feed, where you just can't load the whole dataset at once).
But what if you really wanted to apply dimensionality reduction technique like PCA to a dataset that doesn't fit into a memory? Normally a dataset is represented as a data matrix X of size n x m, where n is number of observations (rows) and m is a number of variables (columns). Typically problems with memory come from only one of these two numbers.
Too many observations (n >> m)
When you have too many observations, but the number of variables is from small to moderate, you can build the covariance matrix incrementally. Indeed, typical PCA consists of constructing a covariance matrix of size m x m and applying singular value decomposition to it. With m=1000 variables of type float64, a covariance matrix has size 1000*1000*8 ~ 8Mb, which easily fits into memory and may be used with SVD. So you need only to build the covariance matrix without loading entire dataset into memory - pretty tractable task.
Alternatively, you can select a small representative sample from your dataset and approximate the covariance matrix. This matrix will have all the same properties as normal, just a little bit less accurate.
Too many variables (n << m)
On another hand, sometimes, when you have too many variables, the covariance matrix itself will not fit into memory. E.g. if you work with 640x480 images, every observation has 640*480=307200 variables, which results in a 703Gb covariance matrix! That's definitely not what you would like to keep in memory of your computer, or even in memory of your cluster. So we need to reduce dimensions without building a covariance matrix at all.
My favourite method for doing it is Random Projection. In short, if you have dataset X of size n x m, you can multiply it by some sparse random matrix R of size m x k (with k << m) and obtain new matrix X' of a much smaller size n x k with approximately the same properties as the original one. Why does it work? Well, you should know that PCA aims to find set of orthogonal axes (principal components) and project your data onto first k of them. It turns out that sparse random vectors are nearly orthogonal and thus may also be used as a new basis.
And, of course, you don't have to multiply the whole dataset X by R - you can translate every observation x into the new basis separately or in mini-batches.
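For illustration, here is a minimal random-projection sketch with scikit-learn (random data stands in for real observations, and the column count is kept small for the demo):
import numpy as np
from sklearn.random_projection import SparseRandomProjection
X = np.random.rand(1000, 10000)          # 1000 observations, 10000 variables
proj = SparseRandomProjection(n_components=500, random_state=0)
X_small = proj.fit_transform(X)          # 1000 x 500; pairwise distances roughly preserved
print(X_small.shape)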
There's also somewhat similar algorithm called Random SVD. I don't have any real experience with it, but you can find example code with explanations here.
As a bottom line, here's a short check list for dimensionality reduction of big datasets:
If you have not that many dimensions (variables), simply use online learning algorithms.
If there are many observations, but a moderate number of variables (covariance matrix fits into memory), construct the matrix incrementally and use normal SVD.
If number of variables is too high, use incremental algorithms. |
H: Invariance Property of Vowpal Wabbit Updates - Explaination
One of the nice aspects of the procedure that Vowpal Wabbit uses for SGD updates (pdf) is the so-called importance-weight invariance, described in the linked paper as:
"Among these updates we mainly focus on a novel set of updates that satisfies an additional invariance property: for all importance weights of h, the update is equivalent to two updates with importance weight h/2. We call these updates importance invariant."
What does this mean and why is it useful?
AI: Often different data samples have different weightings (e.g. the cost of misclassification error for one group of data is higher than for other classes).
Most error metrics are of the form $\sum_i e_i$, where $e_i$ is the loss (e.g. squared error) on data point $i$. Therefore weightings of the form $\sum_i w_i e_i$ are equivalent to duplicating the data $w_i$ times (e.g. for integer $w_i$).
One simple case is if you have repeated data - rather than keeping all the duplicated data points, you just "weight" your one repeated sample by the number of instances.
Now, whilst this is easy to do in a batch setting, it is hard in Vowpal Wabbit's online big-data setting: given that you have a large data set, you do not just want to present the data $n$ times to deal with the weighting (because it increases your computational load). Similarly, just multiplying the gradient vector by the weighting - which is correct in batch gradient descent - will cause big problems for stochastic/online gradient descent: essentially you shoot off in one direction (think of large integer weights), then you shoot off in the other - causing significant instability. SGD essentially relies on all the errors being of roughly the same order (so that the learning rate can be set appropriately). So what they propose is to ensure that the update for training sample $x_i$ with weight $n$ is equivalent to presenting training sample $x_i$ $n$ times consecutively.
The idea is that presenting it consecutively reduces the problem, because the error gradient (for that single example $x_i$) shrinks with each consecutive presentation and update (as you get closer and closer to the minimum for that specific example). In other words, the consecutive updates provide a kind of feedback control.
To me it sounds like you would still have instabilities (you get to zero error on $x_i$, then you get to zero error on $x_{i+1}$, ...). The learning rate will need to be adjusted to take into account the size of the weights. |
H: Versatile data structure for combined statistics
Not sure if this is Math, Stats or Data Science, but I figured I would post it here to get the site used.
As a programmer, when you have a system/component implemented, you might want to allow some performance monitoring. For example to query how often a function call was used, how long it took and so on. So typically you care about count, means/percentile, max/min and similiar statistics. This could be measurements since startup, but also a rolling average or window.
I wonder if there is a good data structure which can be updated efficiently concurrently which can be used as the source for most of those queries. For example having a ringbuffer of rollup-metrics (count, sum, min, max) over increasing periods of time and a background aggregate process triggered regularly.
The focus here (for me) is on in-memory data structures with limited memory consumption. (For other things I would use a RRD type of library).
AI: It sounds like you would like the Boost Accumulators library:
Boost.Accumulators is both a library for incremental statistical computation as well as an extensible framework for incremental calculation in general. The library deals primarily with the concept of an accumulator, which is a primitive computational entity that accepts data one sample at a time and maintains some internal state. These accumulators may offload some of their computations on other accumulators, on which they depend. Accumulators are grouped within an accumulator set. Boost.Accumulators resolves the inter-dependencies between accumulators in a set and ensures that accumulators are processed in the proper order.
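Boost.Accumulators is C++, but the core idea is small enough to sketch in a few lines of Python, if that helps to see what such a rollup accumulator has to maintain (this is an illustration of the concept, not the Boost API):
from dataclasses import dataclass
@dataclass
class Rollup:
    """Incremental count / sum / min / max over a stream of samples."""
    count: int = 0
    total: float = 0.0
    minimum: float = float("inf")
    maximum: float = float("-inf")
    def add(self, x: float) -> None:
        self.count += 1
        self.total += x
        self.minimum = min(self.minimum, x)
        self.maximum = max(self.maximum, x)
    @property
    def mean(self) -> float:
        return self.total / self.count if self.count else float("nan")
m = Rollup()
for latency_ms in (12.0, 7.5, 30.2):
    m.add(latency_ms)
print(m.count, m.mean, m.minimum, m.maximum) |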
H: How to classify test objects with this ruleset in order of priority?
I'm coding a program that tests several classifiers over the weather.arff database. I found the rules below, and I want to classify test objects.
I do not understand how the classification works; it is described as follows:
"In classification, let R be the set of generated rules and T the training data. The basic idea of the proposed method is to choose a set of high confidence rules in R to cover T. In classifying a test object, the first rule in the set of rules that matches the test object condition classifies it. This process ensures that only the highest ranked rules classify test objects. "
How to classify test objects?
No.  outlook   temperature  humidity  windy  play
1    sunny     hot          high      FALSE  no
2    sunny     hot          high      TRUE   no
3    overcast  hot          high      FALSE  yes
4    rainy     mild         high      FALSE  yes
5    rainy     cool         normal    FALSE  yes
6    rainy     cool         normal    TRUE   no
7    overcast  cool         normal    TRUE   yes
8    sunny     mild         high      FALSE  no
9    sunny     cool         normal    FALSE  yes
10   rainy     mild         normal    FALSE  yes
11   sunny     mild         normal    TRUE   yes
12   overcast  mild         high      TRUE   yes
13   overcast  hot          normal    FALSE  yes
14   rainy     mild         high      TRUE   no
Rules found:
1: (outlook,overcast) -> (play,yes)
[Support=0.29 , Confidence=1.00 , Correctly Classify= 3, 7, 12, 13]
2: (humidity,normal), (windy,FALSE) -> (play,yes)
[Support=0.29 , Confidence=1.00 , Correctly Classify= 5, 9, 10]
3: (outlook,sunny), (humidity,high) -> (play,no)
[Support=0.21 , Confidence=1.00 , Correctly Classify= 1, 2, 8]
4: (outlook,rainy), (windy,FALSE) -> (play,yes)
[Support=0.21 , Confidence=1.00 , Correctly Classify= 4]
5: (outlook,sunny), (humidity,normal) -> (play,yes)
[Support=0.14 , Confidence=1.00 , Correctly Classify= 11]
6: (outlook,rainy), (windy,TRUE) -> (play,no)
[Support=0.14 , Confidence=1.00 , Correctly Classify= 6, 14]
Thanks,
Dung
AI: Suppose your test object is (sunny, hot, normal, TRUE). Look through the rules from top to bottom and see if any of the conditions are matched. The first rule, for example, tests the outlook feature. The value doesn't match, so the rule isn't matched. Move on to the next rule. And so on. In this case, rule 5 matches the test case and the classification for the play variable is "yes".
More generally, for any test case, look at the values its features take and find the first rule whose conditions those values satisfy. The consequent of that rule will be its classification.
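A minimal Python sketch of that first-match procedure, using the rules listed in the question:
# Rules in ranked order: (conditions, predicted value of 'play')
rules = [
    ({"outlook": "overcast"}, "yes"),
    ({"humidity": "normal", "windy": "FALSE"}, "yes"),
    ({"outlook": "sunny", "humidity": "high"}, "no"),
    ({"outlook": "rainy", "windy": "FALSE"}, "yes"),
    ({"outlook": "sunny", "humidity": "normal"}, "yes"),
    ({"outlook": "rainy", "windy": "TRUE"}, "no"),
]
def classify(obj):
    for conditions, label in rules:
        if all(obj.get(k) == v for k, v in conditions.items()):
            return label     # the first matching rule classifies the object
    return None              # no rule matched; fall back to a default class
print(classify({"outlook": "sunny", "temperature": "hot",
                "humidity": "normal", "windy": "TRUE"}))   # -> "yes" (rule 5) |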
H: Recommended Language/Framework for Building a New Recommendation Engine
Next week I'm going to begin prototyping a recommendation engine for work. I've implemented/completed the Netflix Challenge in Java before (for college) but have no real idea what to use for a production/enterprise level recommendation engine. Taking into consideration everything from a standalone programming language to things like Apache Mahout and Neo4j, does anyone have any advice on how to proceed?
AI: If you merely want to scale up a simple collaborative filter (low rank matrix factorization), I'd suggest looking at graphlab. Another graph-based (or should I say Giraph?) solution is Okapi. Spark's MLLib is another option (details), and it also supports implicit feedback out of the box. Mahout's behind the curve today; I wouldn't bother with it until it is migrated to Spark.
If you want to do something that the libraries don't do, say with regularization, you'll have to roll your own solution in your general purpose programming language of choice. It's not hard to get a prototype running, but you might run into scale problems in production; that's why I recommended solutions that scale easily.
There are also black box recommender systems from commercial vendors but I have no experience with those. |
H: Career switch to Big Data Analytics
I am a 35 year old IT professional who is purely technical. I am good at programming, learning new technologies, understanding them and implementing them. I did not like mathematics at school, so I didn't score well in it. I am very much interested in pursuing a career in Big Data analytics. I am more interested in analytics than in Big Data technologies (Hadoop etc.), though I do not dislike them. However, when I look around on the internet, I see that people who are good at analytics (data scientists) are mainly mathematics graduates who have done their PhDs and sound like intelligent creatures who are far, far ahead of me. I sometimes get scared thinking about whether my decision is correct, because learning advanced statistics on your own is very tough and requires a lot of hard work and time investment.
I would like to know whether my decision is correct, or should I leave this piece of work only to intellectuals who have spent their lives studying in prestigious colleges and earning their degrees and PhDs.
AI: Due to high demand, it is possible to start a career in data science without a formal degree. My experience is that having a degree is often a 'requirement' in job descriptions, but if the employer is desperate enough, then that won't matter. In general, it's harder to get into large corporations with formalized job application processes than smaller companies without them. "Knowing people" can get you a long way, in either case.
Regardless of your education, no matter how high demand is, you must have the skills to do the job.
You are correct in noting that advanced statistics and other mathematics are very hard to learn independently. It is a matter of how badly you want to make the career change. While some people do have 'natural talent' in mathematics, everybody does have to do the work to learn. Some may learn more quickly, but everybody has to take the time to learn.
What it comes down to is your ability to show potential employers that you have a genuine interest in the field, and that you will be able to learn quickly on the job. The more knowledge you have, the more projects you can share in a portfolio, and the more work experience under your belt, the higher level jobs that will be available to you. You may have to start in an entry level position first.
I could suggest ways to study mathematics independently, but that isn't part of your question. For now, just know that it's hard, but possible if you are determined to make a career change. Strike while the iron is hot (while demand is high). |
H: Scikit Learn Logistic Regression Memory Leak
I'm curious if anyone else has run into this. I have a data set with about 350k samples, each with 4k sparse features. The sparse fill rate is about 0.5%. The data is stored in a scipy.sparse.csr.csr_matrix object, with dtype='numpy.float64'.
I'm using this as an input to sklearn's Logistic Regression classifier. The documentation indicates that sparse CSR matrices are acceptable inputs to this classifier. However, when I train the classifier, I get extremely bad memory performance; the memory usage of my process explodes from ~150 MB to fill all the available memory and then everything grinds to a halt as memory swapping to disk takes over.
Does anyone know why this classifier might expand the sparse matrix to a dense matrix? I'm using the default parameters for the classifier at the moment, within an updated Anaconda distribution. Thanks!
scipy.__version__ = '0.14.0'
sklearn.__version__ = '0.15.2'
AI: Ok, this ended up being an RTFM situation, although in this case it was RTF error message.
While running this, I kept getting the following error:
DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
I assumed that, since this had to do with the target vector and it was only a warning, it would just silently change my target vector to 1-D.
However, when I explicitly converted my target vector to 1-D, my memory problems went away. Apparently, having the target vector in an incorrect shape caused scikit-learn to convert my sparse input matrix into a dense one.
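For anyone hitting the same issue, here is a minimal sketch of the fix; the data below is just a small random stand-in for the real 350k x 4k matrix:
import numpy as np
from scipy.sparse import rand as sparse_rand
from sklearn.linear_model import LogisticRegression

# small stand-in for the real data (~0.5% fill rate, CSR format)
X = sparse_rand(1000, 400, density=0.005, format='csr')
y = np.random.randint(0, 2, size=(1000, 1))   # column vector -> triggers the warning

clf = LogisticRegression()
clf.fit(X, y.ravel())   # flattening y to shape (n_samples,) keeps X sparse during fitting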
Lesson learned: follow the recommendations when sklearn 'suggests' you do something. |
H: Cosine Similarity for Ratings Recommendations? Why use it?
Let's say I have a database of users who rate different products on a scale of 1-5. Our recommendation engine recommends products to users based on the preferences of other users who are highly similar. My first approach to finding similar users was to use cosine similarity, just treating the user ratings as vector components. The main problem with this approach is that it only measures vector angles and doesn't take rating scale or magnitude into consideration.
My question is this:
Are there any drawbacks to just using the percentage difference between the vector components of two vectors as a measure of similarity? What disadvantages, if any, would I encounter if I used that method, instead of Cosine Similarity or Euclidean Distance?
For Example, why not just do this:
n = 5 stars
a = (1,4,4)
b = (2,3,4)
similarity(a,b) = 1 - ( (|1-2|/5) + (|4-3|/5) + (|4-4|/5) ) / 3 = .86667
Instead of Cosine Similarity :
a = (1,4,4)
b = (2,3,4)
CosSimilarity(a,b) =
( (1*2)+(4*3)+(4*4) ) / ( sqrt( (1^2)+(4^2)+(4^2) ) * sqrt( (2^2)+(3^2)+(4^2) ) ) = .9697
AI: Rating bias and scale can easily be accounted for by standardization. The point of using Euclidean similarity metrics in vector space co-embeddings is that it reduces the recommendation problem to one of finding the nearest neighbors, which can be done efficiently both exactly and approximately. What you don't want to do in real-life settings is to have to compare every item/user pair and sort them according to some expensive metric. That just doesn't scale.
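To make the standardization point concrete, here is a minimal sketch of mean-centering each user's ratings before taking the cosine (a Pearson-style correction), using the vectors from the question:
import numpy as np

def adjusted_cosine(u, v):
    # centering on each user's own mean removes per-user rating bias,
    # so 'tough' and 'generous' raters with the same preferences look similar
    u_c, v_c = u - u.mean(), v - v.mean()
    denom = np.linalg.norm(u_c) * np.linalg.norm(v_c)
    return float(u_c @ v_c / denom) if denom else 0.0

a = np.array([1.0, 4.0, 4.0])
b = np.array([2.0, 3.0, 4.0])
print(adjusted_cosine(a, b))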
One trick is to use an approximation to cull the herd to a manageable size of tentative recommendations, then to run your expensive ranking on top of that.
edit: Microsoft Research is presenting a paper that covers this very topic at RecSys right now: Speeding Up the Xbox Recommender System Using a Euclidean Transformation for Inner-Product Spaces
H: how to get the Polysemes of a word in wordnet or any other api?
How can I get the polysemes of a word in WordNet or any other API? I am looking for any API that works with Java; any idea is appreciated.
AI: There are several third-party Java APIs for WordNet listed here: http://wordnet.princeton.edu/wordnet/related-projects/#Java
In the past, I've used JWNL the most: http://sourceforge.net/projects/jwordnet/
The documentation for JWNL isn't great, but it should provide the functionality you need. |
H: Hashing Trick - what actually happens
When ML algorithms, e.g. Vowpal Wabbit or some of the factorization machines winning click-through-rate competitions (Kaggle), mention that features are 'hashed', what does that actually mean for the model? Let's say there is a variable that represents the ID of an internet ad, which takes on values such as '236BG231'. Then I understand that this feature is hashed to a random integer. But my question is:
Is the integer now used in the model as an integer (numeric), OR
is the hashed value actually still treated like a categorical variable and one-hot-encoded? Is the hashing trick thus just a way to save space with large data?
AI: The second bullet is the value in feature hashing. Hashing combined with one-hot encoding into sparse data saves space. Depending on the hash algorithm you can have varying degrees of collisions, which acts as a kind of dimensionality reduction.
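A minimal scikit-learn sketch of this idea, if it helps make it concrete (the feature strings below are invented):
from sklearn.feature_extraction import FeatureHasher

# hash categorical features (e.g. an ad ID) into a fixed-size sparse vector;
# a smaller n_features means more collisions, i.e. more dimensionality reduction
hasher = FeatureHasher(n_features=2**20, input_type='string')
X = hasher.transform([['ad_id=236BG231', 'site=example.com'],
                      ['ad_id=98AF11X', 'site=other.com']])
print(X.shape, X.nnz)   # (2, 1048576) sparse matrix; only non-zero slots are stored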
Also, in the specific case of Kaggle feature hashing and one hot encoding help with feature expansion/engineering by taking all possible tuples (usually just second order but sometimes third) of features that are then hashed with collisions that explicitly create interactions that are often predictive whereas the individual features are not.
In most cases this technique combined with feature selection and elastic net regularization in LR acts very similar to a one hidden layer NN so it performs quite well in competitions. |
H: Where to start on neural networks
First of all I know the question may be not suitable for the website but I'd really appreciate it if you just gave me some pointers.
I'm a 16-year-old programmer and I've had experience with many different programming languages. A while ago I started a Coursera course titled Introduction to Machine Learning, and since that moment I got very motivated to learn about AI. I started reading about neural networks and I made a working perceptron using Java, which was really fun. But when I started to do something a little more challenging (building digit-recognition software), I found out that I have to learn a lot of math. I love math, but the schools here don't teach us much. I happen to know someone who is a math teacher: do you think learning math (specifically calculus) is necessary for me to learn AI, or should I wait until I learn that stuff at school?
Also what other things would be helpful in the path of me learning AI and machine learning? do other techniques (like SVM) also require strong math?
Sorry if my question is long, I'd really appreciate if you could share with me any experience you have had with learning AI.
AI: No, you should go ahead and learn the maths on your own. You will "only" need to learn calculus, statistics, and linear algebra (like the rest of machine learning). The theory of neural networks is pretty primitive at this point -- it is more of an art than a science -- so I think you can understand it if you try. That said, there are a lot of tricks that you need practical experience to learn. There are lots of complicated extensions, but you can worry about them once you get that far.
Once you can understand the Coursera classes on ML and neural networks (Hinton's), I suggest getting some practice. You might like this introduction. |
H: What is the best Data Mining algorithm for prediction based on a single variable?
I have a variable whose value I would like to predict, and I would like to use only one variable as predictor. For instance, predict traffic density based on weather.
Initially, I thought about using Self-Organizing Maps (SOM), which perform unsupervised clustering + regression. However, since they have an important dimensionality-reduction component, I see them as more appropriate for a large number of variables.
Does it make sense to use it for a single variable as predictor? Maybe there are more adequate techniques for this simple case: I used "Data Mining" instead of "machine learning" in the title of my question, because I think maybe a linear regression could do the job...
AI: Common rule in machine learning is to try simple things first. For predicting continuous variables there's nothing more basic than simple linear regression. "Simple" in the name means that there's only one predictor variable used (+ intercept, of course):
y = b0 + x*b1
where b0 is an intercept and b1 is a slope. For example, you may want to predict lemonade consumption in a park based on temperature:
cons = b0 + temp * b1
Temperature is a well-defined continuous variable. But if we talk about something more abstract like "weather", then it's harder to understand how to measure and encode it. It's ok if we say that the weather takes values {terrible, bad, normal, good, excellent} and assign these values numbers from -2 to +2 (implying that "excellent" weather is twice as good as "good"). But what if the weather is given by words {shiny, rainy, cool, ...}? We can't give an order to these values. We call such variables categorical. Since there's no natural order between the different categories, we can't encode them as a single numerical variable (and linear regression expects numbers only), but we can use so-called dummy encoding: instead of a single variable weather we use 3 variables - [weather_shiny, weather_rainy, weather_cool], only one of which can take the value 1, while the others take the value 0. In fact, we will have to drop one of them because of collinearity. So a model for predicting traffic from weather may look like this:
traffic = b0 + weather_shiny * b1 + weather_rainy * b2 # weather_cool dropped
where either weather_shiny or weather_rainy is 1, or both are 0 (which corresponds to the dropped "cool" category).
Note that you can also encounter a non-linear dependency between the predictor and predicted variables (you can easily check it by plotting (x,y) pairs). The simplest way to deal with it without abandoning the linear model is to use polynomial features - simply add polynomials of your feature as new features. E.g. for the temperature example (for dummy variables it doesn't make sense, because 1^n and 0^n are still 1 and 0 for any n):
traffic = b0 + temp * b1 + temp^2 * b2 [+ temp^3 * b3 + ...] |
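To tie the pieces together, here is a small Python sketch of the dummy-encoded weather model (the numbers are invented); the commented line shows how polynomial features for a continuous predictor like temperature would be added:
import numpy as np
from sklearn.linear_model import LinearRegression

weather = np.array(['shiny', 'rainy', 'cool', 'shiny', 'rainy', 'cool'])
traffic = np.array([120.0, 310.0, 180.0, 115.0, 295.0, 175.0])

# dummy encoding: one column per category, dropping 'cool' as the baseline
X = np.column_stack([(weather == 'shiny').astype(float),
                     (weather == 'rainy').astype(float)])
# for a continuous predictor, polynomial terms would be added like:
# X = np.column_stack([temp, temp ** 2])

model = LinearRegression().fit(X, traffic)
print(model.intercept_, model.coef_)   # b0 and [b1, b2] from the formulas above
print(model.predict([[1.0, 0.0]]))     # predicted traffic for shiny weather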
H: Regression Model for explained model(Details inside)
I am kind of a newbie on machine learning and I would like to ask some questions based on a problem I have .
Let's say I have x y z as variable and I have values of these variables as time progresses like :
t0 = x0 y0 z0
t1 = x1 y1 z1
tn = xn yn zn
Now I want a model that when it's given 3 values of x , y , z I want a prediction of them like:
Input : x_test y_test z_test
Output : x_prediction y_prediction z_prediction
These values are float numbers. What is the best model for this kind of problem?
Thanks in advance for all the answers.
More details:
Ok so let me give some more details about the problems so as to be more specific.
I have run certain benchmarks and taken values of performance counters from the cores of a system per interval.
The performance counters are the x, y, z in the above example. They are dependent on each other. A simple example is x = IPC, y = cache misses, z = energy at core.
So I got this dataset of all these performance counters per interval. What I want to do is create a model that, after learning from the training dataset, will be given a certain state of the core (the performance counters) and predict the performance counters that the core will have in the next interval.
AI: AFAIK if you want to predict the value of one variable, you need to have one or more variables as predictors; i.e.: you assume the behaviour of one variable can be explained by the behaviour of other variables.
In your case you have three independent variables whose value you want to predict, and since you don't mention any other variables, I assume that each variable depends on the others. In that case you could fit three models (for instance, regression models), each of which would predict the value of one variable, based on the others. As an example, to predict x:
x_prediction=int+cy*y_test+cz*z_test
where int is the intercept and cy, cz are the coefficients of the linear regression.
Likewise, in order to predict y and z:
y_prediction=int+cx*x_test+cz*z_test
z_prediction=int+cx*x_test+cy*y_test |
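A rough sketch of that idea in Python; the counter values below are random stand-ins for your real per-interval measurements:
import numpy as np
from sklearn.linear_model import LinearRegression

data = np.random.rand(100, 3)          # columns: x (IPC), y (cache misses), z (energy)

models = {}
for i, name in enumerate(['x', 'y', 'z']):
    predictors = np.delete(data, i, axis=1)        # the other two counters
    models[name] = LinearRegression().fit(predictors, data[:, i])

x_prediction = models['x'].predict([[0.4, 0.7]])   # predict x from (y_test, z_test)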
H: Scalable Outlier/Anomaly Detection
I am trying to set up a big data infrastructure using Hadoop, Hive, Elastic Search (amongst others), and I would like to run some algorithms over certain datasets. I would like the algorithms themselves to be scalable, so this excludes using tools such as Weka, R, or even RHadoop.
What I am struggling to find is a solution for anomaly or outlier detection.
Since Mahout features Hidden Markov Models and a variety of clustering techniques (including k-means), I was wondering if it would be possible to build a model to detect outliers in time series using any of these. I would be grateful if somebody experienced in this could advise me
if it is possible, and in case it is
how to do it, plus
an estimation of the effort involved and
accuracy/problems of this approach.
AI: I would take a look at the t-digest algorithm. It's been merged into Mahout and is also part of some other libraries for big data streaming. You can learn more about this algorithm in particular, and about big data anomaly detection in general, from the following resources:
Practical machine learning anomaly detection book.
Webinar: Anomaly Detection When You Don't Know What You Need to Find
Anomaly Detection in Elasticsearch.
Beating Billion Dollar Fraud Using Anomaly Detection: A Signal Processing Approach using Argyle Data on the Hortonworks Data Platform with Accumulo |
H: R Script to generate random dataset in 2d space
I want to analyze the effectiveness and efficiency of kernel methods, for which I would require 3 different data sets in 2-dimensional space, one for each of the following cases:
BAD_kmeans: The data set for which the kmeans clustering algorithm
will not perform well.
BAD_pca: The data set for which the Principal Component Analysis
(PCA) dimension reduction method upon projection of the original
points into 1-dimensional space (i.e., the first eigenvector) will
not perform well.
BAD_svm: The data set for which the linear Support Vector Machine
(SVM) supervised classification method using two classes of points
(positive and negative) will not perform well.
Which packages can I use in R to generate a random 2D data set for each of the above cases? A sample script in R would help in understanding.
AI: None of the algorithms you mention are good with data that has uniform distribution.
size <- 20 #length of random number vectors
set.seed(1)
x <- runif(size) # generate samples from uniform distribution (0.0, 1.0)
y <-runif(size)
df <-data.frame(x,y)
# other distributions: rpois, rmvnorm, rnbinom, rbinom, rbeta, rchisq, rexp, rgamma, rlogis, rstab, rt, rgeom, rhyper, rwilcox, rweibull.
See this page for tutorial on generating random samples from distributions.
For specific set of randomized data sets that are 'hard' for these methods (e.r. linearly inseparable n-classes XOR patterns), see this blog post (incl. R code): http://tjo-en.hatenablog.com/entry/2014/01/06/234155. |
H: Which packages or functions can I use in R to plot 3D data like this?
There are many data points, each of which is associated with two coordinates and a numeric value, or equivalently three coordinates, and I would like the plot to be coloured.
I checked the packages "scatterplot3d" and "plot3D" but I couldn't find one like the example I give. It looks as if it has a fitted surface.
My data is basically like the following. I think this kind of plot would be perfectly suitable for it:
ki,kt,Top10AverageF1Score
360,41,0.09371256716549396
324,41,0.09539634212851525
360,123,0.09473510831594467
36,164,0.09773486852645874
...
But I also may have one more additional variable, which makes it like:
NeighborhoodSize,ki,kt,Top10AverageF1Score
10,360,41,0.09371256716549396
15,324,41,0.09539634212851525
15,360,123,0.09473510831594467
20,36,164,0.09773486852645874
...
Do you also have any good idea for visualizing the second case? What kind of plot and which packages and functions, etc.
AI: You could use the wireframe function from the lattice package:
library("lattice")
wireframe(volcano[1:30, 1:30], shade=TRUE, zlab="") |
H: Masters thesis topics in big data
I am looking for a thesis topic to complete my master's (M2). I will work on a topic in the big data field (creating big data applications), using Hadoop/MapReduce and its ecosystem (visualisation, analysis, ...). Please suggest some topics or projects that would make for a good master's thesis subject.
I should add that I have a background in data warehouses, databases and data mining, and good skills in programming, system administration and cryptography.
Thanks
AI: Since it's a master's thesis, how about writing something regarding decision trees, and their "upgrades": boosting and Random Forests? And then integrate that with Map/Reduce, together with showing how to scale a Random Forest on Hadoop using M/R? |
H: MovieLens data set
I want to analyze the MovieLens data set and have loaded the 1M file on my machine. I actually combine two data files (ratings.dat and movies.dat) and sort the table according to the 'userId' and 'Time' columns. The head of my DataFrame looks like this (all column values correspond to the original data sets):
In [36]: df.head(10)
Out[36]:
userId movieId Rating Time movieName \
40034 1 150 5 978301777 Apollo 13 (1995)
77615 1 1028 5 978301777 Mary Poppins (1964)
550485 1 2018 4 978301777 Bambi (1942)
400889 1 1962 4 978301753 Driving Miss Daisy (1989)
787274 1 1035 5 978301753 Sound of Music, The (1965)
128308 1 938 4 978301752 Gigi (1958)
497972 1 3105 5 978301713 Awakenings (1990)
28417 1 2028 5 978301619 Saving Private Ryan (1998)
6551 1 1961 5 978301590 Rain Man (1988)
35492 1 2692 4 978301570 Run Lola Run (Lola rennt) (1998)
genre
40034 Drama
77615 Children's|Comedy|Musical
550485 Animation|Children's
400889 Drama
787274 Musical
128308 Musical
497972 Drama
28417 Action|Drama|War
6551 Drama
35492 Action|Crime|Romance
[10 rows x 6 columns]
I cannot understand how the same user (userId 1) saw or rated different movies (Apollo 13 (id 150), Mary Poppins (id 1028) and Bambi (id 2018)) at exactly the same time, down to the second. If somebody has already worked with this data set, please clarify this situation.
AI: When you enter ratings on movie lens, you get pages with 10 movies or so. You set all the ratings, then submit by clicking "next page" or something.
So I guess all the ratings for the same page are received at the same time, when you submit the page. |
H: Clustering strings inside strings?
I am not sure whether I formulated the question correctly. Basically, what I want to do is:
Let's suppose I have a list of 1000 strings which look like this:
cvzxcvzxstringcvzcxvz
otortorotrstringgrptprt
vmvmvmeopstring2vmrprp
vccermpqpstring2rowerm
proorororstring3potrprt
mprto2435string3famerpaer
etc.
I'd like to extract the recurring substrings that occur in the list. What solution should I use? Does anyone know of an algorithm that could do this?
AI: Interesting question! I have not encountered it before so here is a solution I just made up, inspired by the approach taken by the word2vec paper:
Define the pair-wise similarity based on the longest common substring (LCS), or the LCS normalized by the products of the string lengths. Cache this in a matrix for any pair of strings considered since it is expensive to calculate. Also consider approximations.
Find a Euclidean (hyperspherical, perhaps?) embedding that minimizes the error (Euclidean distance if using the ball, and the dot product if using the sphere). Assume random initialization, and use a gradient-based optimization method by taking the Jacobian of the error.
Now you have a Hilbert space embedding, so cluster using your algorithm of choice!
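As a rough simplification of steps 1 and 3 (skipping the embedding, and normalizing by the shorter string's length rather than the product just to keep values in [0, 1]), something like this sketch could work:
import numpy as np
from difflib import SequenceMatcher
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

strings = ['cvzxcvzxstringcvzcxvz', 'otortorotrstringgrptprt',
           'vmvmvmeopstring2vmrprp', 'vccermpqpstring2rowerm',
           'proorororstring3potrprt', 'mprto2435string3famerpaer']

def lcs_len(a, b):
    # longest common contiguous substring via difflib
    m = SequenceMatcher(None, a, b).find_longest_match(0, len(a), 0, len(b))
    return m.size

n = len(strings)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        sim = lcs_len(strings[i], strings[j]) / min(len(strings[i]), len(strings[j]))
        dist[i, j] = dist[j, i] = 1.0 - sim          # turn similarity into a distance

# hierarchical clustering directly on the precomputed distance matrix
labels = fcluster(linkage(squareform(dist), method='average'), t=0.5, criterion='distance')
print(labels)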
Response to deleted comment asking how to cluster multiple substrings: The bulk of the complexity lies in the first stage, the calculation of the LCS, so it depends on how efficiently you do that. I've had luck with genetic algorithms. Anyway, what you'd do in this case is define a similarity vector rather than a scalar, whose elements are the k longest pair-wise LCSs; see this discussion for algorithms. Then I would define the error by the sum of the errors corresponding to each substring.
Something I did not address is how to choose the dimensionality of the embedding. The word2vec paper might provide some heuristics; see this discussion. I recall they used pretty big spaces, on the order of a 1000 dimensions, but they were optimizing something more complicated, so I suggest you start at R^2 and work your way up. Of course, you will want to use a higher dimensionality for the multiple LCS case. |
H: Rough vs Fuzzy vs Granular Computing
For my Computational Intelligence class, I'm working on classifying short text. One of the papers that I've found makes a lot of use of granular computing, but I'm struggling to find a decent explanation of what exactly it is.
From what I can gather from the paper, it sounds to me like granular computing is very similar to fuzzy sets. So, what exactly is the difference. I'm asking about rough sets as well, because I'm curious about them and how they relate to fuzzy sets. If at all.
Edit: Here is the paper I'm referencing.
AI: "Granularity" refers to the resolution of the variables under analysis. If you are analyzing height of people, you could use course-grained variables that have only a few possible values -- e.g. "above-average, average, below-average" -- or a fine-grained variable, with many or an infinite number of values -- e.g. integer values or real number values.
A measure is "fuzzy" if the distinction between alternative values is not crisp. In the course-grained variable for height, a "crisp" measure would mean that any given individual could only be assigned one value -- e.g. a tall-ish person is either "above-average", or "average". In contrast, a "fuzzy" measure allows for degrees of membership for each value, with "membership" taking values from 0 to 1.0. Thus, a tall-ish person could be a value of "0.5 above-average", "0.5 average", "0.0 below-average".
Finally, a measure is "rough" when two values are given: upper and lower bounds as an estimate of the "crisp" measure. In our example of a tall-ish person, the rough measure would be {UPPER = above-average, LOWER = average}.
Why use granular, fuzzy, or rough measures at all, you might ask? Why not measure everything in nice, precise real numbers? Because many real-world phenomena don't have a good, reliable intrinsic measure and measurement procedure that results in a real number. If you ask married couples to rate the quality of their marriage on a scale from 1 to 10, or 1.00 to 10.00, they might give you a number (or range of numbers), but how reliable are those reports? Using a coarse-grained measure (e.g. "happy", "neutral/mixed", "unhappy"), or a fuzzy measure, or a rough measure can be more reliable and more credible in your analysis. Generally, it's much better to use rough/coarse measures well than to use precise/fine-grained measures poorly.
H: Machine learning - features engineering from date/time data
What are the common/best practices to handle time data for machine learning application?
For example, if in data set there is a column with timestamp of event, such as "2014-05-05", how you can extract useful features from this column if any?
Thanks in advance!
AI: I would start by graphing the time variable vs other variables and looking for trends.
For example
In this case there is a periodic weekly trend and a long term upwards trend. So you would want to encode two time variables:
day_of_week
absolute_time
In general
There are several common time frames that trends occur over:
absolute_time
day_of_year
day_of_week
month_of_year
hour_of_day
minute_of_hour
Look for trends in all of these.
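A minimal pandas sketch of extracting these (the timestamps are made up; the column names mirror the list above):
import pandas as pd

df = pd.DataFrame({'timestamp': ['2014-05-05 13:42:00', '2014-05-06 09:15:00']})
ts = pd.to_datetime(df['timestamp'])

df['absolute_time']  = (ts - pd.Timestamp('1970-01-01')).dt.total_seconds()
df['day_of_year']    = ts.dt.dayofyear
df['day_of_week']    = ts.dt.dayofweek
df['month_of_year']  = ts.dt.month
df['hour_of_day']    = ts.dt.hour
df['minute_of_hour'] = ts.dt.minute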
Weird trends
Look for weird trends too. For example you may see rare but persistent time based trends:
is_easter
is_superbowl
is_national_emergency
etc.
These often require that you cross reference your data against some external source that maps events to time.
Why graph?
There are two reasons that I think graphing is so important.
Weird trends
While the general trends can be automated pretty easily (just add them
every time), weird trends will often require a human eye and knowledge
of the world to find. This is one reason that graphing is so
important.
Data errors
All too often data has serious errors in it. For example, you may find that the dates were encoded in two formats and only one of them has been correctly loaded into your program. There are a myriad of such problems and they are surprisingly common. This is the other reason I think graphing is important, not just for time series, but for any data. |
H: Detecting Spam using Machine Learning
Most online tutorials like to use a simple example to introduce machine learning by classifying unknown text as spam or not spam. They say that this is a binary-class problem. But why is this a binary-class problem? I think it is a one-class problem! I only need positive samples from my inbox to learn what is not spam. If I take a bunch of non-spam texts as positive samples and a bunch of spam mails as negative samples, then of course it's possible to train a binary classifier and make predictions on unlabeled data, but where is the difference from the one-class approach? There I would just define a training set of all non-spam examples and train some one-class classifier. What do you think?
AI: Strictly speaking, "one class classification" does not make sense as an idea. If there is only one possible state of a predicted value, then there is no prediction problem. The answer is always the single class.
Concretely, if you only have spam examples, you would always achieve 100% accuracy by classifying all email as spam. This is clearly wrong, and the only way to know how it is wrong is to know where the classification is wrong -- where emails are not in the spam class.
So-called one-class classification techniques are really anomaly detection approaches. They have an implicit assumption that things unlike the examples are not part of the single class, but, this is just an assumption about data being probably not within the class. There's a binary classification problem lurking in there.
What is wrong with a binary classifier? |
H: Text-Classification-Problem, what is the right approach?
I'm planning to write a classification program that is able to classify unknown text into around 10 different categories, and if none of them fits it would be nice to know that. It is also possible that more than one category is right.
My predefined categories are:
c1 = "politics"
c2 = "biology"
c3 = "food"
...
I'm thinking about the right approach for representing my training data and what kind of classification is the right one. The first challenge is finding the right features. If I only have text (250 words each), what method would you recommend to find the right features? My first approach is to remove all stop-words and use a POS tagger (Stanford NLP POS Tagger) to find nouns, adjectives etc. I count them and use all frequently appearing words as features.
E.g. for politics, I have around 2,000 text entities. With the mentioned POS tagger I found:
law: 841
capitalism: 412
president: 397
democracy: 1007
executive: 112
...
Would it be right to use only that as features? The trainings-set would then look like:
Training set for politics:
feature law numeric
feature capitalism numeric
feature president numeric
feature democracy numeric
feature executive numeric
class politics,all_others
sample data:
politics,5,7,1,9,3
politics,14,4,6,7,9
politics,9,9,9,4,2,1
politics,5,8,0,7,6
...
all_others,0,2,4,1,0
all_others,0,0,1,1,1
all_others,7,4,0,0,0
...
Would this be a right approach for binary-classification? Or how would I define my sets? Or is multi-class classification the right approach? Then it would look like:
Training set for politics:
feature law numeric
feature capitalism numeric
feature president numeric
feature democracy numeric
feature executive numeric
feature genetics numeric
feature muscle numeric
feature blood numeric
feature burger numeric
feature salad numeric
feature cooking numeric
class politics,biology,food
sample data:
politics,5,7,1,9,3,0,0,2,1,0,1
politics,14,4,6,7,9,0,0,0,0,0,1
politics,9,9,9,4,2,1,1,1,1,0,3
politics,5,8,0,7,6,2,2,0,1,0,1
...
biology,0,2,4,1,0,4,19,5,0,2,2
biology,0,0,1,1,1,12,9,9,2,1,1
biology,7,4,0,0,0,10,10,3,0,0,7
...
What would you say?
AI: I think perhaps the first thing to decide that will help clarify some of your other questions is whether you want to perform binary classification or multi-class classification. If you're interested in classifying each instance in your dataset into more than one class, then this brings up a set of new concerns regarding setting up your data set, the experiments you want to run, and how you plan to evaluate your classifier(s). My hunch is that you could formulate your task as a binary one where you train and test one classifier for each class you want to predict, and simply set up the data matrix so that there are two classes to predict - (1) the one you're interested in classifying and (2) everything else.
In that case, instead of your training set looking like this (where each row is a document and columns 1-3 contain features for that document, and the class column is the class to be predicted):
1 2 3 class
feature1 feature2 feature3 politics
feature1 feature2 feature3 law
feature1 feature2 feature3 president
feature1 feature2 feature3 politics
it would look like the following in the case where you're interested in detecting the politics class against everything else:
1 2 3 class
feature1 feature2 feature3 politics
feature1 feature2 feature3 non-politics
feature1 feature2 feature3 non-politics
feature1 feature2 feature3 politics
You would need to do this process for each class you're interested in predicting, and then train and test one classifier per class and evaluate each classifier according to your chosen metrics (usually accuracy, precision, or recall or some variation thereof).
As far as choosing features, this requires quite a bit of thinking. Features can be highly dependent on the type of text you're trying to classify, so be sure to explore your dataset and get a sense for how people are writing in each domain. Qualitative investigation isn't enough to decide once and for all what are good features, but it is a good way to get ideas. Also, look into TF-IDF weighting of terms instead of just using their frequency within each instance of your dataset. This will help you pick up on (a) terms that are prevalent within a document (and possibly a target class) and (b) terms that distinguish a given document from other documents. I hope this helps a little. |
H: Best format for recording time stamp and GPS
In most data acquisition settings it is useful to tag your data with time and location. If I write the data to a CSV file, what are the best formats I can use for these two variables if I want to create a heatmap on Google Maps?
AI: As Spacedman put it, "best" is pretty subjective. However, as we have found, a good format for time is Unix time (aka POSIX time, aka Epoch time). Most databases support it and it is still pretty human readable.
For location, we like decimal degrees as it is easy to read and stored and is compatible with Google Maps API. It's also easy to convert to other formats if needed. |
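A tiny sketch of what writing such a row from Python could look like (the coordinates are just an example):
import csv, time

with open('readings.csv', 'w', newline='') as f:
    w = csv.writer(f)
    w.writerow(['unix_time', 'lat', 'lon', 'value'])            # header
    w.writerow([int(time.time()), 37.7749, -122.4194, 42.0])    # decimal degrees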
H: Python Machine Learning Experts
I'd like to apply some of the more complex supervised machine learning techniques in Python - deep learning, generalized additive models, proper implementation of regularization, other cool stuff I don't even know about, etc.
Any recommendations how I could find expert ML folks that would like to collaborate on projects?
AI: You could try some competitions from kaggle.
Data Science courses from Coursera, edX, etc also provide forums for discussion.
Linkedin or freelance sites could be other possibilities. |
H: Clarification about Octave data size limit
I'm just starting to work on a relatively large dataset after ML course in Coursera.
Trying to work on https://archive.ics.uci.edu/ml/datasets/YearPredictionMSD.
Got an accuracy of 5.2 in training and test set with linear regression using gradient descent in octave.
I tried adding all possible quadratic features (515345 instances and 4275 features), but the code just won't stop executing in my HP Pavilion g6 2320tx, with 4GB RAM in Ubuntu 14.04.
Is this beyond the data size capacity of Octave ?
AI: You have about 4GB of RAM on your machine and Octave is an in memory application.
If you want to work with 515345 instances and 4275 features, assuming that you are using double precision (i.e. 8 bytes per value), you would need roughly 515345 * 4275 * 8 bytes ≈ 17.6e9 bytes, i.e. about 17.6 GB of memory. Even if you were using 4 bytes for each data point, you would still require about 9 GB for the computation to go through.
This issue might not be the Octave memory restriction in this case. See here for further details on Octave's memory usage. |
H: How can I create a custom tag in JPMML?
I'm trying to create a logistic regression model in JPMML and then write the PMML to a file. The problem I'm having is that I can't find any way to create a custom tag, such as "shortForm" and "longForm" in the following example:
<MapValues outputColumn="longForm">
<FieldColumnPair field="gender" column="shortForm"/>
<InlineTable>
<row><shortForm>m</shortForm><longForm>male</longForm>
</row>
<row><shortForm>f</shortForm><longForm>female</longForm>
</row>
</InlineTable>
</MapValues>
Here's what I have so far:
MapValues mv = new MapValues("output")
.withFieldColumnPairs(
new FieldColumnPair( new FieldName("gender"), "shortForm" )
).withInlineTable(
new InlineTable().withRows(
new Row().with???( new ??? )
)))
In short, I am asking for an API call I can use to instantiate the "shortForm" element in the example, and attach it to the "row" object. I've been all through the API, examples, and Google/SO, and can't find a thing.
Thanks for your help!
AI: You can/should use a generic Java Architecture for XML Binding (JAXB) approach.
Simply put, call Row#withContent(Object...) with instances of org.w3c.dom.Element that represent the desired XML content.
For example:
// Build the custom elements as plain DOM nodes (JAXB accepts org.w3c.dom content)
DocumentBuilder documentBuilder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
Document document = documentBuilder.newDocument();
Element shortForm = document.createElement("shortForm");
shortForm.setTextContent("m");
Element longForm = document.createElement("longForm");
longForm.setTextContent("male");
row = row.withContent(shortForm, longForm);
H: Classification of DNA Sequences
I have a database of 3190 instances of DNA consisting of 60 sequential DNA nucleotide positions classified according to 3 types: EI, IE, Other.
I want to formulate a supervised classifier.
My present approach is to formulate a 2nd order Markov Transition Matrix for each instance and apply the resulting data to a Neural Network.
How best to approach this classification problem, given that the Sequence of the data should be relevant? Is there a better approach than the one I came up with?
AI: One way would be to create 20 features (each feature representing a codon). In this way, you would have a dataset with 3190 instances and 20 categorical features. There is no need to treat the sequence as a Markov chain.
Once the dataset has been featurized as suggested above, any supervised classifier can work well. I would suggest using a gradient boosting machine as it might be better suited to handle categorical features. |
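A toy sketch of that featurization plus a gradient boosting machine in scikit-learn (the sequences below are fabricated, not real splice-junction data):
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

seqs   = ["ACGT" * 15, "TTGA" * 15, "GGCA" * 15, "ACGT" * 15]   # 60-nucleotide strings
labels = ["EI", "IE", "Other", "EI"]

# split each 60-mer into 20 codons -> 20 categorical features
codons = pd.DataFrame([[s[i:i + 3] for i in range(0, 60, 3)] for s in seqs])

# one-hot encode the categorical columns and fit the boosted model
X = pd.get_dummies(codons)
clf = GradientBoostingClassifier().fit(X, labels)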
H: Visualizing Support Vector Machines (SVM) with Multiple Explanatory Variables
I was wondering if anyone was aware of any methods for visualizing an SVM model where there are more than three continuous explanatory variables. In my particular situation, my response variable is binomial, with 6 continuous explanatory variables (predictors), one categorical explanatory variable (predictor). I have already reduced the number of predictors and I am primarily using R for my analysis.
(I am unaware if such a task is possible/ worth pursuing.)
Thanks for your time.
AI: Does it matter that the model is created in the form of SVM?
If no, I have seen a clever 6-D visualization. Its varieties are becoming popular in medical presentations.
3 dimensions are shown as usual, in orthographic projection.
Dimension 4 is color (0..255)
Dimension 5 is thickness of the symbol
Dimension 6 requires animation. It is a frequency of vibration of a dot on the screen.
In static, printed versions, one can replace frequency of vibration by blur around the point, for a comparable visual perception.
If yes, and you specifically need to draw separating hyperplanes, and make them look like lines\planes, the previous trick will not produce good results. Multiple 3-D images are better. |
H: How to run R programs on multicore using doParallel package?
I am running an SVM algorithm in R. It is taking a long time to run. I have a system with 32 GB of RAM. How can I use that whole RAM to speed up my process?
AI: I would add a comment but I do not have enough reputation points. I might suggest using "Revolution R Open". It is a build of R that includes a lot of native support for multi-core processing. I have not used it much as my computer is very old, but it is definitely worth looking at. Plus it is free.
H: How to run R scripts without closing X11
I would like to run an R script using a single command (e.g. bat file or shortcut).
This R script asks the user to choose a file and then plots information about that file. All is done via dialog boxes.
I don't want the user to go inside R - because they don't know it at all.
So, I was using R CMD and other similar approaches, but as soon as the plots are displayed, R exits and closes the plots.
What can I do?
Thanks for your help.
AI: This looks like a similar kind of a problem.
Solutions: (taken from above source)
Just sleep via Sys.sleep(10) which would wait ten seconds.
Wait for user input via readLines(stdin()) or something like that [untested]
Use the tcltk package, which comes with R and is available on all platforms, to pop up a window that the user has to click to dismiss. That solution has been posted a few times over the years on r-help.
The 2nd option is better for the user.
P.S. Since I did not come up with the answer myself, I tried to put it in comment but my reputation is too low for that. |
H: DBPedia as Table not having all the properties
I browsed a sample for available data at http://dbpedia.org/page/Sachin_Tendulkar. I wanted these properties as columns, so I downloaded the CSV files from http://wiki.dbpedia.org/DBpediaAsTables.
Now, when I browse the data for the same entity "Sachin_Tendulkar", I find that many of the properties are not available. e.g. the property "dbpprop:bestBowling" is not present.
How can I get all the properties that I can browse on the direct resource page?
AI: The question was already answered on the DBpedia-discussion mailing list, by Daniel:
Hi Abhay,
the DBpediaAsTables dataset only contains the properties in the
dbpedia-owl namespace (mapping-based infobox data) and not those from
the dbpprop (raw infobox properties) namespace (regarding the
differences see [1]).
However, as you are only interested in the data about specific
entities, take a look at the CSV link at the bottom of the entity's
description page, e.g., for your example this link is [2].
Cheers,
Daniel
[1] wiki.dbpedia.org/Datasets#h434-10
[2] dbpedia.org/sparql?default-graph-uri=http%3A%2F%2Fdbpedia.org&query=DESCRIBE+%3Chttp://dbpedia.org/resource/Sachin_Tendulkar%3E&format=text%2Fcsv
On the DBpediaAsTables web page, you can find out which datasets were used to generate the tables: instance_types_en, labels, short_abstracts_en, mappingbased_properties_en, geo_coordinates_en.
Also, I want to clarify that DBpediaAsTables contains all instances from DBpedia 2014, and by "we provide some of the core DBpedia data" we mean that not all datasets are included in the tables (only the 5 stated before).
If you want to generate your own tables that will contain custom properties, please refer to the section Generate your own Custom Tables.
Cheers,
Petar |
H: Confused about description of YearPrediction Dataset
https://archive.ics.uci.edu/ml/datasets/YearPredictionMSD
According to the description given in the above link,
the Attribute information specifies "average and covariance over all 'segments', each segment being described by a 12-dimensional timbre vector". So the covariance matrix should have 12*12 = 144 elements. But why is the number of timbre covariance features only 78 ?
AI: You are right, the covariance matrix should have n^2 elements. However, since cov_{i,j} = cov_{j,i}, there is no need to have a repeated feature cov_{j,i} if cov_{i,j} is already accounted for. Hence there will be only n*(n+1)/2 = 12*13/2 = 78 unique covariances and thus only 78 unique covariance based features (n of those will be variances). |
H: Using Clustering in text processing
Hi, this is my first question on the Data Science stack. I want to create an algorithm for text classification. Suppose I have a large set of texts and articles - say, around 5000 plain texts. I first use a simple function to determine the frequency of all words of four or more characters. I then use that as the feature vector of each training sample. Now I want my algorithm to cluster the training set according to these features, which here are the word frequencies in each article. (Note that in this example each article would have its own feature vector, since each article has different word counts; for example, one article has 10 "water" and 23 "pure", while another has 8 "politics" and 14 "leverage".) Can you suggest the best possible clustering algorithm for this example?
AI: I don't know if you have ever read SenseClusters by Ted Pedersen: http://senseclusters.sourceforge.net/. Very good work on sense clustering.
Also, when you analyze words, keep in mind that "computer", "computers", "computing", ... represent one concept, so they should count as only one feature. This is very important for a correct analysis.
To speak about the clustering algorithm, you could use a hierarchical clustering. At each step of the algo, you merge the 2 most similar texts according to their features (using a measure of dissimilarity, euclidean distance for example). With that measure of dissimilarity, you are able to find the best number of clusters and so, the best clustering for your texts and articles.
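A minimal Python sketch of that pipeline - term counts as features, then agglomerative (hierarchical) clustering on Euclidean distances; the texts below are toy stand-ins for the 5000 articles:
from sklearn.feature_extraction.text import CountVectorizer
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

texts = ["water is pure and clean", "pure water sources and supply",
         "politics and financial leverage", "leverage in modern politics"]

X = CountVectorizer(stop_words='english').fit_transform(texts).toarray()
Z = linkage(pdist(X, metric='euclidean'), method='average')   # pairwise merges
print(fcluster(Z, t=2, criterion='maxclust'))                 # cut into 2 clusters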
Good luck :) |
H: Can some one explain how PCA is relevant in extracting parameters of Gaussian Mixture Models
I am having some difficulty seeing the connection between PCA on the second-order moment matrix and estimating the parameters of Gaussian Mixture Models. Can anyone explain the connection?
AI: I believe the claim that you are referring to is that the maximum-likelihood estimate of the component means in a GMM must lie in the span of the eigenvectors of the second moment matrix. This follows from two steps:
Each component mean in the maximum-likelihood estimate is a linear combination of the data points. (You can show this by setting the gradient of the log-likelihood function to zero.)
Any linear combination of the data points must lie in the span of the eigenvectors of the second moment matrix. (You can show this by first showing that any individual data point must lie in the span, and therefore any linear combination must also be in the span.) |
H: Method for solving problem with variable number of predictors
I've been toying with this idea for a while. I think there is probably some method in the text mining literature, but I haven't come across anything just right...
What are some methods for tackling a problem where the number of variables is itself a variable? This is not a missing data problem, but one where the nature of the problem fundamentally changes. Consider the following example:
Suppose I want to predict who will win a race, a simple multinomial classification problem. I have lots of past data on races, plenty to train on. Let's further suppose I have observed each contestant run multiple races. The problem however is that the number of racers is variable. Sometimes there are only 2 racers, sometimes there are as many as 100 racers.
One solution might be to train a separate model for each number of racers, resulting in 99 models in this case, using any method I choose. E.g. I could have 99 random forests.
Another solution might be to include an additional variable called 'number_of_contestants' and have input fields for 100 racers, simply leaving them blank when a racer is not present. Intuitively, it seems that this method would have difficulty predicting the outcome of a 100-contestant race if the number of racers follows a Poisson distribution (which I didn't originally specify in the problem, but I am saying it here).
Thoughts?
AI: I don't see the problem. All you need is a learner to map a bit string as long as the total number of contestants, representing the subset who are taking part, to another bit string (with only one bit set) representing the winner, or a ranked list, if you want them all (assuming you have the whole list in your training data). In the latter case you would have a learning-to-rank problem.
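For concreteness, a small sketch of that bit-string encoding (the names and races are made up):
import numpy as np

contestants = ['alice', 'bob', 'carol', 'dave']       # full roster
idx = {c: i for i, c in enumerate(contestants)}

def encode(participants, winner):
    x = np.zeros(len(contestants))                    # which contestants are in the race
    y = np.zeros(len(contestants))                    # one bit set: the winner
    for p in participants:
        x[idx[p]] = 1.0
    y[idx[winner]] = 1.0
    return x, y

x1, y1 = encode(['alice', 'bob'], 'bob')              # a 2-racer event
x2, y2 = encode(['alice', 'carol', 'dave'], 'carol')  # a 3-racer event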
If the contestant landscape can change it would help to find a vector space embedding for them so you can use the previous embeddings as an initial guess and rank anyone, even hypothetical, given their vector representation. As the number of users increases the embedding should stabilize and retraining should become less costly. The question is how to find the embedding, of course. If you have a lot of training data, you could probably find a randomized one along with the ranking function. If you don't, you would have to generate the embedding by some algorithm and estimate only the ranking function. I have not faced your problem before so I can't direct you to a particular paper, but the recent NLP literature should give you some inspiration, e.g. this. I still think it is feasible. |
H: Item based and user based recommendation difference in Mahout
I would like to know how exactly mahout user based and item based recommendation differ from each other.
It defines that
User-based: Recommend items by finding similar users. This is often harder to scale because of the dynamic nature of users.
Item-based: Calculate similarity between items and make recommendations. Items usually don't change much, so this often can be computed off line.
But though there are two kinds of recommendation available, what I understand is that both will take some data model (say 1,2 or 1,2,.5 as item1,item2,value or user1,user2,value where value is not mandatory) and will perform all calculations using the similarity measure and the built-in recommender function we chose, and that we can run both user- and item-based recommendation on the same data (is this a correct assumption?).
So I would like to know how exactly, and in which aspects, these two types of algorithm differ.
AI: You are correct that both models work on the same data without any problem. Both items operate on a matrix of user-item ratings.
In the user-based approach the algorithm produces a rating for an item i by a user u by combining the ratings of other users u' that are similar to u. Similar here means that the two user's ratings have a high Pearson correlation or cosine similarity or something similar.
In the item-based approach we produce a rating for i by u by looking at the set of items i' that are similar to i (in the same sense as above except now we'd be looking at the ratings that items have received from users) that u has rated and then combines the ratings by u of i' into a predicted rating by u for i.
The item-based approach was invented at Amazon to address their scale challenges with user-based filtering. The number of things they sell is much less and much less dynamic than the number of users so the item-item similarities can be computed offline and accessed when needed. |
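Mahout computes these similarities internally, but a toy NumPy illustration of the two computations on the same rating matrix may make the difference clearer (the ratings are invented):
import numpy as np

R = np.array([[5, 3, 0, 1],      # rows = users, columns = items, 0 = unrated
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)

def cosine_sim(M):
    norms = np.linalg.norm(M, axis=1, keepdims=True)
    norms[norms == 0] = 1.0
    N = M / norms
    return N @ N.T

user_sim = cosine_sim(R)      # user-based: compare rows (users) to each other
item_sim = cosine_sim(R.T)    # item-based: compare columns (items) to each other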
H: Good books for Hadoop, Spark, and Spark Streaming
Can anyone suggest any good books to learn hadoop and map reduce basics?
Also something for Spark, and Spark Streaming?
Thanks
AI: There's such an overwhelming amount of literature that with programming, databases, and Big Data I like to stick to the O'Reilly series as my go-to source. O'Reilly books are extremely popular in the industry and I've been very satisfied.
A current version of
Hadoop: The Definitive Guide,
MapReduce Design Patterns, and
Learning Spark
might suit your needs by providing high quality, immediately useful information and avoiding information overload -- all are published by O'Reilly.
Spark Streaming is covered in Chapter 13 of "Learning Spark". |
H: Ethically and Cost-effectively Scaling Data Scrapes
Few things in life give me pleasure like scraping structured and unstructured data from the Internet and making use of it in my models.
For instance, the Data Science Toolkit (or RDSTK for R programmers) allows me to pull lots of good location-based data using IPs or addresses, and the tm.webmining.plugin for R's tm package makes scraping financial and news data straightforward. When going beyond such (semi-)structured data I tend to use XPath.
However, I'm constantly getting throttled by limits on the number of queries you're allowed to make. I think Google limits me to about 50,000 requests per 24 hours, which is a problem for Big Data.
From a technical perspective getting around these limits is easy -- just switch IP addresses and purge other identifiers from your environment. However, this presents both ethical and financial concerns (I think?).
Is there a solution that I'm overlooking?
AI: For many APIs (most I've seen) rate limiting is a function of your API key or OAuth credentials (Google, Twitter, NOAA, Yahoo, Facebook, etc.). The good news is you won't need to spoof your IP; you just need to swap out credentials as they hit their rate limit.
A bit of shameless self promotion here but I wrote a python package specifically for handling this problem.
https://github.com/rawkintrevo/angemilner
https://pypi.python.org/pypi/angemilner/0.2.0
It requires a mongodb daemon and basically you make a page for each one of your keys. So you have 4 email addresses each with a separate key assigned. When you load the key in you specify the maximum calls per day and minimum time between uses.
Load keys:
from angemilner import APIKeyLibrarian
l= APIKeyLibrarian()
l.new_api_key("your_assigned_key1", 'noaa', 1000, .2)
l.new_api_key("your_assigned_key2", 'noaa', 1000, .2)
Then when you run your scraper for instance the NOAA api:
url= 'http://www.ncdc.noaa.gov/cdo-web/api/v2/stations'
payload= { 'limit': 1000,
'datasetid': 'GHCND',
'startdate': '1999-01-01' }
r = requests.get(url, params=payload, headers= {'token': 'your_assigned_key'})
becomes:
url= 'http://www.ncdc.noaa.gov/cdo-web/api/v2/stations'
payload= { 'limit': 1000,
'datasetid': 'GHCND',
'startdate': '1999-01-01' }
r = requests.get(url, params=payload, headers= {'token': l.check_out_api_key('noaa')['key']})
so if you have 5 keys, l.check_out_api_key returns the key that has the least uses and waits until enough time has elapsed for it to be used again.
Finally, to see how often your keys have been used and the remaining usage available:
pprint(l.summary())
I didn't write this for R because most scraping is done in python (most of MY scraping). It could be easily ported.
That's how you can technically get around rate limiting. Ethically ...
UPDATE The example uses Google Places API here |
H: What is "data science"?
In recent years, "data" seems to have become a term widely used without a specific definition. Everyone seems to use the phrase. Even people as technology-impaired as my grandparents use the term and seem to understand words like "data breach." But I don't understand what makes "data science" a new discipline. Data has been the foundation of science for centuries. Without data, there would be no Mendel, no Schrödinger, etc. You can't have science without interpreting and analyzing data.
But clearly it means something. Everyone is talking about it. So what exactly do people mean by "data" when they use terms like "big data", and why has this become a discipline in itself? Also, if it is an emerging discipline, where can I find more serious/in-depth information so I can better educate myself?
Thanks!
AI: I get asked this question all the time, so earlier this year I wrote an article (What is Data Science?) based on a presentation I've given a few times. Here's the gist...
First, a few definitions of data science offered by others:
Josh Wills from Cloudera says a data scientist is someone "who is better at statistics than any software engineer and better at software engineering than any statistician."
A frequently-heard joke is that a "Data Scientist" is a Data Analyst who lives in California.
According to Big Data Borat, Data Science is statistics on a Mac.
In Drew Conway's famous Data Science Venn Diagram, it's the intersection of Hacking Skills, Math & Statistics Knowledge, and Substantive Expertise.
Here's another good definition I found on the ITProPortal blog:
"A data scientist is someone who understands the domains of programming, machine learning, data mining, statistics, and hacking"
Here's how we define Data Science at Altamira (my current employer):
The bottom four rows are the table stakes -- the cost of admission just to play the game. These are foundational skills that all aspiring data scientists must obtain. Every data scientist must be a competent programmer. He or she must also have a solid grasp of math, statistics, and analytic methodology. Data science and "big data" go hand-in-hand, so all data scientists need to be familiar with frameworks for distributed computing. Finally, data scientists must have a basic understanding of the domains in which they operate, as well as excellent communications skills and the ability to tell a good story with data.
With these basics covered, the next step is to develop deep expertise in one or more of the vertical areas. "Data Science" is really an umbrella term for a collection of interrelated techniques and approaches taken from a variety of disciplines, including mathematics, statistics, computer science, and software engineering. The goal of these diverse methods is to extract actionable intelligence from data of all kinds, enabling clients to make better data-driven decisions. No one person can ever possibly master all aspects of data science; doing so would require multiple lifetimes of training and experience. The best data scientists are therefore "T-shaped" individuals -- that is, they possess a breadth of knowledge across all areas of data science, along with deep expertise in at least one. Accordingly, the best data science teams bring together a set of individuals with complementary skillsets spanning the entire spectrum. |
H: How to connect data-mining with machine learner process
I want to write a data-mining service in Google Go which collects data through scraping and APIs.
However as Go lacks good ML support I would like to do the ML stuff in Python.
Having a web background I would connect both services with something like RPC but as I believe that this is a common problem in data science I think that there is some better solution.
For example, most (web) protocols lack:
buffering between processes
clustering over multiple instances
So what (type of libraries) do data scientists use to connect different languages/processes?
Bodo
AI: I am not 100% sure whether a message queue library is the right tool for this job, but so far it looks like it to me.
With a messaging library like:
nsq
zeromq
mqtt (?)
You can connect different processes operating in different environments through a TCP-based protocol. As these systems run distributed, it is possible to connect multiple nodes.
For nsq we even have a library in Python and Go! |
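As a sketch, here is what the Python side could look like with ZeroMQ (via the pyzmq binding) in a PUSH/PULL pattern; the port and the JSON message format are just assumptions:
import zmq

ctx = zmq.Context()
sock = ctx.socket(zmq.PULL)
sock.bind("tcp://*:5555")        # the Go scraper connects with a PUSH socket

while True:
    record = sock.recv_json()    # e.g. one scraped document per message
    # ... feed `record` into the Python ML pipeline here ...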
H: Extract most informative parts of text from documents
Are there any articles or discussions about extracting the part of a text that holds the most information about the current document?
For example, I have a large corpus of documents from the same domain. There are parts of the text that hold the key information about what a single document talks about. I want to extract some of those parts and use them as a kind of summary of the text. Is there any useful documentation about how to achieve something like this?
It would be really helpful if someone could point me in the right direction regarding what I should search for or read to get some insight into work that might already have been done in this field of natural language processing.
AI: What you're describing is often achieved using a simple combination of TF-IDF and extractive summarization.
In a nutshell, TF-IDF tells you the relative importance of each word in each document, in comparison to the rest of your corpus. At this point, you have a score for each word in each document approximating its "importance." Then you can use these individual word scores to compute a composite score for each sentence by summing the scores of each word in each sentence. Finally, simply take the top-N scoring sentences from each document as its summary.
Earlier this year, I put together an iPython Notebook that culminates with an implementation of this in Python using NLTK and Scikit-learn: A Smattering of NLP in Python. |
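A bare-bones sketch of that pipeline (naive sentence splitting on periods; the corpus is whatever set of documents you already have):
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def summarize(document, corpus, top_n=2):
    # score each sentence by the sum of its words' TF-IDF weights, keep the top N
    sentences = [s.strip() for s in document.split('.') if s.strip()]
    vec = TfidfVectorizer(stop_words='english').fit(corpus)
    scores = vec.transform(sentences).sum(axis=1).A1
    top = np.argsort(scores)[::-1][:top_n]
    return [sentences[i] for i in sorted(top)]    # preserve original sentence order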
H: kNN - what happens if more than K observation have the same distance to the centroid of the cluster
EDIT It was pointed out in the Answers-section that I am confusing k-means and kNN. Indeed I was thinking about kNN but wrote k-means since I'm still new to this topic and confuse the terms quite often. So here is the changed question.
I was looking at kNN today and something struck me as odd or - to be more precise - something that I was unable to find information about namely the following situation.
Imagine that we pick kNN for some dataset. I want to remain as general as possible, thus $k$ will not be specified here. Further, we select, at some point, an observation where the number of neighbors that fulfill the requirement to be in the neighborhood is actually greater than the specified $k$.
What criterion/criteria should be applied here if we are restricted to the specific $k$ and thus cannot alter the structure of the neighborhood (number of neighbors)? Which observations will be left out and why? Also, is this a problem that occurs often, or is it something of an anomaly?
AI: You are mixing up kNN classification and k-means.
There is nothing wrong with having more than k observations near a center in k-means. In fact, this is the usual case; you shouldn't choose k too large. If you have 1 million points, a k of 100 may be okay. K-means does not guarantee clusters of a particular size. Worst case, clusters in k-means can have only one element (outliers) or even disappear.
What you probably meant to write, but got mixed up, is what to do if a point is at the same distance to two centers.
From a statistical point of view, it doesn't matter. Both have the same squared error.
From an implementation point of view, choose any deterministic rule, so that your algorithm converges and doesn't go into an infinite loop of reassignment.
Update: with respect to kNN classification:
There are many ways to resolve this, and they will surprisingly often work just as well as one another, without a clear advantage of one over the other:
randomly choose a winner from the tied objects
take all into account with equal weighting
if you have m objects tied at the cutoff distance but only r of the k slots remaining, then put a weight of r/m on each of them.
E.g. k=5.
distance label weight
0 A 1
1 B 1
1 A 1
2 A 2/3
2 B 2/3
2 B 2/3
yields A=2.66, B=2.33
The reason that randomly choosing works just as good as the others is that usually, the majority decision in kNN will not be changed by contributions with a weight of less than 1; in particular when k is larger than say 10. |
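A small sketch of that weighting scheme in plain Python (names and the neighbor list are taken from the example above):

from collections import defaultdict

def knn_vote(neighbors, k):
    # neighbors: list of (distance, label) pairs, sorted by distance
    votes = defaultdict(float)
    cutoff = neighbors[k - 1][0]                      # distance of the k-th neighbor
    inside = [lbl for d, lbl in neighbors if d < cutoff]
    tied = [lbl for d, lbl in neighbors if d == cutoff]
    r = k - len(inside)                               # slots left for the tied group
    for lbl in inside:
        votes[lbl] += 1.0
    for lbl in tied:
        votes[lbl] += r / len(tied)
    return max(votes, key=votes.get)

# Reproduces the table above (A gets the higher weighted vote), so the prediction is A
neighbors = [(0, "A"), (1, "B"), (1, "A"), (2, "A"), (2, "B"), (2, "B")]
print(knn_vote(neighbors, k=5))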
H: Web Framework Built for Recommendations
I'm wondering if there is a web framework well suited for placing recommendations on content.
In most cases, a data scientist goes through after the fact and builds (or uses) a completely different tool to create recommendations. This involves analyzing traffic logs, a history of shopping cart data, ratings, and so forth. It usually comes from multiples sources (the web server, the application's database, Google Analytics, etc) and then has to be cleaned up and processed, THEN delivered back to the application in way it understands.
Is there a web framework on the market which handles collecting this data up front, as to minimize the retrospective data wrangling?
AI: I haven't seen anything like that and very much doubt that such frameworks exist, at least, as complete frameworks. The reason for this is IMHO the fact that data transformation and cleaning is very domain- and project-specific. Having said that, there are multiple tools that can help with these activities in terms of partial automation and integration with and between existing statistical and Web frameworks.
For example, for Python, the use of data manipulation library pandas as well as machine learning library scikit-learn can be easily integrated with Web frameworks (especially Python-based, but not necessarily), as these libraries are also Python-based. These and other Python data science tools that might be of interest can be found here: http://pydata.org/downloads. Specifically, for cleaning and pre-processing tasks, which you asked about, pandas seem to be the first tool to explore. Again, for Python, the following discussion on StackOverflow on methods and approaches might be helpful: https://stackoverflow.com/q/14262433/2872891.
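Just to illustrate the kind of low-level cleaning step meant here, a tiny pandas sketch (the column names and rules are purely illustrative):

import pandas as pd

raw = pd.DataFrame({
    "user_id": [1, 1, 2, None],
    "item_id": ["a", "a", "b", "c"],
    "rating":  [5, 5, 3, 4],
})
clean = raw.dropna(subset=["user_id"]).drop_duplicates()           # drop bad/duplicate events
ratings = clean.groupby(["user_id", "item_id"], as_index=False)["rating"].mean()
print(ratings)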
Consider another platform as an example. The use of pandas for data transformation and cleaning is rather low-level. The platform that I like very much and currently use as my platform of choice for data science tasks is R. The rich ecosystem of R packages especially shines in the area of data transformation and cleaning. This is because, in addition to very flexible low-level methods of performing these tasks, there are some R packages which take a higher-level approach to the problem, which may potentially improve a developer's productivity and decrease the number of defects. In particular, I'm talking about two packages which I find very promising: editrules and deducorrect. You can find more detailed information about these and other R packages for data transformation and cleaning in another answer of mine here on Data Science StackExchange (the paper that I reference in the last link there could be especially useful, as it presents an approach to data transformation and cleaning that is generic enough that it could be used as a framework on any decent platform): https://datascience.stackexchange.com/a/722/2452.
UPDATE: On the topic of recommender systems and their integration with data wrangling tools and Web frameworks, you may find my other answer here on DS SE useful: https://datascience.stackexchange.com/a/836/2452. |
H: Document classification: tf-idf prior to or after feature filtering?
I have a document classification project where I am getting site content and then assigning one of numerous labels to the website according to content.
I found out that tf-idf could be very useful for this. However, I was unsure as to when exactly to use it.
Assuming a website that is concerned with a specific topic makes repeated mention of it, this is my current process:
Retrieve site content, parse for plain text
Normalize and stem content
Tokenize into unigrams (maybe bigrams too)
Retrieve a count of each unigram for the given document, filtering low length and low occurrence words
Train a classifier such as NaiveBayes on the resulting set
My question is the following: Where would tf-idf fit in here? Before normalizing/stemming? After normalizing but before tokenizing? After tokenizing?
Any insight would be greatly appreciated.
Edit:
Upon closer inspection, I think I may have run into a misunderstanding as to how TF-IDF operates. At step 4 above, would I have to feed the entirety of my data into TF-IDF at once? If, for example, my data is as follows:
[({tokenized_content_site1}, category_string_site1),
({tokenized_content_site2}, category_string_site2),
...
({tokenized_content_siten}, category_string_siten)}]
Here, the outermost structure is a list, containing tuples, containing a dictionary (or hashmap) and a string.
Would I have to feed the entirety of that data into the TF-IDF calculator at once to achieve the desired effect? Specifically, I have been looking at the scikit-learn TfidfVectorizer to do this, but I am a bit unsure as to its use as examples are pretty sparse.
AI: As you've described it, Step 4 is where you want to use TF-IDF. Essentially, TF-IDF will count each term in each document and assign a score based on each term's relative frequency across the collection of documents.
There's one big step missing from your process, however: annotating a training set. Before you train your classifier, you'll need to manually annotate a sample of your data with the labels you want to be able to apply automatically using the classifier.
To make all of this easier, you might want to consider using the Stanford Classifier. It will perform the feature extraction and build the classifier model (supporting several different machine learning algorithms), but you'll still need to annotate the training data by hand. |
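Regarding your edit: if you prefer to stay with scikit-learn's TfidfVectorizer, yes, you fit it on the entire (tokenized/normalized) corpus at once, and then reuse the same fitted vectorizer for new documents. A minimal sketch, with placeholder texts and labels:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

site_texts = ["football match score goal team",
              "stock market price earnings bank"]
site_labels = ["sports", "finance"]

vectorizer = TfidfVectorizer(ngram_range=(1, 2))   # unigrams and bigrams
X = vectorizer.fit_transform(site_texts)           # fit on the whole corpus at once
clf = MultinomialNB().fit(X, site_labels)

# New sites must be transformed with the SAME fitted vectorizer
X_new = vectorizer.transform(["the team wins the match"])
print(clf.predict(X_new))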
H: Visualizing deep neural network training
I'm trying to find an equivalent of Hinton Diagrams for multilayer networks to plot the weights during training.
The trained network is somewhat similar to a Deep SRN, i.e. it has a high number of multiple weight matrices which would make the simultaneous plot of several Hinton Diagrams visually confusing.
Does anyone know of a good way to visualize the weight update process for recurrent networks with multiple layers?
I haven't found many papers on the topic. I was thinking of displaying time-related information about the weights per layer instead, if I can't come up with something better. E.g. the weight-delta over time for each layer (omitting the use of every single connection). PCA is another possibility, though I'd like not to introduce much additional computation, since the visualization is done online during training.
AI: The closest thing I know of is ConvNetJS:
ConvNetJS is a Javascript library for training Deep Learning models (mainly Neural Networks) entirely in your browser. Open a tab and you're training. No software requirements, no compilers, no installations, no GPUs, no sweat.
Demos on this site plot weights and how they change with time (bear in mind, that is many parameters, as practical networks do have a lot of neurons). Moreover, if you are not satisfied with their plotting, there is access to the network parameters and you can plot them as you wish (since it is JavaScript).
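If you end up rolling your own plots, here is a rough matplotlib sketch of the weight-delta-over-time idea you mention; the random perturbations below stand in for a real training loop and are only there to make the example runnable.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
layers = [rng.normal(size=(20, 10)), rng.normal(size=(10, 5))]   # stand-in weight matrices

history = []
for step in range(200):
    prev = [w.copy() for w in layers]
    for w in layers:
        w += rng.normal(scale=0.01, size=w.shape)                # replace with your real weight update
    history.append([np.linalg.norm(w - p) for w, p in zip(layers, prev)])

deltas = np.array(history)                                       # shape: (steps, n_layers)
for i in range(deltas.shape[1]):
    plt.plot(deltas[:, i], label="layer %d" % i)
plt.xlabel("training step")
plt.ylabel("weight-delta norm")
plt.legend()
plt.show()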
H: Data Science Tools Using Scala
I know that Spark is fully integrated with Scala. Its use case is specifically for large data sets. Which other tools have good Scala support? Is Scala best suited for larger data sets? Or is it also suited for smaller data sets?
AI: Re: size of data
The short answer
Scala works for both small and large data, but its creation and development were motivated by the need for something scalable. The name Scala comes from “scalable language”.
The long answer
Scala is a functional programming language that runs on the jvm. The 'functional' part of this is a fundamental difference in the language that makes you think differently about programming. If you like that way of thinking, it lets you quickly work with small data. Whether you like it or not, functional languages are fundamentally easier to massively scale. The jvm piece is also important because the jvm is basically everywhere and, thus, Scala code can run basically everywhere. (Note there are plenty of other languages written on the jvm and plenty of other functional programming languages, and languages beyond Scala do appear in both lists.)
This talk gives a good overview of the motivation behind Scala.
Re: other tools that have good Scala support:
As you mentioned, Spark (distributed batch processing that is better at iterative algorithms than its counterparts) is a big one. With Spark come its libraries MLlib for machine learning and GraphX for graphs. As mentioned by Erik Allik and Tris Nefzger, Akka and Factorie exist. There is also Play.
Generally, I can't tell if there is a specific use case you're digging for (if so, make that a part of your question), or just want a survey of big data tools and happen to know Scala a bit and want to start there. |
H: What are the best practices to anonymize user names in data?
I'm working on a project which asks fellow students to share their original text data for further analysis using data mining techniques, and, I think it would be appropriate to anonymize student names with their submissions.
Setting aside the better solution of a URL where students submit their work and a backend script inserts the anonymized ID, what sort of solutions could I direct students to implement on their own to anonymize their own names?
I'm still a noob in this area. I don't know what the norms are. I was thinking the solution could be a hashing algorithm. That sounds like a better solution than making up a fake name, as two people could possibly pick the same fake name. What are some of the concerns I should be aware of?
AI: I suspected you were using the names as identifiers. You shouldn't; they're not unique and they raise this privacy issue. Use instead their student numbers, which you can verify from their IDs, stored in hashed form. Use the student's last name as a salt, for good measure (form the string to be hashed by concatenating the ID number and the last name). |
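A minimal sketch of that scheme (the ID and name below are made up for illustration):

import hashlib

def anonymize(student_id, last_name):
    # Concatenate the ID number and the last name (the "salt"), then hash
    token = (str(student_id) + last_name.lower()).encode("utf-8")
    return hashlib.sha256(token).hexdigest()

print(anonymize(20481234, "Smith"))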
H: Exporting R model to OpenCV's Machine Learning Library
I'm wondering if it's possible to export a model trained in R to OpenCV's Machine Learning (ML) library format. The latter appears to save/read models in XML/YAML, whereas the former might be exportable via PMML. Specifically, I'm working with Random Forests, which are classifiers available both in R and in OpenCV's ML library.
Any advice on how I can get the two to share models would be greatly appreciated.
AI: Instead of exporting your models, consider creating an R-based interoperable environment for your modeling needs. Such an environment would consist of the R environment proper as well as integration layers for your third-party libraries. In particular, for the OpenCV project, consider either using the r-opencv open source project (https://code.google.com/p/r-opencv), or integrating via the OpenCV C++ APIs and the R Rcpp package (http://dirk.eddelbuettel.com/code/rcpp.html). Finally, if you want to add PMML support to the mix and create a deployable-to-cloud solution, take a look at the following excellent blog post with relevant examples: http://things-about-r.tumblr.com/post/37861967022/predictive-modeling-using-r-and-the.
H: How does the naive Bayes classifier handle missing data in training?
Naive Bayes apparently handles missing data differently, depending on whether they exist in training or testing/classification instances.
When classifying instances, the attribute with the missing value is simply not included in the probability calculation (reference)
In training, the instance [with the missing data] is not included in frequency count for attribute value-class combination. (reference)
Does that mean that particular training record simply isn't included in the training phase? Or does it mean something else?
AI: In general, you have a choice when handling missing values when training a naive Bayes classifier. You can choose to either
Omit records with any missing values,
Omit only the missing attributes.
I'll use the example linked to above to demonstrate these two approaches. Suppose we add one more training record to that example.
Outlook Temperature Humidity Windy Play
------- ----------- -------- ----- ----
rainy cool normal TRUE no
rainy mild high TRUE no
sunny hot high FALSE no
sunny hot high TRUE no
sunny mild high FALSE no
overcast cool normal TRUE yes
overcast hot high FALSE yes
overcast hot normal FALSE yes
overcast mild high TRUE yes
rainy cool normal FALSE yes
rainy mild high FALSE yes
rainy mild normal FALSE yes
sunny cool normal FALSE yes
sunny mild normal TRUE yes
NA hot normal FALSE yes
If we decide to omit the last record due to the missing outlook value, we would have the exact same trained model as discussed in the link.
We could also choose to use all of the information available from this record. We could choose to simply omit the attribute outlook from this record. This would yield the following updated table.
Outlook Temperature Humidity
==================== ================= =================
Yes No Yes No Yes No
Sunny 2 3 Hot 3 2 High 3 4
Overcast 4 0 Mild 4 2 Normal 7 1
Rainy 3 2 Cool 3 1
----------- --------- ----------
Sunny 2/9 3/5 Hot 3/10 2/5 High 3/10 4/5
Overcast 4/9 0/5 Mild 4/10 2/5 Normal 7/10 1/5
Rainy 3/9 2/5 Cool 3/10 1/5
Windy Play
================= ========
Yes No Yes No
False 7 2 10 5
True 3 3
---------- ----------
False 7/10 2/5 10/15 5/15
True 3/10 3/5
Notice there are 15 observations for each attribute except Outlook, which has only 14. This is since that value was unavailable for the last record. All further development would continue as discussed in the linked article.
For example, the naiveBayes implementation in the R package e1071 has the option na.action, which can be set to na.omit or na.pass.
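To make option 2 concrete, here is a small Python sketch of how the per-attribute counts above can be accumulated while skipping only the missing attribute, not the whole record; this is just an illustration of the counting scheme, not the e1071 implementation.

from collections import defaultdict

records = [
    {"Outlook": "rainy", "Temperature": "cool", "Humidity": "normal", "Windy": "TRUE", "Play": "no"},
    {"Outlook": None,    "Temperature": "hot",  "Humidity": "normal", "Windy": "FALSE", "Play": "yes"},
    # ... the rest of the table above
]

counts = defaultdict(lambda: defaultdict(lambda: defaultdict(int)))
totals = defaultdict(lambda: defaultdict(int))

for rec in records:
    label = rec["Play"]
    for attr, value in rec.items():
        if attr == "Play" or value is None:
            continue                      # missing attribute: skip it, but keep the record
        counts[attr][value][label] += 1
        totals[attr][label] += 1          # per-attribute denominator (the 14 vs. 15 above)

# e.g. P(Outlook = rainy | Play = no), using the per-attribute denominator:
print(counts["Outlook"]["rainy"]["no"] / totals["Outlook"]["no"])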
H: Machine Learning for hedging/ portfolio optimization?
With increasingly sophisticated methods that work on large scale datasets, financial applications are obvious. I am aware of machine learning being employed on financial services to detect fraud and flag fraudulent activities but I have a lesser understanding of how it helps to predict the price of the stock the next day and how many stocks of a particular company to buy.
Do the hedge funds still employ portfolio optimization techniques that are right out of the mathematical finance literature or have they started to use machine learning to hedge their bets? More importantly, what are the features that are used by these hedge funds and what is a representative problem set up?
AI: That is a rather broad question, and there is tons of literature about quantitative analysis and stock market prediction using machine learning.
The most classical example of predicting the stock market is employing neural networks; you can use whatever feature you think might be relevant for your prediction, for example the unemployment rate, the oil price, the gold price, the interest rates, and the timeseries itself, i. e. the volatility, the change in the last 2,3,7,..., days etc. - a more classical approach is the input-output-analysis in econometrics, or the autoregression analysis, but all of it can be modeled using neural networks or any other function approximator / regression in a very natural way.
But, as said, there are tons of other possibilities to model the market, to name a few: Ant Colony Optimization (ACO), Classical regression analysis, genetic algorithms, decision trees, reinforcement learning etc. you name it, almost EVERYTHING has probably been applied to the stock market prediction problem.
There are different fund manager types in the markets. There are still the quants, who do quantitative analysis using classical financial maths and maths borrowed from physics to describe market movements. There are still the most conservative ones, who do a long-term, fundamental analysis of the corporation, that is, looking at how the corporation earns money and where it spends money. Or the tactical analysts, who just look for immediate signals to buy/sell a stock in the short term. And those quantitative guys who employ machine learning amongst other methods.
H: Interactive Graphing while logging data
I'm looking to graph and interactively explore live/continuously measured data. There are quite a few options out there, with plot.ly being the most user-friendly. Plot.ly has a fantastic and easy to use UI (easily scalable, pannable, easily zoomable/fit to screen), but cannot handle the large sets of data I'm collecting. Does anyone know of any alternatives?
I have MATLAB, but don't have enough licenses to simultaneously run this and do development at the same time. I know that LabVIEW would be a great option, but it is currently cost-prohibitive.
Thanks in advance!
AI: For this answer, I have assumed that you prefer open source solutions to big data visualization. This assumption is based on budgetary details from your question. However, there is one exclusion to this - below I will add a reference to one commercial product, which I believe might be beneficial in your case (provided that you could afford that). I also assume that browser-based solutions are acceptable (I would even prefer them, unless you have specific contradictory requirements).
Naturally, the first candidate I would consider as a solution to your problem is the D3.js JavaScript library: http://d3js.org. However, despite its flexibility and other benefits, I think that this solution is too low-level.
Therefore, I would recommend you to take a look at the following open source projects for big data visualization, which are powerful and flexible enough, but operate at a higher level of abstraction (some of them are based on D3.js foundation and sometimes are referred to as D3.js visualization stack).
Bokeh - Python-based interactive visualization library, which supports big data and streaming data: http://bokeh.pydata.org (see the minimal streaming sketch after this list)
Flot - JavaScript-based interactive visualization library, focused on jQuery: http://www.flotcharts.org
NodeBox - unique rapid data visualization system (not browser-based, but multi-language and multi-platform), based on generative design and visual functional programming: https://www.nodebox.net
Processing - complete software development system with its own programming language, libraries, plug-ins, etc., oriented to visual content: https://www.processing.org (allows executing Processing programs in a browser via http://processingjs.org)
Crossfilter - JavaScript-based interactive visualization library for big data by Square (very fast visualization of large multivariate data sets): http://square.github.io/crossfilter
bigvis - an R package for big data exploratory analysis (not a visualization library per se, but could be useful to process large data sets /aggregating, smoothing/ prior to visualization, using various R graphics options): https://github.com/hadley/bigvis
prefuse - Java-based interactive visualization library: http://prefuse.org
Lumify - big data integration, analysis and visualization platform (interesting feature: supports Semantic Web): http://lumify.io
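Since your data is measured live, here is a minimal sketch of what streaming into a Bokeh plot could look like, run as a Bokeh server app; the file name, port and update interval are illustrative, and the random values stand in for your real measurements.

# save as stream_demo.py and run:  bokeh serve --show stream_demo.py
from random import random
from bokeh.plotting import figure, curdoc
from bokeh.models import ColumnDataSource

source = ColumnDataSource(data=dict(x=[], y=[]))
fig = figure(title="live measurements")
fig.line(x="x", y="y", source=source)

i = 0
def update():
    global i
    source.stream(dict(x=[i], y=[random()]), rollover=10000)   # keep only the last 10k points
    i += 1

curdoc().add_root(fig)
curdoc().add_periodic_callback(update, 100)                    # new point every 100 ms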
Separately, I'd like to mention two open source big data analysis and visualization projects, focused on graph/network data (with some support for streaming data of that type): Cytoscape and Gephi. If you are interested in some other, more specific (maps support, etc.) or commercial (basic free tiers), projects and products, please see this awesome compilation, which I thoroughly curated to come up with the main list above and analyzed: http://blog.profitbricks.com/39-data-visualization-tools-for-big-data.
Finally, as I promised in the beginning, Zoomdata - a commercial product, which I thought you might want to take a look at: http://www.zoomdata.com. The reason I made an exclusion for it from my open source software compilation is due to its built-in support for big data platforms. In particular, Zoomdata provides data connectors for Cloudera Impala, Amazon Redshift, MongoDB, Spark and Hadoop, plus search engines, major database engines and streaming data.
Disclaimer: I have no affiliation with Zoomdata whatsoever - I was just impressed by their range of connectivity options (which might cost you dearly, but that's another aspect of this topic's analysis). |
H: What software is being used in this image recognition system?
I was wondering if anyone knew which piece of software is being used in this video? It is an image recognition system that makes the training process very simple.
http://www.ted.com/talks/jeremy_howard_the_wonderful_and_terrifying_implications_of_computers_that_can_learn#t-775098
The example is with car images, though the video should start at the right spot.
AI: I'm pretty sure that the software you're referring to is a some kind of internal research project software, developed by Enlitic (http://www.enlitic.com), where Jeremy Howard works as a founder and CEO. By "internal research project software" I mean either a proof-of-concept software, or a prototype software. |
H: Which Optimization method to use?
I have a black-box function (not available in closed form) that takes in a few parameters (about 20) and returns a real value. A few of these parameters are discrete while others are continuous. Some of these parameters can only be chosen from a finite space of values.
Since I don't have the function in closed form, I cannot use any gradient based methods. However, the discrete nature and the boxed constraints on a few of those parameters restrict even the number of derivative free optimization techniques at my disposal. I am wondering what are the options in terms of optimization methods that I can use.
AI: Bayesian optimization is a principled way of sequentially finding the extremum of black-box functions. What's more, there are numerous software packages that make it easy, such as BayesOpt and MOE. Another flexible Bayesian framework that you can use for optimization is Gaussian processes: Global Optimisation with Gaussian Processes
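As a concrete illustration (using scikit-optimize rather than the packages named above, and with a toy objective standing in for your real black-box evaluation), mixed continuous/discrete/categorical parameters can be handled like this:

from skopt import gp_minimize
from skopt.space import Real, Integer, Categorical

space = [
    Real(0.0, 10.0, name="x1"),               # continuous parameter
    Integer(1, 100, name="x2"),               # discrete parameter
    Categorical(["a", "b", "c"], name="x3"),  # parameter restricted to a finite set
]

def objective(params):
    x1, x2, x3 = params
    # Stand-in for your expensive black-box evaluation
    return (x1 - 3.0) ** 2 + abs(x2 - 40) + (0.0 if x3 == "b" else 5.0)

result = gp_minimize(objective, space, n_calls=50, random_state=0)
print(result.x, result.fun)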
H: What does 'contextual' mean in 'contextual bandits'?
I recently read a lot about the n-armed bandit problem and its solution with various algorithms, for example for webscale content optimization. Some discussions were referring to 'contextual bandits', I couldn't find a clear definition what the word 'contextual' should mean here. Does anyone know what is meant by that, in contrast to 'usual' bandits?
AI: A contextual bandit algorithm not only adapts to the user-click feedback as the algorithm progresses, it also utilizes pre-existing information about the user's (and similar users) browsing patterns to select which content to display.
So, rather than starting with no prediction (cold start) with what the user will click (traditional bandit and also traditional A/B testing), it takes other data into account (warm start) to help predict which content to display during the bandit test.
See: http://www.research.rutgers.edu/~lihong/pub/Li10Contextual.pdf |
H: Data transposition code in R
I've been working in SAS for a few years but as my time as a student with a no-cost-to-me license comes to an end, I want to learn R.
Is it possible to transpose a data set so that all the observations for a single ID are on the same line? (I have 2-8 observations per unique individual but they are currently arranged vertically rather than horizontally.) In SAS, I had been using PROC SQL and PROC TRANSPOSE depending on my analysis aims.
Example:
ID    date        timeframe  fruit_amt  veg_amt
4352  05/23/2013  before     0.25       0.75
5002  05/24/2014  after      0.06       0.25
4352  04/16/2014  after      0          0
4352  05/23/2013  after      0.06       0.25
5002  05/24/2014  before     0.75       0.25
Desired:
ID    B_fr05/23/2013  B_veg05/23/2013  A_fr05/23/2013  A_veg05/23/2013  B_fr05/24/2014  B_veg05/24/2014  (etc)
4352  0.25            0.75             0.06            0.25             .               .
5002  .               .                .               .                0.75            0.25
AI: You can use the reshape2 package for this task.
First, transform the data to the long format with melt:
library(reshape2)
dat_m <- melt(dat, measure.vars = c("fruit_amt", "veg_amt"))
where dat is the name of your data frame.
Second, cast to the wide format:
dcast(dat_m, ID ~ timeframe + variable + date)
The result:
ID after_fruit_amt_04/16/2014 after_fruit_amt_05/23/2013 after_fruit_amt_05/24/2014 after_veg_amt_04/16/2014
1 4352 0 0.06 NA 0
2 5002 NA NA 0.06 NA
after_veg_amt_05/23/2013 after_veg_amt_05/24/2014 before_fruit_amt_05/23/2013 before_fruit_amt_05/24/2014
1 0.25 NA 0.25 NA
2 NA 0.25 NA 0.75
before_veg_amt_05/23/2013 before_veg_amt_05/24/2014
1 0.75 NA
2 NA 0.25
> |
H: Time series prediction
I am trying to predict a time series from another one. My approach is based on a moving window. I predict the output value of the series from the following features: the previous value and the 6 past values of the source series.
Is it useful to add the previous value of the time series?
I feel like I don't use all the information contained in the curve to predict future values. But I don't see how it would be possible to use all previous data to predict a value (for one, the number of features would keep growing through time...).
What are the caveats of a 6-month time-window approach?
Are there any papers about different methods of feature selection for time series?
AI: Let me give you a few simple approaches in time series analysis.
The first approach consists in using previous values of your time series $Y_{t}$ as in $Y_{t} = \phi_{1}Y_{t-1} + ... + \phi_{n}Y_{t-n}$. In case you don't know, these models are called autoregressive (AR) models. This answers your first question. Of course it is useful to include the previous value of your time series. There is a whole set of models based on that idea.
The second approach is taking a window and extracting some features to describe the time series at each point in time. Then you use a conventional machine learning technique to predict future values, as is typically done. This is more common in a classification or regression setting, where the future value is simply treated as the target to regress on (or, after discretization, the class to predict). This technique has the advantage of dramatically reducing the number of features, although you usually lose characteristics associated with time. This addresses your second concern.
Another model that could be helpful in your case is the vector autoregressive model (VAR) (using Wikipedia's notation):
$$\left( \begin{array}{ccc}
y_{1,t} \\
y_{2,t}
\end{array}\right) = \left( \begin{array}{ccc}c_{1} \\ c_{2}\end{array}\right) + \left( \begin{array}{ccc}A_{1,1} & A_{1,2} \\ A_{2,1} & A_{2,2}\end{array}\right)\left( \begin{array}{ccc}
y_{1,t-1} \\
y_{2,t-1}
\end{array}\right) + \left( \begin{array}{ccc}
e_{1,t} \\
e_{2,t}
\end{array}\right)$$
Here you can see that $y_{1,t}$ has a contribution from its previous value $y_{1,t-1}$ but also includes the value of the other series $y_{2,t-1}$ in a linear combination. As usual, the purpose is to find the elements of $A_{i,j}$ that minimize some measure of error between observed values and estimated values.
A general suggestion: The first thing you need to do is to test the autocorrelation of your first series in order to confirm that an autoregressive approach is suitable and then test the cross correlation between both series to support the idea that using the second series to improve your predictions is appropriate. |
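For illustration, here is roughly how an AR model and a two-series VAR can be fitted with Python's statsmodels; the random-walk data below stands in for your two series.

import numpy as np
import pandas as pd
from statsmodels.tsa.ar_model import AutoReg
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(1)
y1 = np.cumsum(rng.normal(size=300))
y2 = 0.5 * y1 + np.cumsum(rng.normal(size=300))
data = pd.DataFrame({"y1": y1, "y2": y2})

# Autoregressive model on the target series alone (lags = your window length)
ar = AutoReg(data["y1"], lags=6).fit()
print(ar.predict(start=len(data), end=len(data) + 5))      # 6-step-ahead forecast

# Vector autoregression uses both series jointly, as in the equations above
var = VAR(data).fit(maxlags=6)
print(var.forecast(data.values[-var.k_ar:], steps=6))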
H: How to merge monthly, daily and weekly data?
Google Trends returns weekly data so I have to find a way to merge them with my daily/monthly data.
What I have done so far is to break each series into daily data, for example:
from:
2013-03-03 - 2013-03-09 37
to:
2013-03-03 37
2013-03-04 37
2013-03-05 37
2013-03-06 37
2013-03-07 37
2013-03-08 37
2013-03-09 37
But this is adding a lot of complexity to my problem. I was trying to predict Google searches from the last 6 months' values, or 6 values in monthly data. Daily data would imply working with 180 past values. (I have 10 years of data, so 120 points in monthly data / 500+ in weekly data / 3500+ in daily data.)
The other approach would be to "merge" daily data into weekly/monthly data. But some questions arise from this process. Some data can be aggregated because their sum represents something. Rainfall, for example: the amount of rain in a given week will be the sum of the amounts for each day composing the week.
In my case I am dealing with prices, financial rates and other things. For the prices it is common in my field to take the volume exchanged into account, so the weekly data would be a weighted average. For financial rates it is a bit more complex, as some formulas are involved to build weekly rates from daily rates. For the other things I don't know the underlying properties. I think those properties are important to avoid meaningless indicators (an average of financial rates would be nonsense, for example).
So three questions:
For known and unknown properties, how should I proceed to go from daily to weekly/monthly data ?
I feel like breaking weekly/monthly data into daily data like I've done is somewhat wrong, because I am introducing quantities that have no meaning in real life. So, almost the same question:
For known and unknown properties, how should I proceed to go from weekly/monthly to daily data ?
Last but not least: when given two time series with different time steps, which is better: using the smallest or the biggest time step? I think this is a compromise between the amount of data and the complexity of the model, but I can't see any strong argument to choose between those options.
Edit: if you know a tool (in R, Python, or even Excel) to do this easily, it would be very much appreciated.
AI: when given two time series with different time steps, what is better: Using the Lowest or the biggest time step ?
For your timeseries analysis you should do both: get to the highest granularity possible with the daily dataset, and also repeat the analysis with the monthly dataset. With the monthly dataset you have 120 data points, which is sufficient to get a timeseries model even with seasonality in your data.
For known and unknown properties, how should I proceed to go from daily to weekly/monthly data ?
To obtain say weekly or monthly data from daily data, you can use smoothing functions. For financial data, you can use moving average or exponential smoothing, but if those do not work for your data, then you can use the spline smoothing function "smooth.spline" in R: https://stat.ethz.ch/R-manual/R-patched/library/stats/html/smooth.spline.html
The model returned will have less noise than the original daily dataset, and you can get values for the desired time points. Finally, these data points can be used in your timeseries analysis.
For known and unknown properties, how should I proceed to go from weekly/monthly to daily data ?
To obtain daily data when you have monthly or weekly data, you can use interpolation. First, you should find an equation to describe the data. In order to do this you should plot the data (e.g. price over time). When factors are known to you, this equation should be influenced by those factors. When factors are unknown, you can use a best fit equation. The simplest would be a linear function or piecewise linear function, but for financial data this won't work well. In that case, you should consider piecewise cubic spline interpolation. This link goes into more detail on possible interpolation functions: http://people.math.gatech.edu/~meyer/MA6635/chap2.pdf.
In R, there is a method for doing interpolation of timeseries data. Here you would create a vector with say weekly values and NAs in the gaps for the daily values, and then use the "interpNA" function to get the interpolated values for the NAs. However, this function uses the "approx" function to get the interpolated values, which applies either a linear or constant interpolation. To perform cubic spline interpolation in R, you should use the "splinefun" function instead.
Something to be aware of is that timeseries models typically do some sort of averaging to forecast future values whether you are looking at exponential smoothing or Auto-Regressive Integrated Moving Average (ARIMA) methods amongst others. So a timeseries model to forecast daily values may not be the best choice, but the weekly or monthly models may be better. |
H: normalize identification values properly
I'm building a neural network to analyze a business' sales. I'm normalizing all input values to the range {0,1}.
I'm struggling with the day of the week column. Business days are identified by a number ranging {1-5} (1=Monday). Normalizing these values to the range {0,1} is straightforward, but results in a major bias in the final output.
The reason is that the full range of normalized values for the business day column is explored with every week's worth of data, whereas the other, price-related columns explore their full range of normalized values infrequently.
The business day column ends up being the largest contributor to the final output.
How can I normalize it to make its contribution more in tune with the rest of the inputs?
AI: It is possible that the other variables you're feeding into the NN are simply bad at predicting sales. Sales prediction is a notoriously hard problem.
Specifically the addressing of mapping a multi-state categorical variable to the NN's {0,1} input range: Another idea is to change that one, 5-state variable into five boolean variables. Rather than {0,0.25,0.5,0.75,1.0} on your one variable, make each of the five boolean variables represent a single day and make [1,0,0,0,0] equal Monday, [0,1,0,0,0] equal Tuesday, etc. I've personally had more success both with training good networks and introspecting the network itself when spreading out states of classes like that.
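In pandas, this spreading-out is one line with get_dummies (toy data below, one indicator column per weekday):

import pandas as pd

df = pd.DataFrame({"day": [1, 2, 3, 4, 5, 1],
                   "sales": [120, 90, 100, 130, 210, 115]})
dummies = pd.get_dummies(df["day"], prefix="day")       # day_1 ... day_5 boolean columns
df = pd.concat([df.drop(columns="day"), dummies], axis=1)
print(df.head())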
Other hacks you can try:
* Take out the 'day' column altogether and see if any of the other variables get used.
* Plot the distribution of spend as a function of day. Even if nothing else comes of this current model, it sounds like you've found one interesting insight already.
* Consider also trying different models. |
H: Finding unpredictability or uncertainty in a time series
I am interested in finding a statistic that tracks the unpredictability of a time series. For simplicity sake, assume that each value in the time series is either 1 or 0. So for example, the following two time series are entirely predictable
TS1: 1 1 1 1 1 1 1 1
TS2: 0 1 0 1 0 1 0 1 0 1 0 1
However, the following time series is not that predictable:
TS3: 1 1 0 1 0 0 1 0 0 0 0 0 1 1 0 1 1 1
I am looking for a statistic that given a time series, would return a number between 0 and 1 with 0 indicating that the series is completely predictable and 1 indicating the series in completely unpredictable.
I looked at some entropy measures like Kolmogorov complexity and Shannon entropy, but neither seems to fit my requirement. In Kolmogorov complexity, the statistic's value changes depending on the length of the time series ("1 0 1 0 1" and "1 0 1 0" have different complexities, so it's not possible to compare the predictability of two time series with differing numbers of observations). In Shannon entropy, the order of observations doesn't seem to matter.
Any pointers on what would be a good statistic for my requirement?
AI: Since you have looked at Kolmogorov complexity and Shannon entropy measures, I would like to suggest some other hopefully relevant options. First of all, you could take a look at the so-called approximate entropy $ApEn$. Other potential statistics include block entropy and T-complexity (T-entropy), as well as Tsallis entropy: http://members.noa.gr/anastasi/papers/B29.pdf
In addition to the above-mentioned potential measures, I would like to suggest to have a look at available statistics in Bayesian inference-based model of stochastic volatility in time series, implemented in R package stochvol: http://cran.r-project.org/web/packages/stochvol (see detailed vignette). Such statistics of uncertainty include overall level of volatility $\mu$, persistence $\phi$ and volatility of volatility $\sigma$: http://simpsonm.public.iastate.edu/BlogPosts/btcvol/KastnerFruwhirthSchnatterASISstochvol.pdf. A comprehensive example of using stochastic volatility model approach and stochvol package can be found in the excellent blog post "Exactly how volatile is bitcoin?" by Matt Simpson. |
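If you want to experiment, approximate entropy is short enough to implement directly; here is a rough sketch (it is not normalized to [0, 1], but it is low for predictable series and higher for irregular ones):

import numpy as np

def apen(series, m=2, r=0.2):
    x = np.asarray(series, dtype=float)
    def phi(m):
        n = len(x) - m + 1
        templates = np.array([x[i:i + m] for i in range(n)])
        # C_i: fraction of templates within tolerance r (Chebyshev distance)
        c = [np.sum(np.max(np.abs(templates - t), axis=1) <= r) / n for t in templates]
        return np.sum(np.log(c)) / n
    return phi(m) - phi(m + 1)

print(apen([1, 1, 1, 1, 1, 1, 1, 1]))                                  # ~0: fully predictable
print(apen([1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1]))    # larger: less predictable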
H: Correlation threshold for Neural Network features selection
I'm trying to do a correlation analysis between inputs and outputs inspecting the data in order to understand which input variables to include. What could be a threshold in the correlation value to consider a variable eligible to be an input for my Neural Network?
AI: Given non-linearity of neural networks, I believe correlation analysis isn't a good way to estimate importance of variables. For example, imagine that you have 2 input variables - x1 and x2 - and following conditions hold:
cor(x2, y) = 1 if x1 = 1
cor(x2, y) = 0 otherwise
x1 = 1 in 10% of cases
That is, x2 is a very good predictor for y, but only given that x1 = 1, which is the case only in 10% of data. Taking into account correlations of x1 and x2 separately won't expose this dependency, and you will most likely drop out both variables.
There are other ways to perform feature selection, however. The simplest one is to train your model with all possible subsets of variables and keep the best-performing subset. This is pretty inefficient with many variables, though, so many ways to improve on it exist. For a good introduction to best subset selection, see chapter 6.1 of Introduction to Statistical Learning.
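A brute-force version of that subset search with cross-validation might look like the sketch below (synthetic data, and a fast linear model as the scorer for illustration; this is only feasible for a small number of candidate features, which is why the smarter approaches in that chapter exist):

from itertools import combinations
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=200, n_features=6, n_informative=3, random_state=0)

best_score, best_subset = -np.inf, None
for k in range(1, X.shape[1] + 1):
    for subset in combinations(range(X.shape[1]), k):
        score = cross_val_score(LinearRegression(), X[:, list(subset)], y, cv=5).mean()
        if score > best_score:
            best_score, best_subset = score, subset
print(best_subset, best_score)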
H: Similarity measure based on multiple classes from a hierarchical taxonomy?
Could anyone recommend a good similarity measure for objects which have multiple classes, where each class is part of a hierarchy?
For example, let's say the classes look like:
1 Produce
1.1 Eggs
1.1.1 Duck eggs
1.1.2 Chicken eggs
1.2 Milk
1.2.1 Cow milk
1.2.2 Goat milk
2 Baked goods
2.1 Cakes
2.1.1 Cheesecake
2.1.2 Chocolate
An object might be tagged with items from the above at any level, e.g.:
Omelette: eggs, milk (1.1, 1.2)
Duck egg omelette: duck eggs, milk (1.1.1, 1.2)
Goat milk chocolate cheesecake: goat milk, cheesecake, chocolate (1.2.2, 2.1.1, 2.1.2)
Beef: produce (1)
If the classes weren't part of a hierarchy, I'd probably look at cosine similarity (or equivalent) between the classes assigned to an object, but I'd like to use the fact that different classes with the same parents also have some similarity value (e.g. in the example above, beef has some small similarity to omelette, since they both have items from the class '1 produce').
If it helps, the hierarchy has ~200k classes, with a maximum depth of 5.
AI: While I don't have enough expertise to advise you on selection of the best similarity measure, I've seen a number of them in various papers. The following collection of research papers hopefully will be useful to you in determining the optimal measure for your research. Please note that I intentionally included papers, using both frequentist and Bayesian approaches to hierarchical classification, including class information, for the sake of more comprehensive coverage.
Frequentist approach:
Semantic similarity based on corpus statistics and lexical taxonomy
Can’t see the forest for the leaves: Similarity and distance measures for hierarchical taxonomies with a patent classification example (also see additional results and data)
Learning hierarchical similarity metrics
A new similarity measure for taxonomy based on edge counting
Hierarchical classification of real life documents
Hierarchical document classification using automatically generated hierarchy
Split-Order distance for clustering and classification hierarchies
A hierarchical k-NN classifier for textual data
Bayesian approach:
Improving classification when a class hierarchy is available using a hierarchy-based prior
Bayesian aggregation for hierarchical genre classification
Hierarchical classification for multiple, distributed web databases |
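If it helps as a starting point while you read, here is one very simple edge-counting measure (a Wu-Palmer-style score on the dotted class codes, plus an average best-match between two objects' class sets); this is only an illustration, not a recommendation drawn from the papers above.

def class_sim(a, b):
    pa, pb = a.split("."), b.split(".")
    depth_common = 0
    for x, y in zip(pa, pb):
        if x != y:
            break
        depth_common += 1
    # 2 * depth(lowest common ancestor) / (depth(a) + depth(b))
    return 2.0 * depth_common / (len(pa) + len(pb))

def object_sim(tags_a, tags_b):
    # average best-match similarity from the classes of A into the classes of B (not symmetric)
    return sum(max(class_sim(a, b) for b in tags_b) for a in tags_a) / len(tags_a)

print(class_sim("1.1.1", "1.1.2"))        # ~0.67 - both are kinds of eggs
print(class_sim("1.2.2", "2.1.1"))        # 0.0   - different top-level classes
print(object_sim(["1.1", "1.2"], ["1"]))  # omelette vs beef: nonzero, both are produce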
H: Correcting Datasets with artificially low starting values
I am working on a project where we would like to take the ratio of two measurements A/B and subject these ratios to a ranking algorithm. The ratio is normalized prior to ranking (though the ranking/normalization are not that import to my question).
In most cases measurement A (the starting measurement) is a count with values greater than 1000. We expect an increase for measurement B for positive effects and a decrease in measurement B for negative effects.
Here is the issue, some of our starting counts are nearly zero which we believe is an artifact of experimental preparation. This of course leads to some really high ratios/scaling issues for these data points.
What is the best way to adjust these values in order to better understand the real role in our experiment?
One suggestion we received was to add 1000 to all counts (from measurement A and B) to scale the values and remove the bias of such a low starting count, is this a viable option? Thank you in advance for your assistance, let me know if I am not being clear enough.
AI: Yes, the general idea is to add a small baseline count to every category. The technical term for this is Laplace smoothing. Really, it's not so much a hack as a way of encoding the idea that you think there is some (uniform?) prior distribution of the events occurring.
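In code, the suggestion amounts to something like this (1000 is just the pseudo-count your colleagues proposed, and the ratio is written B over A on the assumption that the problematic case is a near-zero starting count A):

def smoothed_ratio(count_a, count_b, pseudo=1000):
    # Additive (Laplace) smoothing: add the same pseudo-count to both measurements
    return (count_b + pseudo) / (count_a + pseudo)

print(smoothed_ratio(2, 150))       # a near-zero starting count no longer explodes the ratio
print(smoothed_ratio(2000, 2600))   # large counts are only mildly affected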
H: Hardware requirements for Linux server to run R & RStudio
I want to build a home server/workstation to run my R projects. Based on what I have gathered, it should probably be Linux based. I want to buy the hardware now, but I am confused with the many available options for processors/ram/motherboards. I want to be able to use parallel processing, at least 64GB? of memory and enough storage space (~10TB?). Software wise, Ubuntu?, R, RStudio, PostgreSQL, some NOSQL database, probably Hadoop. I do a lot of text/geospatial/network analytics that are resource intensive. Budget ~$3000US.
My Questions:
What could an ideal configuration look like? (Hardware + Software)
What type of processor?
Notes:
No, I don't want to use a cloud solution.
I know it is a vague question, but any thoughts will help, please?
If it is off-topic or too vague, I will gladly delete.
Cheers B
AI: There is no ideal configuration, for R or in general - product selection is always a difficult task and many factors are at play. I think that the solution is rather simple - get the best computer that your budget allows.
Having said that, since you want to focus on R development and one of R's pressing issues is its critical dependence on the amount of available physical memory (RAM), I would suggest favoring more RAM to other parameters. The second most important parameter, in my opinion, would be number of cores (or processors - see details below), due to your potential multiprocessing focus. Finally, the two next most important criteria I'd pay attention to would be compatibility with Linux and system/manufacturer's quality.
As far as storage goes, I suggest considering solid state drives (SSDs) if you'd rather have a bit more speed than more space (however, if your work will involve intensive disk operations, you might want to investigate the issue of SSD reliability or consult with people knowledgeable in this matter). However, I think that for R-focused work, disk operations are much less critical than memory ones, as I've mentioned above.
When choosing a specific Linux distribution, I suggest using a well-supported one, such as Debian or, even better, Ubuntu (if you care more about support, choose their LTS version). I'd rather not buy parts and assemble custom box, but some people would definitely prefer that route - for that you really need to know hardware well, but potential compatibility could still be an issue. The next paragraph provides some examples for both commercial-off-the-shelf (COTS) and custom solutions.
Should you be interested in the custom system route, this discussion might be worth reading, as it contains some interesting pricing numbers (just to get an idea of potential savings) and also sheds some light on multiprocessor vs. multi-core alternatives (obviously, the context is different, but it could nevertheless be useful). As I said, I would go the COTS route, mainly due to reliability and compatibility issues. In terms of single-processor multi-core systems, your budget is more than enough. However, when we go to multiprocessor workstations (I'm not even talking about servers), even two-processor configurations can easily go over your budget. Some are not far off, such as the HP Z820 Workstation, which starts from 2439 USD, but in a minimal configuration. When you upgrade it to match your desired specs (if that's even possible), I'm sure that we'll be talking about a 5K USD price range (extrapolating from the series' higher-level models). What I like about the HP Z820, though, is the fact that this system is Ubuntu certified. Considering system compatibility and assuming your desire to run Ubuntu, the best way to approach your problem is to go through Ubuntu-certified hardware lists and shortlist systems that you like. Just for the sake of completeness, take a look at this interesting multiprocessor system, which in a compatible configuration might cost less than from HP or other major vendors. However, it's multimedia-oriented, its reliability and compatibility are unknown, and it's way over your specified budget.
In terms of R and R-focused software, I highly recommend you to use RStudio Server instead of RStudio, as that will provide you with an opportunity to be able to work from any Internet-enabled location (provided you computer will be running, obviously). Another advice that I have is to keep an eye on alternative R distributions. I'm not talking about commercial expensive ones, but about emerging open source projects, such as pqR: http://www.pqr-project.org. Will update as needed. I hope this is helpful. |
H: Storing Big Matrix in DataBase
I have a mysql database with the following format:
id string
1 foo1...
2 foo2...
.. ...
There are >100k entries in this db.
What I want to do is, for each string, compare it to every other string and store some metric of the comparison. Doing this will essentially yield a 2D matrix of size NxN, where N is the number of rows in the db.
My initial thought was creating another db where each index corresponds to the string of the index in the first db and each column is the value from comparing the two strings. For example, id 1 column 2 in the second db would be the value outputted from comparing id1 and id2 in the first db.
The format of the second db:
id col1 col2 col3 ....
1 1 0.4 0.5 .....
... ... ... ...
This way of creating the second db would result in 100k rows x 100k columns, which is the issue at hand. What is the best way to handle large data sets like this? Is storing the data in a text file more efficient (say each text file corresponds to one row in the second db.)
AI: It is usually more practical to compute the distances on the fly rather than storing $N^2$ values. If possible, you will want to use a heuristic to select only the items could be sufficiently similar so you don't waste time computing distance to irrelevant items. PostgreSQL can create an index on character trigrams in strings that enables efficient querying for other strings that are sufficiently similar (measured by the Jaccard similarity coefficient).
Storing an $N$ column wide table is not possible because the maximum number of columns allowed in MySQL is 4096. Even if that were not the case, using an RDBMS in such a way is considered bad practice and such a table would be difficult to work with.
If for some reason you must store a dense distance matrix, a format such as HDF5 would be more efficient than an RDBMS. |
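For example, with PostgreSQL's pg_trgm extension the on-the-fly lookup could look roughly like this from Python (the table and column names are illustrative, and '%%' is just psycopg2's escaping of the '%' similarity operator):

import psycopg2

conn = psycopg2.connect("dbname=mydb")   # assumed connection string
cur = conn.cursor()

# One-time setup: trigram extension plus a GIN index for fast similarity search
cur.execute("CREATE EXTENSION IF NOT EXISTS pg_trgm;")
cur.execute("CREATE INDEX IF NOT EXISTS strings_trgm_idx "
            "ON strings USING gin (string gin_trgm_ops);")
conn.commit()

# Fetch only sufficiently similar candidates instead of materializing the full N x N matrix
cur.execute("""
    SELECT id, string, similarity(string, %s) AS sim
    FROM strings
    WHERE string %% %s
    ORDER BY sim DESC
    LIMIT 20;
""", ("foo1...", "foo1..."))
print(cur.fetchall())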
H: Querying DBpedia from Python
How can I get information about an entity from DBpedia using Python?
Eg: I need to get all DBpedia information about USA. So I need to write the query from python (SPARQL) and need to get all attributes on USA as result.
I tried :
PREFIX db: <http://dbpedia.org/resource/>
SELECT ?p ?o
WHERE { db:United_States ?p ?o }
But this does not return all of the DBpedia information.
How can I do this, and what plugins/APIs are available for Python to connect with DBpedia?
Also, what would the SPARQL query be to generate the desired result?
AI: You do not need a wrapper for DBPedia, you need a library that can issue a SPARQL query to its SPARQL endpoint. Here is an option for the library and here is the URL to point it to: http://dbpedia.org/sparql
You need to issue a DESCRIBE query on the United_States resource page:
PREFIX dbres: <http://dbpedia.org/resource/>
DESCRIBE dbres:United_States
Please note this is a huge download of resulting triples.
Here is how you would issue the query:
from SPARQLWrapper import SPARQLWrapper, JSON
def get_country_description():
    query = """
        PREFIX dbres: <http://dbpedia.org/resource/>
        DESCRIBE dbres:United_States
    """
    sparql = SPARQLWrapper("http://dbpedia.org/sparql")
    sparql.setReturnFormat(JSON)
    sparql.setQuery(query)  # the DESCRIBE query shown above, as a literal string
    return sparql.query().convert()
H: Appropriate way to store data in R
I have data, which looks like this:
These data are only for one subject. I will have a lot more.
These data will be analyzed in R.
Now I'm storing them like this:
subject <- rep(1, times = 24)
measurement <- factor(x = rep(x = 1:3, each = 8),
labels = c("Distance", "Frequency", "Energy"))
speed <- factor(x = rep(x = 1:2, each = 4, times = 3),
labels = c("speed1", "speed2"))
condition <- factor(x = rep(x = 1:2, each = 2, times = 6),
labels = c("Control", "Experm"))
Try <- factor(x = rep(x = 1:2, times = 12),
labels = c("Try1", "Try2"))
result <- c(1:8,
11:18,
21:28)
dt <- data.frame(subject, measurement, speed, condition, Try, result)
What is the appropriate way to store these data in R (in a data frame)?
AI: Without more information all I can say is that:
the way you're storing it is fine in general
you can further transform/store your data depending on your use case
To expand on #2, if I want to study Distance vs Energy across all subjects, then I would format my data like this:
> library(reshape2)
> dt2 <- dt[dt$measurement %in% c('Distance','Energy'),]
> dt_cast <- dcast(dt2, subject+Try~measurement+speed+condition, value.var='result')
The transformed data (dt_cast) would then look like:
subject Try Distance_speed1_Control Distance_speed1_Experm Distance_speed2_Control
1 1 Try1 1 3 5
2 1 Try2 2 4 6
Distance_speed2_Experm Energy_speed1_Control Energy_speed1_Experm Energy_speed2_Control
1 7 21 23 25
2 8 22 24 26
Energy_speed2_Experm
1 27
2 28
Allowing me to, for example, look at the relationship between the Distance_speed1_Control vs Energy_speed1_Control columns.
Basically subset/aggregate your data and then use the dcast to get the rows and columns the computer needs. |
H: What are the current killer machine learning methods?
I was wondering whether we could list the winning machine learning methods to apply in many fields of interest: NLP, image, vision, medical, deep packet inspection, etc. I mean, if someone is getting started on a new ML project, what are the ML methods that cannot be overlooked?
AI: The question is very general. However, there are some studies being conducted to test which algorithms perform relatively well in a broad range of problems (I'll add link to papers later), concerning regression and classification.
Lately, Random Decision Forests, Support Vector Machines and certain variations of Neural Networks are said to achieve the best results for a very broad variety of problems.
This does not mean that these are "the best algorithms" for every problem -- such a thing does not exist, and is not very realistic to pursue. Also, it must be observed that both RDFs and SVMs are rather easy methods to initially grasp and obtain good results with, so they are becoming really popular. NNs have been used intensively for a couple of decades (since they were revived), so they appear often in implementations.
If you are interested in learning further, you should look at a specific area and deal with a problem that can be solved nicely by machine learning, to understand the main idea (and why it is impossible to find a single best method).
You will find in common the task of trying to predict the expected behavior of something given some known or observable characteristics (learning the function that models the problem given input data), the issues related to dealing with data in high-dimensional spaces, the need for good-quality data, the notable improvements that data pre-processing can bring, and many others.
H: What is the difference between feature generation and feature extraction?
Can anybody tell me what the purpose of feature generation is? And why feature space enrichment is needed before classifying an image? Is it a necessary step?
Is there any method to enrich feature space?
AI: Feature Generation -- This is the process of taking raw, unstructured data and defining features (i.e. variables) for potential use in your statistical analysis. For instance, in the case of text mining you may begin with a raw log of thousands of text messages (e.g. SMS, email, social network messages, etc) and generate features by removing low-value words (i.e. stopwords), using certain size blocks of words (i.e. n-grams) or applying other rules.
Feature Extraction -- After generating features, it is often necessary to test transformations of the original features and select a subset of this pool of potential original and derived features for use in your model (i.e. feature extraction and selection). Testing derived values is a common step because the data may contain important information which has a non-linear pattern or relationship with your outcome, thus the importance of the data element may only be apparent in its transformed state (e.g. higher order derivatives). Using too many features can result in multicollinearity or otherwise confound statistical models, whereas extracting the minimum number of features to suit the purpose of your analysis follows the principle of parsimony.
Enhancing your feature space in this way is often a necessary step in classification of images or other data objects because the raw feature space is typically filled with an overwhelming amount of unstructured and irrelevant data that comprises what's often referred to as "noise" in the paradigm of a "signal" and "noise" (which is to say that some data has predictive value and other data does not). By enhancing the feature space you can better identify the important data which has predictive or other value in your analysis (i.e. the "signal") while removing confounding information (i.e. "noise"). |
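A small text-mining illustration of the two steps with scikit-learn (the toy messages and labels below are made up):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2

texts = ["win cash now", "meeting at noon", "cash prize winner", "lunch meeting today"]
labels = [1, 0, 1, 0]   # e.g. spam vs. not spam

# Feature generation: raw text -> unigram/bigram counts, stopwords removed
vectorizer = CountVectorizer(ngram_range=(1, 2), stop_words="english")
X = vectorizer.fit_transform(texts)

# Feature extraction/selection: keep the k features most associated with the outcome
selector = SelectKBest(chi2, k=5)
X_small = selector.fit_transform(X, labels)
print(vectorizer.get_feature_names_out()[selector.get_support()])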
H: Does high error rate in regression imply the data set is unpredictable?
I have a data set of video-watching records in a 3G network. In this data set, 2 different kinds of features are included:
user-side information, e.g., age, gender, data plan, etc.;
video-watching records of these users, each of which is associated with a download ratio and some detailed network-condition metrics, such as download speed, RTT, and similar.
Under the internet-streaming scenario, a video is divided into several chunks that are downloaded to the end device one by one, so we have download ratio = downloaded bytes / file size in bytes.
Now, given this data set, I want to predict the download ratio of each video.
Since it is a regression problem, I use a gradient boosting regression tree as the model and run 10-fold cross-validation.
However, although I have tried different parameter configurations and even different models (linear regression, decision regression tree), the best root-mean-square error I can get is 0.3790, which is quite high: if I don't use any complex model and just use the mean value of the known labels as the prediction, I still get an RMSE of 0.3890. There is no obvious difference.
For this problem, I have some questions:
Does this high error rate imply that the label in the data set is unpredictable?
Apart from a problem with the features, are there any other possibilities? If so, how can I validate them?
AI: It's a little hasty to make too many conclusions about your data based on what you presented here. At the end of the day, all the information you have right now is that "GBT did not work well for this prediction problem and this metric", summed up by a single RMSE comparison. This isn't very much information - it could be that this is a bad dataset for GBT and some
other model would work, it could be that the label can't be predicted from these features with any model, or there could be some error in model setup/validation.
I'd recommend checking the following hypotheses:
1) Maybe, with your dataset size and the features you have, GBT isn't a very high-performance model. Try something completely different - maybe just a simple linear regression! Or a random forest. Or GBT with very different parameter settings. Or something else (a minimal comparison sketch is given after this list). This will help you diagnose whether it's an issue with choice of models or with something else; if a few very different approaches give you roughly similar results, you'll know that it's not the model choice that is causing these results, and if one of those models behaves differently, then that gives you additional information to help diagnose the issue.
2) Maybe there's some issue with model setup and validation? I would recommend doing some exploration to get some intuition as to whether the RMSE you're getting is reasonable or whether you should expect better. Your post contained very little detail about what the data actually represents, what you know about the features and labels, etc. Perhaps you know those things but didn't include them here, but if not, you should go back and try to get additional understanding of the data before continuing. Look at some random data points, plot the columns against the target, look at the histograms of your features and labels, that sort of thing. There's no substitute for looking at the data.
3) Maybe there just aren't enough data points to justify complex models. When you have low numbers of data points (< 100), a simpler parametric model built with domain expertise and knowledge of what the features are may very well outperform a nonparametric model. |
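To make point 1 concrete, here is a minimal sketch assuming Python with scikit-learn; X and y are synthetic placeholders for your own feature matrix and download-ratio labels. It compares the 10-fold cross-validated RMSE of a mean-only baseline against a few very different models, mirroring the 0.3790-vs-0.3890 comparison in the question.
Python code
import numpy as np
from sklearn.datasets import make_regression
from sklearn.dummy import DummyRegressor
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

# Synthetic placeholder data; substitute your real features and download ratios
X, y = make_regression(n_samples=300, n_features=10, noise=10.0, random_state=0)

models = {
    "mean baseline": DummyRegressor(strategy="mean"),
    "linear regression": LinearRegression(),
    "random forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "gradient boosting": GradientBoostingRegressor(random_state=0),
}

for name, model in models.items():
    # scikit-learn negates MSE so that larger is better; flip the sign back
    mse = -cross_val_score(model, X, y, cv=10, scoring="neg_mean_squared_error")
    print(name, "RMSE:", round(float(np.sqrt(mse.mean())), 4))

If none of the models beats the mean baseline by a meaningful margin on your real data, that is evidence (though not proof) that the available features carry little signal about the label.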
H: Could someone please offer me some guidance on some kind of particular, SPECIFIC project that I could attempt, to "get my feet wet", so to speak?
I am COMPLETELY new to the field of Data Science, mainly because every employer I have worked for simply COULDN'T sell customers anything that would use techniques learned in this field.
Of particular interest to me is machine learning/Predictive Analysis.
I have attempted many "test projects" myself, but I seem to NEED some sort of outside "catalyst" to tell me a specific goal, and a specific set of guidelines, when I am trying to learn something.
Otherwise, I tend to lose focus, and jump from one interesting topic to the next, without ever gaining any experience.
Thank you!!
AI: I would suggest Kaggle learning projects - http://www.kaggle.com/competitions
Look for the ones in the 101 section that offer knowledge. There are many pre-made solutions ready, which you can ingest and try variations of.
Also, I have bookmarked a Comprehensive learning path – Data Science in Python, which among other things gives a few answers to your specific question. |
H: Where did this NY Times op-ed get his Google Search data?
I hope this is a question appropriate for SO.
The article in question: http://www.nytimes.com/2015/01/25/opinion/sunday/seth-stephens-davidowitz-searching-for-sex.html
As far as I can tell, the only publicly available data from Google Search is through their Trends API. The help page states that
The numbers on the graph reflect how many searches have been done for a particular term, relative to the total number of searches done on Google over time. They don't represent absolute search volume numbers, because the data is normalized and presented on a scale from 0-100.
However, in the article the author reports (absolute) "average monthly searches". The source is stated as:
All monthly search numbers are approximate and derived from anonymous and aggregate web activity.
Source: analysis of Google data by (author)
So, how did he get this "anonymous and aggregate web activity"?
AI: Google AdWords. That has absolute search volumes. |
H: What is the advantage of using Dryad instead of Spark?
I find Apache Spark very powerful for big-data processing, but I want to know about the benefits of Dryad (Microsoft). Does this framework have any advantages over Spark?
Why would we use Dryad instead of Spark?
AI: Dryad is an academic project, whereas Spark is widely deployed in production, and now has a company behind it for support. Just focus on Spark. |
H: Machine learning toolkit for Excel
Do you know of any machine learning add-ins that I could use within Excel? For example, I would like to be able to select a range of data, use it for training, and then use another sheet for getting the results of different learning algorithms.
AI: As far as I know, currently there are not that many projects and products that allow you to perform serious machine learning (ML) work from within Excel.
However, the situation seems to be changing rapidly due to Microsoft's active efforts in popularizing its ML cloud platform Azure ML (along with ML Studio). Microsoft's recent acquisition of the R-focused company Revolution Analytics (which appears to me to be, to a large extent, an acqui-hire) is an example of the company's aggressive data science market strategy.
In regard to ML toolkits for Excel, as confirmation that we should expect most Excel-enabled ML projects and products to be Azure ML-focused, consider the following two projects (the latter is open source):
Excel DataScope (Microsoft Research): https://www.microsoft.com/en-us/research/video/excel-datascope-overview/
Azure ML Excel Add-In (seems to be Microsoft sponsored): https://azuremlexcel.codeplex.com |
H: What are the differences between Apache Spark and Apache Flink?
Both the Apache Spark and Apache Flink projects claim largely similar capabilities.
What is the difference between these projects? Is there any advantage to either Spark or Flink?
Thanks
AI: Flink is the Apache renaming of the Stratosphere project from several universities in Berlin. It doesn't have the same industrial foothold and momentum that the Spark project has, but it seems nice, and more mature than, say, Dryad. I'd say it's worth investigating, at least for personal or academic use, but for industrial deployment I'd still prefer Spark, which at this point is battle tested. For a more technical discussion, see this Quora post by committers on both projects. |