pseudo,title,question,vote,medal,nbr_comment,date,url_post,url_competition,rank_competition_comment /mattieshoes,Avoiding the daily submission limit,"This is probably very obvious, but just in case nobody had considered it, I thought I'd mention... By taking the training data and chopping a few months off the end, you can essentially create your own testbed -- instead of 100 months of training data, maybe you have 95, then 5 months' worth of games to test against where you know the real answers and can calculate your own root mean square error. It'll give you an idea whether the results you generate are sensible and allow you to tune your algorithm without running into the ""two submissions a day"" max. I find it handy -- for instance, if you're going to assign decreasing weight to older and older games, you can adjust how the weighting scheme works to find something fairly good first, and THEN submit.",0,None,22 ,Sat Aug 07 2010 07:04:01 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/54,/competitions/chess,49th /judowill,A few techniques you might try:,"In response to a comment on the No Free Hunch Blog [Link]:http://kaggle.com/blog/2010/04/27/beating-up-on-hiv/ and on my website [Link]:http://www.willdampier.info/2010/a-few-ideas/ ... I'm listing a few ideas that may give you a head start. There are a whole lotta ways to do this sort of prediction; since I come from a machine-learning background, I'll describe it from that perspective. In my mind I need to extract a set of features (observations) from each sequence, then train an SVM, logistic-regression, decision-forest, or ensemble classifier to learn which features are important. Each of those classifiers has advantages and disadvantages that are very well documented ... a safari through Wikipedia should give you a pretty good idea. The hardest part is deciding which features are worth putting (or are even possible to put) into your model: I know people who use ""k-mers"" as their features ... this involves finding and counting all of the 5-letter substrings in the sequence. Then you can use these as features in a prediction model. K-mers are nice because they are easy to pull out with any programming language you can think of. There is also a list of regular expressions which have some biological meaning here: http://elm.eu.org/browse.html Other people prefer to use the raw sequence. If you can align the sequences (since they don't all start at the same part of the gene) using a program like ClustalW, then you can think of each column as a categorical feature. The problem here is that HIV-1 is highly variable and alignments are difficult ... although not impossible. If you wander around the Los Alamos HIV-1 database you can find a list of known resistance mutations: http://www.hiv.lanl.gov/content/sequence/RESDB/. These have been verified to be important in the viral resistance to certain drugs.
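As a rough illustration, encoding a few such mutations as binary features might look like the sketch below (the positions and residues are invented placeholders, not real resistance data):

KNOWN_MUTATIONS = [(30, 'N'), (46, 'I'), (90, 'M')]   # (1-based position, mutant residue); placeholders

def mutation_features(protein_seq):
    # One binary feature per mutation: 1 if the mutant residue is present at that position.
    return [1 if pos <= len(protein_seq) and protein_seq[pos - 1] == aa else 0
            for pos, aa in KNOWN_MUTATIONS]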
You can use the presence or absence of these mutations as features to train a model. I'm sure there are dozens of ways to extract features that I've never even heard of, so don't think that these are your only choices.",1,None,18 ,Thu Apr 29 2010 01:13:08 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1,/competitions/hivprogression,None /tcash21,Question regarding algorithms used,"Hi, I'm interested in participating in the contest; however, I need to know what tools will actually need to be released upon entry. I have access to proprietary software and supercomputing power, but if our predictions work well we cannot hand over our platform. We can describe it and possibly even release a model that can be simulated by users, but the platform itself is trademarked and licensed. Please let me know, as I would really like to make a contribution to this project! Thanks, -Tanya",0,None,2 ,Thu Apr 29 2010 17:48:46 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/2,/competitions/hivprogression,None /altons,Question about data,"Hi, I just read in the csv files and I've got a question about the PR and RT sequences. The first one (PR) is 297 characters and the RT is 1476 characters, and since I am not from the biomedical field I just wanted to double-check. Regards, Alberto P.S. Nice morote seoi nage, or perhaps ippon seoi nage (not sure though)",0,None,2 ,Fri Apr 30 2010 16:32:29 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/3,/competitions/hivprogression,48th /lucassinclair,Biased sets,It seems that the training dataset contains about 80% of patients not responding to treatment while the test dataset seems to contain around 50% of non-responding patients. I hence conclude that the training set is not a uniform sample of the total number of patients. Is this done on purpose?,1,None,3 ,Fri Apr 30 2010 17:40:57 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/4,/competitions/hivprogression,None /brucetabor,Non-standard nucleotide codings,"Some of the nucleotide codings - I've spotted an ""N"", a ""Y"" and an ""M"" already - are not from the standard ACGT set. What do they represent? Also, HIV is a retrovirus (hence the Reverse Transcriptase protein). Retroviruses are RNA viruses, which means they are usually coded with ACGU not ACGT - the U represents uracil, found in RNA instead of thymine: http://en.wikipedia.org/wiki/RNA",0,None,1 Comment,Sat May 01 2010 13:00:57 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/5,/competitions/hivprogression,23rd /dalloliogm,Ideas by a biologist/bioinformatician,"My background is in biology and, even if I have been doing bioinformatics for a few years now, I don't have enough knowledge of machine learning to solve this by myself: therefore, if someone is interested in making a two-person team with me, I would be glad to collaborate, provided that you explain the machine learning part to me. In any case, since I am more interested in learning than in the prize of the competition, I will put here some ideas for everybody: the two sets of sequences represent coding sequences of two proteins; therefore, one thing to do is to translate them and compare the protein sequences.
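With a library like Biopython the translation itself is a one-liner; a minimal sketch, assuming the sequences are in frame from the first base:

from Bio.Seq import Seq

dna = 'ATGGCTGCATTA'                  # toy in-frame coding sequence
protein = str(Seq(dna).translate())   # -> 'MAAL'; any stop codon shows up as '*'
print(protein)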
Even if two individuals have different DNA sequences for a gene, they can have the same protein sequences; and since only the protein is exposed to functional constraints, it will be more interesting to see the differences in the protein sequences. Analyzing k-mers doesn't seem very interesting to me. K-mers are usually used to identify regulatory motifs in DNA, which define when a gene is expressed, how, etc. However, these signals usually are not inside the coding part of a gene sequence, but rather in the positions before or surrounding the gene. So, the regulatory factors that you are looking for with k-mers may not be included in the sequences given. For a similar reason, the GC content is not so informative. A possible approach would be to look at which sites are the most variable within the protein sequences.",0,None,5 ,Sat May 01 2010 13:56:47 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/6,/competitions/hivprogression,None /judowill,Technique Discussion,"Now that we have a handful of algorithms that are well above random, does anyone want to post a discussion on their particular algorithm, feature-set, approach, etc.? I'm hoping to foster an open discussion of the techniques used here. You can post a link to a public repo, a blog post, etc. Good luck, Will",0,None,1 Comment,Sun May 02 2010 16:37:35 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/7,/competitions/hivprogression,None /lucassinclair,Therapeutics,"The goal of the game is to answer the question ""Do patients respond to the treatment?"". However, I have found almost no information about the aforementioned treatment and the drugs the patients were given. Did they all follow the same therapy? Was the posology strictly equivalent? Which drugs exactly were consumed?",0,None,2 ,Tue May 04 2010 11:32:54 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/8,/competitions/hivprogression,None /paulharrigan,Self-disqualification - Fontanelles,"Hi there, The Fontanelles is a group which does HIV research professionally and so has some specialized information in this area. We're disqualifying our entry, but have put it in just for fun as a target. We may be back if someone beats it.",0,None,1 Comment,Wed May 05 2010 06:00:15 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/9,/competitions/hivprogression,31st /altons,k-mers,"Hi guys, Apologies in advance if this is a silly question, but I feel like a fish out of water with these PR and RT strings. I have been spending some time on the PR and RT sequences and noticed that if I split the sequences into 3-mers I'll get 99 groups for PR (297/3) and 492 for RT (1476/3). My question is whether or not it makes sense to split the sequences into triplets.
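To show exactly what I mean, here is a quick Python sketch of the split:

def kmers(seq, k, overlapping=False):
    # Split a sequence into k-mers, back-to-back or sliding by one position.
    step = 1 if overlapping else k
    return [seq[i:i + k] for i in range(0, len(seq) - k + 1, step)]

print(kmers('CCTCAAATCACT', 3))    # -> ['CCT', 'CAA', 'ATC', 'ACT']
print(len(kmers('A' * 297, 3)))    # -> 99, matching 297/3 for PR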
Is there any other alternative, perhaps 2-mers? Does it make sense to calculate the odds of responding to the treatment for each k-mer, or maybe re-group them into 2 consecutive k-mers and then calculate the odds? Thanks in advance for your help. Alberto",0,None,2 ,Thu May 06 2010 15:56:24 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/10,/competitions/hivprogression,48th /rajstennajbarrabas,Quickstart package,"[Link]:http://www.OkianWarrior.com/Enjoys/Kaggle/Images/HIV.zip is a quickstart package for people to get up and running without a lot of programming. It's in Perl; you will also need List::Util and Statistics::Basic from CPAN. The data files for this contest are included. BasicStats.pl: This will read in the data and print some basic statistics. You can use this as a framework for your own explorations of the data. The source illustrates several ways of accessing and manipulating the data. TestMethod.pl: This will randomly select a test set from the training data, then call Train() on the remaining data and Test() on the test data and print out the MCE. Train() and Test() are stubs - rewrite these functions to test your own prediction methods. KaggleEntry.pl: This will read in the test data and the training data, call Train() on the training data, then call Test() on the test data, then generate a .csv file properly formatted for Kaggle submission. Train() and Test() are stubs - rewrite these functions to submit an entry based on your own prediction methods. There is a more comprehensive README in the package. If you find problems, please let me know (via the Kaggle contact link) and I will update & repost. I expect bugs will be fixed and more functionality will be added over time; updates will be posted here. (Please be kind to my server!)",1,None,8 ,Fri May 07 2010 22:52:31 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/12,/competitions/hivprogression,4th /antgoldbloom,Studies that look at Eurovision voting patterns,"Here are some papers that analyze Eurovision voting patterns. You might find some of them helpful.
Gatherer (2006), Comparison of Eurovision Song Contest Simulation with Actual Results Reveals Shifting Patterns of Collusive Voting Alliances. [Link]:http://jasss.soc.surrey.ac.uk/9/2/1.html
Ginsburgh and Noury (2006), The Eurovision Song Contest: Is Voting Political or Cultural? [Link]:http://164.15.69.62/ecare/personal/ginsburgh/papers/153.eurovision.pdf
Fenn, Suleman, Efstathiou and Johnson (2008) [Link]:http://arxiv.org/pdf/physics/0505071
Dekker (2007), The Eurovision Song Contest as a ‘Friendship’ Network. [Link]:http://members.ozemail.com.au/%7Edekker@ozemail.com.au/Connections07.pdf",1,None,1 Comment,Wed May 12 2010 06:39:30 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/16,/competitions/Eurovision2010,None /dalloliogm,three sequences are not coding,"There are three sequences that have a stop codon in the middle, so only a portion of them is coding. I wonder what the best way to handle this is. Would you remove the whole sequences from the data? Will you consider the sequence after the stop codon, or not?
or maybe it is better to ignore this fact completely? I don't know the best way to feed this information into machine-learning software.",0,None,13 ,Tue May 18 2010 18:53:05 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/18,/competitions/hivprogression,None /dalloliogm,any good library for machine-learning in python?,"I prefer to program in Python if I can. Can you recommend any good libraries for machine learning in Python? I have found [Link]:http://pypi.python.org/pypi/pcSVM/pre%201.0, but it seems to be offline now. So, I don't know of any libraries for playing with support vector machines in Python. For neural networks there is one that looks good, but I haven't tried it yet.",0,None,2 ,Mon May 24 2010 11:23:11 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/19,/competitions/hivprogression,None /dalloliogm,comments on Kaggle,"This post is not related to the HIV Progression contest; it is to send feedback about the Kaggle website. First of all, you should really use better code for the forum... it is very uncomfortable to write here, and there are a lot of templates out there that work better. The second point is a more general complaint about the fact that having a prize for solving the competition greatly reduces the opportunity to collaborate with other people on solving the problem. For example, I have some good ideas on which information I could use to write a nice machine-learning method to make the prediction... but I am restrained from explaining them here because I won't obtain any credit for it :-( You should think of a way to reward the people most active in the forum, or in any case to reward those that collaborate the most and are most open to dialogue.",0,None,12 ,Mon May 24 2010 17:19:35 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/20,/competitions/hivprogression,None /dalloliogm,positions associated with HIV resistance,"OK, I am a good person, so I am going to post this here... hoping that somebody will respond with a similar level of feedback and maybe collaborate with me to solve this competition. A nice hint to help solve the competition is this table/database: http://hivdb.stanford.edu/cgi-bin/PositionPhenoSummary.cgi It shows the list of all the positions that are known to be associated with resistance to an HIV treatment, one of AZT, D4T, TDF, ABC, DDI, DDC, 3TC. You see that not all the positions in the sequences are equally important, and it is not always true that the positions that vary the most are the most correlated with resistance. It is probable that these positions correspond to key amino acids in the sequence that have a key structural role or participate in the catalytic site of the protein. My original approach was to use this table to write machine-learning software using these inputs, since using all the positions in the sequences would be too CPU-consuming. As I was saying in a previous post, I am not interested in winning the prize of this competition, but I would like to learn from people who are experts in machine-learning methods... I think I could find other applications of these methods to other biological problems, if I learn how to use them properly.
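To make the idea concrete, the kind of feature extraction I have in mind is roughly the sketch below (the positions are just examples of known RT resistance sites; the real list would come from the Stanford table above):

RESISTANCE_POSITIONS = [41, 67, 70, 184, 215]   # 1-based positions in the RT protein; examples only

def position_features(protein_seq, positions=RESISTANCE_POSITIONS):
    # One categorical feature (the residue) per resistance position.
    return [protein_seq[p - 1] if p <= len(protein_seq) else '-'
            for p in positions]

After one-hot encoding, each row could be fed to an SVM, a random forest, or whatever classifier you prefer.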
So please, don't be shy with the feedback now :-)",0,None,7 ,Mon May 24 2010 17:21:18 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/21,/competitions/hivprogression,None /dirknbr,correlation between resp and patient id,How do you explain the correlation between resp and patient id?,0,None,1 Comment,Wed May 26 2010 10:11:51 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/22,/competitions/hivprogression,5th /coffin,Viral Load,"Hi all, I'm new to this contest, and am participating not as a serious competitor but just to get familiar with genetic analysis and machine learning. I've been playing around with the dataset, and I see viral load at t0 is highly correlated with survival chances. I've implemented this in my entry (and nothing else), but I still don't get above guessing. Did I do it wrong or is something else going wrong? Thanks, Coffin",0,None,4 ,Mon May 31 2010 15:30:13 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/24,/competitions/hivprogression,38th /rajstennajbarrabas,A question for Will,"For Will, the contest organizer. You started with the public data of 1692 entries. You then selected equal numbers (or something close to that ratio) of responders and non-responders in order to make the test set. Did you use any other criterion to decide which entries went into the test set? In other words, other than the 50/50 thing, were the test set entries chosen randomly?",1,None,5 ,Tue Jun 08 2010 02:03:49 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/25,/competitions/hivprogression,4th /antgoldbloom,"General feedback, bugs and feature requests","Please use this topic to give us feedback. If you'd rather do so in private, email me at anthony.goldbloom@kaggle.com.",0,None,72 ,Wed Jun 16 2010 03:48:25 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/26,None,None /antgoldbloom,Categorical and Binary Variables,"I accidentally deleted the following post (made by another user). I'm reposting it on their behalf: Are there categorical and/or binary variables in the data set (other than the target variable)? For instance, Variable153Open (in test data) seems to have categories 5, 6, 7,... If there are categorical variables, do we get to know what the categories mean? Thank you!",0,None,3 ,Wed Jun 23 2010 02:41:04 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/30,/competitions/informs2010,127th /salimali,note on AUC...,"As far as I am aware, the calculation of the AUC for this comp means that 0.5 is a random model. A model giving an AUC of 0.75 is the same model as one giving 0.25, but with the values multiplied by -1 to give a reverse ordering. So...
You need to look at the bottom of the leaderboard as well as the top to see who is really doing well!",0,None,3 ,Fri Jun 25 2010 23:27:47 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/31,/competitions/informs2010,3rd /salimali,Dodgy Data???,"There are some variables called ....LAST_PRICE. If you join the 2 datasets together and plot them, you will see there is a complete disconnect for these variables, and for one of them all the values in the scoring set are missing. Have these variables been manually manipulated in some way?",0,None,1 Comment,Fri Jun 25 2010 23:39:36 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/32,/competitions/informs2010,3rd /salimali,Using future information !,"Because we have been given all the data, it is possible to actually use 'future information' in the models, which in reality would not be possible for a real-time forecasting system!",0,None,43 ,Fri Jun 25 2010 23:43:05 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/33,/competitions/informs2010,3rd /salimali,how to get 0.658,"Logistic regression using the variables below should get you 0.658 on the leaderboard. This technique is simple, but it is not going to win - 0.85 is the benchmark to beat. Variable159HIGH Variable160HIGH Variable160LOW Variable164LAST Variable165LAST Variable171LAST Variable172LAST Variable173LAST Variable8HIGH Variable9HIGH Variable11LOW Variable12OPEN Variable14OPEN Variable15OPEN Variable18OPEN Variable19OPEN Variable21LOW Variable24LOW Variable27OPEN Variable27LOW Variable28HIGH Variable33LOW Variable34OPEN Variable34HIGH Variable35OPEN Variable35HIGH Variable38LOW Variable42HIGH Variable44OPEN Variable44LOW Variable45OPEN Variable45LOW Variable46HIGH Variable46LOW Variable47LOW Variable48HIGH Variable49HIGH Variable52LOW Variable53HIGH Variable53LOW Variable54OPEN Variable54HIGH Variable54LOW Variable55OPEN Variable59HIGH Variable62OPEN Variable65OPEN Variable65HIGH Variable68HIGH Variable69LOW Variable74HIGH Variable74LOW Variable78LOW Variable79HIGH Variable82OPEN Variable83HIGH Variable88OPEN Variable88HIGH Variable89HIGH Variable89LOW Variable92HIGH Variable98OPEN Variable98LOW Variable102OPEN Variable102HIGH Variable102LOW Variable105HIGH Variable108OPEN Variable109OPEN Variable109LOW Variable123OPEN Variable123LOW Variable125OPEN Variable130HIGH Variable133LOW Variable136HIGH Variable137OPEN Variable139HIGH",0,None,11 ,Fri Jun 25 2010 23:59:58 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/34,/competitions/informs2010,3rd /larrydag,dealing with missing values,I understand why missing values are in the data set. That's just a matter of life and reality. I'm curious what some methods of dealing with them are. I've noticed in my methods using logistic regression with R that it's not imputing a value for the Result.Data. I'm basically just using a mean of predicted values to impute for the missing values. That's not a great method but it works okay. What are some of your methods?,0,None,2 ,Tue Jun 29 2010 19:12:04 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/35,/competitions/informs2010,97th /colingreen,Scoring system,"What are people's opinions of the scoring method for this task? I think I would prefer to see submissions using a continuous scale from 0 to 1 instead of binary 0/1.
That is, we would predict the probability that a patient would respond to treatment. I appreciate this would result in submissions with many scores near 0.5, but I think that would be a superior scoring method with respect to maximizing clinical outcome (that is, if the predictions were being used to guide treatment decisions in a clinical setting). Colin.",0,None,3 ,Wed Jun 30 2010 22:20:10 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/36,/competitions/hivprogression,40th /louisduclosgosselin,Is it possible to predict stock price movements at five minute intervals with accurate predictive an,"Dear Colleagues, Preliminary results on the leaderboard of this contest appear to confirm that it is possible to predict stock price movements at five-minute intervals with accurate predictive analysis solutions! The leaderboard results of the contest are really amazing. They lead me to believe that it is possible to predict stock price movements at five-minute intervals with accurate predictive analysis solutions. At the end of the competition, using the predicted values of the top competitors, we will build trading strategies (and measure their profitability) to confirm this or not. Will the methods developed in this contest have a big impact on the finance industry? Thanks for participating! Thanks a lot. Let's keep in touch. I am looking forward to hearing your news. Best regards. Louis Duclos-Gosselin Chair of INFORMS Data Mining Contest 2010 Applied Mathematics (Predictive Analysis, Data Mining) Consultant at Sinapse INFORMS Data Mining Section Member E-Mail: Louis.Gosselin@hotmail.com http://www.sinapse.ca/En/Home.aspx http://dm.section.informs.org/ Phone: 1-866-565-3330 Fax: 1-418-780-3311 Sinapse (Quebec), 1170, Boul. Lebourgneuf Suite 320, Quebec (Quebec), Canada G2K 2E3",0,None,2 ,Wed Jul 07 2010 01:30:06 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/38,/competitions/informs2010,None /yashshah0,Variable Names,"Hi, I still haven't seen the test database, but it seems that column/parameter names have been hidden to keep competitors from figuring out the actual stock names and hence stop them from misusing that information. But I have a different opinion: especially in the field of finance, having an intuitive sense of what a variable means is very important for predicting prices. A sense of whether column 1 was the closing index of a stock or the trading volume would help a lot in modelling the prices. Without any sense of what the variables mean, it's difficult to trust a model. A model with 90% accuracy would be unreliable if the stock prices were, say, predicted on a parameter like the amount of cattle traded in Japan. Statistically we may find a correlation between price movements and the birth of baby boys in Ghana, but it would not sound logical, would it? I hope I have put my point across and that you will consider providing the column names rather than just non-intuitive generic names.",0,None,1 Comment,Thu Jul 08 2010 17:56:04 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/39,/competitions/informs2010,None /matthewpickering,World Cup 2010,Inspired by the World Cup 2010 - Take on the Quants contest I ran an exact copy among my friends and colleagues. The highest score attained here was 97.31 with an average score of 64.41 in comparison to the highest score of 87.31 and average of 54.96 among the entrants.
Can the World Cup really be predicted by machines?,0,None,1 Comment,Mon Jul 12 2010 12:32:44 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/40,None,None /vijaygovindaswamyperumkulam,Clarification on the Submission Template,"If my understanding is correct, the column ""TargetVariable"" in the submission template indicates the score for the prediction of outcome ""1"" (increase in stock price) for each of the records in the test data file. The score for prediction of the outcome ""0"" (decrease in stock price) is not required. Please clarify. Regards, PG",0,None,15 ,Thu Jul 15 2010 18:49:33 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/41,/competitions/informs2010,27th /tcash21,Target variable?,Is anyone else having problems finding the target variable to be predicted in the test set?,0,None,4 ,Thu Jul 22 2010 20:17:01 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/42,/competitions/informs2010,86th /newpants,Can any of the 4 top guys tell me if you use future information?,"The accuracies higher than 0.9 are really impressive. Can any of the 4 top guys tell me if you use future information? If so, I will change my major now :) My curiosity is really killing me... thx....",0,None,5 ,Sat Jul 24 2010 10:07:13 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/43,/competitions/informs2010,None /abhinav0,confusion with structure of data,"I am not sure whether the data is given for 180 different stocks, or whether it is one stock's data with 180 expert views. If anyone has understood the data, please explain it to me. If my first assumption is true, do I have to predict the probability of up (or down) for 180 different stocks? Also, in the case of option 1, where is the information about ""sectoral data, economic data, experts' predictions and indices""?",0,None,3 ,Tue Jul 27 2010 13:46:17 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/44,/competitions/informs2010,None /salimali,how to get 0.95,It is possible to get 0.95 using the XXXXXX variable. (sorry too late!) There are numerous clues scattered throughout this forum. ;-),0,None,2 ,Mon Aug 02 2010 07:16:02 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/45,/competitions/informs2010,3rd /antgoldbloom,The solution,Thanks for participating in this competition. I've attached the solution file to this post. UPDATE: The solution is no longer attached but you're welcome to make submissions to this competition.,0,None,7 ,Wed Aug 04 2010 00:24:40 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/46,/competitions/hivprogression,None /dirknbr,Elo algorithm,"Great that we have an Elo benchmark we need to beat. Is the Elo ranking based on the training set only? I am asking because my Elo scores lower than the benchmark. Here is how I derive the Elo, where players[] is an array of Elo scores, b is black, w is white and s is the score. K is set to 32. Obviously I need to initialise the rating for unseen players; what value do you recommend, and does it matter?
if w in players:
    if b in players:
        ew = 1 / (1 + math.pow(10, (players[b] - players[w]) / 400))
    else:
        ew = 1 / (1 + math.pow(10, (ini - players[w]) / 400))
    players[w] = players[w] + 32 * (float(s) - ew)   # update white: actual minus expected score
else:
    players[w] = ini
if b in players:
    if w in players:
        eb = 1 / (1 + math.pow(10, (players[w] - players[b]) / 400))
    else:
        eb = 1 / (1 + math.pow(10, (ini - players[b]) / 400))
    players[b] = players[b] + 32 * ((1 - float(s)) - eb)   # update black's own rating
else:
    players[b] = ini",1,None,23 ,Wed Aug 04 2010 23:32:30 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/47,/competitions/chess,99th /diogoff,Question about disjoint subsets of players,"Suppose that the training data contains just 5 players and there are games between players 1, 2, 3 and games between players 4, 5 but no games between these two subsets. My question is the following: if the test data contains a game between player 2 and player 4 (for example), how are you supposed to come up with an estimate for the score of that game?",0,None,2 ,Thu Aug 05 2010 13:38:04 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/49,/competitions/chess,4th /perrutquist,How to get 0.88527,"Just give every game a score of 0.545647 (which is the average score for the games in the training set) and you will get an RMSE of 0.882573. If you're doing worse than this, then you need to rethink your strategy. UPDATE: After changing the leaderboard to use 20% of the test set, the RMSE for my all=0.545647 submission has changed to 0.78846",0,None,9 ,Thu Aug 05 2010 15:29:35 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/50,/competitions/chess,23rd /johnlucas0,Sharing of Methodologies,"A question on the rules, if I may? You say that contestants will need to share their methodologies in order to qualify for prizes. Does that mean sharing with the organisers, or sharing with the public? Personally I'll be a lot more interested in participating / contributing if I know the winning methodologies are going to be publicly shared. Thanks.",0,None,2 ,Fri Aug 06 2010 01:29:15 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/51,/competitions/chess,22nd /chrisraimondi,Here is the BowTie Breakdown of Players in the Training Set,"The ""In"" in this case is White - the ""Out"" is Black:
Largest Strongly Connected Component: 5643
In: 707
Out: 669
Tubes: 5
Tendrils: 91
Others: 186
I have attached a text file with each player's category. If you have no idea what this means, you can search for ""Bow Tie Web"" or look at this graphic: http://nlp.stanford.edu/IR-book/html/htmledition/img1832.png",0,None,3 ,Fri Aug 06 2010 06:53:01 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/52,/competitions/chess,121st /uriblass,I wonder if it is not possible to cheat in the competition,We are talking about predicting the past so people may find the names of some of the chess players based on their results and use this information to make correct predictions of the results. I think that submitting a data file should not be enough; the person who submits it should explain how to calculate the expected results so others can reproduce them.,0,None,9 ,Fri Aug 06 2010 09:28:18 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/53,/competitions/chess,6th /del=6991293a5e2ea175,Some Clarifications,"Hello, I have a few clarifications regarding this contest.
Correct me if my interpretation is incorrect: The data is about 180 different stocks, and the 4 parameters of each of those stocks correspond to prices. In some of the other discussions, it has been mentioned that the data could correspond to predictions. Does it mean that these are predicted values? If not, then what are the predictions? It has also been mentioned that the data could correspond to sectoral data. What might that be? Is there any physical interpretation for ""TargetValue""? What does it mean for it to be ""1"" or ""0""? How do we interpret its values across various timestamps? ""TargetValue"" is only based on data for the first 60 min of training data; it has absolutely no relation to data after that time. Thanks, Vishal",0,None,5 ,Sun Aug 08 2010 22:47:06 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/55,/competitions/informs2010,None /domcastro,Are the ID# the player ranks?,Hi - the subject says it all really. Are the ID# the player ranks? Thanks,0,None,1 Comment,Mon Aug 09 2010 00:14:04 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/56,/competitions/chess,None /uriblass,questions about the data (order of games),1) Are the game results ordered by the date they were played? 2) Are the games of a specific month really from that month or can they be from tournaments that finished in that month and started in an earlier month?,0,None,1 Comment,Mon Aug 09 2010 11:23:12 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/57,/competitions/chess,6th /dirknbr,data description,"I think the data could be described better. There is no number of observations in the second row. Since the series have different numbers of observations, you want us to predict the next 4 values after the last observation of each series. Is that correct? If so, you could have aligned the data better so that all series have a value in the last row (row 44 or so). Dirk",0,None,15 ,Mon Aug 09 2010 16:01:23 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/58,/competitions/tourism1,41st /leebaker,Programming languages poll,"Hello everyone, I participated in the Netflix prize a couple of years ago, and found it interesting how few languages were capable of successfully dealing with the quantity of data provided. I was excited to see that this contest is a bit different, in that the data set is small enough to allow the use of almost any language to process it. I'm curious about which programming languages people are using to implement their solutions. I've seen in other threads that people use a variety of tools: Jeff Sonas appears to use MS SQL, Chris_R uses R, and Matt Shoemaker uses C#. I am currently using ~400 lines of Python, but am considering a switch to R. So here's the question - what programming language are you using? How many lines of code are you using to produce a submission?",0,None,33 ,Mon Aug 09 2010 18:36:59 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/59,/competitions/chess,140th /jpl2091,Bad idea,"Bad idea: using bayeselo over the entire 100 months of training data. Doing this gets an RMSE of ~.74, worse than the Elo benchmark. My gut tells me this is because 100 months is far too long a time period to find a single rating for a player. Remi Coulom's WHR would address this problem, but (right now) his paper is a little bit out of my intellectual grasp.
Otherwise I could rather easily apply the bayeselo algorithm to smaller chunks and develop a good method to weight the different time periods. Basically: assuming constant skill over ~10 years is not good. Whoda thunk it.",0,None,6 ,Mon Aug 09 2010 19:55:33 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/60,/competitions/chess,50th /analytics360,Best MASE (3.20 or 2.28),"If I am not wrong, the best MASE obtained in the paper for forecasting 4 years is 3.20 (without explanatory variables) and not 2.28 as mentioned in this contest. Is that correct?",0,None,1 Comment,Wed Aug 11 2010 07:43:38 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/61,/competitions/tourism1,None /imagedoctor,Accuracy on full data set,"Now that the test samples have been released, I thought it might be interesting to see what results could be achieved on the complete data set from the HIV progression competition. Some of the competition entries seemed to focus on specifics of the training and test set distributions, and since it is unknown how these would translate into full data set results, it may be enlightening to see the difference in performance. MCE estimation method: mean of 10-fold cross-validation using all available samples. My best effort so far is 75.5% accuracy, giving an MCE of 24.5. This attempt used a forest approach with some additional features based on Smith-Waterman similarities and multi-layer perceptrons. It would be great to hear how other techniques fare using the same data and estimation method. Cheers, Matt",0,None,2 ,Wed Aug 11 2010 20:12:05 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/62,/competitions/hivprogression,47th /gregwerner,Submission Descriptions,"Hi all, When I submit, the form asks me to enter a description of 600 characters or fewer. However, when I click on your submissions, my description gets cut off. How can I view my full submission description?",0,None,1 Comment,Wed Aug 11 2010 21:21:06 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/63,/competitions/tourism1,13th /davidcurran,Alzheimer's Challenge,"Did you see the [Link]:http://www.nytimes.com/2010/08/13/health/research/13alzheimer.html?pagewanted=1&_r=1&ref=global-home piece in the New York Times about the use of open data in Alzheimer's research? ""Rare Sharing of Data Leads to Progress on Alzheimer’s"" Do you think further investigation of the [Link]:http://www.adni-info.org/Scientists/ADNIScientistsHome/ADNIPublicationByNorbertSchuff.aspx dataset could yield improved results? Is this a similar enough problem to the HIV Progression data mining competition that we could use [Link]:http://kaggle.com/blog/2010/08/09/how-i-won-the-hiv-progression-prediction-data-mining-competition/ as evidence that a new competition might produce interesting results? Who might sponsor such a competition?
In terms of companies, or [Link]:http://www.alzheimers-research.org.uk/ or [Link]:http://en.wikipedia.org/wiki/Terry_Pratchett? This is related to the Competitions requests topic [Link]:http://kaggle.com/view-postlist/forum-15-kaggle-forum/topic-27-competitions-requests/task_id but I think it is useful to get some opinions on the practicality of both the task and of getting funding/publicity",0,None,4 ,Fri Aug 13 2010 11:32:14 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/64,None,None /judowill,Calculating MASE from the excel file,"I've been trying to replicate the MASE calculation described in the excel file. However, it seems that the MASE is always 1.0. ... shown in cell I-26. Even playing with the prediction in the Naive column confirms this. Are we actually trying to minimize the in-sample MAE ... shown in E-2? The description on the Evaluation page leaves much to be desired. Thanks, Will",0,None,5 ,Fri Aug 13 2010 23:59:21 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/65,/competitions/tourism1,None /colingreen,Ratings -> Probabilities,"I'm interested in learning about how people are translating from a pair of ratings (rating for white and black player) into a game outcome probability. Off the top of my head I'm thinking:
1) Simple diff truncated to [0,1]
2) Diff, then apply a sigmoid (logistic function) with range [0,1]
3) Some scaling based on highest and lowest ratings or standard deviations from the mean rating.",0,None,5 ,Sun Aug 15 2010 02:13:58 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/66,/competitions/chess,12th /jasonbrownlee,How is final end-of-contest assessment calculated?,"I'm interested in how the final results will be calculated. Will all submissions by all users be assessed against the full test set (as opposed to the 10% sample used in the leaderboard)? Alternatively, will the final ""best"" submission for each user listed on the leaderboard be the only submission considered? The former would be more complete and not too computationally intensive if the total number of users/submissions is moderate, especially if 10% and full test set assessments are calculated at submission time. This case could also lead to participant confusion if there is a considerable mismatch between final (10% test set) leaderboard position and finalized (100% test set) outcomes. The latter case risks missing out on some interesting techniques that turn out to perform well on the full dataset but not on the chosen 10% sample (if this is even reasonably possible).
Anyway, I don't really mind either way - I'm just curious :) Jase.",0,None,10 ,Sun Aug 15 2010 12:24:11 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/67,/competitions/chess,17th /del=5d7200d9c1104e80,regarding submissions score,It seems to me that the 10% random choice of games from the test set leads to a large variance in the score.,0,None,21 ,Mon Aug 16 2010 16:39:55 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/70,/competitions/chess,98th /uriblass,how many use a prediction that is based on simple rating calculation?,I define a simple rating calculation as a rating system where every player can calculate his own next rating based only on his results and the ratings of his opponents and his own old rating (of course I assume that he knows details about himself like how many games he played but he does not know all the results of the other players or specific details about them except their ratings). I start with a constant rating for every new player. The result prediction is based only on the ratings of both players and the question of who is white. I wonder how many here made predictions based on a simple rating calculation and if the prediction of the elo benchmark is based on a simple rating calculation. I do not expect a simple rating calculation to win the competition but I hope that a simple rating calculation can at least win against bench elo.,0,None,15 ,Tue Aug 17 2010 22:10:10 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/71,/competitions/chess,6th /uriblass,changing the leader board,I see a significant improvement in the result after the change to the leaderboard. Now my prediction scores 0.679248 while benchelo has 0.722448 whereas earlier benchelo was better than my prediction. With this big change from the previous scores I wonder how reliable the leaderboard really is. I want 95% confidence that A has a better prediction than B based on the leaderboard. The question is how big the difference between RMSE(prediction A) and RMSE(prediction B) on the leaderboard needs to be to have 95% confidence that the better one has the better result on the full test data. I am afraid that we still cannot trust an improvement of 0.02.,0,None,5 ,Wed Aug 18 2010 09:43:09 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/72,/competitions/chess,6th /antgoldbloom,Solution file,Here is the solution file for anybody interested.,1,None,5 ,Thu Aug 19 2010 04:01:03 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/73,/competitions/worldcupconf,None /vateeshc,Submission format,"This may sound a bit weird, but I have noticed that if the submission csv files do not strictly adhere to the suggested format, then I obtain volatile/different AUC scores on the same submitted results. What I had been doing without realising it was labelling the score column as ""Target Variable"", with a space in between. But when I removed the space on the same submitted dataset, I got a different result. Can I kindly request the administrators of this contest to look into this? Thanks.
Rgds",0,None,5 ,Thu Aug 19 2010 10:16:53 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/74,/competitions/informs2010,44th /leighherdman1,Evaluation Problem/Idea,"I'm quite happy with the format for the evaluation and understand why it was chosen. However, there is one crucial problem with the evaluation process: after using our training data, our ratings are then constant over the test dataset, whereas in real life they would be updated as results happen. Obviously doing this, with people submitting their algorithms in different formats, would make the evaluation process unfeasible. But maybe after the competition has ended, we could do a run-off among a more manageable number, say the top 10, to determine the grand winner?",0,None,7 ,Thu Aug 19 2010 12:54:10 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/75,/competitions/chess,115th /salimali,What tools are people using?,"Hi all, Anybody care to share with the rest of us what tools they are using for this task? I've been in touch with a few competitors and there seems to be a wide range: SQL Server Data Mining, SPSS, SAS, R, own code etc. I am using R for the data manipulation - primarily because I haven't really used it much before and wanted to discover what it could do. So far so good - although it is a bit different knowing what you want to achieve and then actually finding a way to achieve it - but a good learning experience. For the modelling I am using Tiberius - because I wrote this software and am familiar with it. I hope to test out R to see if it can do the same job for me though. Sali",0,None,2 ,Thu Aug 19 2010 23:23:04 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/76,/competitions/informs2010,3rd /ahmeddassouki,Rule Clarification,"Thanks for putting out this competition. On page 5 of ""the tourism forecasting competition"" document is a set of rules. The fifth rule reads ""All models are estimated one only, and only one set of forecasts is produced for each series"". I was wondering if someone could expand on that statement. Thanks, AD",0,None,1 Comment,Fri Aug 20 2010 15:37:53 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/77,/competitions/tourism1,None /vess22,Leaderboard,"OK, after reading that the leaderboard scores are formed based on only 10% of the Results/Test data set (some couple hundred records), I am beginning to worry that it might be way off the score based on the whole Test set, and therefore using the leaderboard to tune our models may be a very wrong thing to do in this contest. To give an example of my concerns: I trained a model without any internal cross-validation which gave me an AUC of 0.75 on the Train set, and when I submitted, my score on the leaderboard was 0.73. So I thought: great, this model is barely overfitting and is a pretty good one (at least for us 2nd tier contestants who use absolutely NO future information). But when later I tried to recreate an internal split/ordering/validation which would let me know, without submitting, how well I should do on the ENTIRE Test data set, I got a much lower AUC of around 0.64, which I am more inclined to believe is what I'd get on the entire Test set, based on its size and the assumption that chronologically it follows the records in the Train set. And when I test internally the model that gave me 0.73 on the leaderboard, I get an even lower AUC of 0.61.
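(By an internal split/validation I mean roughly the sketch below: train on the chronologically first 75% of the Train rows, then compute the AUC on the remainder. The AUC here is just the Mann-Whitney statistic, so no particular library is needed.)

def auc(y_true, scores):
    # Mann-Whitney statistic: probability that a random positive outscores
    # a random negative, counting ties as half.
    pos = [s for s, y in zip(scores, y_true) if y == 1]
    neg = [s for s, y in zip(scores, y_true) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / float(len(pos) * len(neg))

# Fit your model on the first 75% of rows, then call auc(y_holdout, model_scores).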
So, I'm confused. Should we or shouldn't we use the 10% leaderboard test set to tune our models, or is that throwing us in the wrong direction? Because if we didn't have the leaderboard at all and we were to just blindly submit, I'd tune my model in a completely different way based on what I've observed so far.",0,None,1 Comment,Fri Aug 20 2010 19:24:08 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/78,/competitions/informs2010,77th /jeffsonas,"Public leaderboard score for ""known FIDE ratings"" approach","In order to keep the contest both fair and manageable, only a subset of all known chess data has been made available to participants for downloading. For instance, FIDE ratings for many of the players in the training dataset are known as of the start of Month 1, and certainly as of the end of Month 100. But those are not provided for download, nor should it be feasible to work backwards from the data to figure out the identities of players, and therefore their real FIDE ratings at the time. The ""Elo Benchmark"" does not use the known FIDE ratings; it makes use of the same data that is available to all competitors, and nothing more. In fact all of the ""benchmark"" methodologies that we submit will be legitimate competitors in this regard. They are simply applications of known approaches, subject to the same restrictions as other submissions, entered in order to provide standards for measuring the effectiveness of novel approaches or tweaks to known approaches. Nevertheless, I thought it might be informative to ""cheat"" in order to determine one small piece of information to be made public. We used players' known FIDE ratings, as of the end of Month 100, in order to make predictions against the test dataset, and then calculated a leaderboard score for those predictions. Now, this is very unfair, competitively speaking, to the other submissions. FIDE ratings have several huge advantages over a comparable ""Elo Benchmark"" approach, especially these two: (1) Most of the players already had FIDE ratings going into Month 1, so relative rankings were already available (no need to spend precious months' data on seeding initial ratings) and the FIDE/Elo rating system could operate for a longer time, presumably a good thing. (2) The official FIDE ratings at the end of Month 100 incorporated a significantly larger set of game results for Months 1-100 than are available to competitors in the training dataset. Some of this was unavoidable (in many cases the game data literally does not exist electronically!), some of it was intentional (as part of the design of the contest), but it seems clear that having more data would help the predictive power of the FIDE ratings. Now, if competitors can overcome these huge handicaps, and still do better than the Elo-based FIDE ratings at predicting the same results in the test set, then we will almost certainly have identified a superior approach to Elo. Anthony and I remain committed to the idea of revealing nothing about the overall leaderboard while the contest is still running, but we have decided to reveal the public leaderboard score for this ""known FIDE ratings"" approach: 0.669205.
Right now teams ""UriB"" and ""chaos"" have already surpassed this score, and surely others will as well in the weeks to come.",0,None,11 ,Sat Aug 21 2010 01:25:02 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/80,/competitions/chess,137th /edwardcollins,How long until you might see your name on the Leaderboard,"Greetings, I've looked but I can't find the answer to this question in the Forum, or anywhere else on the site. How long after you submit an entry can you expect to see your name and result on the Leaderboard? For example, is the Leaderboard updated just once per day? Thanks in advance for your reply.",0,None,5 ,Sun Aug 22 2010 02:16:58 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/81,/competitions/chess,151st /philippemanuelweidmann,Is anyone using neural networks?,Is anyone using or considering using neural networks for prediction? Just curious.,0,None,3 ,Sun Aug 22 2010 09:50:51 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/82,/competitions/chess,5th /leighherdman1,Benchmark ELO leaderboard figure.,"At the moment I'm playing about with optimising ELO as much as possible so I have something to compare with when finally implementing some of my own ideas. Yesterday I was playing about with the optimal K-values and found the optimal value on my seeded ratings was K=0! Then I realised that I'd made a schoolboy error and that I had produced my seeded ratings on the entire training set! So in effect I have totally overfitted my seeded ratings over the entire dataset, and K=0 was the best as the seeded ratings already contain the future results. This is obviously something I am fixing now, and I'm having to completely rework my seeded ratings, but the reason why I raise this is that it produces an RMSE of 0.700, which is better than the benchmark ELO figure, which in theory it should not be. Because essentially I have averaged a player's ratings over the entire dataset, whereas ELO weights recent performance. So I'm puzzled by this. Also, when the leaderboard was recalculated, I noticed most people got a boost of between 0.04 & 0.08, and the ELO benchmark hardly moved at all (if it did)",0,None,1 Comment,Sun Aug 22 2010 20:36:59 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/83,/competitions/chess,115th /philippemanuelweidmann,Players that don't play,"It appears that a not insignificant portion of the 8631 players did not play a single game during the training period (player #15 being an example). Is that deliberate?",0,None,2 ,Tue Aug 24 2010 19:06:59 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/84,/competitions/chess,5th /jeffsonas,About the Benchmarks,"There are a few well-known rating systems already, and I have been intending to ""benchmark"" many of those approaches in the near future. Within the first week I built the ""Elo Benchmark"" submission, which is already way, way back on the leaderboard. One of the reasons for this relatively poor performance is that I used a ""pure Elo"" approach, intentionally. I didn't analyze whether the games predicted by those ratings seemed to follow the Elo expectancy table (the table that defines expected score, given a difference in two ratings); I just assumed that they did.
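(For anyone who has not seen the expectancy table: the standard Elo logistic curve and a linear alternative look roughly like the sketch below; the 850-point scale in the linear version is only an illustrative choice, not a value taken from any benchmark.)

def elo_expectancy(diff):
    # Standard Elo logistic curve: expected score for a rating difference.
    return 1.0 / (1.0 + 10.0 ** (-diff / 400.0))

def linear_expectancy(diff, scale=850.0):
    # A linear alternative, clamped to [0, 1]; the scale is illustrative.
    return min(1.0, max(0.0, 0.5 + diff / scale))

print(elo_expectancy(200), linear_expectancy(200))   # ~0.76 vs ~0.74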
Anyone who is familiar with my writings on the Elo system will certainly know my claims that a linear expectancy model matches the empirical data better than the logistic expectancy model. But I wanted to do an approach that was as close to the Elo philosophy as possible. And I would expect that this approach hurts the Elo Benchmark's ability to predict the results in the test set, although I haven't actually looked at this. In any event, if I were to do a reasonable try at these various well-known rating systems, and not embarrass their respective inventors, I decided I would need a little more sophistication. Most rating systems that are competing here (not just the benchmarks) will have three main parts, all of which will need to be optimized in a well-performing system:
#1. The calculation of an initial set of ""seed"" ratings
#2. The operation of the rating system over an extended period
#3. The specific prediction of the games in the test set
It is really #2 that I am most concerned with exploring and optimizing, and it is mostly in #2 where the well-known rating systems differ. The prediction part of it is obviously not a requirement for the implementation of a rating system. So in order to benchmark them against each other, I will try to handle #1 and #3 in a standardized way, and also in a reasonably optimized way. Only thus can we really hope to see how existing approaches truly compare to the novel approaches that many of you are developing. Now, something interesting about the Chessmetrics rating formula. It actually doesn't depend on an initial set of ""seed"" ratings, since its basic approach is to actually create a ""seed"" set of ratings each month based on past results, without the need for any prior ratings. I realized that I needed to build this anyway, if I wanted to benchmark Chessmetrics, and so I should start with building the Chessmetrics benchmark, since then I would have a standardized way to calculate the initial set of seed ratings for the other systems. One other point, relative to #3. For each system, I intend to look at its performance over the last N months of the training set and to develop a predictive model that can calculate expected score, given the ratings of the two players. I don't want to use a blind adherence to the Elo expectancy approach or to the linear expectancy approach. I also need to handle unrated players appropriately, which might be done differently for different systems. So I do plan to try and improve the Elo Benchmark a bit, so we can see how it really does relative to its other well-known rivals. Finally, let me emphasize that I am not eligible for prizes even if one of my benchmarks does finish in the top ten. I do plan to be quite transparent with the methodology I use for implementing each benchmark approach, since I think that will benefit the competitors the most. I know there are many people who would prefer not to share the details of their system yet, as they are trying to win, but fortunately I don't have to worry about that. So I will be sharing my methodology and maybe some code. Although apparently nobody else in the world, so far, would use SQL to tackle this problem!!",0,None,56 ,Thu Aug 26 2010 10:06:02 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/85,/competitions/chess,137th /philippemanuelweidmann,Too few games,"There appears to be a rather large random element in the contest. About 80% of all players played less than 10 games during the entire training period.
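That figure is easy to check for yourself; here is a sketch, assuming the training file is training_data.csv with the columns Month, White Player #, Black Player # and Score:

import csv
from collections import Counter

games = Counter()
with open('training_data.csv') as f:
    for row in csv.DictReader(f):
        games[row['White Player #']] += 1   # one game as white
        games[row['Black Player #']] += 1   # one game as black

few = sum(1 for n in games.values() if n < 10)
print(few / float(len(games)))              # fraction of players with < 10 games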
Whether or not an algorithm draws useful inference from fewer than 10 games depends as much on luck as on algorithm design. It might well be that inferior algorithms (or, especially, inferior parameters) perform better than more subtle designs, which simply have too little data to work with. I have found that both Glicko-1 and Glicko-2 yield bad results for precisely that reason. I enjoy participating in the contest, but I do believe for the stated reasons that the contest is unlikely to produce the best possible system - rather, it will produce a system that fits the data best, but is unlikely to be optimal in most other cases (including real-world ones).",0,None,4 ,Thu Aug 26 2010 14:02:21 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/86,/competitions/chess,5th /jeffsonas,About the Chessmetrics Benchmark,"As I am not competing for a prize in the contest, I thought it would be helpful for me to describe in detail the methodology used for the Chessmetrics Benchmark. That description is in the attached PDF file. I plan to do this as well for the other benchmarks I submit, although I expect those will be much simpler to describe. EDIT: Based on a couple of typos that people found, and also a clarification needed regarding the connected pool, I have updated the PDF to version 2. Changes are in big ugly bold red.",0,None,41 ,Fri Aug 27 2010 11:12:35 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/87,/competitions/chess,137th /lt2062,Mistake in the example evaluation?,"Hi everyone! I took a closer look at the example evaluation ( http://kaggle.com/chess?viewtype=evaluation ). I suspect a mistake in table 2 in the calculation of the squared error for month 101, player 1. The total predicted score for player 1, month 101 is 0.18+0.35=0.53. The actual score is 2. Therefore the error is 1.47. For 1.47^2, I get 2.1609, which is not the 2.18 we see in the table. What's wrong?",0,None,1 Comment,Sat Aug 28 2010 21:28:40 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/89,/competitions/chess,60th /iankorf,Elo simulations... not so good,"We have had several foosball (table football) tournaments at work, and we have used the Elo system as a way of ranking players. Looking at the rankings, it was clear to me that several people were not ranked correctly. So I decided to do some simulations. I gave each player a ""skill level"" that determined how often they scored goals. Games were the first to 10 points. I used two sets of players: (1) everyone has the same skill level of 0.5; (2) skill levels between 0.0 and 1.0 were distributed evenly among players. The idea was that a good ranking system should show that all players in group (1) have similar rankings and players in group (2) are ranked in the same order as their given skill level. This is not what happened. Players in group (1) had incredibly different rankings, and some were predicted to win by 80% or more. This is despite the fact that almost every player had the exact same win:loss ratio over millions of games. In group (2), the rank order of the players was grossly correct, but specifically flawed. So, in the best case scenario, where data is plentiful, Elo doesn't work very well. I would therefore not expect it to work in a realistic setting when data is sparse. The reason I suspected that Elo wasn't working was that we had 2 leagues, an open league and a women's league.
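A compact re-creation of the equal-skill half of this experiment takes only a few lines. The K-factor, pool size, and game count below are assumptions standing in for the original (unpublished) simulation code:

```python
import random

K = 32  # assumed K-factor; the original simulation's value isn't stated

def play_game(skill_a, skill_b):
    """First to 10 goals wins; each goal falls to A with probability
    skill_a / (skill_a + skill_b). Returns A's score: 1 win, 0 loss."""
    a = b = 0
    while a < 10 and b < 10:
        if random.random() < skill_a / (skill_a + skill_b):
            a += 1
        else:
            b += 1
    return 1.0 if a == 10 else 0.0

random.seed(1)
skills = [0.5] * 20          # group (1): everyone equally skilled
ratings = [1500.0] * 20
for _ in range(200_000):
    i, j = random.sample(range(20), 2)
    score = play_game(skills[i], skills[j])
    expected = 1.0 / (1.0 + 10.0 ** ((ratings[j] - ratings[i]) / 400.0))
    ratings[i] += K * (score - expected)
    ratings[j] += K * ((1.0 - score) - (1.0 - expected))

print(round(min(ratings)), round(max(ratings)))  # spread stays wide
```

With a fixed K, the ratings keep random-walking around the (identical) true strengths, which is one way to reproduce the wide spread described above.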
The women at work do not play as much foosball, and are therefore generally not as skilled. But a couple of them did play a lot and were quite good. Because they beat all the other women, and rarely faced equal competition, their rankings were incredibly high. They were good, but not THAT good. It may be that such a situation does not occur in chess. I am not a chess player, and I don't know how matches are scheduled. So there may be some mitigating factors that make such situations rare. But even so, it's clear to me that one of the central flaws of the Elo system is that one just trades points back and forth between players.Let's say that the best soccer team in the US goes to play the worst soccer team in Europe and loses. We have not only learned something about those two teams, but both leagues as well. We would probably want to rank all of the European teams higher than the US teams (let's assume a sufficient number of games to be statistically significant). I believe that a ranking system should take into account all games at all times. I've done this, and I solved the foosball problem at work to my satisfaction. I also used the same system to rank various aircraft in World War II Online. Here, the Germans fight against the French and English. Since no countries fight against themselves, and the French and English never fight, the matrix of aircraft vs. aircraft is sparse. But my system worked well for that too. I am slowly adapting my ranking software for chess. There are a few issues that I haven't completely solved yet. For example, it appears that there are some differences in white vs. black skill level. So a single skill level may not be appropriate. How those skill levels result in wins, ties, and losses is something I have to experiment with. There's also the issue of having so many players and such a sparse player vs player matrix.Since I only work on this project at odd moments, it may be a while before I actually post some predictions.",0,None,12 ,Sun Aug 29 2010 00:21:43 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/90,/competitions/chess,None /jeffsonas,About the Elo Benchmark,"I am trying to create and document a number of ""benchmark"" systems that implement well-known approaches to chess ratings. This will give us a ballpark estimate of which approaches seem to work better than others, as well as a forum for discussion about ideal implementations of these well-known systems. I know that many people are going to be hesitant to share too much about their methodologies, since they are trying to win the contest. This is perfectly understandable, but on the other hand I think it is good to get some concrete details out there. Since I am not eligible to win the contest, there is no reason why I shouldn't share my methodology for building the benchmark systems. In this post, I have attached a writeup on my implementation of the Elo Benchmark.The Elo Benchmark was one of the first submissions made in the contest, and is well back on the leaderboard. I just now submitted another try, using the Chessmetrics ratings as my seed (initial) ratings, as well as a linear expectancy model for predictions, and it did a lot better, but still is in no danger of finishing very high on the leaderboard. It will be very interesting to see whether any of the winners are fundamentally based on the Elo system, and what they did to improve so significantly upon the benchmark. 
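In outline, a ""pure Elo"" pass over the training months looks like the sketch below; the per-game update order, K=24, and the 1500 default for unseeded players are assumed placeholder choices rather than the benchmark's documented settings:

```python
def run_elo(games_by_month, seed_ratings, k=24.0):
    """games_by_month: iterable of months, each an iterable of
    (white, black, white_score) tuples with white_score in {1, 0.5, 0}.
    Returns the ratings after replaying the whole training period."""
    ratings = dict(seed_ratings)
    for month in games_by_month:
        for white, black, score in month:
            rw = ratings.get(white, 1500.0)  # assumed default for unseeded players
            rb = ratings.get(black, 1500.0)
            expected = 1.0 / (1.0 + 10.0 ** ((rb - rw) / 400.0))
            ratings[white] = rw + k * (score - expected)
            ratings[black] = rb + k * ((1.0 - score) - (1.0 - expected))
    return ratings
```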
Certainly if FIDE does eventually adopt the methodologies of any of these well-performing systems, the changes most likely to be generally accepted would be tweaks to the Elo system, rather than implementation of something drastic like a performance-rating-based list.",0,None,4 ,Sun Aug 29 2010 11:07:27 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/91,/competitions/chess,137th /philippemanuelweidmann,Only rating systems allowed?,"The Description page says: ""Competitors train their rating systems ..."" Does that mean that you have to use a rating system for prediction? If that is the case, it would be helpful to precisely define what a ""rating system"" is.",0,None,3 ,Mon Aug 30 2010 09:48:37 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/93,/competitions/chess,5th /nuttacha,Question about data,"I am new to data mining and the stock market, but I need to enter this competition as a project for a data mining course. I have some questions to ask. 1. Do different variables represent different stocks, such as Var1 = IBM, Var2 = Google, and Var3 = Oracle? 2. If yes, is the target variable of a row a sum of all stock prices in that stock market that will decrease or increase 60 minutes after that time? 3. Why do some variables have no information? Is the data unavailable, missing, or deleted? 4. Why do some variables have only Last? 5. I read in answers to other posts that most of the variables are numeric, including stock prices, sectoral data, economic data, experts' predictions and indexes, but if we don't know what the numbers represent, how can we create a good model? Thank you",0,None,5 ,Tue Aug 31 2010 04:31:34 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/94,/competitions/informs2010,92nd /del=9147e4df7f188a91,Decoding the timestamp,"Hi, Could you shed some light on how to decode the timestamp into a date? Thanks",1,None,5 ,Wed Sep 01 2010 09:50:03 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/95,/competitions/informs2010,None /datalev,confirmation No real price value?,"Hi, just to confirm: we only have the TargetVariable, which is either 1 or 0, but we do not have the stock price (for the predicted stock). Then it is impossible to use a time-series method, because it is impossible to know the auto-correlation or the trend of the target (stock price). Am I right?",0,None,1 Comment,Wed Sep 01 2010 23:04:31 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/96,/competitions/informs2010,12th /uriblass,a bug in the forum?,Only making a quick reply worked for me in the last post and I also could not edit that post to make spaces. I simply got some server error and I am trying to edit this post as a test. Edit: It seems that the problem does not repeat in this thread but I tried many times to post my last post in another thread and could not do it because of some server error.,0,None,9 ,Thu Sep 02 2010 08:56:32 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/97,/competitions/chess,6th /philippemanuelweidmann,Cross-validated RMSE significantly worse than public RMSE,"Like many others, I use cross-validation to check my prediction system locally.
I cut off months 96-100 and use months 1-95 to predict their results and calculate an RMSE. Lately, I've found that my local RMSEs computed by cross-validation are significantly and consistently worse than the public RMSEs from the same prediction systems, usually by at least 0.005, but sometimes by as much as 0.012. I find this to be quite strange since one should actually expect them to be better, given that almost 20% more data is available for the rating system to predict the future. Does anyone else experience the same phenomenon? Any possible explanations?",0,None,21 ,Thu Sep 02 2010 15:34:55 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/98,/competitions/chess,5th /mercator,Target Variable corresponds to which continuous variable,"I realize that we're trying to predict a stock's price 60 minutes in advance using a binary outcome. The actual result is in a column labeled ""TargetVariable"". But which of the other columns is ""TargetVariable"" based on? I need to know which column has the actual stock price that I'm trying to predict. Thanks, Andrew",0,None,3 ,Thu Sep 02 2010 19:52:41 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/99,/competitions/informs2010,None /jeffsonas,Share your experiences so far?,"Hi everyone, thanks for all of the activity so far! I was thinking about writing a news update for Chessbase on the contest, now that it has been running for almost exactly one month. My own efforts at a well-performing Elo Benchmark haven't done so well, but I expect that some of you are using a modified Elo system with much better results. Or maybe you've adapted an existing approach, or (even better) introduced something not previously tried on chess ratings. Even spectacular failures would be interesting! Of course, I don't expect anyone to reveal all the details of their methodology, for competitive reasons. But if there is anyone out there who wants to share (at a high level) who you are, what is your connection to chess and/or data predicting and/or rating systems, and the results of your participation so far, I would really appreciate it! So for instance: what has worked, what hasn't worked, what you have noticed, basically anything about your experience so far participating in the contest. Especially if you are in the top 10 or 20, I would love to know just a few words about your best-performing approach, so I could characterize what we have learned so far about what works best, and whether Elo is still King...things like ""modified Elo"", ""tweaked Elo with player-specific color bonus"", ""Glicko but with activity bonus""... anything you want to share would be great! With the understanding that anything you share on this particular forum topic is fair game for me to include (exactly, or paraphrased) in a subsequent news article on Chessbase, sometime during the rest of 2010. Thanks!!",0,None,4 ,Thu Sep 02 2010 23:25:23 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/100,/competitions/chess,137th /jeffsonas,About the PCA Benchmark,"I am trying to create and document a number of ""benchmark"" systems that implement well-known approaches to chess ratings. This will give us a ballpark estimate of which approaches seem to work better than others, as well as a forum for discussion about ideal implementations of these well-known systems.
I know that many people are going to be hesitant to share too much about their methodologies, since they are trying to win the contest. This is perfectly understandable, but on the other hand I think it is good to get some concrete details out there. Since I am not eligible to win the contest, there is no reason why I shouldn't share my methodology for building the benchmark systems. In this post, I have attached a writeup on my implementation of the PCA Benchmark. The PCA ratings are historically important, as the only rating system with much notoriety in the past 20 years (other than Elo) that has been applied to international grandmasters on an ongoing basis, with rating lists published every month for several years. However, in implementing this system I quickly realized that the training dataset is not large enough to use the standard 100-game approach, and so I had to resort to a 15-game approach that I knew wouldn't work nearly as well. For Elo you can experiment with different K-factors in order to make the system more or less responsive, whereas for the PCA system you should be able to do something analogous with the game count (e.g., 50 to make it more dynamic, 200 to make it more conservative) but here this is not an option because of having insufficient data. If I can manage to produce a larger dataset it will be very interesting to run the PCA method and see how it does relative to Chessmetrics or other well-performing systems identified during this contest. In any event, I did the best I could for this significant rating system approach, and certainly there are some important concepts illustrated by this system that you won't find elsewhere in the other benchmarked systems. So I encourage people to read through it, if you are looking for ideas. In particular the approach to performance rating for a non-linear expectancy distribution is quite interesting. Special thanks to Ken Thompson for sending me the C code for his implementation of the PCA system. Kind of surreal to get that, sort of like having Richard Feynman send you a Feynman Diagram illustrating something...",0,None,1 Comment,Sun Sep 05 2010 12:53:18 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/101,/competitions/chess,137th /pallavsarma,Scaling of predicted target variable,"Since the predicted target can be any real number, how are the predictions scaled (if they are scaled at all) before comparing to the true target variable? Thanks",0,None,3 ,Tue Sep 07 2010 19:10:34 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/102,/competitions/informs2010,34th /zhongliu,A couple of questions about the data,"I apologize in advance if some of the following questions have been asked or answered already. Here is a list of my questions. 1. The target variable is the movement (decrease/increase) of a certain stock within the next 60 minutes. Are the predictors open/high/low/last of other stocks from the previous 60 minutes? If not, we cannot use them to predict. Right? 2. In the training dataset, many variables (variable142, variable154) have the same integer values for open/high/low/last. Are those valid values, or can we simply treat them as missing? 3. What is the interval of the time stamps? Every minute? Are they continuous, or could there be some gap between two consecutive time stamps? 4. Is there any specific reason, besides the timing issue, why only a 10% random sample from the testing dataset is used to check the model performance?
The whole testing dataset is not very big (<3000 rows). Thank you very much!",0,None,2 ,Tue Sep 07 2010 20:54:36 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/103,/competitions/informs2010,112th /onemillionmonkeys,Why Aggregate by Month?,Is there a good reason why the evaluation metric involves aggregating players' results by month? It seems like an unnecessary complication.,0,None,11 ,Fri Sep 10 2010 01:36:27 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/104,/competitions/chess,38th /onemillionmonkeys,Annoying submission issue,"I was stumped for a while trying to make my first submission. I was getting an error to the effect that my file should be in CSV format. Since my file was in CSV format, as far as I could tell, I didn't know how to proceed. I finally figured out that the *filename* has to end in "".csv"". For a Unix person, this is non-obvious - not sure if this would be more intuitive to a Windows person. I would suggest you not impose any requirements on the filename. Failing that, I would suggest you make your error message more explicit. Thanks.",0,None,1 Comment,Sat Sep 11 2010 03:12:26 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/105,/competitions/chess,38th /fredwu,coefficient of regression model,"Dear All, I have a question regarding the data structure. As I am new to predictive modelling, please point out if anything is incorrect. For stock prediction, other multivariate techniques might be used to predict whether the target price moves up or down, without building a regression model. However, if someone builds a regression model on the other predictor stocks without knowing their names, as in other competition data, how do we know that the sign of each variable's coefficient points in the direction we expect? Thanks, G",0,None,7 ,Mon Sep 13 2010 06:22:51 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/106,/competitions/informs2010,None /jeffsonas,About the Glicko Benchmark,"I am trying to create and document a number of ""benchmark"" systems that implement well-known approaches to chess ratings. This will give us a ballpark estimate of which approaches seem to work better than others, as well as a forum for discussion about ideal implementations of these well-known systems. I know that many people are going to be hesitant to share too much about their methodologies, since they are trying to win the contest. This is perfectly understandable, but on the other hand I think it is good to get some concrete details out there. Since I am not eligible to win the contest, and I am following publicly available descriptions, there is no reason why I shouldn't share my methodology for building the benchmark systems. In this post, I have attached a writeup on my implementation of the Glicko Benchmark. Actually this system did not require much description, since the inventor (Mark Glickman) has already provided excellent instructions on his website. I mostly just referenced those instructions within my PDF. I was actually surprised this system didn't do better in the standings; I thought it might place very high.
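The prediction step that distinguishes Glicko from plain Elo is worth seeing in miniature: the opponent's rating deviation attenuates the effective rating difference. A minimal sketch following Glickman's published formulas:

```python
import math

Q = math.log(10) / 400.0

def g(rd):
    """Attenuation factor for an opponent whose rating deviation is rd."""
    return 1.0 / math.sqrt(1.0 + 3.0 * Q * Q * rd * rd / (math.pi ** 2))

def expected_score(r, r_opp, rd_opp):
    """Glicko expected score for a player rated r against an opponent
    rated r_opp whose rating deviation is rd_opp."""
    return 1.0 / (1.0 + 10.0 ** (-g(rd_opp) * (r - r_opp) / 400.0))

# A 100-point favorite is a weaker favorite when the opponent's rating
# is uncertain:
print(round(expected_score(1600, 1500, 30), 3))   # ~0.64
print(round(expected_score(1600, 1500, 300), 3))  # ~0.60
```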
I am realizing more and more that perhaps the Chessmetrics approach has a significant advantage over systems like Elo, PCA, or Glicko, in that it allows us to re-interpret the strength of your opponents, based on their subsequent results after you played them. Perhaps ratings are just too imprecise to justify discarding all that useful information about how a player subsequently did after you played them. I'm still talking about only using information from the past when calculating the present rating for a player; it's just that we are using games from the recent past to reinterpret the meaning of a game from the distant past.",0,None,5 ,Mon Sep 13 2010 13:09:57 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/107,/competitions/chess,137th /jeffsonas,About the Glicko-2 Benchmark,"I am trying to create and document a number of ""benchmark"" systems that implement well-known approaches to chess ratings. This will give us a ballpark estimate of which approaches seem to work better than others, as well as a forum for discussion about ideal implementations of these well-known systems. I know that many people are going to be hesitant to share too much about their methodologies, since they are trying to win the contest. This is perfectly understandable, but on the other hand I think it is good to get some concrete details out there. Since I am not eligible to win the contest, and I am following publicly available descriptions, there is no reason why I shouldn't share my methodology for building the benchmark systems. In this post, I am describing my implementation of the Glicko-2 Benchmark. There is very little to say here that I didn't already say for the Glicko system, under the ""About the Glicko Benchmark"" posting. There is an additional ""volatility"" parameter tracked for each player under Glicko-2. I found that the identical predictive model to Glicko worked well, and I used values of Tau=0.5 and Initial Volatility=0.6. It performed better than Glicko, by a small amount. Again, the system was very easy for me to implement because it was well documented by the inventor (Mark Glickman) here: http://math.bu.edu/people/mg/glicko/glicko2.doc/example.html EDIT: After some initial submissions where I let the Glicko system start its own rating pool from Month 1, I decided to try an approach where I used the Chessmetrics 48-month ratings as the initial ratings, and then started Glicko-2 running at Month 49 instead. This was with the formula for initial RD of 132/SQRT(TotalWeightedGames) + 25, for everyone who would thereby have an RD <= 350. This performed significantly better.",0,None,4 ,Mon Sep 13 2010 21:17:04 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/108,/competitions/chess,137th /seyhan,AUC calculation and its actual accuracy,"Hi, I built a model on the training dataset (with 10-fold cross-validation for testing) and got 91% AUC on the training/testing of the model. But when I upload the scoring of the result dataset produced by the same model, I receive 67% accuracy on the scored data. I understand that the AUC accuracy of 10% of the score dataset is shown on the website. I suspect that either the model does not score very well on unknown data, or the AUC accuracy of the total result dataset may be different (in this case very different) from what is shown on the website.
Does the 10% represent the first ten percent of the data, or is it a random sample used for the overall AUC? Regards, Seyhan",0,None,2 ,Tue Sep 14 2010 02:24:30 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/109,/competitions/informs2010,56th /tjohnson314,Arrow's Impossibility Theorem,"Arrow created a list of axioms that an election must satisfy to be fair, and showed that no voting system could simultaneously fulfill all of them. http://en.wikipedia.org/wiki/Arrow%27s_impossibility_theorem That's about the extent of my understanding, but I know that there are people here who have studied math for a lot longer than I have. I've been wondering: does Arrow's Impossibility Theorem apply to chess rankings as well?",0,None,1 Comment,Wed Sep 15 2010 20:12:14 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/110,/competitions/chess,25th /haraldkorneliussen,Has anyone used Bayeselo?,"Hi, As it happened, I had read a couple of very interesting papers on this topic, so I originally planned to make an attempt. However, I see now that I don't have the time (and honestly, I forgot about the whole thing), so I thought I'd share what I found in the hope that someone else can make use of it. The French computer scientist Remi Coulom, well-known for his work on the Go program Crazy Stone, has also written about the topic of Elo estimation. He invented an approach he called Whole History Rating, which according to his results gives better predictions than (traditional) Elo, Glicko, TrueSkill, and decayed-history algorithms. http://remi.coulom.free.fr/WHR/ He has also written a program that estimates Elo scores in a Bayesian manner (this program does not, as far as I know, implement the method described in the WHR paper). http://remi.coulom.free.fr/Bayesian-Elo/ I tipped him off about this competition, and he does not intend to participate. Since Bayeselo is open source, and the WHR paper is published, I think it would be permissible for participants in this competition to use both.",1,None,6 ,Wed Sep 15 2010 21:30:27 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/111,/competitions/chess,None /britvich,Alternatives to Month-Aggregated RMSE,"I did some testing of two alternatives to month-aggregated RMSE: game-by-game log-likelihood and predictability, to see if these prediction evaluators more accurately reflect the prediction ability of different rating systems. The details are in the attached PDF...",0,None,26 ,Thu Sep 16 2010 00:57:29 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/112,/competitions/chess,29th /parijat,Query regarding data,"Hi, I guess I am quite late for the contest, but I am still giving it a try. My question: is the target variable similar to some kind of a sensitive index? Is it a function of all other stock prices? Thanks, Parijat",0,None,3 ,Thu Sep 16 2010 10:26:34 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/113,/competitions/informs2010,None /jorgealvarado0,Group joining,"Is group joining allowed in order to make linear combinations of groups?
If so, what should we do to formalize it?",0,None,3 ,Fri Sep 17 2010 23:00:22 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/114,/competitions/tourism1,16th /jorge2865,why is the distribution of test data very different from the training data,"Hello all, I have observed the following problem: I divided the training set into two parts (80% train and 20% test) and developed my algorithm on the train part (80%), obtaining an average error of 0.51 on the test part (20%). But when I run the algorithm on the real test data (the web data, 7809 games), I get an error of 1.18!! Is the test data structure not the same as the train data structure? And if so, what good is the test data? Thank you",0,None,8 ,Mon Sep 20 2010 23:14:11 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/115,/competitions/chess,180th /jeffsonas,Cross Validation Dataset,"For purposes of analysis, it would certainly be desirable for the 5-month test set and the final 5 months of the training set to have similar characteristics. However, that approach would have brought a significant problem. There are always new players entering the pool of active players. Many of the games being played in any given month will include players that have either never played previously, or only played a few games previously. Thus the score of any given submission would depend heavily on the treatment of these ""unrated"" or ""provisional"" players, although they were not intended to be the focus of the competition; the focus was supposed to be on players with a reasonably established history of games. Therefore the decision was taken (during preparation of final datasets for the contest) to apply a filter to the games in the test set, so that it would focus primarily on more established players. Without filtering, the test set would have contained 18,739 games, whereas application of the filter resulted in a test set with only 7,809 games instead, and that was used as the final test set. All players included in the test set had played at least 12 ""fully rated"" games in the final 48 months of the training set, where a ""fully-rated"" game is one in which both players already have a FIDE rating at the time of the game. Some participants have expressed the desire to apply the same filtering methodology to the final 5 months of the training set, in order to perform appropriate cross-validation. However, with the previously available data, it is not possible for participants to recreate the logic of the filtering criteria on their own. This is because the filtering criteria included knowing whether players possessed official FIDE ratings at the time they played games in the training dataset, and this information was not included in the training dataset. So we have decided to provide a ""cross validation dataset"", a filtered set of the final 5 months of the training dataset that retains the characteristics of the test set as much as possible. This is because the cross validation dataset uses filters analogous to those that were originally applied to the test set. That is, all players included in the cross validation dataset had played at least 12 ""fully rated"" games in months 48-95 of the training set, where a ""fully-rated"" game is one in which both players already have a FIDE rating at the time of the game. So for instance, in the original training set, there are 2,216 games included for Month 98.
Some of these games involved one or two brand-new players, and some of the games involved one or two ""provisional"" players whose ratings would be quite uncertain because they had only played a few games previously. And some of the games were between established players, who had played in at least 12 ""fully-rated"" games in the training set across the 48 months prior to Month 96 (i.e. months 48-95). Application of these filters to the full set of 2,216 training games played during Month 98 reveals that only 533 of the games played in Month 98 would pass the filters, and thus the ""cross validation dataset"" now contains exactly those 533 games for Month 98. The entire cross validation set (which only includes months 96-100) contains 28% of the games from the training set for months 96-100, or a total of 3,184 games.We have made this cross validation dataset available on the ""Data"" page. We expect that it can be productively used for cross validation where Months 96-100 are treated like the test set.",0,None,25 ,Tue Sep 21 2010 02:48:54 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/117,/competitions/chess,137th /philippemanuelweidmann,Plateau?,"Has anyone else noticed that we appear to have hit a performance plateau?The top position of the leaderboard has not improved for an incredible 3 weeks, while I myself would have kept my 4th place with a submission from 1 week ago. The number of people who have beaten the Chessmetrics benchmark barely changes at all (it only went from 8 to 9 in the past week).Remembering the end of August, where I would wake to find myself pushed back 5 places from where I had been when I went to sleep, this is quite a change.And all this in spite of the fact that most people in the top 10 appear to submit every single day!Almost every day, I try out a new idea, often with substantial improvements in local validation. But on the public list, I creep forward at a pace of about 0.0001/day. It appears that others are experiencing the same thing.Any explanations?",0,None,4 ,Tue Sep 21 2010 11:31:04 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/118,/competitions/chess,5th /jasontrigg,Naive Baseline,"Just want to check that my formatting is right here - when I set the prediction for month n+1,n+2,...n+24 = value at month n and quarter q+1,...q+8 = value at q, I get MASE of 4.40695 - anyone else getting that?",0,None,2 ,Thu Sep 23 2010 07:08:55 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/120,/competitions/tourism2,17th /philippemanuelweidmann,The Cross-Validation Score/Public Score Correlation Thread,"In the past two weeks, it has seemed to me that finding a good way of locally predicting public performance changes is more important than making systematic breakthroughs. 
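For anyone wiring up such local checks, the month-aggregated RMSE from the evaluation page reduces to a few lines; the row layout here (one record per player per game) is a hypothetical choice, not a prescribed file format:

```python
import math
from collections import defaultdict

def month_aggregated_rmse(rows):
    """rows: iterable of (month, player, predicted_score, actual_score),
    one record per player per game. Scores are summed per player-month
    before the squared error is taken, as on the evaluation page."""
    totals = defaultdict(lambda: [0.0, 0.0])
    for month, player, predicted, actual in rows:
        totals[(month, player)][0] += predicted
        totals[(month, player)][1] += actual
    errors = [(p - a) ** 2 for p, a in totals.values()]
    return math.sqrt(sum(errors) / len(errors))

# e.g. a player predicted at 0.18 + 0.35 who actually scores 2/2 in a
# month contributes (0.53 - 2) ** 2 = 2.1609 to the mean.
```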
I expect that most of the top 10 will have very solid systems by now, with several parameters that give vastly different scores when tuned. Now that thanks to Jeff we have a standardized cross-validation dataset available (on [Link]:http://kaggle.com/chess?viewtype=data) I think it is time to investigate correlation between cross-validated scores and public scores to see whether cross-validation is worthwhile at all or whether we're better off relying on intuition and public scores. I use the cross-validation dataset to calculate two local scores:
1. The RMSE of months 96-100, as described on [Link]:http://kaggle.com/chess?viewtype=evaluation (""RMSE"")
2. The sum of squared errors of all games in months 96-100 (""Score Deviation""), without any accumulation by player or month
Yesterday and today, I have uploaded three predictions, which gave the following results:
1. Standard prediction, roughly equivalent to my current best approach though with slightly different parameters - Public RMSE: 0.658927, Cross-validated RMSE: 0.587583, Cross-validated Score Deviation: 353.758845
2. Engine from (1.), parameters optimized for best cross-validated RMSE - Public RMSE: 0.665807, Cross-validated RMSE: 0.581893, Cross-validated Score Deviation: 348.038198
3. Engine from (1.), parameters optimized for best cross-validated Score Deviation - Public RMSE: 0.671451, Cross-validated RMSE: 0.584796, Cross-validated Score Deviation: 346.815002
Needless to say, the data is highly discouraging. It would appear that there isn't any substantial correlation between cross-validated scores and public scores at all. Of course, though, three data points are not the end of the story. That's why I would like to encourage everyone to post their own cross-validated scores along with the corresponding public scores to this thread. Everyone will profit from the results we gather, in either one of two ways:
1. If we find that there really is no correlation, we can simply stop cross-validating, and search for better approaches to local validation
2. If we find that there is a correlation after all, those whose own correlations are weak (as mine seem to be) are probably overfitting, and should reduce the number of parameters in their system
Let's hope this allows us to overcome the current plateau at last! Cheers, Philipp",0,None,26 ,Thu Sep 23 2010 09:29:55 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/121,/competitions/chess,5th /gregwerner,Past Submission Links Broken?,"I am getting dead links to all of my .csv submission files before today. I just submitted my seventh submission which I can retrieve, but my first six are not accessible.",0,None,2 ,Thu Sep 30 2010 05:26:29 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/122,/competitions/tourism2,12th /wilmahan,"Hi, I'm writing a new chess server","Hi all, For a few months now I've been working on a free and open source chess server modeled after freechess.org (FICS). So when I read about this competition I was immediately interested. As some might know, freechess.org currently uses Glicko ratings, and that system has worked well for years. Unfortunately, it seems I probably won't be able to use the winner of this competition as a drop-in replacement for Glicko.
The problem is that players on FICS expect their rating to be updated immediately after a game (in fact, they are informed of prospective changes before each game; for example, ""Your lightning rating will change: Win: +3, Draw: -5, Loss: -14""). Based on my brief investigation of Chessmetrics, I think it isn't designed to allow such instant updates, although I would be happy to be wrong about that. (Incidentally, Glicko-2 has a similar drawback; it groups games into ""rating periods"" considered to take place simultaneously.) I don't think a rating period of a day or more, much less a month, is feasible on an online server, where users expect instant feedback. So my question is, does anyone have a system that outperforms Glicko that can provide this sort of instant updating? Is such a thing even possible? To be honest, Glicko seems close to ideal for my needs, but I thought it couldn't hurt to search for something better. I expect many people in this forum have thought about this subject more than I have, so any hints or comments would be welcome.",0,None,11 ,Thu Sep 30 2010 22:46:59 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/123,/competitions/chess,None /aarthimanoharan,More Details on Data,"It would be good if we knew what kind of data we have. All we have are 793 series with no date variable, and no details about the region they are given for. We want to know if we can use causal variables, for which we need to know some details about the region. Can we get that data?",0,None,6 ,Fri Oct 01 2010 11:01:29 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/124,/competitions/tourism2,None /dejavu,only last submission counts?,I apologize if this question has already been answered. Does only the last submission count?,0,None,8 ,Sat Oct 02 2010 18:13:18 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/125,/competitions/informs2010,1st /arimaagame,Error in Table 2 row 1,On the page describing the evaluation process ( [Link]:http://kaggle.com/chess?viewtype=evaluation ) should the 'Squared Error' value in Table 2 row 1 be 2.16 instead of 2.18?,0,None,4 ,Sat Oct 02 2010 19:18:32 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/126,/competitions/chess,40th /arimaagame,Checking RMSE calculation,"Just to check that I am calculating RMSE correctly: if you predict draws for all the games in the cross_validation_dataset.csv file and calculate RMSE using all the games, do you get a value of 0.6554? I did a test and predicted draws for all the games in the test_data.csv file. It got an RMSE value of 0.7921.",0,None,5 ,Sun Oct 03 2010 06:36:53 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/127,/competitions/chess,40th /arimaagame,Game results for test data after contest,"Jeff, can you provide the test_data.csv file with the game results column added after this contest is over? That will allow us to verify our final RMSE. Thanks.",0,None,3 ,Mon Oct 04 2010 16:04:23 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/128,/competitions/chess,40th /dejavu,attending INFORMS,Just curious how many contest participants plan to attend.
I will be there.,0,None,1 Comment,Wed Oct 06 2010 19:00:20 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/129,/competitions/informs2010,1st /louisduclosgosselin,Invitation to the INFORMS Data Mining Contest Special Session!,"Dear All, All of the INFORMS Data Mining Section crew invite you to join us at the INFORMS Annual Meeting - Austin, Texas, November 7-10, 2010. At this event, there will be an INFORMS Data Mining Contest Special Session on Tuesday Nov 09, 08:00 - 09:30. In this session, we will present the INFORMS Data Mining Contest results and the methods used by the top competitors. We will also give the commemorative Awards/Plaques to the top three competitors (general ranking) and to the best competitor who did not use future information. If you will be at this event, send me an e-mail at [Link]:mailto:louis.gosselin@hotmail.com. Moreover, we invite you to join us at our Data Mining Section Business Meeting, 6:15pm-7:15pm Sunday night; it should be in the Hilton. In addition, don’t miss all the other Data Mining Sessions of the event ( [Link]:https://informs.emeetingsonline.com/emeetings/formbuilder/clustersessionlist.asp?clnno=2377&mmnno=196). It will be nice meeting you. Thanks a lot. Let's keep in touch. I am looking forward to hearing from you. Best regards. Louis Duclos-Gosselin Chair of INFORMS Data Mining Contest 2010 Applied Mathematics (Predictive Analysis, Data Mining) Consultant at Sinapse INFORMS Data Mining Section Member E-Mail: Louis.Gosselin@hotmail.com http://www.sinapse.ca/En/Home.aspx http://dm.section.informs.org/ Phone: 1-866-565-3330 Fax: 1-418-780-3311 Sinapse (Quebec), 1170, Boul. Lebourgneuf Suite 320, Quebec (Quebec), Canada G2K 2E3",0,None,3 ,Thu Oct 07 2010 13:49:30 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/130,/competitions/informs2010,None /dirknbr,Revised file,"I am getting an import error with the revised file. Error: new-line character seen in unquoted field - do you need to open the file in universal-newline mode? What system was this file created on: Unix or Windows? Dirk",0,None,2 ,Fri Oct 08 2010 11:10:08 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/132,/competitions/tourism2,32nd /louisduclosgosselin,And the winner is ...,"Dear All, I am pretty proud to announce the following top 3 winners from the overall ranking: 1) Cole Harris from DejaVu Team 2) Christopher Hefele from Swedish Chef Team 3) Nan Zhou from Nan Zhou Team The top 3 winners from the “not using future information” ranking will follow in a couple of days, after asking all competitors whether or not they used future information. In brief, in the INFORMS Data Mining Contest 2010 there were: -893 participants -147 competitors who submitted their solutions -28 496 visits on the competition website We will give the commemorative Awards/Plaques to the top 3 competitors (overall ranking) and to the best competitor who did not use future information at the INFORMS Data Mining Contest Special Session at the INFORMS Annual Meeting - Austin, Texas, November 7-10, 2010. If competitors can’t be there, we will send the commemorative Awards/Plaques by mail. Moreover, we are writing an article about the competition’s results. We will share this article on this forum soon. Thank you all! It was a wonderful challenge! The most eminent Data Miners of the planet fought for the victory ;)!
A similar challenge will be launched next year for the INFORMS Data Mining Contest 2011. P.S.: Don’t forget to send us your abstract about the methods/techniques you used (louis.gosselin@hotmail.com). P.P.S.: Thanks to my sponsors, organizing team members and to Kaggle for making this competition happen! Thanks a lot. Let's keep in touch. I am looking forward to hearing from you. Best regards. Louis Duclos-Gosselin Chair of INFORMS Data Mining Contest 2010 Applied Mathematics (Predictive Analysis, Data Mining) Consultant at Sinapse INFORMS Data Mining Section Member E-Mail: Louis.Gosselin@hotmail.com http://www.sinapse.ca/En/Home.aspx http://dm.section.informs.org/ Phone: 1-866-565-3330 Fax: 1-418-780-3311 Sinapse (Quebec), 1170, Boul. Lebourgneuf Suite 320, Quebec (Quebec), Canada G2K 2E3",0,None,49 ,Sun Oct 10 2010 04:39:04 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/133,/competitions/informs2010,None /dirknbr,missing maintainer,"I just looked at the test data, and between rows 2829 and 2847 the maintainer is missing. Sophie is also changing her email within the same package - why?",0,None,1 Comment,Sun Oct 10 2010 13:20:44 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/134,/competitions/R,14th /johnmyleswhite,Dealing with Messy Data,"Several people have pointed out various flaws in the data that we've released. We'd like to address these now before contestants start to worry. There are a variety of duplicate rows in the data we've provided: see, for example, the rows in 'installations.csv' pertaining to users with the package 'fuzzyOP' installed. There are also missing entries: see, for example, the rows in 'maintainers.csv' for the package 'brainwaver'. This is information that was either not present on CRAN or too difficult for us to parse during our first pass through the package source code. Hopefully it won't upset the sensibilities of the contestants to say this, but we see this messiness as a virtue rather than a vice: an algorithm that isn't robust to imperfect data could never be used in the wild as the backend for a recommendation system. You should use your own judgment to decide how to address imperfections in the data. Treat the duplications as you see fit. And address missing data using whatever tools you'd like, whether by acquiring the information directly or using statistical missing data tools to impute a reasonable substitution.",0,None,1 Comment,Sun Oct 10 2010 14:50:22 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/135,/competitions/R,None /dirknbr,depends,"The depends file has some line breaks after >=; can you correct this?",0,None,1 Comment,Mon Oct 11 2010 10:58:01 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/136,/competitions/R,14th /ricotero,which submission was the best?,"Hi, I'd like to know if there is a chance to see which of my submissions was the best. Regards, Ricardo Otero",0,None,2 ,Tue Oct 12 2010 00:30:45 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/137,/competitions/informs2010,37th /jeffsonas,Suggestions for the next chess ratings contest?,"Hi everyone, we are approaching the final month of the contest, which will end on November 14th. I have been, and continue to be, amazed by the level of participation so far. I had no idea so many people would participate.
For the past dozen years I have found chess-related statistics to be a fascinating topic and apparently I am not alone! I look forward to the final results of the contest and learning the details from anyone who is willing to share their methodology.Certainly this contest has been a learning experience for me; I had never done anything like it before and I'm sure there is a lot of room for improvement. Over the course of the various forum threads, I have seen a lot of (mostly constructive) criticism about the contest, and so I wanted to take the chance to focus that energy as productively as possible. I would like to do a follow-up contest, and I would like the contest to be better, wherever possible. So... do you have any suggestions?It may seem like I haven't spent as much time on the contest lately, but actually I have been working very hard on a related task - coming up with more data. I have two distinct sources of data (datasets provided to me by the FIDE database programmers, and the Chessbase historical game databases) and unfortunately the only way to come up with suitable data for our needs in this contest is to manually reconcile the differences in tournament name spelling, player name spelling, and reported game outcomes, between the two sources. Across thousands of tournaments and millions of games. Although it is a daunting task, I have tried to be clever where possible and let the database do the heavy lifting. It is going quite well. Compared to the current ongoing contest, I anticipate having at least 5x the training data and at least 10x the test data (probably even more).Thanks to improvements in how FIDE has collected the data in recent years, there is a 20-30 month period at the end that I would like to use for the test dataset, since there is so much data. For the current contest, I didn't want to make the results of the test games available, because then people could cheat and simply submit those results as their predictions. I couldn't split the test dataset into two parts because there just wasn't enough of it. But keeping the test results secret meant that ratings would get increasingly stale as we moved further away from month 100. In the original contest design I decided that month 105 was as far as I was willing to go for this.However, if we are in possession of a very large dataset across those final 20-30 months, it seems that I could randomly split up the test dataset into two disjoint sets, and use one of them (S1) as the test dataset, and one of them (S2) as the final months of the training dataset. A drawback is that this would allow people to ""use the future to predict the past"" by, for instance, using a player's results across months 104-115 from S2 in order to predict a rating for the player going into month 104, and therefore make a more accurate prediction of the player's results in month 104 of the test dataset S1. I don't want people to do this, because it is not useful toward developing an ideal ""real-world"" rating system, but perhaps this could be enforced informally rather than being built into the design of the contest. There is a huge upside, in that the test set can stretch for a longer duration. People could use a player's results (for instance) across months 104-115 from S2 in order to predict a rating for the player going into month 116, and therefore make a more accurate prediction of the player's results in month 116 of the test dataset S1. 
In other words, ratings don't need to get stale and we can use a significantly longer test period, such as 20-30 months. So I am currently planning to keep pushing forward on this data cleaning effort over the next few weeks, and then to start a second contest after the current one finishes, with significantly larger datasets. I would still keep player identities a secret, still exclude some small fraction of players and some small fraction of games (to keep people from looking up real results after identifying players). And I would split up the data from the final 30 months so that half of it is training data and half of it is test data. I will still need some sort of filter for that final period so that provisional players don't dominate the test set, and therefore we would still need a ""cross-validation training dataset"" for the final 30 months so that you can do cross-validation on a similar dataset. Presumably in this case the cross-validation training dataset would be more similar to the test dataset than in the current contest. But in any event I would build these transparently from the very start, instead of having to add them in mid-stream. And finally, I still need an evaluation function such as the RMSE we are currently using, or something better, if someone can convince me it is better. Any ideas? This is your big chance to make the next contest better, so please take the opportunity to share your thoughts now!",0,None,31 ,Tue Oct 12 2010 00:52:19 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/138,/competitions/chess,137th /louisduclosgosselin,ResultFile with TargetVariable values,"Dear All, For those who requested the ResultFile with TargetVariable values, I have attached the file to this post. It's much appreciated. Thanks a lot. Let's keep in touch. I am looking forward to hearing from you. Best regards. Louis Duclos-Gosselin Chair of INFORMS Data Mining Contest 2010 Applied Mathematics (Predictive Analysis, Data Mining) Consultant at Sinapse INFORMS Data Mining Section Member E-Mail: Louis.Gosselin@hotmail.com http://www.sinapse.ca/En/Home.aspx http://dm.section.informs.org/ Phone: 1-866-565-3330 Fax: 1-418-780-3311 Sinapse (Quebec), 1170, Boul. Lebourgneuf Suite 320, Quebec (Quebec), Canada G2K 2E3",0,None,13 ,Wed Oct 13 2010 13:57:11 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/140,/competitions/informs2010,None /louisduclosgosselin,Preliminary “not using future information ranking”.,"Dear All, According to the preliminary answers you gave us and to our analysis, this is the preliminary “not using future information” ranking. This is the first attempt. Feel free to correct us and indicate which of your submissions did not use future information. For the competitors with an N/A symbol for AUC, please tell us which of your submissions did not use future information, and we will update the file. I mean by using future information: -Using information from time0+i to make a prediction at time0. In consequence, not using future information means not using time t+ ... information to build the model, because this information will not be available in a real-time solution. -Using the information of the test set to build a better predictive analysis solution. In consequence, ""not using future information"" means ""scoring"" the test set with the model found on the training set. Congratulations to all! This was a nice challenge.
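Under that definition, a model is safe as long as every predictor for time t is built only from rows at t or earlier; with pandas this discipline is essentially one shift call. A minimal sketch, where the column names are hypothetical rather than the contest's actual field names:

```python
import pandas as pd

def lagged_features(df, cols, lags=(1, 2, 3)):
    """df: frame ordered by timestamp. Returns predictors for each row
    built only from strictly earlier rows, so no future information
    can leak into the model. Column names here are hypothetical."""
    out = pd.DataFrame(index=df.index)
    for col in cols:
        for lag in lags:
            out[f"{col}_lag{lag}"] = df[col].shift(lag)
    return out.dropna()

# Hypothetical usage:
# X = lagged_features(train, ["Variable74LAST", "Variable160LAST"])
# y = train["TargetVariable"].loc[X.index]
```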
Tell us more about (this is an important part of the knowledge discovery process ;)):
- Abstract (summarize the methods/techniques you used)
- Preprocessing (Replacement of the missing values?; Discretization?; Normalizations?; Grouping modalities?; Principal Component Analysis?; Other preprocessing techniques?)
- Feature selection (Feature ranking?; Filter method?; Forward / backward wrapper?; Embedded method?; Wrapper with search?; Other feature selection techniques?)
- Classifier (Decision Tree?; Gradient Boosted Decision Tree?; Random Forest?; Support Vector Machine?; Logistic Regression?; Discriminant Analysis?; Kernel Logistic Regression?; Multilayer Perceptron Neural Network?; RBF Neural Network?; Polynomial Neural Network?; Cascade Correlation Neural Network?; Bayesian Neural Network?; Other Neural Network?; Bayesian Network?; Markov techniques?; Naïve Bayes?; Nearest Neighbors?; Time series techniques?; Econometrics techniques?; Specialized financial techniques?; Other Classifiers?)
- Model selection (10% validation database?; K-fold or leave-one-out?; Out-of-bag?; Bootstrap?; Virtual leave-one-out?; Penalty-based?; Bi-level?; Bayesian?; Other cross-validation techniques?; Other model selection techniques?)
- RAM used to build the model?
- Parallelism (No?; In parallel?; Multi-computer?; Cloud computing?; Other?)
- Software Platform (C?; C++?; Java?; Matlab?; SAS?; R?; Other?)
- Software availability (Proprietary in-house software?; Commercially available in-house software?; Freeware or shareware in-house software?; Off-the-shelf third party commercial software?; Off-the-shelf third party freeware or shareware?)
- Operating system (Windows?; Linux?; Unix?; Mac?)
- Did you use future information? If yes, explain how.
- Did you make use of the result database for training?
Thanks a lot. Let's keep in touch. I am looking forward to hearing from you. Best regards. Louis Duclos-Gosselin Chair of INFORMS Data Mining Contest 2010 Applied Mathematics (Predictive Analysis, Data Mining) Consultant at Sinapse INFORMS Data Mining Section Member E-Mail: Louis.Gosselin@hotmail.com http://www.sinapse.ca/En/Home.aspx http://dm.section.informs.org/ Phone: 1-866-565-3330 Fax: 1-418-780-3311 Sinapse (Quebec), 1170, Boul.
Lebourgneuf Suite 320, Quebec (Quebec), Canada G2K 2E3 # in Not using future information ranking Team Name AUC # in overall ranking 1 ams2009 0.755014 39 2 jumper 0.734956 40 3 piaomiao 0.688508 41 4 Data Diggers 0.670962 42 5 Sooners 0.635784 43 6 tigertail 0.612293 6 7 IAD 0.597695 44 8 Narad 0.585651 45 9 Tidy 0.584686 46 10 PedroM 0.578073 47 11 chandv 0.575193 48 12 trapezoidal 0.573176 49 13 Montgomery 0.571047 50 14 The Straightrollers 0.560668 27 15 Evacuation Path 0.559818 51 16 Seyhan 0.556843 52 17 La Pata de Condorito 2010 0.556464 53 18 Nikesh 0.555747 54 19 Dirk Nachbar 0.554522 55 20 Fabien 0.554414 56 21 Nonsense 0.554379 57 22 musimians 0.554157 58 23 linkers 0.553485 59 24 PRPILS 0.552445 60 25 Olteanu And Roberts 0.55208 61 26 free 0.55045 62 27 pivot 0.549926 23 28 SURF 0.549812 63 29 cubsnsox 0.548104 64 30 IEORTools 0.548002 65 31 maomiw 0.547772 66 32 pyk 0.545954 16 33 Gilles 0.545831 67 34 Troae 0.54564 68 35 SimplestModel 0.544957 69 36 RTech 0.54204 70 37 Blue Devils 0.539623 71 38 Terran 0.539199 72 39 MultiAlgo 0.538621 73 40 Julioxa69 0.537271 74 41 TeamBad 0.536395 75 42 mjahrer 0.53591 76 43 Groovy 0.534866 77 44 kebert xela 0.531118 78 45 user1 0.530795 79 46 Naif_professor 0.529691 80 47 Joe.l.lin 0.52933 81 48 InflectionPoint 0.529228 28 49 LYA 0.529173 82 50 apmid 0.528144 83 51 dermcnor 0.527202 84 52 Joe 0.526728 85 53 fguillem 0.526322 86 54 W Team 0.526231 87 55 UC Berkeley 0.525411 88 56 prashant215 0.523908 89 57 closer 0.523151 90 58 ANDRUVILLA 0.522205 91 59 MonkeyWrenchGang 0.521439 92 60 Braddon 0.519717 93 61 null 0.519266 94 62 moe1 0.517455 95 63 Mission Impossible 0.51687 96 64 Team Cash 0.514578 97 65 image_doctor 0.513651 98 66 lynn 0.513465 99 67 Analytics360 0.513386 18 68 Parkville 0.513237 100 69 404 0.512652 101 70 GoF 0.512652 102 71 GnohZnutlll 0.512439 103 72 testname 0.512362 10 73 Moprhism 0.511837 104 74 Les fous du volant 0.511796 105 75 unsown 0.510536 106 76 NoFI 0.509391 107 77 JustForFun 0.508996 108 78 mikejs 0.508228 109 79 JohnChachy 0.507712 110 80 JMOJPD 0.507137 111 81 JavierV 0.506663 112 82 DME 0.505491 113 83 Team 0.50418 114 84 zqzir 0.504105 34 85 standard_methods 0.503738 115 86 H. 
Solo 0.502834 7 87 JAGC 0.502822 116 88 jtdggt 0.502585 117 89 Team3256 0.502585 118 90 bubac 0.502145 119 91 FJ_TEAM 0.501876 120 92 Xenon 0.50117 121 93 crossroad 0.500889 122 94 Elgin 0.500145 123 95 Barrabas 0.5 124 96 Yan Papadakis 0.499957 125 97 BrainTrader 0.498423 126 98 Luis Manuel Pulido Moreno 0.496925 127 99 shahrdar 0.494812 128 100 PAYALE 0.493665 129 101 Stat 0.489821 130 102 Solo 0.489342 131 103 investor 0.489047 132 104 overdrive 0.488663 133 105 hcj 0.488458 134 106 example only 0.488458 135 107 NYAlfred 0.488151 136 108 JF_TEAM 0.487353 137 109 Agnesios 0.476311 138 110 awc 0.475525 139 111 R2C 0.475196 140 112 Bodner Mining 0.466368 141 113 MiningMaster 0.461172 142 114 Cruncher 0.456775 143 115 bayesTrees 0.452813 144 116 trenderIy 0.450456 145 117 delta 0.44343 146 118 SAM2009 0.259414 147 N/A dejavu N/A 1 N/A Swedish Chef N/A 2 N/A Nan Zhou N/A 3 N/A sali mali N/A 4 N/A DayTrader N/A 5 N/A DataKiller N/A 8 N/A atom N/A 9 N/A xli N/A 11 N/A Knock N/A 12 N/A datalev N/A 13 N/A Timo Alan N/A 14 N/A Jiahan Li N/A 15 N/A MTech QROR N/A 17 N/A 3Sigma N/A 19 N/A Passionalytics N/A 20 N/A Nambiar N/A 21 N/A LikeSushi N/A 22 N/A Allen_Zhou N/A 24 N/A hackerdojo N/A 25 N/A rwrw N/A 26 N/A Soumik N/A 29 N/A PG Vijay N/A 30 N/A SuperCorn N/A 31 N/A MarketMaker N/A 32 N/A kkoo N/A 33 N/A leverw N/A 35 N/A Aryabhatta N/A 36 N/A Robert N/A 37 N/A 3idiots N/A 38",0,None,2 ,Wed Oct 13 2010 20:01:54 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/141,/competitions/informs2010,None /jhoward,Re:Re:Re:Re:What the data looks like,"I've created a JavaScript-based [Link]:http://jhoward.fastmail.fm/test/KaggleTimeSeries2/. As you'll see from the charts, there's quite a range of interesting patterns visible in the data. Let me know if you find this useful, or if you have any thoughts.",0,None,3 ,Thu Oct 14 2010 04:27:03 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/142,/competitions/tourism2,2nd /stephendmckay,"AUC = 0.979, approaching perfection?",Is anyone able to translate this into a rough measure of how many are being misclassified at such an excellent score?,0,None,1 Comment,Thu Oct 14 2010 20:29:02 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/143,/competitions/R,47th /johnmyleswhite,Re:Revised Example Model,"All, To encourage people to push forward with this contest and not despair over the high performance of the top teams, we're releasing a new example model. As you'll see, we've literally only changed one line in the example code, but the new model's AUC is much higher because it accounts for variability in the users, which was not accounted for at all by the original model. Go to the [Link]:http://github.com/johnmyleswhite/r_recommendation_system to see the revised model.",1,None,5 ,Sun Oct 17 2010 01:45:53 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/144,/competitions/R,None /ivan23133,Re:Re:Bug in train/test split,"Hi, So we have 52 users and 2487 packages (btw, ""packages.csv"" is missing the ""R"" and ""base"" packages). That gives 129324 user/package combinations.
But:
$ wc -l test_data.csv training_data.csv
  33126 test_data.csv
  99374 training_data.csv
 132500 total
So there are 132500 - 2 (header lines) - 129324 = 3174 records that are redundant or overlapping between the train and test sets. I've checked ""installations.csv"" and it indeed contains 1103 user/package pairs for which there's a record with Installed='NA' (which means it's part of the test set) and another record with Installed='0' or '1'.",0,None,1 Comment,Sun Oct 17 2010 03:10:58 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/145,/competitions/R,7th /philippemanuelweidmann,Amazing new theme,"Anthony, let me say that I was just amazed by the beautiful new Kaggle theme. This is a gigantic improvement and looks very professional. Looking forward to competing in future contests on this wonderful site. Cheers, Philipp",0,None,1 Comment,Sun Oct 17 2010 08:30:23 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/146,/competitions/chess,5th /philippemanuelweidmann,4 Submissions per day???,"The submission page now reads: ""This contest only allows you to make 4 submissions per day."" I haven't tested that yet, but I presume this is a mistake caused by the transition to the new theme.",0,None,2 ,Sun Oct 17 2010 08:44:27 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/147,/competitions/chess,5th /antgoldbloom,Why is the benchmark still leading?,"Hi all, Wondering why the benchmark is still leading when it is publicly available ( [Link]:http://robjhyndman.com/papers/forecompijf.pdf). Have people had trouble replicating the authors' methodology? Or is everybody trying their own approaches? -- Anthony",0,None,6 ,Mon Oct 18 2010 05:43:48 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/149,/competitions/tourism2,None /nate1297,JSkills,"You have a link to Jeff Moser's TrueSkill implementation in C#. Is anyone interested in my Java fork of this project, JSkills? [Link]:http://github.com/nsp/JSkills I'd like to use it to enter the competition myself, but I'm not sure if I'll get around to it, so I'll just throw it out there.",0,None,1 Comment,Tue Oct 19 2010 04:45:20 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/150,/competitions/chess,None /tc1833272,Confirmation of submission requirements,"Hi all... When making a submission, the following instructions are listed: ""Your entry must: > be in csv format; > have your predictions in columns 1 to 793; > provide 24 forecasts for the monthly series and eight forecasts for the quarterly series; and > be 16 lines long (empty lines will be ignored)."" Just want to confirm that the ""16 lines long"" instruction is an error. The ""example_submission"" file has 25 lines including the data labels: m1, m2, etc.",0,None,1 Comment,Wed Oct 20 2010 01:59:24 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/151,/competitions/tourism2,9th
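A quick way to reproduce the overlap count reported in the train/test split post above is to intersect the (user, package) pairs of the two files. A minimal Python sketch; the file names come from that post, while the assumption that user and package are the first two columns is mine:
import csv

def pairs(path):
    # collect the (user, package) key of every data row, skipping the header
    with open(path) as f:
        rows = csv.reader(f)
        next(rows)
        return {(row[0], row[1]) for row in rows}

train = pairs("training_data.csv")
test = pairs("test_data.csv")
print(len(train), len(test), len(train & test))  # last number = overlapping pairs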
/leebaker,"Athanasopoulos, et al paper corrections - what changed?","I just loaded another copy of the Athanasopoulos, et al paper, and immediately noticed the change in the font. The front page lists the paper as being ""Corrected 20 September 2010"". Can one of the authors comment on what was corrected?",0,None,1 Comment,Wed Oct 20 2010 23:17:19 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/153,/competitions/tourism2,2nd /frankvp,Re:Re:Re:Re:Re:Re:Re:Re:Re:Re:Re:Re:Re:Re:Re:Re:Re:Re:Quick TIBCO Chess Data Visualization...,"This is just a quick visualisation pulled together using TIBCO Spotfire to find novel insights. Check out [Link]:http://alturl.com/p8poi. Enjoy. Frank [TIBCO]",0,None,1 Comment,Thu Oct 21 2010 04:35:57 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/154,/competitions/chess,None /frankvp,Re:Re:Re:Quick Spotfire Visualization of Chess Data,[Link]:http://ondemand.spotfire.com/public/ViewAnalysis.aspx?file=/Users/TIBCO-SILVER-76220/Public/Kaggle.dxp&waid=ce0a9b3e554a6d389c336-b83e,0,None,1 Comment,Thu Oct 21 2010 05:06:21 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/155,None,None /tobiasschultze,matches as graph with statistics and visualization,"Hey, based on the training dataset I created a directed, weighted, cyclic, dynamic multigraph (multiple edges between two nodes). Nodes represent players; edges are matches between them, directed from the winner to the loser. The result can be found here: [Link]:http://www.tobion.de/chessgraph/",0,None,1 Comment,Thu Oct 21 2010 06:08:49 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/156,/competitions/chess,None /georgeathanasopoulos,R code for Athanasopoulos et al (2010) benchmarks,"Here is the R code that replicates the forecasting results for the benchmark methods. For quarterly series y:
require(forecast)
# These bounds have been set because they were the default setting in older versions of the forecast package.
fit <- ets(y, model=""AAA"", damped=TRUE, lower=c(rep(0.01,3),0.8), upper=c(rep(0.99,3),0.98))
fit <- forecast(fit,8)
forecasts <- fit$mean
For monthly series y:
require(forecast)
fit <- auto.arima(y,D=1)
fit <- forecast(fit,24)
forecasts <- fit$mean
Cheers, George",0,None,6 ,Fri Oct 22 2010 05:10:06 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/157,/competitions/tourism2,11th /tobiasschultze,ELO in Hollywood movie,"Have you noticed that there's a scene in the movie The Social Network about the ELO rating? Funny coincidence.",0,None,1 Comment,Fri Oct 22 2010 06:06:35 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/158,/competitions/chess,None /dirknbr,revision 2 file,"I have the same line-break error I had with the 1st revision file; can you fix this again please?",0,None,1 Comment,Fri Oct 22 2010 17:46:27 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/160,/competitions/tourism2,32nd /jasonbrownlee,Re:Cannot see my score when I submit,"Since the release of the new web site design I cannot see my past submission scores any more, and when I make new submissions I cannot see the allocated score! Does anyone else have this problem (I tried on mac/linux, chrome/firefox)? Is this by design or a bug?
- it's very frustrating! EDIT: To clarify, I'm talking about the ""submissions page"", not the leaderboard.",0,None,5 ,Sat Oct 23 2010 00:49:37 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/161,/competitions/chess,17th /antgoldbloom,Deadline extension,"Hi all, Just to let you know that we have extended the deadline for this competition by just over a week. Both Jeff and I will be travelling around mid November, so we wouldn't be able to deal with the competition's conclusion. Anthony",0,None,18 ,Mon Oct 25 2010 01:38:59 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/163,/competitions/chess,217th /timsalimans,missed part 1,"I missed part 1 of the competition, can I still compete?",0,None,4 ,Mon Oct 25 2010 14:03:07 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/164,/competitions/tourism2,None /lt2062,Question regarding downhill simplex / Nelder-Mead method,"Hi everyone! I set up a model with currently six parameters to do my predictions. To get good parameters (fast), I implemented the downhill simplex method, also known as the Nelder-Mead method. See: http://en.wikipedia.org/wiki/Nelder%E2%80%93Mead_method Things seem to be working in general. BUT: My model is set up in a way that parameters must be in a certain interval, otherwise it screws up. What would be a meaningful way to take this fact into account and prevent the algorithm from choosing invalid parameters? I thought about three possible solutions:
1. Return a high error value for invalid parameters. As a result the algorithm can enter invalid regions, but should avoid these after some iterations. I can imagine that in this case the algorithm is ""scared"" of the border regions.
2. Return the error value for the parameter set whose values are closest to the invalid set but are in a valid range in all dimensions. In this case the algorithm can also enter invalid regions, but should converge to an optimum anyway(?).
3. Don't let the algorithm choose invalid values altogether, but overwrite its decision immediately if it falls into an invalid range.
Can you give me some advice on this? Thanks in advance, Luke",0,None,6 ,Tue Oct 26 2010 21:43:54 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/165,/competitions/chess,60th /antgoldbloom,Few charts,"This first chart shows how the leading score has changed on a day-by-day basis. The red line shows the Elo benchmark and the blue line shows the leading score. The Elo benchmark was outperformed within 24 hours, which is why it's always above the best entry. Interesting to see some recent progress after a period of stagnation (well done Philipp). My guess is that any major improvement from this point on will be the result of somebody trying something quite different. This chart shows the number of daily entries. Higher early on, but it seems to have stabilised at around 30 per day. Happy to put up other charts if people have requests.",0,None,13 ,Thu Oct 28 2010 09:46:58 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/166,/competitions/chess,217th
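On the Nelder-Mead question above, option 1 (returning a high error for invalid parameters) is the easiest to try. A minimal sketch with scipy; model_error is a hypothetical stand-in for the real six-parameter objective, and the unit-box bounds are assumed:
import numpy as np
from scipy.optimize import minimize

LOWER, UPPER = np.zeros(6), np.ones(6)  # assumed valid interval per parameter

def model_error(params):
    return float(np.sum((params - 0.3) ** 2))  # placeholder objective

def penalised(params):
    # a large constant pushes the simplex back out of invalid regions
    if np.any(params < LOWER) or np.any(params > UPPER):
        return 1e9
    return model_error(params)

result = minimize(penalised, x0=np.full(6, 0.5), method="Nelder-Mead")
print(result.x, result.fun)
One caveat, in the spirit of the ""scared of the border"" worry in option 1: a constant penalty creates a discontinuity at the boundary, so a penalty that grows with the distance from the valid box is often better behaved.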
/steffen,mandatory implementation language,"Hello fellow number crunchers! The contest information suggests that the resulting model has to be programmed in R (well, this seems logical). Is this correct? What about all the other data preparation steps performed along the way? Do not get me wrong: I know and like R, but I do not like to use it for complicated programs, because my object-oriented mind keeps crashing during the coding :) Kind regards, steffen. PS: targeted languages are Java, Python (and R).",0,None,3 ,Sun Oct 31 2010 12:13:51 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/167,/competitions/R,26th /pierre,About values,"After the end of the contest, I thought the nature of the values would be unveiled. When will we have this information?",0,None,3 ,Fri Nov 05 2010 11:09:20 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/168,/competitions/informs2010,94th /eduthie,Results without leaderboard portion of dataset,"For interest, the following list gives the competition results if the 10% leaderboard portion of the dataset was not used in the calculation of the final standings:
1 dejavu 0.991501
2 Swedish Chef 0.990716
3 Nan Zhou 0.990122
4 sali mali 0.98976
5 DayTrader 0.986549
6 tigertail 0.985194
7 DataKiller 0.98452
8 H. Solo 0.984469
9 atom 0.983504
10 testname 0.982395
11 xli 0.981574
12 Knock 0.980966
13 datalev 0.979222
14 Jiahan Li 0.977282
15 Timo Alan 0.977181
16 pyk 0.976739
17 MTech QROR 0.971476
18 Analytics360 0.971266
19 Nambiar 0.971266
20 3Sigma 0.971224
21 Passionalytics 0.971023
22 hackerdojo 0.9707
23 LikeSushi 0.97065
24 pivot 0.970295
25 Allen_Zhou 0.970212
26 rwrw 0.968778
27 The Straightrollers 0.96196
28 InflectionPoint 0.957398
29 Soumik 0.938589
30 PG Vijay 0.908367
31 MarketMaker 0.888631
32 SuperCorn 0.88811
33 kkoo 0.881446
34 Aryabhatta 0.868519
35 zqzir 0.867536
36 leverw 0.867098
37 Robert 0.846615
38 3idiots 0.809403
39 ams2009 0.756287
40 jumper 0.755428
41 piaomiao 0.706056
42 Data Diggers 0.662509
43 Sooners 0.640435
44 IAD 0.585686
45 kebert xela 0.582862
46 trapezoidal 0.582341
47 Tidy 0.577815
48 Olteanu And Roberts 0.571423
49 chandv 0.570704
50 RTech 0.566332
51 Seyhan 0.566317
52 PRPILS 0.566203
53 PedroM 0.563934
54 Joe.l.lin 0.563011
55 Narad 0.561505
56 Dirk Nachbar 0.559739
57 Gilles 0.55656
58 free 0.555727
59 mjahrer 0.554552
60 La Pata de Condorito 2010 0.554246
61 Montgomery 0.554158
62 user1 0.553168
63 Troae 0.55248
64 Blue Devils 0.548306
65 Nonsense 0.547482
66 dermcnor 0.547173
67 apmid 0.547031
68 Evacuation Path 0.541714
69 NoFI 0.541501
70 Fabien 0.54149
71 MultiAlgo 0.540975
72 IEORTools 0.539703
73 mikejs 0.53837
74 closer 0.537671
75 W Team 0.537631
76 SimplestModel 0.537071
77 Moprhism 0.536929
78 Elgin 0.53535
79 PAYALE 0.534936
80 cubsnsox 0.53478
81 Julioxa69 0.534458
82 Team Cash 0.532499
83 JF_TEAM 0.532333
84 Groovy 0.532183
85 Parkville 0.532021
86 crossroad 0.531921
87 Joe 0.531791
88 standard_methods 0.531367
89 Terran 0.531285
90 linkers 0.529764
91 fguillem 0.529114
92 UC Berkeley 0.529078
93 musimians 0.528739
94 FJ_TEAM 0.528566
95 null 0.527407
96 prashant215 0.527195
97 JMOJPD 0.526411
98 bubac 0.526393
99 GnohZnutlll 0.525021
100 Team 0.524529
101 maomiw 0.524057
102 Luis Manuel Pulido Moreno 0.52387
103 TeamBad 0.523152
104 JavierV 0.52294
105 Braddon 0.520279
106 Nikesh 0.520184
107 Stat 0.519864
108 moe1 0.519759
109 JohnChachy 0.519596
110 Naif_professor 0.519498
111 Yan Papadakis 0.518525
112 lynn 0.51843
113 JAGC 0.517788
114 ANDRUVILLA 0.517393
115 NYAlfred 0.516196
116 jtdggt 0.515579
117 investor 0.514117
118 LYA 0.514088
119 DME 0.513282
120 SURF 0.512337
121 unsown 0.51045
122 JustForFun 0.510341
123 Team3256 0.508493
124 image_doctor 0.507805
125 Barrabas 0.506306
126 shahrdar 0.504879
127 Mission Impossible 0.504622
128 Xenon 0.503607
129 Les fous du volant 0.502432
130 MonkeyWrenchGang 0.499168
131 BrainTrader 0.49617
132 Bodner Mining 0.48949
133 hcj 0.489007
134 example only 0.489007
135 awc 0.487519
136 overdrive 0.484843
137 Solo 0.483767
138 GoF 0.482091
139 Cruncher 0.480446
140 bayesTrees 0.480405
141 Agnesios 0.479423
142 MiningMaster 0.476043
143 delta 0.474389
144 trenderIy 0.473037
145 R2C 0.472919
146 404 0.468586
147 SAM2009 0.253194",0,None,2 ,Mon Nov 08 2010 05:13:29 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/169,/competitions/informs2010,None /dirknbr,Benchmark,"I anticipate that the question of how the benchmark was derived will come up. I would not like to reveal that yet, because I would not want to steer people in one particular direction. However, if there is little progress I will publish it; it is 47 lines (including comments and blank rows) of Python code and quite simple to understand (no packages required).",0,None,6 ,Mon Nov 08 2010 23:07:26 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/170,/competitions/socialNetwork,77th /nobody,can you provide more info?,"What type of social network is this? Facebook-like or Twitter-like? What is the basic motivation for people to join this network? Sharing hobbies or thoughts? Keeping connected with friends or family? Finding dating opportunities? Playing games together? I think answers to these questions can help us to understand why people reach out and form groups. A better understanding of the context can help improve models. Thoughts?",0,None,7 ,Tue Nov 09 2010 06:10:44 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/171,/competitions/socialNetwork,68th /stephendmckay,More clarifications,"So, it's OK to produce a probability that a node exists, not just 0s and 1s? Can you explain the difference between an 'inbound node' and an 'outbound node'? They just seem to be pairs of IDs with a link between them? Thanks - Steve",0,None,2 ,Tue Nov 09 2010 09:54:32 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/172,/competitions/socialNetwork,None /dudarev,AUC calculation,Could you clarify how AUC is calculated in this case? We are submitting N numbers p_i ranging from 0 to 1. The real numbers are P_i that can be only 0 or 1.
Is AUC in this case just $\frac{1}{N}\sum_{i=1}^{N} |P_i - p_i|$?,0,None,10 ,Tue Nov 09 2010 13:34:01 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/173,/competitions/socialNetwork,34th /salimali,How to beat benchmark,"It is possible to beat the benchmark (at least on the 20%) by just 'ensembling' 4 of the methods the authors of the paper have already provided. Some weights I came up with (by using a 1-year holdout set) were:
Quarterly: 8/15 * damped, 3/15 * arima, 1/15 * naive, 3/15 * ets
Monthly: 2/15 * damped, 6/15 * arima, 1/15 * naive, 6/15 * ets
Also, if you only predict 1 year ahead and then repeat this for the 2nd year, that helps too (the paper says naive predictions for annual are hard to beat). If you do this, you should be able to get 1.41659. The benchmark of 1.4385 is damped (quarterly) and arima (monthly).",1,None,8 ,Tue Nov 09 2010 22:01:21 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/174,/competitions/tourism2,1st /jonnnny,Leaderboard test set,"Just curious, does the test set used for the leaderboard change with each submission? That is, does it randomly select 20% of the test set to calculate the AUC, or is the subset fixed? Thanks, Jon",0,None,1 Comment,Wed Nov 10 2010 07:11:13 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/175,/competitions/socialNetwork,20th /tamasnepusz,Order of predictions in submission,"I was wondering whether the order of predictions in the submission matters or not. I have just discovered that the best submission I've made so far got the IDs totally wrong, i.e. the IDs in the submission were not the original IDs from the published data but my own internal IDs - however, otherwise the ordering was the same as in the sample submission. In this case, the AUC is significantly better than 0.5 despite the fact that I've used a fairly unsophisticated algorithm, which indicates that the IDs used in the submission don't really matter as long as the predictions are in the same order as in the sample submission file. On the other hand, another submission of mine contained the correct IDs, but the order of predictions was not the same as in the sample submission, and this one reached a far lower AUC (not significantly different from 0.5). So, does the order of predictions in the submission matter?",0,None,2 ,Thu Nov 11 2010 12:07:07 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/177,/competitions/socialNetwork,63rd /byang1,question about creation of competition dataset,"Hi, If a node-pair does not exist in either the training set or the test set, can we assume there's no edge connecting them in the complete dataset (from which the competition dataset was built)? That is, for all the nodes in the competition dataset, is there any edge that's in the complete dataset but was not picked for the competition dataset? Thanks",0,None,21 ,Fri Nov 12 2010 19:27:59 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/178,/competitions/socialNetwork,2nd
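The ensemble described in the ""How to beat benchmark"" post above is just a weighted average of the four benchmark forecasts. A minimal numpy sketch using the quoted weights; the dummy arrays stand in for forecasts you would produce with the benchmark R code posted earlier in this thread:
import numpy as np

def ensemble(damped, arima, naive, ets, weights):
    w = np.asarray(weights, dtype=float) / 15.0  # the quoted weights are fifteenths
    return w[0] * damped + w[1] * arima + w[2] * naive + w[3] * ets

f = [np.random.rand(8) for _ in range(4)]        # placeholder forecasts
quarterly = ensemble(*f, weights=[8, 3, 1, 3])   # 8/15 damped, 3/15 arima, 1/15 naive, 3/15 ets
monthly = ensemble(*f, weights=[2, 6, 1, 6])     # 2/15 damped, 6/15 arima, 1/15 naive, 6/15 ets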
/jc13101,final standings based on participants' best (rather than last) entry,"Is that fair? This policy seems to really favor those who started early and already have lots of submissions, like 100+. Would it be better if we limited the number of final submissions, say only counting the last 10 submissions?",0,None,11 ,Sat Nov 13 2010 05:14:20 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/179,/competitions/chess,56th /rickcooper,Elo System Use For Rating Card Game Players,"I hope it's not inappropriate for me to ask this question. A ""friend"" uses the Elo system to rate the players of a card game called Rook. If you are not familiar with this game, you can think of Spades or Hearts as card games of similar play. Since Elo works great for games of skill, would you, as experts in the field of rating systems, ever suggest it is appropriate for rating players of a game that is best described as a game of skill and luck (random deal of the cards)? If your answer is 'No', can you suggest what components you would use in devising a system for rating this game and similar games? I greatly appreciate your response(s)!",0,None,4 ,Sat Nov 13 2010 23:26:32 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/180,/competitions/chess,None /antgoldbloom,Congratulations!,"Thanks everyone for making this an amazing competition! Big congratulations to the winner, [Link]:http://kaggle.com/outis/. Also to the runner-up [Link]:http://kaggle.com/jphoward/, who only joined the competition late in the piece, and to [Link]:http://kaggle.com/pug/, who finished third. Hopefully we'll get some of the top ten to tell us about their methods on the blog. In the meantime, I encourage you all to tell us a little about what you tried on the forums. Also for interest, here's a chart that shows how the best score evolved over time. Rapid improvements initially, but after a month progress stalled as participants approached the frontier of what is possible from this dataset.",0,None,14 ,Wed Nov 17 2010 21:33:39 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/183,/competitions/chess,217th /uriblass,can we see the leaderboard?,"It may be interesting now to compare the leaderboard and the final results. I know that basically a difference of 0.01 between results in the leaderboard means nothing, even if you do not try to optimize for the leaderboard. It may also be interesting if people post the result of their best submission on the leaderboard that you cannot see, even if you saw the leaderboard. I will start: 0.658957 leaderboard, 0.696234 final result. For comparison, Chessmetrics Benchmark: 0.659533 leaderboard, 0.708662 final result. You can see that being more than 0.012 better in the final result translated to less than 0.001 difference in the leaderboard, and I believe that both chessmetrics and my best submission are not based on optimizing for the leaderboard.",0,None,15 ,Wed Nov 17 2010 23:26:40 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/184,/competitions/chess,6th
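For reference on the Elo-for-card-games question a few posts up, the core Elo update is small enough to sketch in full; K is the usual sensitivity constant, and the score is 1 for a win, 0.5 for a draw, 0 for a loss:
def elo_update(r_a, r_b, score_a, k=32.0):
    # expected score of player A against player B
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    return r_a + k * (score_a - expected_a)

print(elo_update(1500, 1600, 1.0))  # the underdog wins and gains about 20 points
For a luck-heavy game, the usual levers are a smaller K (so single lucky results move ratings less) and rating over more games before trusting the number.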
/jasonbrownlee,Released: my Source Code and Analysis,"I had a lot of fun with this competition and learned a lot about rating systems. Sadly, I only came 18th :) If you're interested, you can download all of my code and analysis from my github repo: https://github.com/jbrownlee/ChessML There are implementations of a few rating systems (elo, glicko, chessmetrics, etc) and many attempts at improving them (a nice little experimentation framework). Thanks all. Looking forward to the next big comp! jasonb",0,None,3 ,Thu Nov 18 2010 01:06:46 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/185,/competitions/chess,17th /jhoward,2nd place: TrueSkill Through Time,"Wow, this is a surprise! I looked at this competition for the first time 15 days ago, and set myself the target to break into the top 100. So coming 2nd is a much better result than I had hoped for!... I'm slightly embarrassed too, because all I really did was to combine the clever techniques that others had already developed - I didn't really invent anything new, I'm afraid. Anyhoo, for those who are interested I'll describe here a bit about how I went about things. I suspect in many ways the process is more interesting than the result, since the lessons I learnt will perhaps be useful to others in future competitions. I realised that, by starting when there were only 2 weeks to go, I was already a long way behind. So my best bet was to leverage existing work as much as possible - use stuff which has already been shown to work! Also, I would have to stick to stuff I'm already familiar with, as much as possible. Therefore, I decided initially to look at Microsoft's TrueSkill algorithm: there is already a C# implementation available (a language which I'm very familiar with), and it's been well tested (both in practice, on Xbox Live, and theoretically, in various papers). So, step one: import the data. The excellent [Link]:http://www.filehelpers.com/ library meant that this was done in 5 minutes. Step two: try to understand the algorithm. Jeff Moser has a [Link]:http://www.moserware.com/2010/03/computing-your-skill.html about how TrueSkill works, along with full source code, which he most generously provides. I spent a few hours reading and re-reading this, and can't say I ever got to a point where I fully understood it, but at least I got enough of the gist to make a start. I also watched the very interesting [Link]:http://tv.theiet.org/technology/infopro/turing-2010.cfm by Chris Bishop (whose book on pattern recognition is amongst the most influential books I've read over the years), which discusses the modern Bayesian graphical model approach more generally, and briefly touches on the TrueSkill application. Step three: make sure I have a way to track my progress, other than through leaderboard results (since we only get 2 submissions per day). Luckily, the competition provides a validation set, so I tried to use that where possible. I only ever did my modelling (other than final submissions) using the first 95 months of data - there's no point drawing conclusions based on months that overlap with the validation set! I also figured I should try to submit twice every day, just to see how things looked on the leaderboard. My day one submission was just to throw the data at Moser's class using the default settings. I noticed that if I reran the algorithm a few times, feeding in the previous scores as the starting points, I got better results. So I ran it twice, and submitted that. Result: 0.696 (1st place was about 0.640 - a long way away!) (For the predictions based on the scores, assuming the scores for [white, black] are [s1, s2], I simply used (s1+100)/(s1+s2). The 100 on top is to give white a little advantage, and was selected to get the 54% score that white gets on average). For the next few days, I went backwards.
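The naive score-to-prediction rule just described is simple enough to sketch in two lines; this assumes the scores live on a scale where the 100-point white bonus is small relative to their sum:
def predict(s1, s2, white_bonus=100.0):
    # bonus on top chosen to reproduce white's 54% average score
    return (s1 + white_bonus) / (s1 + s2)

print(predict(1300.0, 1300.0))  # about 0.54 for two equal scores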
Rather than looking at graphs of score difference vs win%, I assumed that I should switch to a logistic function, which I did, and I optimised the parameters using a simple hill-climb algorithm. This sent my score back to 0.724. I also tried optimising the individual player scores directly. This sent my score back to 0.701. This wasted effort reminded me that I should look at pictures before I jump into algorithms. A graph of win% against white score (with separate lines for each quartile of black score) clearly showed that a logistic function was inappropriate, and also showed that there were interactions that I needed to think about. So, after 5 days, I still hadn't made much improvement (minor tweaks to TrueSkill params had got me to 0.691, barely any improvement from day 1). So I figured I needed a whole different approach. And now I only had 10 days to go... It concerned me that TrueSkill took each individual match and updated the scores after every one - it never fed the later results back to re-score the earlier matches. It turns out that (of course!) I wasn't the first person to think about this problem, and that it had been thoroughly tackled in the ""TrueSkill Through Time"" paper ([Link]:http://blogs.technet.com/b/apg/archive/2008/04/05/trueskill-through-time.aspx) from MS Research's Applied Games Group. This uses Bayesian inference to calculate a theoretically-optimal set of scores (both mean and standard deviation, by player). Unfortunately the code was written for an old version of F#, so it no longer works with the current version. And it's been a while since I've used F# (actually, all I've done with it is some Project Euler problems, back when Don Syme was first developing it; I've never actually done any Real Work with it). It took a few hours of hacking to get it to compile. I also had to make some changes to make it more convenient to use as a class from C# (since it was originally designed to be consumed from an F# console app). I also changed my formula for calculating predictions from scores to use a cumulative gaussian - since that is what is suggested in the TrueSkill Through Time paper. My score now jumped to 0.669. The paper used annual results, but it seemed to me that this was throwing away valuable information. I switched to monthly results, which meant I had to find a new set of parameters appropriate for this very different situation. Through simple trial and error I found which params were the most sensitive, and then used hill-climbing to find the optimum values. This took my score to 0.663. Then I added something suggested in the Chessmetrics writeup on the forum - I calculated the average score of the players that each person played against. I then calculated a weighted average of each player's actual score and the average of their opponents'. I used a hill-climb algorithm to find the weighting, and also weighted it by the standard deviation of their rating (as output by TrueSkill/Time). This got me to 0.660 - 20th position, although later someone else jumped above me to push me to 21st. The next 5 days I went backwards again! I tried an ensemble approach (weighted average of TrueSkill, TrueSkill/Time, and ELO), which didn't help - I think because TrueSkill/Time was so much better, and also because the approaches aren't different enough (ensemble approaches are best when combining approaches which are very different). I tried optimising some parameters in both the rating algorithm, and in the gaussian which turns that into probabilities for each result.
I also tried directly estimating and using draw probabilities separately from win probabilities. I realised that one problem was that my results on the validation set weren't necessarily showing me what would happen on the final leaderboard. I tried doing some resampling of the validation set, and realised that different samples gave very different results. So, the validation set did generally show the impact effectively when I made a change which was based on a solid theoretical basis, but it was also easy to get meaningless increases through thoughtless parameter optimisation. On Nov 15 I finally made an improvement - previously, in the gaussian predictor function, I had made the standard deviation a linear function of the overall match level [i.e. (s1+s2)/2]. But I realised from looking at graphs that really it's that a stronger black player is better at forcing a draw - it's really driven by that, not by the combined skill. So I made the standard deviation a linear function of black's skill only. Result: 0.659. So, it was now Nov 16 - two days to go, and not yet even in the top 20! I finally decided to actually carefully measure which things were most sensitive, so that I could carefully manage my last 4 submissions. If I had been this thorough a week ago, I wouldn't have wasted so much valuable time! So, I discovered that the following had the biggest impact on the validation set:
- Removing the first few months from the training data; removing the first 34 months was optimal for the validation set, so I figured removing the first 40 months would be best for the full set
- Adjusting the constant in the calculation of the gaussian's standard deviation - if too high, the predictions varied too much; if too low, the predictions were all too close to 0.5
- And a little trick: I don't know much (anything!) about chess, but I figured that there must be some knockout comps, so people who play more perhaps are doing so because they're not getting knocked out! So, I tried using the count of a player's matches in the test set as a predictor! It didn't make a huge difference to the results, but every little bit counts...
Based on this, my next 3 submissions were:
- Remove first 40 months: 0.658
- Include count of matches as a prediction: 0.654
- Increase the constant in the stdev formula by 5%: 0.653
(My final submission was a little worse - I tried removing players who hadn't played at least 2 matches, and I also increased the weight of the count of matches: back to 0.654). For me, the main lesson from this process has been that I should more often step back and think about the fundamentals. It's easy to get lost in optimising the minor details, and to focus on the solution you already have. But when I stepped away from the PC for a while, did some reading, and got back to basics with pen and paper, was when I had my little breakthroughs. I also learnt a lot about how to use validation sets and the leaderboard. In particular, I realised that when you're missing a fundamental piece of the solution, little parameter adjustments that you think are improvements are actually only acting as factors that happen to correlate with some other more important predictor. So when I came across small improvements in the validation set, I actually didn't include them in my next submitted answer - I only included things that made a big difference. Later in the competition, when I had already included the most important things, I re-tested the little improvements I had earlier put aside. Please let me know if you have any questions.
I would say that, overall, TrueSkill would be a great way to handle chess leaderboards in the future. Not because it did well in this competition (which is better at finding historical ratings), but because, as shown in Chris Bishop's talk, it is amazingly fast at rating people's ""true skill"". Just 3 matches or so is enough for it to give excellent results.",3,None,15 ,Thu Nov 18 2010 03:44:10 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/186,/competitions/chess,2nd /vsu1664,test labels,"It was a great competition! Congratulations to all the winners! Can we expect that the organisers will release the test labels, or will there be an opportunity for post-challenge submissions? We would be interested in writing a paper for a journal or top DM conference, and would like to conduct some additional experiments.",0,None,17 ,Thu Nov 18 2010 08:43:23 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/187,/competitions/chess,11th /louisduclosgosselin,Methods/techniques used by the top three competitors,"Dear All, The methods/techniques used by the top three competitors were presented at the INFORMS Data Mining Contest Special Session at the INFORMS Annual Meeting - Austin, Texas, November 7-10, 2010. I attached below their presentations. Thanks to: 1) Cole Harris from DejaVu Team 2) Christopher Hefele from Swedish Chef Team 3) Nan Zhou from Nan Zhou Team for sharing these useful materials with us. Congratulations again!!! ;) Thanks a lot. Let's keep in touch. I am looking forward to hearing your news. Best regards. Louis Duclos-Gosselin Chair of INFORMS Data Mining Contest 2010 Chair of the Data Mining Cluster of INFORMS Healthcare 2011 INFORMS Data Mining Section Council Member Applied Mathematics (Predictive Analysis, Data Mining) Consultant at Sinapse E-Mail: Louis.Gosselin@hotmail.com http://www.sinapse.ca/En/Home.aspx http://dm.section.informs.org/ Phone: 1-866-565-3330 Fax: 1-418-780-3311 Sinapse (Quebec), 1170, Boul.
Lebourgneuf Suite 320, Quebec (Quebec), Canada G2K 2E3",3,None,25 ,Fri Nov 19 2010 22:36:19 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/190,/competitions/informs2010,None /uriblass,6th place (UriB) by Uri Blass,"I calculated a rating for every player in months 101-105, and after having the rating I have a simple formula to calculate the expected result based only on the rating and the colour. The tricks that I used were mainly in calculating the rating, but I will start by explaining the simple part. The first part was calculating the bonus for white. I had the following formula for this part:
bonus = maximum((white_rating + black_rating - 3100) / 40.0, 50)
Diff = white_rating + bonus - black_rating
Expected_result = 0.5 + Diff / 850
I then clamped the expected result to be not more than 0.970588 and not less than 0.1 (practically this had a very small effect, because the result was always bigger than 0.1 and there was only one case when I needed to reduce it to 0.970588). Now we go to the hard part, which is how to calculate the rating for every player. For this purpose I admit that I used the future to predict the past (but I also have a prediction based on a different model in the top 10 where I did not use the future to predict the past). I used a function that I called repeat_strength_estimate. The function gets the following parameters:
1) k, the last month that is not missing. For the prediction of months 101-105, k=100, but for testing my parameters I used k=90,91,92,...,99.
2) max_months (practically gets the value 81; I admit that it is not a good name). max_months=81 practically means that I do not use the first 20 months to predict month 101, I do not use the first 21 months to predict month 102, and generally I do not use the first m-81 months to predict month number m.
3) big_dif=310. big_dif was used to calculate performance rating, and for some reason I found that small values gave better results in my tests, so I used this small value. My formula for performance rating was:
performance_rating = avg_rating + ((result - opponents) / opponents) * big_dif
The value of the division can be at most 1 and at least -1, because result is practically weighted half-points and is something between 0 and twice the weight of the opponents. In this formula, opponents means the number of weighted opponents (where the weight is based on the distance in months from the month to predict). This formula means that even if a player lost all games against the opponents, he still got a performance rating that is only 310 Elo weaker than the average of the opponents, because the result of the division is always between -1 (losing all games) and 1 (winning all games). I guess this was good because not all games are included, so a person who played against strong opponents probably performed practically better than his real score, and it would not be good for the real world where games are not missing.
4) num_avg=5.9, similar to chessmetrics (I added 5.9 faked opponents with average rating).
5) num_weak=2.2 (added 2.2 faked weak opponents).
6) value_weak=2210 (rating of the weak opponents, like chessmetrics).
7) unrated=2285 (I think this practically had no effect, because players always have games in the last 80 months).
8) minimal_game_finished=15 (I reduce the rating of players with less than 15 weighted games, similar to chessmetrics).
9) reduction_per_game=12 (the number that I reduce for lack of experience for players without many weighted games).
10) adding=39 (the number that I add to the rating of players after every iteration).
The function repeat_strength_estimate basically did 10 iterations for evaluating the strength of every player in every month. The evaluation of the strength was based on 2 steps: step 1 was the function that calculates strength, which is similar to chessmetrics but with important differences, and step 2 was deciding that place 50 in the rating list has rating 2625, exactly the same as chessmetrics. calc_strength_chess_metric is the missing function needed to understand the algorithm; it basically got 11 parameters (all 10 parameters that repeat_strength_estimate got, plus another parameter, the month for which we calculate the estimate). Note that the estimate for month 50 of player 1, when months 101-105 are missing, is important, because if player 2 played with player 1 at month 50 then it is going to influence the rating of player 2 at months 101-105 that is used to calculate the expected result. I use the word estimate and not rating, because rating by definition assumes that we do not have future results. I basically had 2 steps in calc_strength_chess_metric. The first step was a loop that calculated the estimate of strength for every player in the relevant month. The second step is a step that I used only when I needed to predict the strength in the missing months; it is a practically unfair trick, but not something that is forbidden in the competition, because I used the information about games, and not about the results, in the supposedly missing months to calculate changes in the rating estimate in these months. I have not finished explaining my algorithm and I plan to post code later, but for now I only need to explain the 2 steps of calc_strength_chess_metric, and I will do that later in another post (this part of the program is only slightly more than 100 lines of code in C).",0,None,5 ,Sat Nov 20 2010 05:38:53 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/192,/competitions/chess,6th
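The expected-result formula in the 6th-place post above, together with the clamping the author describes, translates directly to code. A minimal sketch:
def expected_result(white_rating, black_rating):
    # white's bonus grows with the combined strength of the two players
    bonus = max((white_rating + black_rating - 3100) / 40.0, 50.0)
    diff = white_rating + bonus - black_rating
    expected = 0.5 + diff / 850.0
    return min(max(expected, 0.1), 0.970588)  # clamp as described in the post

print(expected_result(2700, 2600))  # roughly 0.68 for a 100-point favourite with white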
/martinreichert,3rd place: Chessmetrics - Variant,"Dear all, it was a great competition, thanks a lot. Here are some notes regarding my approach.
Methodology:
- all coding was done using perl
- I tested some established basics like elo and chessmetrics using the training data
- I found chessmetrics a very promising approach - regarding the parameters performance, opposition quality, activity, weighting, and self-consistent rating over a time period
- I varied the rating formula depending on the parameters above
My best submission materialized in using:
- the complete training data from week 1 to 100, and not a subset of these data
- the iterative rating formula:
rating = (weighed_performance + weighed_opponent_rating + weighed_tie_to_defined_rating_level + extra_points) / (sum(game_weight) + weight_opponent_rating + weight_tie_to_defined_rating_level)
weight_tie_to_defined_rating_level = 2.5
game_weight = (1 / (1 + (100 - month) / 48))**2
white_advantage = 27.5
extention = 850
weighed_performance = sum((opponent_rating +- white_advantage + extention * (result - 0.5)) * game_weight)
weight_opponent_rating = 12.5
weighed_opponent_rating = weight_opponent_rating * sum((opponent_rating +- white_advantage) * game_weight) / sum(game_weight)
defined_rating_level = 2300
weighed_tie_to_defined_rating_level = weight_tie_to_defined_rating_level * defined_rating_level
extra_points = 24.5
- the prediction formula:
probability(win_player_white) = ((rating_player_white + white_advantage) - (rating_player_black - white_advantage)) / extention + 0.5; limited to {0..1}
[Link]:http://www.ratatoek.de/elo_comp_23_test_pl.txt
Meanwhile I also updated my profile on Kaggle ... Cheers, Martin",0,None,2 ,Sat Nov 20 2010 12:27:17 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/193,/competitions/chess,3rd /nickmirsky,"Using additional datasets -- eg., rain, fog, etc.","Hello, I was wondering if we were able to incorporate other datasets. For example, rain and fog would surely affect travel times. Thanks, Nick Mirsky",0,None,20 ,Wed Nov 24 2010 02:02:57 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/195,/competitions/RTA,None /antgoldbloom,Sample PHP Code,"Attached is some sample code that can be used to construct an entry that generates a forecast based on the average travel time on a given route on a given day of the week at a given time.",0,None,3 ,Wed Nov 24 2010 03:39:33 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/196,/competitions/RTA,255th /antgoldbloom,Sample Python Code,"Attached is some sample Python code that generates forecasts based on the last known travel time. (I'm new to Python so happy to hear any feedback on the code.)",0,None,16 ,Wed Nov 24 2010 03:44:04 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/197,/competitions/RTA,255th
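The averaging approach in the Sample PHP Code post above is easy to restate in Python: forecast each route with its mean travel time for that day of the week and time of day. A sketch, assuming RTAData.csv has the timestamp in the first column and one route per remaining column, with non-numeric cells meaning missing data:
import csv
from collections import defaultdict
from datetime import datetime

sums = defaultdict(float)
counts = defaultdict(int)
with open("RTAData.csv") as f:
    rows = csv.reader(f)
    header = next(rows)
    for row in rows:
        t = datetime.strptime(row[0], "%Y-%m-%d %H:%M")
        for route, cell in zip(header[1:], row[1:]):
            if cell.isdigit():  # skip missing values
                key = (route, t.weekday(), t.hour, t.minute)
                sums[key] += int(cell)
                counts[key] += 1

def forecast(route, when):
    # mean travel time seen for this route at this weekday/time-of-day slot
    key = (route, when.weekday(), when.hour, when.minute)
    return sums[key] / counts[key] if counts[key] else None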
/peterwatts,Accurate location data,Is there information on the length of each route and the location of exit/entry points in relation to the routes? Surely this information is significant.,0,None,14 ,Wed Nov 24 2010 11:05:50 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/198,/competitions/RTA,None /dirknbr,Toll,It seems a toll was abolished in Feb 2010. http://news.smh.com.au/breaking-news-national/sydneys-m4-toll-to-be-abolished-20100215-nzox.html,0,None,1 Comment,Wed Nov 24 2010 15:05:25 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/199,/competitions/RTA,261st /dennisjaheruddin,Leaderboard,"I have two questions about the leaderboard: 1) It is stated that the leaderboard is calculated using 30% of the test set, but is this 30% also included in the calculation of the final standings? 2) Which part of the test set is represented in the leaderboard? (Both 30% of 61 and 30% of 29 may be undesirable)",0,None,5 ,Wed Nov 24 2010 18:31:45 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/200,/competitions/RTA,85th /uriblass,My early results in this competition,"I can say that I already had a better result than chessmetrics (but not better than my best result) with a forward approach, but later the fact that chessmetrics did so well in the leaderboard was one of the reasons that caused me to try only modified chessmetrics (I also liked the nice way that chessmetrics used my own ideas, when I had similar ideas but had not expressed them so nicely). I believe that chessmetrics was wrong in one thing, namely not considering games against non-fully-rated players (unfortunately my tries to use games against non-fully-rated players with a smaller weight gave me worse results in the leaderboard), and only at the end did I decide to use a different model that is also based on the ideas of chessmetrics, without the concept of fully rated players. I believe that the best is not to have the concept of fully rated players and to give games different weights (even if they are from the same month), but I had no time to build a good model for it and decided at the end simply to give all games in the same month the same weight. Note that I got place 7 based on a modified chessmetrics approach without using the future to predict the past, so it seems that using the future to predict the past did not give me much. History of my best results (the number is the number of the prediction):
1) 0.714834
2) 0.708851
3) 0.708346
4) 0.704049
5) 0.703534
6) 0.703304
7) 0.702733
9) 0.702352 - my best forward approach, and surprisingly a very good result, enough for place 19 in the competition.
20) 0.701063
22) 0.698297 - place 8
24) 0.697596 - place 7
29) 0.697353 - place 7
119) 0.696234 - place 6
In submission 119 I used a trick of using the future to predict the past, but the help from it was probably relatively small. In the first submission I did not use the future to predict the past. I think that the leaderboard was counterproductive, and people could probably do better without it. I suspected that it may be counterproductive, but having worse results in the leaderboard destroyed my motivation to continue from the point I had reached, even when I made progress (in cases when I got a worse result in the leaderboard), and at some point I decided that I probably needed a different model, but I had no time to program it, so I simply tried to do small changes in parameters to optimize for the leaderboard.
Only at the end of the competition did I decide to make a last effort. Maybe it is a good idea that in the future the leaderboard should include only 2 digits after the 0, so it is clearer to people that it is only indicative and they cannot draw conclusions about their real place based on the leaderboard [in this competition even 2 digits were too much, and it could be better to have only numbers like 0.65, 0.7, 0.75, where chessmetrics and many people (everyone with a score better than 0.675) would get only a score of 0.65].",0,None,1 Comment,Wed Nov 24 2010 20:32:26 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/201,/competitions/chess,6th /waiyiptung,Confused by the data definition,"It says ""The cells show the travel time in centiseconds"". Travel time from where to where? For example, under the column of route 40020, the first cell has the value of 804. 804 x 0.01s = 8.04s. Does it mean a vehicle takes 8.04s to complete the entire segment of route 40020? Also, another reader has asked, what is the length of each route segment? The Google map under RTA shows neither route numbers nor exit numbers.",0,None,2 ,Wed Nov 24 2010 22:59:00 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/202,/competitions/RTA,None /carlosaydos,Eligibility,"I work for the RTA. Can I compete? A friend of mine works for a company that does business with the RTA, can he compete?",0,None,1 Comment,Thu Nov 25 2010 09:47:11 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/203,/competitions/RTA,None /frankvp,Quick look before you download the RTA data...,"A quick visualisation pulled together using TIBCO Spotfire. View below, or [Link]:http://alturl.com/7eznt. Enjoy. Frank [TIBCO]",0,None,7 ,Thu Nov 25 2010 14:01:45 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/204,/competitions/RTA,None
/jhoward,Sample C# code,"Here's a C# translation of Lee's Python code for creating a naive ""model"":
const string fmt = ""yyyy-MM-dd HH:mm"";
// The cut-off points
var cutoffTimes = new[] {""2010-08-03 10:28"",""2010-08-06 18:55"",""2010-08-09 16:19"",""2010-08-12 17:22"",""2010-08-16 12:13"",""2010-08-19 17:43"",""2010-08-22 10:19"",""2010-08-26 16:16"",""2010-08-29 15:04"",""2010-09-01 09:07"",""2010-09-04 09:07"",""2010-09-07 08:37"",""2010-09-10 15:46"",""2010-09-13 18:43"",""2010-09-16 07:40"",""2010-09-20 08:46"",""2010-09-24 07:25"",""2010-09-28 08:01"",""2010-10-01 13:04"",""2010-10-05 09:22"",""2010-10-08 16:43"",""2010-10-12 18:10"",""2010-10-15 14:19"",""2010-10-19 17:16"",""2010-10-23 10:28"",""2010-10-26 19:34"",""2010-10-29 11:34"",""2010-11-03 17:49"",""2010-11-07 08:01""};
// forecast horizon in multiples of 15 minutes
var forecastHorizon = new[] {1,2,3,4,6,8,24,48,72,96};
var lines = File.ReadAllLines(""RTAData.csv"");
var res = new List<string> {lines[0]};
foreach (var data in lines.Skip(1).Select(o => o.Split(','))) {
    if (!cutoffTimes.Contains(data[0])) continue;
    var currentDate = DateTime.Parse(data[0]);
    res.AddRange(forecastHorizon.Select(i => currentDate.AddMinutes(i*15).ToString(fmt) + "","" + String.Join("","", data.Skip(1))));
}
File.WriteAllLines(""sub.csv"", res);",0,None,3 ,Fri Nov 26 2010 09:28:08 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/205,/competitions/RTA,106th /danielhartmeier,Floating point predictions,"Why does the sample prediction contain floating point numbers instead of integers, like the input data? Is the hidden correct answer in floats or in integers? How is the comparison done for calculating the RMSE - in floating point arithmetic, or integer (truncating any prediction's floats)? Thanks, Daniel",0,None,4 ,Fri Nov 26 2010 18:03:52 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/206,/competitions/RTA,200th /dchudz,times constant for a whole day,"It looks to me like we have the same travel times (for each route) across an entire day, from 4/4/2010 2:10 to 4/5/2010 2:01:00. Is this a data quality issue? I haven't looked for more instances like this. Thanks, David",0,None,4 ,Sat Nov 27 2010 05:12:36 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/207,/competitions/RTA,None /paresh,RMSE vs percent error,"Hi, I was wondering why RMSE was chosen for this contest over mean percent error. Here, mean percent error = mean(abs(actual - predicted) / predicted). Let us consider this example: there are two cases - one with travel time 100 and the other with travel time 1000. Let us say an algorithm predicts 300 for the first case and 1200 for the second. Now, RMSE will penalize the algorithm equally in both of these cases. However, in the first case the algorithm made a 200% error in prediction, whereas it made only a 20% error in the second case. In a practical setting, this would correspond to a GPS making an error of 20 min on a 10 min drive vs a 20 min error on a 2 hr drive (approx). Shouldn't the first case be penalized more? By using RMSE as the metric, algorithms that work better on longer stretches of road / when travel times are larger are favored.
On the other hand, percent error looks for algorithms that are equally good in all cases. Hence, wouldn't it be more fair to penalize based on the percent error metric? I realize that I'm no authority in this matter; I was just wondering why RMSE was chosen. On a side note, I tried to create a new thread but got no confirmation. Sorry if this ends up double posting.",1,None,2 ,Sat Nov 27 2010 10:13:11 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/208,/competitions/RTA,48th /salimali,Animation in Excel,"Here is some Excel macro code that animates the traffic flow through time. Just paste the code into the code section of a sheet, change the first line to reference the data file, add a reference to Microsoft Scripting Runtime, and then run the macro 'animation'. Green means good - red means slow. There are 2 lanes of traffic next to each other, but I'm not sure if they are driving in the right direction or not! The process of pasting the code in this forum causes a few rogue semicolons to appear from somewhere at the beginning of some lines. When you paste the code in Excel these lines will be red, so you will just have to delete the semicolons to tidy up the code.
Option Explicit
'need reference to microsoft scripting runtime
Const sourcefile As String = ""C:\whereever\RTA\RTAData.csv""
Const maxRows As Long = 3500

Sub animation()
    Call setFormat
    Dim maxtime() As Long
    maxtime() = getMaxTime
    Call animate(maxtime)
    MsgBox ""finished""
End Sub

Sub animate(maxtime() As Long)
    Dim fso As New FileSystemObject
    Dim ts As TextStream
    Dim s As String
    Dim v() As String
    Dim timestamp As String
    Dim i As Integer
    Dim mycolour As Integer
    Dim mycolourR As Integer
    Dim mycolourG As Integer
    Dim perc As Double
    Dim myrow As Integer
    Dim mycol As Integer
    Dim rowCount As Long
    Set ts = fso.OpenTextFile(sourcefile, ForReading)
    'header line
    s = ts.ReadLine
    rowCount = 0
    While Not ts.AtEndOfStream And rowCount <= maxRows
        rowCount = rowCount + 1
        s = ts.ReadLine
        v() = Split(s, "","")
        timestamp = v(0)
        Cells(1, 1) = timestamp
        Cells(2, 1) = timestamp
        For i = 1 To UBound(v)
            If i <= 30 Then
                myrow = 4
                mycol = i + 1
            Else
                myrow = 6
                mycol = i + 1 - 30
            End If
            If IsNumeric(v(i)) Then
                perc = (CInt(v(i)) - maxtime(1, i)) / (maxtime(2, i) - maxtime(1, i)) 'range 0-1
                mycolourG = 255 * (1 - perc) 'range 0 to 255, 255=quickest, 0=slowest
                mycolourR = 255 * perc
                Cells(myrow, mycol).Interior.Color = RGB(mycolourR, mycolourG, 0)
                Cells(myrow + 1, mycol) = CInt(perc * 100)
            Else
                Cells(myrow, mycol).Interior.Color = RGB(0, 0, 0)
                Cells(myrow + 1, mycol) = ""?""
            End If
        Next i
    Wend
    ts.Close
End Sub

Function getMaxTime() As Long()
    'this actually gets the max and min time for each section
    Dim fso As New FileSystemObject
    Dim ts As TextStream
    Dim s As String
    Dim v() As String
    Dim timestamp As String
    Dim maxtime() As Long
    Dim i As Integer
    Dim rowCount As Integer
    Set ts = fso.OpenTextFile(sourcefile, ForReading)
    'header line
    s = ts.ReadLine
    v() = Split(s, "","")
    ReDim maxtime(1 To 2, 1 To UBound(v))
    rowCount = 0
    While Not ts.AtEndOfStream And rowCount <= maxRows
        rowCount = rowCount + 1
        s = ts.ReadLine
        v() = Split(s, "","")
        timestamp = v(0)
        For i = 1 To UBound(v)
            If IsNumeric(v(i)) Then
                If v(i) > maxtime(2, i) Then maxtime(2, i) = v(i)
                If rowCount = 1 Then maxtime(1, i) = maxtime(2, i)
                If v(i) < maxtime(1, i) Then maxtime(1, i) = v(i)
            End If
        Next i
    Wend
    ts.Close
    getMaxTime = maxtime
End Function

Sub setFormat()
    'format the date display
    Range(""A1"").Select
    Selection.NumberFormat = ""[$-F800]dddd, mmmm dd, yyyy""
    Range(""A2"").Select
    Selection.NumberFormat = ""h:mm:ss;@""
    Columns(""A:A"").Select
    Selection.ColumnWidth = 50
    ActiveWindow.Zoom = 50
    Cells.Select
    ActiveWindow.DisplayGridlines = False
    putBorderRoundRoad
    Range(""C1"").Select
End Sub

Sub putBorderRoundRoad()
    'just a recorded macro
    Range(""B4:AF4"").Select
    Selection.Borders(xlDiagonalDown).LineStyle = xlNone
    Selection.Borders(xlDiagonalUp).LineStyle = xlNone
    With Selection.Borders(xlEdgeLeft)
        .LineStyle = xlContinuous
        .Weight = xlMedium
        .ColorIndex = xlAutomatic
    End With
    With Selection.Borders(xlEdgeTop)
        .LineStyle = xlContinuous
        .Weight = xlMedium
        .ColorIndex = xlAutomatic
    End With
    With Selection.Borders(xlEdgeBottom)
        .LineStyle = xlContinuous
        .Weight = xlMedium
        .ColorIndex = xlAutomatic
    End With
    With Selection.Borders(xlEdgeRight)
        .LineStyle = xlContinuous
        .Weight = xlMedium
        .ColorIndex = xlAutomatic
    End With
    Selection.Borders(xlInsideVertical).LineStyle = xlNone
    Range(""B6:AF6"").Select
    Range(""AF6"").Activate
    Selection.Borders(xlDiagonalDown).LineStyle = xlNone
    Selection.Borders(xlDiagonalUp).LineStyle = xlNone
    With Selection.Borders(xlEdgeLeft)
        .LineStyle = xlContinuous
        .Weight = xlMedium
        .ColorIndex = xlAutomatic
    End With
    With Selection.Borders(xlEdgeTop)
        .LineStyle = xlContinuous
        .Weight = xlMedium
        .ColorIndex = xlAutomatic
    End With
    With Selection.Borders(xlEdgeBottom)
        .LineStyle = xlContinuous
        .Weight = xlMedium
        .ColorIndex = xlAutomatic
    End With
    With Selection.Borders(xlEdgeRight)
        .LineStyle = xlContinuous
        .Weight = xlMedium
        .ColorIndex = xlAutomatic
    End With
    Selection.Borders(xlInsideVertical).LineStyle = xlNone
End Sub",0,None,1 Comment,Sat Nov 27 2010 12:24:36 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/209,/competitions/RTA,None /aidan2360,Strange patterns in data,"Just did a quick plot of some of the data. I'm noticing on many of the routes that there appears to be a lower bound on the time taken for a large part of the sample; however, this is broken through sometimes with fairly constant data. I've attached a screenshot which illustrates it for route 40125, where the lower bound is just above 140. I'm assuming this is some sort of default value when an error occurs?",0,None,11 ,Sun Nov 28 2010 20:17:36 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/211,/competitions/RTA,None /p4p44203,RMSE calculation,"How is the RMSE for my submission actually calculated? Is the error summed over the whole table, over rows (per cutoff time), or over columns (routes)? Please enlighten me!",0,None,11 ,Tue Nov 30 2010 14:27:02 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/212,/competitions/RTA,169th
Randomly? Or manually, possibly selecting times where the data was particularly non-average? I tried some obvious simple algorithms producing averages, picking my own random cut-off times (where data is visible) and then calculating the RMSE, and got RMSE values around 120-130. Yet when I submit any of these, the resulting official RMSE is well above 300...",0,None,4 ,Tue Nov 30 2010 15:48:12 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/213,/competitions/RTA,200th
/randomguess0,Why can't I submit my result?,"After I specify the address of my result file and click the submit button, it just returns to this same page again. My result looks like: 776731,264324,0.1011666155,95308,0.0012535123,755942,0.0004786734,712538,0.00361064729,3905,0.0005486287,372384,0.014827458,865512,0.0079907001,1052167,0.0012...... Is there anything wrong?",0,None,2 ,Wed Dec 01 2010 06:54:28 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/214,/competitions/socialNetwork,75th
/marcio0,Routes Length,Does someone know if each route has a different length? I can't find anything about this in the information provided.,0,None,2 ,Wed Dec 01 2010 20:46:02 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/215,/competitions/RTA,131st
/viveksharma0,Time order?,"Is there a time order between the training and test data? In other words, were the links in the test data formed after the links in the training data?",0,None,1 Comment,Wed Dec 01 2010 22:46:53 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/216,/competitions/socialNetwork,None
/koonkiuyan0,Is the network directed or undirected?,"I assumed it's directed because, out of the 7 million edges, only a few percent are bi-directional. Is that the case?",0,None,1 Comment,Fri Dec 03 2010 03:41:06 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/219,/competitions/socialNetwork,52nd
/hassant,More Details,"Hi dear friends, I don't have any experience in this area or with competitions like this, so I would like a bit more help and some more detail about this competition. First of all, what is our data? We have 3 csv files to download: RTAData, RTAHistorical and sampleEntry. I think we should use RTAData as input and upload a csv file like sampleEntry.csv as the result. Reading the earlier topics, I gathered that we should build a model, and people write all kinds of code, so I am confused about what to submit (code or a csv file?). Also, what kind of tools can we use? My sincere thanks in advance for your help! Hassan",0,None,5 ,Fri Dec 03 2010 12:29:28 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/220,/competitions/RTA,87th
/edwardgrech,The map,"I have two queries regarding the map (m4-map.pdf). I understand that it is a sketch, however: 1. Since Australians drive on the left, shouldn’t the pink Westbound routes be at the bottom and the blue Eastbound routes at the top? 2. The pink arrow marking route № 40095 spans over “two routes”; is this an error?
And if it is intentional, what does it mean? Thank you!",0,None,4 ,Fri Dec 03 2010 16:04:11 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/221,/competitions/RTA,None
/vsu1664,new evaluation score,"The communication within the ""can we see leaderboard"" topic has inspired me to introduce the following evaluation criterion:

new_score = test_score * (1.0 + test_score - leaderboard_score)

where the second multiplier controls the overfitting (and we are interested in reducing it). These are the corresponding results (based on the data provided by Anthony Goldbloom):

rank  new_score  test_score  leaderboard_score
1     0.712900   0.694770    0.668675
2     0.713787   0.694883    0.667679
3     0.716188   0.701088    0.679550
4     0.716308   0.701019    0.679210
5     0.716457   0.701986    0.681371
6     0.716820   0.701330    0.679244
7     0.717328   0.702781    0.682082
8     0.719240   0.695195    0.660608
9     0.720143   0.698278    0.666965
10    0.720282   0.695559    0.660015
11    0.720973   0.696868    0.662278
12    0.721191   0.698005    0.664787
13    0.721247   0.704239    0.680088
14    0.721438   0.699849    0.669001
15    0.721514   0.704020    0.679171
16    0.721609   0.697436    0.662776
17    0.721667   0.696033    0.659205
18    0.721797   0.704714    0.680473
19    0.721821   0.695987    0.658869
20    0.721959   0.696322    0.659504

Remark: the idea of the ""new_score"" is not really new and has been used before.",0,None,1 Comment,Sun Dec 05 2010 01:45:58 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/222,/competitions/chess,11th
/britvich,New Method for Revealing Data and Evaluating Models,"I’ve been thinking a lot about the Chess Ratings Competition and Kaggle competitions in general, and how they could be improved. The PDF below outlines the idea. It solves many of the problems with the current ways we test and evaluate our models. I’m curious what you think. Ron",0,None,7 ,Sun Dec 05 2010 05:25:15 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/223,/competitions/chess,29th
/konstantinsavenkov,Validity of the control data,"Anthony, could you please explicitly state that the speed data used for estimating prediction quality is _valid_, i.e. received from properly working sensors. I ask this to make sure I need to predict traffic speed fluctuations, not the laws of sensor malfunction :-) Regards, Konstantin Savenkov.",0,None,6 ,Wed Dec 08 2010 15:32:14 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/225,/competitions/RTA,77th
/wcukierski,Rules and Regulations,"Hey Kaggle folks, as this website grows and these competitions become more popular, I think you'll find it in your best interest to start clarifying and enforcing certain rules. For example, your terms state that ""No individual or entity may register more than once (for example, by using a different username)"", but this rule alone leaves certain gray areas. Are teams allowed to merge? Is one individual allowed to be on multiple teams? The end of the Netflix competition was a flurry of ""mergers and acquisitions"" as teams joined together to blend results. Is this allowed here? If you want to discourage this teaming up, it may make sense to close the contest to new teams a certain number of weeks before conclusion. Otherwise, you may see dummy accounts created for the purposes of refining final predictions or joining teams. I'm not trying to call anybody a cheater or point fingers.
I just know that if the site gets more popular and the prizes grow in value, this kind of gaming of the system will become more prevalent. It would be helpful to have the rules of each contest explicitly shown up front so that there are no surprises at the end. Thanks for listening!",0,None,1 Comment,Thu Dec 09 2010 22:41:30 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/226,/competitions/socialNetwork,2nd
/martinreichert,Yannis Sismanis (outis) - 1st place - any detailed description?,"I am very interested in a more detailed description of Yannis Sismanis' (Outis) 1st-place solution. The last message from him that I found here in the forum is from November 18, where he told us he would publish his documentation in a few days... Did he publish it in any other place? Thanks a lot for any hint. Martin",0,None,4 ,Fri Dec 10 2010 11:52:22 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/227,/competitions/chess,3rd
/dirknbr,data error,Should Country.of.Birth in column BH be Country.of.Birth.2?,0,None,9 ,Mon Dec 13 2010 11:34:05 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/229,/competitions/unimelb,76th
/antgoldbloom,Sample R Code,"Attached is some R code to create a GLM entry for this competition. As always, happy to hear feedback from others about how this could have been done more elegantly. Anthony",0,None,2 ,Mon Dec 13 2010 14:37:58 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/230,/competitions/unimelb,153rd
/ahmedjawad,Availability of data for research,"Hi dear RTA, I am a PhD student at the Fraunhofer Institute, Germany, and I am participating in the competition. My thesis relates to the prediction of traffic, and I have a natural interest in publishing my method and results on this data set in some conference/journal. Am I allowed to do that? Otherwise participation does not make sense to me.",0,None,2 ,Mon Dec 13 2010 15:26:15 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/231,/competitions/RTA,None
/datalev,What is the meaning of missing data?,Hi Anthony: what is the meaning of missing data (Mar 4 2010 for example)? Not recorded? No traffic? Or left out on purpose?,0,None,4 ,Mon Dec 13 2010 21:37:45 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/232,/competitions/RTA,None
/p4p44203,RFCD/SEO Code definition missing,"The data page promises definitions for the RFCD and SEO codes. Could you add those, please?",0,None,6 ,Thu Dec 16 2010 19:23:24 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/235,/competitions/unimelb,140th
/ssalahi,What is the difference between outbound and inbound node?,I have read the description of the data.
Could you please describe in more detail what the difference between an outbound and an inbound node is.,0,None,7 ,Sat Dec 18 2010 10:47:37 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/236,/competitions/socialNetwork,11th
/salimali,how many entries?,"Anthony, is it still the case that all entries made will get a chance of winning?",0,None,1 Comment,Sat Dec 18 2010 12:30:56 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/237,/competitions/unimelb,192nd
/pvk680,Eligibility for Prize,"On the description page, it is mentioned that the winning method must be implementable by the University of Melbourne. Does this have any implications for the use of specific software or techniques? If so, please let us know about that in more detail.",0,None,4 ,Sat Dec 18 2010 22:31:08 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/238,/competitions/unimelb,None
/grahamdennis,"Routes 40045, 41125, 41135, 41145 always failing?","Hi, I've just had a quick look at the data from RTAError.csv, and it appears as though some routes are marked as always having failing sensors despite there being some (apparently) meaningful data in RTAData.csv for those routes. The routes which always have the value 1 in RTAError.csv are 40045, 41125, 41135 and 41145. There are also a few other routes whose sensors function only occasionally. Does anyone know what's going on? Cheers, Graham Dennis",0,None,2 ,Sun Dec 19 2010 09:13:45 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/239,/competitions/RTA,218th
/rambeaux,Internal consistency of data,"Hi all, I was looking at a time-series view of successful and unsuccessful grants for each PersonID within the dataset, and I am not sure that the 'Successful Grant' and 'Unsuccessful Grant' fields are consistent with the outcomes of past grant applications within the dataset. For example, consider applications where PersonID 407 is the primary applicant. From November 2005 through to January 2007, this applicant has 7 applications where the Grant Status is equal to 1; however, on subsequent applications for this person, the value of the 'Number of Successful Grants' field remains 1, even in applications lodged in 2009. I understand there may be a lag between application and success, but surely not 3 or more years. Could there be a problem with the dataset, or have I missed something?",0,None,8 ,Mon Dec 20 2010 22:59:28 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/240,/competitions/unimelb,72nd
/mdagost,Is the probability interval open or closed?,"In the rules, it says that we need to provide the ""probability of success - between 0 and 1"". Is that interval open or closed? That is, is there any problem if our predictions return identically 0 or 1? Thanks! Michelangelo",0,None,1 Comment,Wed Dec 22 2010 17:13:30 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/241,/competitions/unimelb,62nd
/antgoldbloom,Important information regarding external data and model inputs,"Many thanks to everyone for all your great activity on this fascinating problem - insightful questions and comments on the forum, good early results on the leaderboard and interesting discussions! There have been a lot of questions about exactly what constitutes an acceptable model for the RTA.
So far, my guidance on this matter has possibly been too fuzzy, and I hear that a lot of you are looking for more definite rules. Therefore, we have come up with the following specific rule regarding the allowed model inputs. Your model can be of any form you like, as long as it takes its input only from the following parameters:
- Time of prediction
- Day of week, Is holiday?, Month of year
- Route number to be predicted
- The time taken for route r at date/time t (where r is any route, and t is any time less than the date/time being predicted), for as many routes and date/times as you wish
- The sensor accuracy measurements for any routes r and dates/times t (defined as above)
- The estimated route distances (as provided by Kaggle)
To clarify, the following are not permitted:
- The use of any data other than those provided by Kaggle for this competition and the public holiday list at [Link]:http://www.industrialrelations.nsw.gov.au/About_NSW_IR/Public_Holidays.html
- The time taken for any routes ""in the future"" (compared to the prediction being made) - your model can still be trained using all data, as long as the resultant model only uses the inputs listed above.
Furthermore, the algorithm must not be encumbered by patent or other IP issues, and must be fully documented such that the RTA can completely replicate it without relying on any ""black box"" libraries or systems.",0,None,16 ,Thu Dec 23 2010 09:40:44 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/242,/competitions/RTA,255th
/predictor,Who Analyzes the Analysts?,I found it interesting to note the relationship between the number of submissions and leaderboard performance: [Link]:http://farm6.static.flickr.com/5001/5287397111_4e6ed371df_b.jpg.,0,None,2 ,Fri Dec 24 2010 14:15:49 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/243,/competitions/unimelb,78th
/tropics,Not in leaderboard?,"Hello, my team is not appearing in the leaderboard. I suspect we are tied with another team. Is this a bug, or am I overlooking something? Thanks! Jose",0,None,1 Comment,Tue Dec 28 2010 16:49:04 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/244,/competitions/RTA,1st
/byang1,visualization software,"Hi, can anyone recommend a free or shareware visualization package suitable for the data used in this contest?",0,None,2 ,Wed Dec 29 2010 07:03:34 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/245,/competitions/socialNetwork,2nd
/dchudz,animation created in R (code included),"I wrote some R code for creating [Link]:http://eigensomething.blogspot.com/2011/01/video-from-kaggle-traffic-prediction.html out of this data. Maybe someone will find it enjoyable, or useful.",0,None,5 ,Sun Jan 02 2011 08:32:26 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/246,/competitions/RTA,None
/byang1,AUC Calculation Check,"Hi, I'd like to get some verification of my AUC calculation routine. I put together [Link]:http://members.shaw.ca/byang/b/CheckAUC.zip containing 5 .csv files. Each .csv file contains 4480 lines, and each line contains 3 columns. The first 2 columns are all zeros, and the 3rd column is a floating-point ""prediction"" value. Now let's say the true answers are 0 for the first 2240 lines, and 1 for the second 2240 lines.
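A minimal way to cross-check such a file yourself, assuming Python with scikit-learn (the file name is illustrative):

import csv
from sklearn.metrics import roc_auc_score

with open('a.csv') as f:
    scores = [float(row[2]) for row in csv.reader(f)]  # 3rd column holds the predictions

# True answers as described above: 0 for the first 2240 lines, 1 for the second 2240.
labels = [0] * 2240 + [1] * 2240
print(roc_auc_score(labels, scores))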
Here are the AUC values I calculated:
a.csv: 0.5373
b.csv: 0.7626
c.csv: 0.8092
d.csv: 0.8454
e.csv: 0.9262
Can someone verify these numbers? I'd especially like to ask the contest organizers to calculate the AUC on these files too, just to make sure my AUC routine is correct. :) Thanks.",0,None,3 ,Mon Jan 03 2011 22:54:22 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/247,/competitions/socialNetwork,2nd
/nilxela,"Where is Team ""One Old Dog""?","Where is Team ""One Old Dog""? Also, the website is throwing some kind of error while browsing: ""The team: has selected too many submissions: 19"". Thanks",0,None,1 Comment,Tue Jan 04 2011 20:39:32 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/248,/competitions/R,4th
/nottelling,in the money indicator,"Anthony: does the in-the-money indicator (green star) actually mean that the user's RMSE is lowest on the hidden part of the test set, or not? Could it happen that the 2nd team on the leaderboard gets the in-the-money star, if their algorithm happens to be the best on the hidden set rather than the first team's? Thank you.",0,None,1 Comment,Wed Jan 05 2011 16:33:34 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/249,/competitions/RTA,None
/borisgorelik,CHIEF_INVESTIGATOR vs. PRINCIPAL_SUPERVISOR,CHIEF_INVESTIGATOR vs. PRINCIPAL_SUPERVISOR: are these the same roles?,0,None,1 Comment,Sun Jan 09 2011 17:02:15 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/250,/competitions/unimelb,None
/borisgorelik,Negative number of years,"The fields ""No..of.Years.in.Uni.at.Time.of.Grant.#"", where # is 1, 2, ..., contain the value ""Less than 0"". How should it be interpreted?",0,None,1 Comment,Sun Jan 09 2011 17:05:17 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/251,/competitions/unimelb,None
/dirknbr,Congratulations,"... to IND CCA for winning this contest. It was a close race at the end, with some new entrants racing to the top. I hope you enjoyed this. I will release the sampling code if you want.",0,None,5 ,Tue Jan 11 2011 23:12:11 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/252,/competitions/socialNetwork,77th
/chefele,Sharing Techniques Used,"Congratulations to all the leaders in this contest! Unfortunately, these forums have been pretty quiet during the contest, but now that it's over, I'm wondering if people are willing to disclose the techniques they used so others can learn something new. From a few emails with a couple of contestants, I know there is a mix of techniques being used out there: KNNs, neural nets, SVDs, node metrics (like Adamic/Adar, Jaccard, number of common neighbors), and some graph-based techniques (shortest paths, edge-betweenness centrality, etc.). So, what techniques did you use? What worked, and what didn't? Thanks!",0,None,5 ,Tue Jan 11 2011 23:52:23 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/254,/competitions/socialNetwork,12th
/jhoward,A thought for future competitions,"For this competition, there was a really useful compendium of approaches, along with rankings of them, in the paper ""The Link Prediction Problem for Social Networks"" (2004).
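For a flavor of the baseline metrics surveyed there (and mentioned in the post above), here is a minimal sketch of two of them, common neighbors and Adamic/Adar, on a toy undirected graph (an editorial illustration; the graph is invented):

import math

# Toy adjacency map; a real graph would come from the contest edge list.
neighbors = {
    'a': {'b', 'c'},
    'b': {'a', 'c', 'd'},
    'c': {'a', 'b', 'd'},
    'd': {'b', 'c'},
}

def common_neighbors(u, v):
    return len(neighbors[u] & neighbors[v])

def adamic_adar(u, v):
    # Shared neighbors count for more when they themselves have few contacts.
    return sum(1.0 / math.log(len(neighbors[z])) for z in neighbors[u] & neighbors[v])

print(common_neighbors('a', 'd'))  # 2
print(adamic_adar('a', 'd'))       # 2 / log(3), about 1.82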
I only discovered this paper on the last day of the competition, so I wasn't able to use any of its insights to improve my algorithm (4th place). Yes, that does suggest I'm pretty crap at Googling... but it also makes me wonder: perhaps competition hosts in the future should consider including links to a couple of key papers on their competition description page. This would probably improve the overall results at the end of the competition, by ensuring that everyone could at least use the current best approaches as a starting point. Of course, some people may as a result become overly narrow in their focus: by looking only at extensions of the existing methods, they may miss new ideas. But I'm sure some people would be able both to harness the existing research and to look at new approaches.",2,None,4 ,Wed Jan 12 2011 22:04:20 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/255,/competitions/socialNetwork,4th
/arvindnarayanan,How we did it (IND CCA),"Hi everyone, this is Arvind Narayanan from the IND CCA team, and I’d like to share our techniques. First things first: in case anyone is wondering about our team name, we are all computer scientists, and most of us work in cryptography or related fields. IND CCA refers to a [Link]:http://en.wikipedia.org/wiki/Ciphertext_indistinguishability#Indistinguishability_under_chosen_ciphertext_attack.2Fadaptive_chosen_ciphertext_attack_.28IND-CCA.2C_IND-CCA2.29 of an encryption algorithm. Other than that, no particular significance. I myself work in computer security and privacy, and my specialty is de-anonymization. That explains why the other team members (Elaine Shi, Ben Rubinstein, and Yong J Kil) invited me to join them, with the goal of de-anonymizing the contest graph and combining that with machine learning. To clarify: our goal was to map the nodes in the training dataset to the real identities in the social network that was used to create the data. That would allow us to simply look up the pairs of nodes from the test set in the real graph to see whether or not the edge exists. There would be a small error rate, because some edges may have changed after the Kaggle crawl was conducted, but we assumed this would be negligible. Knowing that the social network in question is Flickr, we crawled a few million users' contacts from the site. The crawler was written in Python, using the curl library, and was run on a small cluster of 2-4 machines. While our crawl covered only a fraction of Flickr users, it was biased towards high-degree nodes (we explicitly coded such a bias, but even a random walk is biased towards high-degree nodes), so we were all set. By the time we had crawled 1 million nodes, we were hitting 60-70% coverage of the 38k nodes in the test set. But more on that later. Our basic approach to deanonymization is described in [Link]:http://33bits.org/2009/03/19/de-anonymizing-social-networks/. Broadly, there are two steps: “seed finding” and “propagation.” In the former step we somehow deanonymize a small number of nodes; in the latter step we use these as “anchors” to propagate the deanonymization to more and more nodes. In this step the algorithm feeds on its own output. Let me first describe propagation, because it is simpler. As the algorithm progresses, it maintains a (partial) mapping between the nodes in the true Flickr graph and the Kaggle graph.
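The similarity computation at the heart of this propagation, described in the next paragraph, can be sketched as follows (an editorial illustration; the function and container names are invented):

import math

def mapped_cosine(kaggle_nbrs, flickr_nbrs, mapping):
    # mapping: dict from Kaggle node -> Flickr node, built so far.
    # Translate the Kaggle node's neighbors into Flickr identities where known.
    mapped = {mapping[n] for n in kaggle_nbrs if n in mapping}
    # Keep only the Flickr node's neighbors that are images of mapped nodes.
    known = {n for n in flickr_nbrs if n in mapping.values()}
    if not mapped or not known:
        return 0.0
    return len(mapped & known) / math.sqrt(len(mapped) * len(known))

On the worked example below (3 mapped neighbors on each side, 2 shared), this returns 2 / (√3·√3) = ⅔.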
We iteratively try to extend the mapping as follows: pick an arbitrary as-yet-unmapped node in the Kaggle graph, find the “most similar” node in the Flickr graph, and if they are “sufficiently similar,” map them to each other. Similarity between a Kaggle node and a Flickr node is defined as the cosine similarity between the already-mapped neighbors of the Kaggle node and the already-mapped neighbors of the Flickr node (nodes mapped to each other are treated as identical for the purpose of the cosine comparison). In the diagram, the blue nodes have already been mapped. The similarity between A and B is 2 / (√3·√3) = ⅔. Whether or not edges exist between A and A’ or B and B’ is irrelevant. There are many heuristics that go into the “sufficiently similar” criterion, which will be described in our upcoming paper. There are two reasons why the similarity between a node and its image may not be 100%: because the contest graph is slightly different from our newer crawled graph, and because the mapping itself might have inaccuracies. The latter is minimal, and in fact the algorithm occasionally revisits already-mapped nodes to correct errors in the light of more data. I have glossed over many details: edge directionality makes the algorithm significantly more complex, and there are some gotchas due to the fact that the Kaggle graph is only partially available. Overall, however, this was a relatively straightforward adaptation of the algorithm in the abovementioned paper with Shmatikov. Finding seeds was much harder. Here the fact that the Kaggle graph is partial presented a serious roadblock, and it rules out the techniques we used in the paper. The idea of looking at the highest-degree nodes was obvious enough, but the key observation was that if you look at the nodes by highest in-degree, then the top nodes in the two graphs roughly correspond to each other (whereas ordered by out-degree, only about 1/10 of the Flickr nodes are in the Kaggle dataset). This is because of the way the Kaggle graph is constructed: all the contacts of each of the 38K nodes are reported, and so the top in-degree nodes will show up in the dataset whether or not they're part of the 38K. Still, it's far from a straightforward mapping if you look at the top 20 in the two graphs. During the contest I found the mapping by hand after staring at two matrices of numbers for a couple of hours, but later I was able to automate it using simulated annealing. We will describe this in detail in the paper. Once we got 10-20 seeds, the propagation kicked off fairly easily. On to the results. We were able to deanonymize about 80% of the nodes, including the vast majority of the high-degree nodes (both in- and out-degree). We're not sure what the overall error rate is, but for the high-degree nodes it is essentially zero. Unfortunately this translated to only about 60% of the edges. This is because the edges in the test set aren't sampled uniformly from the training graph; the sampling is biased towards low-degree nodes, and deanonymization succeeds less often on low-degree nodes. Thus, deanonymization alone would have been far from sufficient. Fortunately, Elaine, Ben and Yong did some pretty cool machine learning which, while it would not have won the contest on its own, would have given the other machine-learning solutions a run for their money. Elaine sends me the following description:
==== Our ML algorithm is quite similar to what vsh described earlier. However, we implemented fewer features, and spent less time fine-tuning parameters.
That's why our ML performance is a bit lower, with an *estimated* AUC of 93.5-94%. (Note that the AUC for the ML is not corroborated with Kaggle due to the submission quota; rather, it is computed over the ground truth from the deanonymization. The estimate is biased, since the deanonymized subset is not sampled randomly from the test set. [The 93.5-94% number is after applying Elaine’s debiasing heuristic. -- Arvind]) We used Random Forest over a bunch of features, including the following (with acknowledgements to the ""social network link prediction"" paper):
1) whether the reverse edge is present
2) Adamic/Adar
3) |intersection of neighbors| / |union of neighbors|
4) neighborhood clustering coefficient
5) localized random walk, 2-4 steps
6) degree of n1/n2
7) ... and a few other features, mostly 1-3 hop neighborhood characteristics.
For some of the above-mentioned features, we computed values for the out-graph, the in-graph, and the bi-directional graph (i.e., the union of the in-graph and out-graph). We wanted to implement more features, but did not due to lack of time. The best feature by itself is the localized random walk of 3-4 steps; this feature on its own has an *estimated* AUC of 92%. The Random Forest implementation used the Python Milk library: http://pypi.python.org/pypi/milk/
Combining DA and ML:
1) Naive algorithm: for each test case, output the DA prediction if one exists, else the ML score.
2) Improved version: for edge (n1, n2), if n1 and/or n2 has multiple DA candidates, and all candidates unanimously vote yes or no, output the corresponding prediction.
Finally, like everyone else, there was some trial-and-error tuning and adjustment. ====
Once we hit No. 1, we emailed the organizers to ask if what we did was OK. Fortunately, they said it was cool. Nevertheless, it is interesting to wonder how a future contest might prevent deanonymization-based solutions. There are two basic approaches: construct the graph so that it is hard to deanonymize, or require the winner to submit their code for human inspection. Neither is foolproof; I’m going to do some thinking about this, but I’d like to hear other ideas. Last but not least, a big thanks to Kaggle and IJCNN!",0,None,9 ,Sat Jan 15 2011 10:21:29 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/257,/competitions/socialNetwork,1st
/byang1,Contesting the Result of This Contest,"I read IND CCA's ""How we did it"" post with great interest. First of all, congratulations to IND CCA for an impressive deanonymization effort. But, at the risk of being a sore loser, I think the contest organizers erred in accepting IND CCA's ""solution"" to the contest, because a significant part of it is basically looking up the answers on Flickr's web site. I'd like to respectfully ask the contest organizers to remove IND CCA from their winning position. I think it goes without saying that you can't just go to the source of the data to look up the answers, no matter how cleverly done, in any contest. There's no rule in this contest explicitly saying so, but frankly such a rule is not necessary; common sense dictates that this form of solution should not be acceptable. We seem to have a case confirming the ""common sense is not so common"" quote here. Once it was revealed that the contest data came from Flickr, the idea of crawling Flickr's web site for answers occurred to me too, and I'm sure it occurred to many contestants as well.
But I quickly dismissed it because I thought, and still think, that it is an obvious (perhaps blindingly obvious) form of cheating. Consider a similar situation that occurred in the ""RTA Freeway Travel Time Prediction"" contest, where contestant Jeremy Howard found some detailed traffic data on an Australian government web site. Jeremy asked in the forum if using this data as answers would be considered cheating, and the answer was ""this would most definitely be considered cheating"". You can see it in this thread: [Link]:http://www.kaggle.com/view-postlist/forum-29-rta-freeway-travel-time-prediction/topic-195-using-additional-datasets-eg-rain-fog-etc/task_id-2467 Again, I think IND CCA's ""solution"" should not be acceptable for this contest.",0,None,8 ,Sun Jan 16 2011 22:09:01 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/258,/competitions/socialNetwork,2nd
/grec4852,Any bias in train-test split?,"When generating false entries for the test set, why sample within the prim_universe and sec_universe sets? That means the edges in the train set with outdegree=1 or indegree<=1 are definitely true entries. This affects about 5% of the entries and changes the AUC result dramatically. BTW, there are still 5 entries in the false set with indegree=1, according to the published result.",1,None,3 ,Mon Jan 17 2011 11:33:49 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/259,/competitions/socialNetwork,5th
/del=0ab3b8c772f1c5cd,DAE Have trouble with line count error?,I've been trying to submit but keep getting an error saying I have the incorrect number of lines. I've checked every which way and I'm sure I have 291 lines. I'm using Linux. Could it be a file encoding issue? Has anybody else had this problem before? Thanks.,0,None,1 Comment,Tue Jan 18 2011 10:22:00 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/260,/competitions/RTA,None
/rcpinto,Prize for Teams,"""The winner receives free registration to the [Link]:http://www.ijcnn2011.org/ (San Jose, California July 31 - August 5, 2011), which is valued at $950. The winner will also be invited to present their solution at the conference."" Is this valid for every member of the winning team?",0,None,1 Comment,Wed Jan 19 2011 22:23:06 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/262,/competitions/stayalert,156th
/predictor,Use of lagged values permitted?,"Is the use of lagged values permitted?
In other words, for a given trial, is it allowed to use data from time indices 1 through 8 when predicting time index 9?",0,None,10 ,Thu Jan 20 2011 02:08:46 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/263,/competitions/stayalert,114th
/dirknbr,3 variables are redundant,"P8, V7 and V9 contain only zeros, so they could easily be left out.",0,None,1 Comment,Thu Jan 20 2011 21:43:42 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/264,/competitions/stayalert,91st
/zachpardos,How was the test set generated?,Are the same trials in both sets? And is the test set a chronological extension of each trial or randomly sampled? Thanks,0,None,8 ,Fri Jan 21 2011 03:16:40 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/265,/competitions/stayalert,22nd
/edwardraff,Discrete or Continuous Values,"Clearly most of the values, such as P1, are continuous, but it is not clear for all of them. For example, E3 only has whole numbers in a small range: is it a categorical attribute with a limited number of values, or should we assume it is a continuous variable? E4 is also perplexing, as it has only specific whole-number values but across a great range. Is E4 continuous, with the sensor that captured the data just not that fine-grained, or is it categorical? Could you please provide a list of which attributes are categorical and which are continuous? Also, if there is a finite range of possible values for the continuous variables, that as well, please.",0,None,1 Comment,Sat Jan 22 2011 04:11:11 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/266,/competitions/stayalert,None
/samosvategor,Were units changed?,It seems that linear transforms (x -> a * x + b) were applied to some of the data. Am I right or were the data left in their original units? And in general: what is the reason for hiding the actual meaning of the variables?,0,None,1 Comment,Sun Jan 23 2011 17:57:38 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/267,/competitions/stayalert,163rd
/inference,Relationship between trials,"What is the relationship between trials? Trials are not independent of each other. There is an obvious dependency between nearby trials; the attached image plots the mean alertness during each trial, and structure is visible.
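That per-trial summary is easy to reproduce; a minimal sketch, assuming pandas and the training file's TrialID and IsAlert columns:

import pandas as pd
import matplotlib.pyplot as plt

train = pd.read_csv('fordTrain.csv')  # training file name as distributed; adjust if needed
trial_means = train.groupby('TrialID')['IsAlert'].mean()

# Structure between nearby trials shows up as runs and bands in this plot.
trial_means.plot(style='.')
plt.show()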
In particular, some regions show a batching into trials of size 11. It's hard to imagine that data with this type of structure comes from a random allocation of trials between the test and training sets. Thanks.",0,None,4 ,Sun Jan 23 2011 19:37:12 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/269,/competitions/stayalert,1st
/goodi1342,Sweeping terms and conditions,"Competitors are required to sign up to extreme conditions, such as item 9 in the contract, which demands that winners ""submit to the Competition Host any Model used or consulted by You in generating Your entry""... It makes no sense to claim rights over the tools that we use to create the results. The competition is about results only, not about all our knowledge. Also, such draconian legal terms do not exist on any other competition site that I know of. If these conditions are not eliminated, it is hard to see how any professional could agree to take part here.",0,None,1 Comment,Mon Jan 24 2011 00:18:35 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/270,/competitions/unimelb,None
/stromnov,Route 40095,"Readings for Route 40095:
RTAData.csv, line 1: 213 deciseconds
RTAError.csv, line 1: 0 (no errors)
RouteLengthApprox.csv, line 20: 2500 meters
speed = 2500 m / 21.3 s * 3.6 ≈ 422 kph
Most readings (more than 99%) for Route 40095 are wrong, given the route length specified in RouteLengthApprox.csv. What kind of data should a submission contain for Route 40095? 1) Real (computed) data? 2) Specially ""errored"" data (some strange value between 191 and 212 deciseconds)?",0,None,1 Comment,Tue Jan 25 2011 15:52:55 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/272,/competitions/RTA,249th
/ahassaine,Is prize eligible if the paper has not been submitted?,"In other words, can the winner keep his/her method secret?",0,None,2 ,Wed Jan 26 2011 08:19:21 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/273,/competitions/stayalert,14th
/minghen,Final Evaluation,"Hi there, which submission will be evaluated for the final AUC? Regards,",0,None,2 ,Thu Jan 27 2011 05:53:55 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/274,/competitions/stayalert,143rd
/ccccat,plagiarism,"To all the people who asked me to “send one of your submissions with rank under 20” because the RTA competition is the midterm project for their Data Mining course, and to those who were planning to ask: please be advised that I do not support academic plagiarism. Mooma",0,None,3 ,Fri Jan 28 2011 15:40:26 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/276,/competitions/RTA,2nd
/dchudz,sharing method/result?,"I'm interested in what people would think of a participant sharing his/her method or results. I'm sure this has been discussed in the forums of other competitions, but I don't want to look there, and I'd like to know what the participants in this one think. Personally, I'm about as interested in having a place for sharing and dialogue about modeling as I am in the competitive aspects.
So if I give up (or if I just feel like it), I'd like it to be okay for me to share what I've learned, even before the end of the competition. On the other hand, I could imagine someone who's invested in the competitive aspects feeling like that's giving an 'unfair' advantage to people who might benefit from the information. On the third hand, I'd sort of like to suggest to those people that maybe that kind of ""sharing"" should be considered 'just part of the game', and therefore entirely fair. What do you think? Thanks, David",0,None,5 ,Thu Feb 03 2011 14:33:42 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/278,/competitions/stayalert,18th
/meliponemoody,Solutions,"I assume that the solutions for the test dataset will be posted at the end of the challenge, but I'm also interested in recognizing individual patterns in this dataset, so I was wondering if we could also have an anonymized ID added to the dataset when the solutions are provided?",0,None,2 ,Thu Feb 03 2011 21:17:18 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/279,/competitions/stayalert,32nd
/wuweiw,meaning of error data?,"Although I know that a value in the error data is the proportion of loops that have failed, how should it affect the interpretation of the corresponding travel time? Does the failure of some loops necessarily mean that the travel time is wrong? Can I think of the error as the probability of the travel time being wrong? For example, if the error is 0.5, does it mean that the travel time is wrong with probability 0.5, and if the error is 0.33, that it is wrong with probability 0.33? Or should I think of the error as the extent to which the travel time is wrong? Thank you.",0,None,1 Comment,Sun Feb 06 2011 10:44:59 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/281,/competitions/RTA,None
/mdagost,How does the contest end?,"I have a question about how the contest ends. Somewhere it says that the winner will be chosen based on the other 75% of the test dataset (the leaderboard is calculated using only 25%, which will be discarded). Does this mean that we'll get additional data to apply our method to? Or is all of the test data in the current test set that we have access to, with the leaderboard using only 25% of what we submit to calculate the standings? Thanks!",0,None,3 ,Mon Feb 07 2011 17:06:11 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/282,/competitions/unimelb,62nd
/hernandezurbina,"null, 0 and - values","Hi, I've just downloaded the data set, and looking through it I see that the variables can take either a numeric value, a '-', or simply nothing. I'd like to ask: what's the difference between a zero value, a null value and '-'? Regards, Victor.",0,None,2 ,Tue Feb 08 2011 14:26:06 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/283,/competitions/informs2010,None
/ccccat,Congratulations,Congratulations to Team Irazu on the best RTA Travel Time Prediction! Mooma,0,None,14 ,Sun Feb 13 2011 23:49:01 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/284,/competitions/RTA,2nd
/hungeg,Binomial deviance,"Hi, how is the binomial deviance for the leaderboard computed?
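For the standard two-class case, binomial deviance is the mean negative log-likelihood of the submitted probabilities; a minimal sketch (an editorial illustration, not the official scoring code):

import math

def binomial_deviance(y, p, eps=1e-15):
    # y: outcomes in {0, 1}; p: predicted probabilities of y = 1.
    total = 0.0
    for yi, pi in zip(y, p):
        pi = min(max(pi, eps), 1 - eps)  # clip to avoid log(0)
        total += yi * math.log(pi) + (1 - yi) * math.log(1 - pi)
    return -total / len(y)

print(binomial_deviance([1, 0, 1], [0.9, 0.2, 0.6]))  # ~0.28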
The equation I know for binomial deviance is
-sum_(i,j) outcome_(i,j) * log(predicted_probability_(i,j))
But there are problems with applying this formula:
1. We submit only one probability value per game, which means that this type of deviance works only for a two-fold outcome. But in our case we obviously have three possibilities (W/L/D).
2. The range of binomial deviance is from 0 to positive infinity (even for only one observation). However, looking at the leaderboard I can see values ranging between 0 and 1 (actually from 0.25 to 0.99, but it fits neatly into (0,1)).
So I would appreciate some clarification on how the competitors' scores are calculated. Cheers, Gergely",0,None,9 ,Mon Feb 14 2011 02:03:52 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/285,/competitions/ChessRatings2,None
/abcdefg3381,Any plan to publish the answer of the competition?,"Hi everyone, I'm new to this website. Is there any plan from kaggle.com or IJCNN to publish the answers to this competition, i.e. of the 8,960 edges, which 4,480 are true? It would be good if we could further test our algorithms. Cheers, XF",0,None,8 ,Mon Feb 14 2011 04:38:15 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/286,/competitions/socialNetwork,None
/rcpinto,Winning Ideas,"Congratulations to everyone, especially the top competitors! So, please, I'd like to ask if some of the top competitors could share their ideas here. No code, no data, just the main ideas that you think contributed to your success! Thanks in advance!",0,None,1 Comment,Mon Feb 14 2011 12:43:39 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/287,/competitions/RTA,211th
/savvy5565,How to do justice to using fewer P features?,"I'd like to know how the final submission will be evaluated: besides the correct rate, how does the number of P features used count, given that the description says using fewer of the P features is considered valuable? Say one entry gets 80% using 8 P's, another gets 75% using 5, and another gets 70% without using any P... how would you rank those three?",0,None,1 Comment,Tue Feb 15 2011 19:00:25 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/288,/competitions/stayalert,None
/imaxus,Games played in the same month.,"Hi, Kaggle team! Suppose game 'A' has PTID t_1 and game 'B' has PTID t_2, and t_1 < t_2 holds. Am I right that there is no guarantee that game 'A' was played earlier than game 'B'? I mean in the case of the same month.",0,None,3 ,Tue Feb 15 2011 23:28:34 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/289,/competitions/ChessRatings2,None
/inference,Ford,"I'm curious: what is the relationship between Ford and this competition? Their name is in the title and their logo is being used, but I can't find an obvious mention of this competition on a Ford website. I'm surprised they don't want to advertise this competition themselves; most companies are keen to claim any publicity they can! Thanks.",0,None,3 ,Wed Feb 16 2011 19:17:42 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/295,/competitions/stayalert,1st
/del=c80d14d0aa54bfa6,Results and Use of Data Set,"Hi, first let me say that it was a fun competition and I enjoyed it.
Now that it will be ending soon, I was wondering what happens afterwards. Will the test data with the correct answers be made available to everyone? Also, to what extent is this data set public? Do we have permission to use it in papers and publications? If so, how should it be referenced? How long will this data set remain online for reference purposes? Best Regards, Greg",0,None,7 ,Thu Feb 17 2011 08:33:32 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/296,/competitions/unimelb,None
/pavelbelchev,Ignore if you find this question stupid...,"... but isn't it possible, by sending multiple submissions and comparing their performance (through the binomial deviance), to figure out some crucial information about the outcomes of the chess games in the test database? I am not exactly a master of quantitative statistical analysis; however, I think that in this way it is possible (even though quite time-consuming) to recover the actual scores of each single game played. Moreover, you would not even need any information on the identity of the players, and the current rules do not seem to define this as cheating. To say the least, the binomial deviance provides good data on the number of draws that have happened so far. This approach would of course undermine the otherwise nice idea of predicting the outcomes as well as possible based solely on previous data and actual statistical skill. For this reason I would suggest that you publicly update the leaderboard only for games that have already finished (say, after the first month of the 3 months of test-dataset games), and publish the overall standings as soon as all games have finished. In this way you will still have a good number of observations on which the competitors can try to improve their binomial deviance score, but you will hide the ""Rosetta Stone"" and thus prevent some unwelcome submissions. By unwelcome I mean exactly those guys 'receiving faxes from the future' with the real outcomes of (potentially all) games. Of course they would not submit in a way that reveals they did so, and would mask it by allowing the necessary variance to just win the contest. In any case, if what I am afraid of actually makes sense, I do not see a different way of hindering this unwanted behavior than the one I proposed in the paragraph above. Excuse me in case what I wrote is just a misunderstanding or simple nonsense.",0,None,2 ,Thu Feb 17 2011 16:19:32 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/297,/competitions/ChessRatings2,None
/go4breakthru,public revelation,"From the rules page: ""In order to receive a main prize, the top finishers must publicly reveal all details of their methodology within seven days after the completion of the contest..."" Perhaps Jeff Sonas could spell out for us the expected ""what"" and ""how"" of this. Thanks! George Kangas",0,None,4 ,Fri Feb 18 2011 00:35:56 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/298,/competitions/ChessRatings2,94th
/dirknbr,Chessmetrics algorithm,"I was trying to code the Chessmetrics algorithm but I am struggling a bit. Below is a Python version without the padding or weighting, which I don't understand. What also seems weird is that a rating depends only on the opponents, and not on a player's own previous rating.
# m=month, w=white, b=black, s=score
# ------------------------------------------------------
# initialise every player at 2200
for p in players:
    players[p] = 2200

# aggregate results at month level
playermonth = {}
for g in games:
    m, w, b, s = g[0], g[1], g[2], g[3]
    if (w, m) not in playermonth:
        playermonth[w, m] = [0, 0, [], 0]  # sumscore, games, opponents, rating
    playermonth[w, m][0] += s
    playermonth[w, m][1] += 1
    playermonth[w, m][2].append(b)
    if (b, m) not in playermonth:
        playermonth[b, m] = [0, 0, [], 0]
    playermonth[b, m][0] += 1 - s
    playermonth[b, m][1] += 1
    playermonth[b, m][2].append(w)

# sweep through the months: a player's new rating is the average opponent
# rating plus a performance term based on the score achieved
for m in range(130):
    for p in players:
        if (p, m) in playermonth:
            # get opponent ratings
            oppsum = 0
            for o in playermonth[p, m][2]:
                oppsum += players[o]
            n = playermonth[p, m][1]
            rating = oppsum / n + 850 * (playermonth[p, m][0] / n - .5)
            playermonth[p, m][3] = rating
            players[p] = rating

# test
for g in testgames:
    w, b = g[1], g[2]
    pred = (players[w] - players[b]) / 850 + .5",0,None,3 ,Sun Feb 20 2011 10:41:45 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/299,/competitions/ChessRatings2,68th
/ejlok1,Well done to everyone!,"Hi all, first and foremost, congratulations to Jeremy Howard for winning the contest, and to the top 5 contestants. Also, well done to everyone in the top 10. It was a very, very close match, with just a tiny fraction setting us all apart, and kudos to everyone who tried their best. This was my first competition and I've enjoyed it thoroughly, so thanks to Anthony for organising it. I've learned a lot during the course of this competition and would love to hear everyone else's story and their experience. Thanks, Eu Jin",0,None,1 Comment,Sun Feb 20 2011 23:44:07 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/300,/competitions/unimelb,10th
/jhoward,I'm ineligible for the prize - congrats to Quan Sun,"Because I have recently started employment with Kaggle, I am not eligible to win any prizes. Which means the prize-winner for this comp is Quan Sun (team 'student1')! Congratulations! My approach to this competition was to first analyze the data in Excel pivot tables. I looked for groups which had high or low application success rates. In this way I found a large number of strong predictors, including by date (New Year's Day is a strong predictor, as are applications processed on a Sunday); for many fields a null value was also highly predictive. I then used C# to normalize the data into Grants and Persons objects, and constructed a dataset for modeling including these features: CatCode, NumPerPerson, PersonId, NumOnDate, AnyHasPhd, Country, Dept, DayOfWeek, HasPhd, IsNY, Month, NoClass, NoSpons, RFCD, Role, SEO, Sponsor, ValueBand, HasID, AnyHasID, AnyHasSucc, HasSucc, People.Count, AStarPapers, APapers, BPapers, CPapers, Papers, MaxAStarPapers, MaxCPapers, MaxPapers, NumSucc, NumUnsucc, MinNumSucc, MinNumUnsucc, PctRFCD, PctSEO, MaxYearBirth, MinYearUni, YearBirth, YearUni. Most of these are fairly obvious as to what they mean. Field names starting with 'Any' are true if any person attached to the grant has that feature (e.g. 'AnyHasPhd'). For most fields I had one predictor that just looks at person 1 (e.g. 'APapers' is the number of A papers from person 1), and one for the maximum over all people in the application (e.g. 'MaxAPapers'). Once I had created these features, I used a generalization of the random forest algorithm to build a model.
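As a rough stand-in for that modeling step, here is a generic random forest fit (an editorial sketch in Python with invented file and column names; the actual implementation was a custom C# library):

import pandas as pd
from sklearn.ensemble import RandomForestClassifier

features = pd.read_csv('grant_features.csv')   # hypothetical engineered-feature table, all numeric
X = features.drop(columns=['GrantStatus'])     # 'GrantStatus' is a hypothetical 0/1 target column
y = features['GrantStatus']

model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(X, y)
probs = model.predict_proba(X)[:, 1]           # predicted probability of a successful grant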
I'll try to write up some detail about how this algorithm works when I have more time, but really, the difference between it and a regular random forest is not that great. I pre-processed the data before running it through the model by grouping up small groups in the categorical variables, and by replacing each continuous column containing nulls with 2 columns (one a binary predictor that is true only where the continuous column is null, the other the original column with nulls replaced by the median). Other than the Excel pivot tables at the start, all the pre-processing and modelling was done in C#, using libraries I developed during this competition. I hope to document and release these libraries at some point, perhaps after tuning them in future comps.",3,None,4 ,Mon Feb 21 2011 03:33:44 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/301,/competitions/unimelb,1st
/antgoldbloom,Solution,"The solution file is attached to this post. Thanks all for participating, Anthony",11,None,11 ,Mon Feb 21 2011 09:08:43 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/302,/competitions/unimelb,153rd
/vatodorov,Binary or continuous predictions for the predicted outcome,"Hi, what should the type of the final predictions be: binary (0/1) or continuous in the range 0-1? I looked at the submission example, but all of the values for the predicted variable are 0s. Thanks, Valentin",0,None,3 ,Mon Feb 21 2011 15:45:58 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/303,/competitions/stayalert,77th
/go4breakthru,suggestion for Jeff: FIDE prize leader board,"The Deloitte competition and the FIDE competition have very different rules. It appears to me that a viable Deloitte competitor will necessarily be much too data-intensive to qualify for the FIDE prize. The Deloitte competitors can see how they're doing with a look at the leaderboard (I'm assuming it's all Deloitte near the top); the FIDE hopefuls, on the other hand, have no idea who their top competitors are. A separate leaderboard, or even just a ""FIDE qualified?"" column in the current one, is the remedy I'm suggesting. It will only work if the competing teams volunteer that bit of info, so let me be the first: my entry (if I ever get it together) will not be remotely qualified for the FIDE prize. Anybody else? (Edit) Perhaps some teams would wish to make separate entries: one to qualify for Deloitte, the other for FIDE. (reEdit) This damn box is making some of my text bold! How do I stop it?",0,None,7 ,Tue Feb 22 2011 17:49:58 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/304,/competitions/ChessRatings2,94th
/vatodorov,AUC for training and test datasets,"I don't have a question, but I want to comment on the calculated AUC statistics. I developed a few different predictive models using only 2/3 of the training dataset (, records) and tested them on the remaining 1/3 (200,649 records), which formed a holdout dataset. The observations were randomly distributed between the datasets. The AUCs I calculated on the holdout are in the range 0.831-0.875. The lift tables and c-statistic for the models are also quite good.
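That style of holdout check looks roughly like this (a minimal sketch on synthetic data, assuming scikit-learn):

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=9000, n_features=20, random_state=0)

# Randomly hold out 1/3 of the training data, fit on the rest, score the holdout.
X_train, X_hold, y_train, y_hold = train_test_split(X, y, test_size=1/3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(roc_auc_score(y_hold, model.predict_proba(X_hold)[:, 1]))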
/vatodorov,AUC for training and test datasets,"I don't have a question, but want to comment on the calculated AUC statistics. I developed a few different predictive models using only 2/3 of the training dataset (, records) and tested them on the remaining 1/3 (200,649 records), which form a holdout dataset. The observations were randomly distributed between the datasets. The AUCs I calculated on the holdout are in the range 0.831-0.875. The lift tables and c-statistic for the models are also quite good. However, I am surprised to see that the AUCs calculated upon submission on Kaggle's website are lower than 0.765. If the data between the train and test files is randomly distributed, I would expect the AUC for the test dataset to be similar to the one I calculated for the holdout. However, they are quite different. Any thoughts?",0,None,31 ,Wed Feb 23 2011 04:51:45 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/305,/competitions/stayalert,77th /frandom,cross validation vs leader board,"Hello all - I am seeing massive differences between my cross-validation set and the leaderboard binomial deviance results. Has anyone else observed similar issues? I am using the last 100000 games in the training set for cross-validation, discarding any games where either player has not completed 12 games, as per the test set design. V.",0,None,24 ,Wed Feb 23 2011 17:36:02 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/306,/competitions/ChessRatings2,31st /georgechen0,why final result is so different than leaderboard?,"I am not really following this competition. But I vaguely remember that [Link]:http://www.kaggle.com/team?team_id=2196 was leading with a big margin. Now Li is in position 14 ??? Two possible reasons: 1) Li (over)tuned his results toward the partial data set, or 2) the leaderboard data set is not representative of the final-result data set? I wonder if anyone cares to shed some light here? And how to prevent this from happening again?",0,None,2 ,Thu Feb 24 2011 02:31:45 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/307,/competitions/unimelb,None /meliponemoody,Number of submissions,It was not clear to me from the FAQ. Can we make several submissions?,0,None,2 ,Thu Feb 24 2011 22:53:00 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/308,/competitions/stayalert,32nd
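A minimal sketch of the holdout check vatodorov and frandom describe, assuming scikit-learn (file, column and model names are placeholders, and features are assumed numeric). One common explanation for the gap they see: rows within the same trial or time window are highly correlated, so a random row-level split leaks information and overstates performance; splitting on whole groups is a more honest estimate.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split, GroupShuffleSplit

df = pd.read_csv('train.csv')                        # placeholder file name
X = df.drop(columns=['IsAlert', 'TrialID', 'ObsNum'])  # assumed column names
y = df['IsAlert']

# Random row split (what the post describes; optimistic if rows correlate):
X_tr, X_ho, y_tr, y_ho = train_test_split(X, y, test_size=1/3, random_state=0)

# Group split by trial (usually closer to the leaderboard behaviour):
tr_idx, ho_idx = next(GroupShuffleSplit(test_size=1/3, random_state=0)
                      .split(X, y, groups=df['TrialID']))

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print('holdout AUC:', roc_auc_score(y_ho, model.predict_proba(X_ho)[:, 1]))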
Thanks.,0,None,2 ,Sat Feb 26 2011 17:38:50 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/309,/competitions/ChessRatings2,29th /salimali,Benchmarks,"Here is the first benchmark to beat - a simple decision tree. Gives 0.89 on the training file, but only 0.52 on the leaderboard. Below is the R code used to generate the model and submission file.

################
# Load the Data
################
setwd(""C:/somewhere"")
mydata <- read.csv(""overfitting.csv"", header=TRUE)
colnames(mydata)

#############################
# create train and test sets
#############################
trainset = mydata[mydata$train == 1,]
testset = mydata[mydata$train == 0,]

##############################################
# eliminate unwanted columns from train set
##############################################
trainset$case_id = NULL
trainset$train = NULL
trainset$Target_Evaluate = NULL
trainset$Target_Practice = NULL
colnames(trainset)
NROW(trainset)
NROW(testset)

#########################################
# Build a Tree
#########################################
library(rpart)
tree_model <- rpart(Target_Leaderboard ~ ., data=trainset, method=""class"")
train_TREE <- predict(tree_model, trainset)
test_TREE <- predict(tree_model, testset)

#########################################
# CALCULATE THE AUC ON THE TRAINING DATA
#########################################
library(caTools)
trainAct = trainset$Target_Leaderboard
trainModel = train_TREE[,2]
cat(""TREE training:"",(colAUC(trainModel,trainAct)))
# TREE training: 0.8981357

########################################
# Generate a file for submission
########################################
testID <- testset$case_id
predictions <- test_TREE[,2]
submit_file = cbind(testID,predictions)
write.csv(submit_file, file=""tree_benchmark.csv"", row.names = FALSE)",0,None,21 ,Tue Mar 01 2011 01:40:45 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/310,/competitions/overfitting,98th
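For non-R users, a rough Python analogue of the tree benchmark above, assuming scikit-learn. Note it will not reproduce the R numbers exactly: rpart pre-prunes by default (cp=0.01), so the min_samples_leaf setting here is a stand-in assumption to avoid growing a full tree.

import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

mydata = pd.read_csv('overfitting.csv')
train = mydata[mydata['train'] == 1]
test = mydata[mydata['train'] == 0]
drop = ['case_id', 'train', 'Target_Evaluate', 'Target_Practice',
        'Target_Leaderboard']
X_train = train.drop(columns=drop)
y_train = train['Target_Leaderboard']

# min_samples_leaf mimics rpart's default pruning; an unconstrained
# sklearn tree would fit the training data perfectly.
tree = DecisionTreeClassifier(min_samples_leaf=20, random_state=0)
tree.fit(X_train, y_train)
print('TREE training:',
      roc_auc_score(y_train, tree.predict_proba(X_train)[:, 1]))

pred = tree.predict_proba(test.drop(columns=drop))[:, 1]
pd.DataFrame({'testID': test['case_id'], 'predictions': pred}).to_csv(
    'tree_benchmark.csv', index=False)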
/byang1,manual visual inspection OK ?,"Is a result based on manual visual inspection allowed at all? For example, if I see that one writer likes to underline words, may I manually attribute all test writings with a lot of underlining to him, instead of writing an underline detector?",0,None,1 Comment,Tue Mar 01 2011 08:51:13 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/311,/competitions/WIC2011,None /byang1,2 questions about contests in general,"Hi, I have two questions that are not specific to this contest, but apply to contests on Kaggle in general: A. Why does Kaggle like to use 30% or less of the test data for public leaderboard scores? I think with smallish sample sizes, it leads to large and random differences between public scores and hidden test scores. Why not just use a 50-50 split? If you're worried about people gaming the system by using public scores, just explicitly ban this method. B. Why not release the actual code used to calculate scores? And a sample test submission, a sample answer set, and the corresponding score.",0,None,6 ,Tue Mar 01 2011 08:55:39 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/312,/competitions/overfitting,None /sashikanthdareddy,Data?,"Hello, I'm a bit lost with the way targets are presented in this dataset. The columns are: case_id, train, Target_Practice, Target_Leaderboard, Target_Evaluate. Why are there 3 targets? My understanding is that, when one builds a model, it should be trained on the data where train=1, keeping target = ""Target_Practice"". Now, am I supposed to validate my model on target = ""Target_Leaderboard""?",0,None,5 ,Tue Mar 01 2011 14:38:46 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/313,/competitions/overfitting,149th /suelevene,rank ordering challenge?,"What do you mean by a rank ordering challenge? I am not clear on how the results are evaluated. If I submit values between 0 and 1, how is the accuracy calculated? Thanks -- Sue",0,None,2 ,Tue Mar 01 2011 23:16:17 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/314,/competitions/overfitting,None /trilobite17,What is the evaluation method?,What is the evaluation method? The other contests describe that.,0,None,15 ,Wed Mar 02 2011 03:10:32 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/315,/competitions/WIC2011,None /tteravai,discrepancy?,"Maybe I'm missing something totally obvious, but the training set includes 54 authors (2 paragraphs each, for a total of 108 training vectors). Adding one for the unknown, this makes 55 probabilities per line. However, the instructions and sample_train.csv have only space for 54 (including the unknown). Specifically, sample_train.csv seems to omit author ""040"", who does appear in train.csv. What's up with this? I'm currently working only with the data in train.csv, but there is an author ""040"" in the images data too.",0,None,10 ,Wed Mar 02 2011 06:11:41 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/316,/competitions/WIC2011,12th /arussantoso,About the parameter,"I want to ask about the parameters: what kind of condition does each column represent? Example: does P4 represent emotion - if >70 then angry, or if >50 then happy? etc... Please tell me. Thank you for your attention",0,None,2 ,Wed Mar 02 2011 15:07:19 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/317,/competitions/stayalert,None /uriblass,bug in the leaderboard,"My last 2 submissions are 0.254318 and 0.254338, but I see the 0.254338 number in the leaderboard and not the 0.254318 number. I plan to continue to develop my code from the last submission, and the difference between the numbers is too small not to trust the leaderboard, but I wonder if the leaderboard gives the last submission and not the best submission.",0,None,25 ,Thu Mar 03 2011 08:18:49 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/319,/competitions/ChessRatings2,7th /georgechen0,K Factor in Elo,"A perhaps silly question, but I would still like to know the answer. If a player with K factor = 16 played against another with K factor = 32, which K factor should I use when I update their ratings after the game? Or should I apply their own K factors (in other words, the guy with K factor = 16 gets a smaller update and the other gets a bigger update)? Thanks in advance!",0,None,1 Comment,Thu Mar 03 2011 21:09:03 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/320,/competitions/ChessRatings2,28th
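On the K-factor question: in standard Elo practice each player's update uses that player's own K, so the K=16 player moves less than the K=32 player after the same game. A minimal sketch of plain Elo with the usual 400-point logistic (not specific to this competition's variant):

def elo_update(r_a, r_b, k_a, k_b, score_a):
    # One game: score_a is 1 for a win by A, 0.5 for a draw, 0 for a loss.
    # Each player is updated with his own K factor.
    e_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))  # expected score for A
    e_b = 1.0 - e_a
    r_a_new = r_a + k_a * (score_a - e_a)
    r_b_new = r_b + k_b * ((1.0 - score_a) - e_b)
    return r_a_new, r_b_new

# Example: a 2400-rated player (K=16) draws with a 2200-rated player (K=32);
# the lower-rated player gains twice as many points as the higher loses.
print(elo_update(2400, 2200, 16, 32, 0.5))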
/ahassaine,DATASET CHANGE IS NOT NECESSARY,"Since there was an error in the evaluation metric and no one has achieved 0 error yet, we now believe that a change of dataset is not necessary. If several participants happen to score a 0 error (on the final leaderboard), the first one to have scored it will be the winner of the competition. However, be aware that scoring 0 on the public leaderboard in no way means that it will also be 0 on the final leaderboard. Anthony will kindly put this in the data description page. Apologies again and good luck.",0,None,6 ,Fri Mar 04 2011 05:14:51 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/321,/competitions/WIC2011,7th /salimali,Relevant Reading?,This recent KDnuggets article may be relevant [Link]:http://www.kdnuggets.com/2011/02/course-regression-modeling-correlated-predictors.html The book ESL that it refers to can be downloaded from [Link]:http://www.stanford.edu/%7Ehastie/local.ftp/Springer/,0,None,5 ,Fri Mar 04 2011 11:44:11 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/322,/competitions/overfitting,98th /uriblass,Note that I used the test dataset as an additional source,"to predict results in my submissions (for example, I give a bonus based on experience, so players who played more games get a higher rating in later months based on the fact that they played). I used it in the previous competition, and Jeff asked me to give a result based on my method. Only now did I read that the test data includes false games that never happened, so I am surprised that my result in the leaderboard is so good; maybe, in spite of the wrong games, there is still a correlation between the number of real games that a player played in the data set and the number of games that are reported, so the bonus for experience helps. Only now, reading the details about the data, did I see the following: ""Please note that you should NOT use the test dataset as an additional source of clues about a player's strength. The predictions for months 133-135 should be based upon the players' estimated playing abilities at the end of month 132, and these predictions must be completely prospective, as though you made the predictions right at the end of month 132."" Note that I did not use the details in order to cheat in the competition; I really got the impression that everything was allowed except using the real results of part of the players that it may be possible to find (and I did not try to do it). Jeff even asked me to use my previous method, which also uses the test dataset as an additional source, so I did not suspect that my previous method is not allowed in this competition. I am afraid that the leaderboard is now meaningless, because people can get better results in the leaderboard by using the test data as an additional source.",0,None,19 ,Fri Mar 04 2011 12:07:02 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/323,/competitions/ChessRatings2,7th /cerin5111,Site Problems,"This site seems to be experiencing a tremendously large number of problems. It appeared to be down almost all day, and even though it seems to be up now, I can't upload a submission due to the server error: ""Error JFolder::create: Could not create directory Error uploading file"". Anyone else having problems with the site?",0,None,6 ,Sun Mar 06 2011 01:17:47 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/324,/competitions/WIC2011,15th
/salimali,Leaderboard,"The attached image is the leaderboard as of 7th March. 1. Cole Harris seems to have discovered something no one else has yet. 2. The current benchmark seems to have been replicated OK by a few competitors.",0,None,21 ,Sun Mar 06 2011 21:46:21 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/325,/competitions/overfitting,98th /dejavu,leaderboard auc,Is the leaderboard AUC computed from all 19750 datapoints (251-20000)?,0,None,1 Comment,Sun Mar 06 2011 22:09:25 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/326,/competitions/overfitting,24th /zachpardos,top two teams with same AUC,"Rosanne jumped out to an impressive lead with an AUC of 0.934222. This level of AUC would appear to be explained either by an overfitting procedure based on leaderboard feedback from previous submissions, or by an approach that nearly ""solves"" this dataset. Somehow, though, a second leader, ""shen"", has submitted with the exact same AUC score. Matching the AUC to six significant digits is not likely to happen by chance. Does anyone believe this could be the result of two independent participants utilizing the same killer knowledge representation method, with the same parameters, at the same time in the competition? Whatever the explanation for the identical AUCs, it's quite clear that someone has done an admirable job with this challenge.",0,None,21 ,Tue Mar 08 2011 07:59:57 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/327,/competitions/stayalert,22nd /jaysen,"Methods/Tips From Non-Top 3 Participants,","While it's universally interesting to understand what methods were used by the top participants (especially in this contest, where there are some large gaps in AUC at the top), I suspect that many others who participated also have clever methods or insights. While we wait for the top finishers to post on ""No Free Hunch"", I thought it would be interesting to hear from anyone else who might wish to share. Many of the models are quite good and would produce better results than the methods used by persons in industry. My results (#15): Overall method: randomForest() in R, 199 trees, min node size of 25, default settings for other values. Sampling: used 10% of the training dataset to train the randomForest. Also included any data points that were within 500ms of a state change (where isalert shifted from 1 to 0 or vice versa). About 110,000 rows total. Data transformations: tossed out correlated variables, such as p7 (inverse correlation with p6) and p4 (inverse correlation with p3). Transformed p3 into an element of {""High"", ""Mid"", ""Low""} based on the probability of being alert: where p3 is an even multiple of 100, the probability of being alert is systematically higher; where ""p3 mod 100"" is 84, 16, or 32, there is also a greater chance of being alert (""Mid""); call everything else ""Low"". The histogram of p5 clearly shows a bimodal distribution, so I transformed p5 into a 1/0 indicator variable with a breakpoint at p5=0.1750. Transformed e7 and e8 to lump together all buckets greater than or equal to 4. Transformed v11 into 20-tiles to convert a strangely shaped distribution into a discrete variable. Tried and denied: lagging values, moving averages. Color commentary: randomForest's ability to ""fit"" the training data presented was very strong. However, the out-of-bag (OOB) error rate, as reported by R, was highly misleading. The OOB error rate could be driven down to the 1-3% range.
However, those models produced somewhat worse results on a true out-of-sample validation set. Keeping randomForest tuned to produce OOB error rates of 8-10% produced the best results in this case. Because many of the training cases are similar, randomForest performed better when using just a sample of the overall training data (hence the decision to train on only about 110,000 rows). RandomForest also under-performed when the default nodesize (either 1 or 5) was used. The explicit adjustment of nodesize to other values, such as 10, 25, and 50, produced noticeably different error rates on true out-of-sample data.",2,None,8 ,Thu Mar 10 2011 03:28:15 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/328,/competitions/stayalert,13th /jonred,Nature of values,"Is there any information on the nature of the data and the methodologies used to create it? E.g. chaincode8order4_4096[1234] - I (barely) understand chain codes, but what exactly do the 8, 4, 4096 and 1234 mean? Thanks, Jon",0,None,2 ,Fri Mar 11 2011 23:10:18 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/330,/competitions/WIC2011,None /antgoldbloom,Test labels,The solution is attached. Thanks all for participating! Anthony,5,None,10 ,Sun Mar 13 2011 00:04:11 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/331,/competitions/stayalert,170th
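A sketch of jaysen's recipe in Python/scikit-learn for illustration (he used R's randomForest; min_samples_leaf stands in for R's nodesize, column names such as P3, P5, IsAlert and TrialID are assumed, and the near-state-change rule here is only a rough proxy for his 500ms window):

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv('fordTrain.csv')            # assumed file name

# p5 is bimodal: 1/0 indicator with a breakpoint at 0.1750.
df['p5_high'] = (df['P5'] > 0.175).astype(int)

# p3: 'High' when an even multiple of 100, 'Mid' when p3 mod 100 is
# 84, 16 or 32, else 'Low' (encoded 2/1/0 here).
mod = df['P3'] % 100
df['p3_band'] = np.where(mod == 0, 2, np.where(mod.isin([84, 16, 32]), 1, 0))

# Train on ~10% of rows plus rows at an alertness state change
# (rough proxy for "within 500ms of a state change").
rng = np.random.RandomState(0)
sample = rng.rand(len(df)) < 0.10
change = df['IsAlert'].diff().abs() > 0
keep = df[sample | change]

features = [c for c in keep.columns if c not in ('IsAlert', 'TrialID', 'ObsNum')]
rf = RandomForestClassifier(n_estimators=199, min_samples_leaf=25,
                            random_state=0)
rf.fit(keep[features], keep['IsAlert'])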
/sashikanthdareddy,How do we test for overfitting,"Hello All, My understanding, as also noted in the information section of this competition, is that when one overfits their model, it shows up as suboptimal predictions on test data. For example, as shown with the random forests technique, you may get 100% AUC on training data but only ~75% on test data - is that not sub-optimal? One more question - how does one measure overfitting?",0,None,3 ,Sun Mar 13 2011 06:18:09 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/332,/competitions/overfitting,149th /ahassaine,Benchmark,"Dear participants, below is the C++ code used to generate the benchmark. Comments are welcome.

#include ""stdafx.h""
//[the angle-bracket header names were stripped by the forum's HTML filter; the code below needs at least the following]
#include <iostream>
#include <fstream>
#include <sstream>
#include <string>
#include <cstdlib>
#include <cmath>
using namespace std;
//class handling the provided features
class document_features
{
public:
 document_features()
 {
  ThicknessLengthsCircleHist30=new double[30];
  XProjectionHist10=new double[10];
  YProjectionHist10=new double[10];
  Distribution10x10_100=new double[100];
  tortuosityHist10=new double[10];
  tortuosityDirectionHist10=new double[10];
  tortuosityDerivateHist10=new double[10];
  tortuosityDerivateDirectionHist10=new double[10];
  nbbranches_1=new double[1];
  LengthsOfBranchesHist_10=new double[10];
  numberOfPixels1=new double[1];
  XFilledProjectionHist10=new double[10];
  YFilledProjectionHist10=new double[10];
  Barycenter2=new double[2];
  DirectionPerpendicular5Hist10=new double[10];
  CurvaturePerpendicular5Hist100=new double[100];
  luminanceHist256=new double[256];
  CurvatureAli5Hist100=new double[100];
  CurvaturesDerivateAli5Hist100=new double[100];
  CurvatureAli10Hist100=new double[100];
  CurvaturesDerivateAli10Hist100=new double[100];
  CurvatureAli15Hist100=new double[100];
  CurvaturesDerivateAli15Hist100=new double[100];
  CurvatureAli20Hist100=new double[100];
  CurvaturesDerivateAli20Hist100=new double[100];
  chaincodeHist_4=new double[4];
  chaincodeHist_8=new double[8];
  chaincode8order2_64=new double[64];
  chaincode4order2_16=new double[16];
  chaincode4order3_64=new double[64];
  chaincode8order3_512=new double[512];
  chaincode4order4_256=new double[256];
  chaincode8order4_4096=new double[4096];
  NumberOfConnectedComponents_1=new double[1];
  NumberOfHoles_1=new double[1];
  Fourier_1=new double[1];
  Fourier_5=new double[5];
  Fourier_9=new double[9];
  Fourier_15=new double[15];
 }
 double*ThicknessLengthsCircleHist30,*XProjectionHist10,*YProjectionHist10,*Distribution10x10_100,*tortuosityHist10,*tortuosityDirectionHist10,*tortuosityDerivateHist10,*tortuosityDerivateDirectionHist10,*nbbranches_1,*LengthsOfBranchesHist_10,*numberOfPixels1,*XFilledProjectionHist10,*YFilledProjectionHist10,*Barycenter2,*DirectionPerpendicular5Hist10,*CurvaturePerpendicular5Hist100,*luminanceHist256,*CurvatureAli5Hist100,*CurvaturesDerivateAli5Hist100,*CurvatureAli10Hist100,*CurvaturesDerivateAli10Hist100,*CurvatureAli15Hist100,*CurvaturesDerivateAli15Hist100,*CurvatureAli20Hist100,*CurvaturesDerivateAli20Hist100,*chaincodeHist_4,*chaincodeHist_8,*chaincode8order2_64,*chaincode4order2_16,*chaincode4order3_64,*chaincode8order3_512,*chaincode4order4_256,*chaincode8order4_4096,*NumberOfConnectedComponents_1,*NumberOfHoles_1,*Fourier_1,*Fourier_5,*Fourier_9,*Fourier_15;
 double SpatialMoment00,SpatialMoment10,SpatialMoment20,SpatialMoment30,SpatialMoment01,SpatialMoment11,SpatialMoment21,SpatialMoment02,SpatialMoment12,SpatialMoment03,CentralMoment00,CentralMoment10,CentralMoment20,CentralMoment30,CentralMoment01,CentralMoment11,CentralMoment21,CentralMoment02,CentralMoment12,CentralMoment03,NormalizedCentralMoment00,NormalizedCentralMoment10,NormalizedCentralMoment20,NormalizedCentralMoment30,NormalizedCentralMoment01,NormalizedCentralMoment11,NormalizedCentralMoment21,NormalizedCentralMoment02,NormalizedCentralMoment12,NormalizedCentralMoment03,HuMoment1,HuMoment2,HuMoment3,HuMoment4,HuMoment5,HuMoment6,HuMoment7;
 int
writer_id;};void main(){ //training phase std::ifstream data(""train.csv""); int linenum=0; string line; document_features**my_document_features=new document_features*[108]; string item;//=new char[4]; //reading the training documents while (getline (data, line)) { cout << ""\nLine #"" << linenum << "":"" << endl; istringstream linestream(line); if(linenum>0) { int row_index=linenum-1; my_document_features[row_index]=new document_features(); getline (linestream, item, ','); my_document_features[row_index]->writer_id=atoi(item.c_str()); getline (linestream, item, ','); my_document_features[row_index]->NumberOfConnectedComponents_1[0]=atof(item.c_str()); getline (linestream, item, ','); my_document_features[row_index]->NumberOfHoles_1[0]=atof(item.c_str()); getline (linestream, item, ','); my_document_features[row_index]->SpatialMoment00=atof(item.c_str()); getline (linestream, item, ','); my_document_features[row_index]->SpatialMoment10=atof(item.c_str()); getline (linestream, item, ','); my_document_features[row_index]->SpatialMoment20=atof(item.c_str()); getline (linestream, item, ','); my_document_features[row_index]->SpatialMoment30=atof(item.c_str()); getline (linestream, item, ','); my_document_features[row_index]->SpatialMoment01=atof(item.c_str()); getline (linestream, item, ','); my_document_features[row_index]->SpatialMoment11=atof(item.c_str()); getline (linestream, item, ','); my_document_features[row_index]->SpatialMoment21=atof(item.c_str()); getline (linestream, item, ','); my_document_features[row_index]->SpatialMoment02=atof(item.c_str()); getline (linestream, item, ','); my_document_features[row_index]->SpatialMoment12=atof(item.c_str()); getline (linestream, item, ','); my_document_features[row_index]->SpatialMoment03=atof(item.c_str()); getline (linestream, item, ','); my_document_features[row_index]->CentralMoment00=atof(item.c_str()); getline (linestream, item, ','); my_document_features[row_index]->CentralMoment10=atof(item.c_str()); getline (linestream, item, ','); my_document_features[row_index]->CentralMoment20=atof(item.c_str()); getline (linestream, item, ','); my_document_features[row_index]->CentralMoment30=atof(item.c_str()); getline (linestream, item, ','); my_document_features[row_index]->CentralMoment01=atof(item.c_str()); getline (linestream, item, ','); my_document_features[row_index]->CentralMoment11=atof(item.c_str()); getline (linestream, item, ','); my_document_features[row_index]->CentralMoment21=atof(item.c_str()); getline (linestream, item, ','); my_document_features[row_index]->CentralMoment02=atof(item.c_str()); getline (linestream, item, ','); my_document_features[row_index]->CentralMoment12=atof(item.c_str()); getline (linestream, item, ','); my_document_features[row_index]->CentralMoment03=atof(item.c_str()); getline (linestream, item, ','); my_document_features[row_index]->NormalizedCentralMoment00=atof(item.c_str()); getline (linestream, item, ','); my_document_features[row_index]->NormalizedCentralMoment10=atof(item.c_str()); getline (linestream, item, ','); my_document_features[row_index]->NormalizedCentralMoment20=atof(item.c_str()); getline (linestream, item, ','); my_document_features[row_index]->NormalizedCentralMoment30=atof(item.c_str()); getline (linestream, item, ','); my_document_features[row_index]->NormalizedCentralMoment01=atof(item.c_str()); getline (linestream, item, ','); my_document_features[row_index]->NormalizedCentralMoment11=atof(item.c_str()); getline (linestream, item, ','); 
my_document_features[row_index]->NormalizedCentralMoment21=atof(item.c_str()); getline (linestream, item, ','); my_document_features[row_index]->NormalizedCentralMoment02=atof(item.c_str()); getline (linestream, item, ','); my_document_features[row_index]->NormalizedCentralMoment12=atof(item.c_str()); getline (linestream, item, ','); my_document_features[row_index]->NormalizedCentralMoment03=atof(item.c_str()); getline (linestream, item, ','); my_document_features[row_index]->HuMoment1=atof(item.c_str()); getline (linestream, item, ','); my_document_features[row_index]->HuMoment2=atof(item.c_str()); getline (linestream, item, ','); my_document_features[row_index]->HuMoment3=atof(item.c_str()); getline (linestream, item, ','); my_document_features[row_index]->HuMoment4=atof(item.c_str()); getline (linestream, item, ','); my_document_features[row_index]->HuMoment5=atof(item.c_str()); getline (linestream, item, ','); my_document_features[row_index]->HuMoment6=atof(item.c_str()); getline (linestream, item, ','); my_document_features[row_index]->HuMoment7=atof(item.c_str()); for(int x=0;x<10;x++) { getline (linestream, item, ','); my_document_features[row_index]->XProjectionHist10[x]=atof(item.c_str()); } for(int x=0;x<10;x++) { getline (linestream, item, ','); my_document_features[row_index]->YProjectionHist10[x]=atof(item.c_str()); } for(int x=0;x<10;x++) { getline (linestream, item, ','); my_document_features[row_index]->XFilledProjectionHist10[x]=atof(item.c_str()); } for(int x=0;x<10;x++) { getline (linestream, item, ','); my_document_features[row_index]->YFilledProjectionHist10[x]=atof(item.c_str()); } for(int x=0;x<100;x++) { getline (linestream, item, ','); my_document_features[row_index]->Distribution10x10_100[x]=atof(item.c_str()); } getline (linestream, item, ','); my_document_features[row_index]->Barycenter2[0]=atof(item.c_str()); getline (linestream, item, ','); my_document_features[row_index]->Barycenter2[1]=atof(item.c_str()); getline (linestream, item, ','); my_document_features[row_index]->numberOfPixels1[0]=atof(item.c_str()); getline (linestream, item, ','); my_document_features[row_index]->Fourier_1[0]=atof(item.c_str()); for(int x=0;x<5;x++) { getline (linestream, item, ','); my_document_features[row_index]->Fourier_5[x]=atof(item.c_str()); } for(int x=0;x<9;x++) { getline (linestream, item, ','); my_document_features[row_index]->Fourier_9[x]=atof(item.c_str()); } for(int x=0;x<15;x++) { getline (linestream, item, ','); my_document_features[row_index]->Fourier_15[x]=atof(item.c_str()); } getline (linestream, item, ','); my_document_features[row_index]->nbbranches_1[0]=atof(item.c_str()); for(int x=0;x<10;x++) { getline (linestream, item, ','); my_document_features[row_index]->LengthsOfBranchesHist_10[x]=atof(item.c_str()); } for(int x=0;x<30;x++) { getline (linestream, item, ','); my_document_features[row_index]->ThicknessLengthsCircleHist30[x]=atof(item.c_str()); } for(int x=0;x<10;x++) { getline (linestream, item, ','); my_document_features[row_index]->tortuosityHist10[x]=atof(item.c_str()); } for(int x=0;x<10;x++) { getline (linestream, item, ','); my_document_features[row_index]->tortuosityDirectionHist10[x]=atof(item.c_str()); } for(int x=0;x<10;x++) { getline (linestream, item, ','); my_document_features[row_index]->tortuosityDerivateHist10[x]=atof(item.c_str()); } for(int x=0;x<10;x++) { getline (linestream, item, ','); my_document_features[row_index]->tortuosityDerivateDirectionHist10[x]=atof(item.c_str()); } for(int x=0;x<10;x++) { getline (linestream, item, ','); 
my_document_features[row_index]->DirectionPerpendicular5Hist10[x]=atof(item.c_str()); } for(int x=0;x<100;x++) { getline (linestream, item, ','); my_document_features[row_index]->CurvaturePerpendicular5Hist100[x]=atof(item.c_str()); } for(int x=0;x<256;x++) { getline (linestream, item, ','); my_document_features[row_index]->luminanceHist256[x]=atof(item.c_str()); } for(int x=0;x<100;x++) { getline (linestream, item, ','); my_document_features[row_index]->CurvatureAli5Hist100[x]=atof(item.c_str()); } for(int x=0;x<100;x++) { getline (linestream, item, ','); my_document_features[row_index]->CurvaturesDerivateAli5Hist100[x]=atof(item.c_str()); } for(int x=0;x<100;x++) { getline (linestream, item, ','); my_document_features[row_index]->CurvatureAli10Hist100[x]=atof(item.c_str()); } for(int x=0;x<100;x++) { getline (linestream, item, ','); my_document_features[row_index]->CurvaturesDerivateAli10Hist100[x]=atof(item.c_str()); } for(int x=0;x<100;x++) { getline (linestream, item, ','); my_document_features[row_index]->CurvatureAli15Hist100[x]=atof(item.c_str()); } for(int x=0;x<100;x++) { getline (linestream, item, ','); my_document_features[row_index]->CurvaturesDerivateAli15Hist100[x]=atof(item.c_str()); } for(int x=0;x<100;x++) { getline (linestream, item, ','); my_document_features[row_index]->CurvatureAli20Hist100[x]=atof(item.c_str()); } for(int x=0;x<100;x++) { getline (linestream, item, ','); my_document_features[row_index]->CurvaturesDerivateAli20Hist100[x]=atof(item.c_str()); } for(int x=0;x<4;x++) { getline (linestream, item, ','); my_document_features[row_index]->chaincodeHist_4[x]=atof(item.c_str()); } for(int x=0;x<8;x++) { getline (linestream, item, ','); my_document_features[row_index]->chaincodeHist_8[x]=atof(item.c_str()); } for(int x=0;x<64;x++) { getline (linestream, item, ','); my_document_features[row_index]->chaincode8order2_64[x]=atof(item.c_str()); } for(int x=0;x<16;x++) { getline (linestream, item, ','); my_document_features[row_index]->chaincode4order2_16[x]=atof(item.c_str()); } for(int x=0;x<64;x++) { getline (linestream, item, ','); my_document_features[row_index]->chaincode4order3_64[x]=atof(item.c_str()); } for(int x=0;x<512;x++) { getline (linestream, item, ','); my_document_features[row_index]->chaincode8order3_512[x]=atof(item.c_str()); } for(int x=0;x<256;x++) { getline (linestream, item, ','); my_document_features[row_index]->chaincode4order4_256[x]=atof(item.c_str()); } for(int x=0;x<4096;x++) { getline (linestream, item, ','); my_document_features[row_index]->chaincode8order4_4096[x]=atof(item.c_str()); } } linenum++; } //difference between features of all documents in the training set std::ofstream differences(""train_differences.csv""); 
differences<<""SameWriter,NumberOfConnectedComponents_1,NumberOfHoles_1,SpatialMoment00,SpatialMoment10,SpatialMoment20,SpatialMoment30,SpatialMoment01,SpatialMoment11,SpatialMoment21,SpatialMoment02,SpatialMoment12,SpatialMoment03,CentralMoment00,CentralMoment10,CentralMoment20,CentralMoment30,CentralMoment01,CentralMoment11,CentralMoment21,CentralMoment02,CentralMoment12,CentralMoment03,NormalizedCentralMoment00,NormalizedCentralMoment10,NormalizedCentralMoment20,NormalizedCentralMoment30,NormalizedCentralMoment01,NormalizedCentralMoment11,NormalizedCentralMoment21,NormalizedCentralMoment02,NormalizedCentralMoment12,NormalizedCentralMoment03,HuMoment1,HuMoment2,HuMoment3,HuMoment4,HuMoment5,HuMoment6,HuMoment7,XProjectionHist10,YProjectionHist10,XFilledProjectionHist10,YFilledProjectionHist10,Distribution10x10_100,Barycenter2,numberOfPixels1,Fourier_1,Fourier_5,Fourier_9,Fourier_15,nbbranches_1,LengthsOfBranchesHist_10,ThicknessLengthsCircleHist30,tortuosityHist10,tortuosityDirectionHist10,tortuosityDerivateHist10,tortuosityDerivateDirectionHist10,DirectionPerpendicular5Hist10,CurvaturePerpendicular5Hist100,luminanceHist256,CurvatureAli5Hist100,CurvaturesDerivateAli5Hist100,CurvatureAli10Hist100,CurvaturesDerivateAli10Hist100,CurvatureAli15Hist100,CurvaturesDerivateAli15Hist100,CurvatureAli20Hist100,CurvaturesDerivateAli20Hist100,chaincodeHist_4,chaincodeHist_8,chaincode8order2_64,chaincode4order2_16,chaincode4order3_64,chaincode8order3_512,chaincode4order4_256,chaincode8order4_4096\n""; //comparing each pair of documents for(int document_index1=0;document_index1<107;document_index1++) for(int document_index2=document_index1+1;document_index2<108;document_index2++) { double NumberOfConnectedComponents_1=0,NumberOfHoles_1=0,SpatialMoment00=0,SpatialMoment10=0,SpatialMoment20=0,SpatialMoment30=0,SpatialMoment01=0,SpatialMoment11=0,SpatialMoment21=0,SpatialMoment02=0,SpatialMoment12=0,SpatialMoment03=0,CentralMoment00=0,CentralMoment10=0,CentralMoment20=0,CentralMoment30=0,CentralMoment01=0,CentralMoment11=0,CentralMoment21=0,CentralMoment02=0,CentralMoment12=0,CentralMoment03=0,NormalizedCentralMoment00=0,NormalizedCentralMoment10=0,NormalizedCentralMoment20=0,NormalizedCentralMoment30=0,NormalizedCentralMoment01=0,NormalizedCentralMoment11=0,NormalizedCentralMoment21=0,NormalizedCentralMoment02=0,NormalizedCentralMoment12=0,NormalizedCentralMoment03=0,HuMoment1=0,HuMoment2=0,HuMoment3=0,HuMoment4=0,HuMoment5=0,HuMoment6=0,HuMoment7=0,ThicknessLengthsCircleHist30=0,XProjectionHist10=0,YProjectionHist10=0,Distribution10x10_100=0,tortuosityHist10=0,tortuosityDirectionHist10=0,tortuosityDerivateHist10=0,tortuosityDerivateDirectionHist10=0,nbbranches_1=0,LengthsOfBranchesHist_10=0,numberOfPixels1=0,XFilledProjectionHist10=0,YFilledProjectionHist10=0,Barycenter2=0,DirectionPerpendicular5Hist10=0,CurvaturePerpendicular5Hist100=0,luminanceHist256=0,CurvatureAli5Hist100=0,CurvaturesDerivateAli5Hist100=0,CurvatureAli10Hist100=0,CurvaturesDerivateAli10Hist100=0,CurvatureAli15Hist100=0,CurvaturesDerivateAli15Hist100=0,CurvatureAli20Hist100=0,CurvaturesDerivateAli20Hist100=0,chaincodeHist_4=0,chaincodeHist_8=0,chaincode8order2_64=0,chaincode4order2_16=0,chaincode4order3_64=0,chaincode8order3_512=0,chaincode4order4_256=0,chaincode8order4_4096=0,Fourier_1=0,Fourier_5=0,Fourier_9=0,Fourier_15=0; NumberOfConnectedComponents_1+=fabs(my_document_features[document_index1]->NumberOfConnectedComponents_1[0]-my_document_features[document_index2]->NumberOfConnectedComponents_1[0]); 
NumberOfHoles_1+=fabs(my_document_features[document_index1]->NumberOfHoles_1[0]-my_document_features[document_index2]->NumberOfHoles_1[0]); SpatialMoment00+=fabs(my_document_features[document_index1]->SpatialMoment00-my_document_features[document_index2]->SpatialMoment00); SpatialMoment10+=fabs(my_document_features[document_index1]->SpatialMoment10-my_document_features[document_index2]->SpatialMoment10); SpatialMoment20+=fabs(my_document_features[document_index1]->SpatialMoment20-my_document_features[document_index2]->SpatialMoment20); SpatialMoment30+=fabs(my_document_features[document_index1]->SpatialMoment30-my_document_features[document_index2]->SpatialMoment30); SpatialMoment01+=fabs(my_document_features[document_index1]->SpatialMoment01-my_document_features[document_index2]->SpatialMoment01); SpatialMoment11+=fabs(my_document_features[document_index1]->SpatialMoment11-my_document_features[document_index2]->SpatialMoment11); SpatialMoment21+=fabs(my_document_features[document_index1]->SpatialMoment21-my_document_features[document_index2]->SpatialMoment21); SpatialMoment02+=fabs(my_document_features[document_index1]->SpatialMoment02-my_document_features[document_index2]->SpatialMoment02); SpatialMoment12+=fabs(my_document_features[document_index1]->SpatialMoment12-my_document_features[document_index2]->SpatialMoment12); SpatialMoment03+=fabs(my_document_features[document_index1]->SpatialMoment03-my_document_features[document_index2]->SpatialMoment03); CentralMoment00+=fabs(my_document_features[document_index1]->CentralMoment00-my_document_features[document_index2]->CentralMoment00); CentralMoment10+=fabs(my_document_features[document_index1]->CentralMoment10-my_document_features[document_index2]->CentralMoment10); CentralMoment20+=fabs(my_document_features[document_index1]->CentralMoment20-my_document_features[document_index2]->CentralMoment20); CentralMoment30+=fabs(my_document_features[document_index1]->CentralMoment30-my_document_features[document_index2]->CentralMoment30); CentralMoment01+=fabs(my_document_features[document_index1]->CentralMoment01-my_document_features[document_index2]->CentralMoment01); CentralMoment11+=fabs(my_document_features[document_index1]->CentralMoment11-my_document_features[document_index2]->CentralMoment11); CentralMoment21+=fabs(my_document_features[document_index1]->CentralMoment21-my_document_features[document_index2]->CentralMoment21); CentralMoment02+=fabs(my_document_features[document_index1]->CentralMoment02-my_document_features[document_index2]->CentralMoment02); CentralMoment12+=fabs(my_document_features[document_index1]->CentralMoment12-my_document_features[document_index2]->CentralMoment12); CentralMoment03+=fabs(my_document_features[document_index1]->CentralMoment03-my_document_features[document_index2]->CentralMoment03); NormalizedCentralMoment00+=fabs(my_document_features[document_index1]->NormalizedCentralMoment00-my_document_features[document_index2]->NormalizedCentralMoment00); NormalizedCentralMoment10+=fabs(my_document_features[document_index1]->NormalizedCentralMoment10-my_document_features[document_index2]->NormalizedCentralMoment10); NormalizedCentralMoment20+=fabs(my_document_features[document_index1]->NormalizedCentralMoment20-my_document_features[document_index2]->NormalizedCentralMoment20); NormalizedCentralMoment30+=fabs(my_document_features[document_index1]->NormalizedCentralMoment30-my_document_features[document_index2]->NormalizedCentralMoment30); 
NormalizedCentralMoment01+=fabs(my_document_features[document_index1]->NormalizedCentralMoment01-my_document_features[document_index2]->NormalizedCentralMoment01); NormalizedCentralMoment11+=fabs(my_document_features[document_index1]->NormalizedCentralMoment11-my_document_features[document_index2]->NormalizedCentralMoment11); NormalizedCentralMoment21+=fabs(my_document_features[document_index1]->NormalizedCentralMoment21-my_document_features[document_index2]->NormalizedCentralMoment21); NormalizedCentralMoment02+=fabs(my_document_features[document_index1]->NormalizedCentralMoment02-my_document_features[document_index2]->NormalizedCentralMoment02); NormalizedCentralMoment12+=fabs(my_document_features[document_index1]->NormalizedCentralMoment12-my_document_features[document_index2]->NormalizedCentralMoment12); NormalizedCentralMoment03+=fabs(my_document_features[document_index1]->NormalizedCentralMoment03-my_document_features[document_index2]->NormalizedCentralMoment03); HuMoment1+=fabs(my_document_features[document_index1]->HuMoment1-my_document_features[document_index2]->HuMoment1); HuMoment2+=fabs(my_document_features[document_index1]->HuMoment2-my_document_features[document_index2]->HuMoment2); HuMoment3+=fabs(my_document_features[document_index1]->HuMoment3-my_document_features[document_index2]->HuMoment3); HuMoment4+=fabs(my_document_features[document_index1]->HuMoment4-my_document_features[document_index2]->HuMoment4); HuMoment5+=fabs(my_document_features[document_index1]->HuMoment5-my_document_features[document_index2]->HuMoment5); HuMoment6+=fabs(my_document_features[document_index1]->HuMoment6-my_document_features[document_index2]->HuMoment6); HuMoment7+=fabs(my_document_features[document_index1]->HuMoment7-my_document_features[document_index2]->HuMoment7); for(int x=0;x<10;x++) XProjectionHist10+=fabs(my_document_features[document_index1]->XProjectionHist10[x]-my_document_features[document_index2]->XProjectionHist10[x]); for(int x=0;x<10;x++) YProjectionHist10+=fabs(my_document_features[document_index1]->YProjectionHist10[x]-my_document_features[document_index2]->YProjectionHist10[x]); for(int x=0;x<10;x++) XFilledProjectionHist10+=fabs(my_document_features[document_index1]->XFilledProjectionHist10[x]-my_document_features[document_index2]->XFilledProjectionHist10[x]); for(int x=0;x<10;x++) YFilledProjectionHist10+=fabs(my_document_features[document_index1]->YFilledProjectionHist10[x]-my_document_features[document_index2]->YFilledProjectionHist10[x]); for(int x=0;x<10;x++) Distribution10x10_100+=fabs(my_document_features[document_index1]->Distribution10x10_100[x]-my_document_features[document_index2]->Distribution10x10_100[x]); for(int x=0;x<2;x++) Barycenter2+=fabs(my_document_features[document_index1]->Barycenter2[x]-my_document_features[document_index2]->Barycenter2[x]); numberOfPixels1+=fabs(my_document_features[document_index1]->numberOfPixels1[0]-my_document_features[document_index2]->numberOfPixels1[0]); for(int x=0;x<1;x++) Fourier_1+=fabs(my_document_features[document_index1]->Fourier_1[x]-my_document_features[document_index2]->Fourier_1[x]); for(int x=0;x<5;x++) Fourier_5+=fabs(my_document_features[document_index1]->Fourier_5[x]-my_document_features[document_index2]->Fourier_5[x]); for(int x=0;x<9;x++) Fourier_9+=fabs(my_document_features[document_index1]->Fourier_9[x]-my_document_features[document_index2]->Fourier_9[x]); for(int x=0;x<15;x++) Fourier_15+=fabs(my_document_features[document_index1]->Fourier_15[x]-my_document_features[document_index2]->Fourier_15[x]); 
nbbranches_1+=fabs(my_document_features[document_index1]->nbbranches_1[0]-my_document_features[document_index2]->nbbranches_1[0]); for(int x=0;x<10;x++) LengthsOfBranchesHist_10+=fabs(my_document_features[document_index1]->LengthsOfBranchesHist_10[x]-my_document_features[document_index2]->LengthsOfBranchesHist_10[x]); for(int x=0;x<30;x++) ThicknessLengthsCircleHist30+=fabs(my_document_features[document_index1]->ThicknessLengthsCircleHist30[x]-my_document_features[document_index2]->ThicknessLengthsCircleHist30[x]); for(int x=0;x<10;x++) tortuosityHist10+=fabs(my_document_features[document_index1]->tortuosityHist10[x]-my_document_features[document_index2]->tortuosityHist10[x]); for(int x=0;x<10;x++) tortuosityDirectionHist10+=fabs(my_document_features[document_index1]->tortuosityDirectionHist10[x]-my_document_features[document_index2]->tortuosityDirectionHist10[x]); for(int x=0;x<10;x++) tortuosityDerivateHist10+=fabs(my_document_features[document_index1]->tortuosityDerivateHist10[x]-my_document_features[document_index2]->tortuosityDerivateHist10[x]); for(int x=0;x<10;x++) tortuosityDerivateDirectionHist10+=fabs(my_document_features[document_index1]->tortuosityDerivateDirectionHist10[x]-my_document_features[document_index2]->tortuosityDerivateDirectionHist10[x]); for(int x=0;x<10;x++) DirectionPerpendicular5Hist10+=fabs(my_document_features[document_index1]->DirectionPerpendicular5Hist10[x]-my_document_features[document_index2]->DirectionPerpendicular5Hist10[x]); for(int x=0;x<100;x++) CurvaturePerpendicular5Hist100+=fabs(my_document_features[document_index1]->CurvaturePerpendicular5Hist100[x]-my_document_features[document_index2]->CurvaturePerpendicular5Hist100[x]); for(int x=0;x<256;x++) luminanceHist256+=fabs(my_document_features[document_index1]->luminanceHist256[x]-my_document_features[document_index2]->luminanceHist256[x]); for(int x=0;x<100;x++) CurvatureAli5Hist100+=fabs(my_document_features[document_index1]->CurvatureAli5Hist100[x]-my_document_features[document_index2]->CurvatureAli5Hist100[x]); for(int x=0;x<100;x++) CurvaturesDerivateAli5Hist100+=fabs(my_document_features[document_index1]->CurvaturesDerivateAli5Hist100[x]-my_document_features[document_index2]->CurvaturesDerivateAli5Hist100[x]); for(int x=0;x<100;x++) CurvatureAli10Hist100+=fabs(my_document_features[document_index1]->CurvatureAli10Hist100[x]-my_document_features[document_index2]->CurvatureAli10Hist100[x]); for(int x=0;x<100;x++) CurvaturesDerivateAli10Hist100+=fabs(my_document_features[document_index1]->CurvaturesDerivateAli10Hist100[x]-my_document_features[document_index2]->CurvaturesDerivateAli10Hist100[x]); for(int x=0;x<100;x++) CurvatureAli15Hist100+=fabs(my_document_features[document_index1]->CurvatureAli15Hist100[x]-my_document_features[document_index2]->CurvatureAli15Hist100[x]); for(int x=0;x<100;x++) CurvaturesDerivateAli15Hist100+=fabs(my_document_features[document_index1]->CurvaturesDerivateAli15Hist100[x]-my_document_features[document_index2]->CurvaturesDerivateAli15Hist100[x]); for(int x=0;x<100;x++) CurvatureAli20Hist100+=fabs(my_document_features[document_index1]->CurvatureAli20Hist100[x]-my_document_features[document_index2]->CurvatureAli20Hist100[x]); for(int x=0;x<100;x++) CurvaturesDerivateAli20Hist100+=fabs(my_document_features[document_index1]->CurvaturesDerivateAli20Hist100[x]-my_document_features[document_index2]->CurvaturesDerivateAli20Hist100[x]); for(int x=0;x<4;x++) 
chaincodeHist_4+=fabs(my_document_features[document_index1]->chaincodeHist_4[x]-my_document_features[document_index2]->chaincodeHist_4[x]); for(int x=0;x<8;x++) chaincodeHist_8+=fabs(my_document_features[document_index1]->chaincodeHist_8[x]-my_document_features[document_index2]->chaincodeHist_8[x]); for(int x=0;x<64;x++) chaincode8order2_64+=fabs(my_document_features[document_index1]->chaincode8order2_64[x]-my_document_features[document_index2]->chaincode8order2_64[x]); for(int x=0;x<16;x++) chaincode4order2_16+=fabs(my_document_features[document_index1]->chaincode4order2_16[x]-my_document_features[document_index2]->chaincode4order2_16[x]); for(int x=0;x<64;x++) chaincode4order3_64+=fabs(my_document_features[document_index1]->chaincode4order3_64[x]-my_document_features[document_index2]->chaincode4order3_64[x]); for(int x=0;x<512;x++) chaincode8order3_512+=fabs(my_document_features[document_index1]->chaincode8order3_512[x]-my_document_features[document_index2]->chaincode8order3_512[x]); for(int x=0;x<256;x++) chaincode4order4_256+=fabs(my_document_features[document_index1]->chaincode4order4_256[x]-my_document_features[document_index2]->chaincode4order4_256[x]); for(int x=0;x<4096;x++) chaincode8order4_4096+=fabs(my_document_features[document_index1]->chaincode8order4_4096[x]-my_document_features[document_index2]->chaincode8order4_4096[x]);
 if(my_document_features[document_index1]->writer_id==my_document_features[document_index2]->writer_id) differences<<""1,""; else differences<<""0,"";
 //[the statement that streamed the computed distances, the end of this loop and the opening of the test-file read were eaten by the forum's HTML filter; the skeleton below is reconstructed from the surrounding code, and the file name and array size are assumptions]
 //differences<<NumberOfConnectedComponents_1<<"",""<<NumberOfHoles_1<<"",""<< ... <<chaincode8order4_4096<<""\n"";
 }
 //testing phase: reading the questioned documents
 std::ifstream testdata(""test.csv"");
 document_features**my_questioned_document_features=new document_features*[53];
 linenum=0;
 while (getline (testdata, line))
 {
 istringstream linestream(line);
 if(linenum>0) { int row_index=linenum-1; my_questioned_document_features[row_index]=new document_features(); getline (linestream, item, ','); //my_questioned_document_features[row_index]->writer_id=atoi(item.c_str());
 getline (linestream, item, ','); my_questioned_document_features[row_index]->NumberOfConnectedComponents_1[0]=atof(item.c_str());
 getline (linestream, item, ','); my_questioned_document_features[row_index]->NumberOfHoles_1[0]=atof(item.c_str());
 getline (linestream, item, ','); my_questioned_document_features[row_index]->SpatialMoment00=atof(item.c_str());
 getline (linestream, item, ','); my_questioned_document_features[row_index]->SpatialMoment10=atof(item.c_str());
 getline (linestream, item, ','); my_questioned_document_features[row_index]->SpatialMoment20=atof(item.c_str());
 getline (linestream, item, ','); my_questioned_document_features[row_index]->SpatialMoment30=atof(item.c_str());
 getline (linestream, item, ','); my_questioned_document_features[row_index]->SpatialMoment01=atof(item.c_str());
 getline (linestream, item, ','); my_questioned_document_features[row_index]->SpatialMoment11=atof(item.c_str());
 getline (linestream, item, ','); my_questioned_document_features[row_index]->SpatialMoment21=atof(item.c_str());
 getline (linestream, item, ','); my_questioned_document_features[row_index]->SpatialMoment02=atof(item.c_str());
 getline (linestream, item, ','); my_questioned_document_features[row_index]->SpatialMoment12=atof(item.c_str());
 getline (linestream, item, ','); my_questioned_document_features[row_index]->SpatialMoment03=atof(item.c_str());
 getline (linestream, item, ','); my_questioned_document_features[row_index]->CentralMoment00=atof(item.c_str());
 getline (linestream, item, ','); my_questioned_document_features[row_index]->CentralMoment10=atof(item.c_str());
 getline (linestream, item, ','); my_questioned_document_features[row_index]->CentralMoment20=atof(item.c_str());
 getline (linestream, item, ',');
my_questioned_document_features[row_index]->CentralMoment30=atof(item.c_str()); getline (linestream, item, ','); my_questioned_document_features[row_index]->CentralMoment01=atof(item.c_str()); getline (linestream, item, ','); my_questioned_document_features[row_index]->CentralMoment11=atof(item.c_str()); getline (linestream, item, ','); my_questioned_document_features[row_index]->CentralMoment21=atof(item.c_str()); getline (linestream, item, ','); my_questioned_document_features[row_index]->CentralMoment02=atof(item.c_str()); getline (linestream, item, ','); my_questioned_document_features[row_index]->CentralMoment12=atof(item.c_str()); getline (linestream, item, ','); my_questioned_document_features[row_index]->CentralMoment03=atof(item.c_str()); getline (linestream, item, ','); my_questioned_document_features[row_index]->NormalizedCentralMoment00=atof(item.c_str()); getline (linestream, item, ','); my_questioned_document_features[row_index]->NormalizedCentralMoment10=atof(item.c_str()); getline (linestream, item, ','); my_questioned_document_features[row_index]->NormalizedCentralMoment20=atof(item.c_str()); getline (linestream, item, ','); my_questioned_document_features[row_index]->NormalizedCentralMoment30=atof(item.c_str()); getline (linestream, item, ','); my_questioned_document_features[row_index]->NormalizedCentralMoment01=atof(item.c_str()); getline (linestream, item, ','); my_questioned_document_features[row_index]->NormalizedCentralMoment11=atof(item.c_str()); getline (linestream, item, ','); my_questioned_document_features[row_index]->NormalizedCentralMoment21=atof(item.c_str()); getline (linestream, item, ','); my_questioned_document_features[row_index]->NormalizedCentralMoment02=atof(item.c_str()); getline (linestream, item, ','); my_questioned_document_features[row_index]->NormalizedCentralMoment12=atof(item.c_str()); getline (linestream, item, ','); my_questioned_document_features[row_index]->NormalizedCentralMoment03=atof(item.c_str()); getline (linestream, item, ','); my_questioned_document_features[row_index]->HuMoment1=atof(item.c_str()); getline (linestream, item, ','); my_questioned_document_features[row_index]->HuMoment2=atof(item.c_str()); getline (linestream, item, ','); my_questioned_document_features[row_index]->HuMoment3=atof(item.c_str()); getline (linestream, item, ','); my_questioned_document_features[row_index]->HuMoment4=atof(item.c_str()); getline (linestream, item, ','); my_questioned_document_features[row_index]->HuMoment5=atof(item.c_str()); getline (linestream, item, ','); my_questioned_document_features[row_index]->HuMoment6=atof(item.c_str()); getline (linestream, item, ','); my_questioned_document_features[row_index]->HuMoment7=atof(item.c_str()); for(int x=0;x<10;x++) { getline (linestream, item, ','); my_questioned_document_features[row_index]->XProjectionHist10[x]=atof(item.c_str()); } for(int x=0;x<10;x++) { getline (linestream, item, ','); my_questioned_document_features[row_index]->YProjectionHist10[x]=atof(item.c_str()); } for(int x=0;x<10;x++) { getline (linestream, item, ','); my_questioned_document_features[row_index]->XFilledProjectionHist10[x]=atof(item.c_str()); } for(int x=0;x<10;x++) { getline (linestream, item, ','); my_questioned_document_features[row_index]->YFilledProjectionHist10[x]=atof(item.c_str()); } for(int x=0;x<100;x++) { getline (linestream, item, ','); my_questioned_document_features[row_index]->Distribution10x10_100[x]=atof(item.c_str()); } getline (linestream, item, ','); 
my_questioned_document_features[row_index]->Barycenter2[0]=atof(item.c_str()); getline (linestream, item, ','); my_questioned_document_features[row_index]->Barycenter2[1]=atof(item.c_str()); getline (linestream, item, ','); my_questioned_document_features[row_index]->numberOfPixels1[0]=atof(item.c_str()); getline (linestream, item, ','); my_questioned_document_features[row_index]->Fourier_1[0]=atof(item.c_str()); for(int x=0;x<5;x++) { getline (linestream, item, ','); my_questioned_document_features[row_index]->Fourier_5[x]=atof(item.c_str()); } for(int x=0;x<9;x++) { getline (linestream, item, ','); my_questioned_document_features[row_index]->Fourier_9[x]=atof(item.c_str()); } for(int x=0;x<15;x++) { getline (linestream, item, ','); my_questioned_document_features[row_index]->Fourier_15[x]=atof(item.c_str()); } getline (linestream, item, ','); my_questioned_document_features[row_index]->nbbranches_1[0]=atof(item.c_str()); for(int x=0;x<10;x++) { getline (linestream, item, ','); my_questioned_document_features[row_index]->LengthsOfBranchesHist_10[x]=atof(item.c_str()); } for(int x=0;x<30;x++) { getline (linestream, item, ','); my_questioned_document_features[row_index]->ThicknessLengthsCircleHist30[x]=atof(item.c_str()); } for(int x=0;x<10;x++) { getline (linestream, item, ','); my_questioned_document_features[row_index]->tortuosityHist10[x]=atof(item.c_str()); } for(int x=0;x<10;x++) { getline (linestream, item, ','); my_questioned_document_features[row_index]->tortuosityDirectionHist10[x]=atof(item.c_str()); } for(int x=0;x<10;x++) { getline (linestream, item, ','); my_questioned_document_features[row_index]->tortuosityDerivateHist10[x]=atof(item.c_str()); } for(int x=0;x<10;x++) { getline (linestream, item, ','); my_questioned_document_features[row_index]->tortuosityDerivateDirectionHist10[x]=atof(item.c_str()); } for(int x=0;x<10;x++) { getline (linestream, item, ','); my_questioned_document_features[row_index]->DirectionPerpendicular5Hist10[x]=atof(item.c_str()); } for(int x=0;x<100;x++) { getline (linestream, item, ','); my_questioned_document_features[row_index]->CurvaturePerpendicular5Hist100[x]=atof(item.c_str()); } for(int x=0;x<256;x++) { getline (linestream, item, ','); my_questioned_document_features[row_index]->luminanceHist256[x]=atof(item.c_str()); } for(int x=0;x<100;x++) { getline (linestream, item, ','); my_questioned_document_features[row_index]->CurvatureAli5Hist100[x]=atof(item.c_str()); } for(int x=0;x<100;x++) { getline (linestream, item, ','); my_questioned_document_features[row_index]->CurvaturesDerivateAli5Hist100[x]=atof(item.c_str()); } for(int x=0;x<100;x++) { getline (linestream, item, ','); my_questioned_document_features[row_index]->CurvatureAli10Hist100[x]=atof(item.c_str()); } for(int x=0;x<100;x++) { getline (linestream, item, ','); my_questioned_document_features[row_index]->CurvaturesDerivateAli10Hist100[x]=atof(item.c_str()); } for(int x=0;x<100;x++) { getline (linestream, item, ','); my_questioned_document_features[row_index]->CurvatureAli15Hist100[x]=atof(item.c_str()); } for(int x=0;x<100;x++) { getline (linestream, item, ','); my_questioned_document_features[row_index]->CurvaturesDerivateAli15Hist100[x]=atof(item.c_str()); } for(int x=0;x<100;x++) { getline (linestream, item, ','); my_questioned_document_features[row_index]->CurvatureAli20Hist100[x]=atof(item.c_str()); } for(int x=0;x<100;x++) { getline (linestream, item, ','); my_questioned_document_features[row_index]->CurvaturesDerivateAli20Hist100[x]=atof(item.c_str()); } for(int x=0;x<4;x++) 
{ getline (linestream, item, ','); my_questioned_document_features[row_index]->chaincodeHist_4[x]=atof(item.c_str()); } for(int x=0;x<8;x++) { getline (linestream, item, ','); my_questioned_document_features[row_index]->chaincodeHist_8[x]=atof(item.c_str()); } for(int x=0;x<64;x++) { getline (linestream, item, ','); my_questioned_document_features[row_index]->chaincode8order2_64[x]=atof(item.c_str()); } for(int x=0;x<16;x++) { getline (linestream, item, ','); my_questioned_document_features[row_index]->chaincode4order2_16[x]=atof(item.c_str()); } for(int x=0;x<64;x++) { getline (linestream, item, ','); my_questioned_document_features[row_index]->chaincode4order3_64[x]=atof(item.c_str()); } for(int x=0;x<512;x++) { getline (linestream, item, ','); my_questioned_document_features[row_index]->chaincode8order3_512[x]=atof(item.c_str()); } for(int x=0;x<256;x++) { getline (linestream, item, ','); my_questioned_document_features[row_index]->chaincode4order4_256[x]=atof(item.c_str()); } for(int x=0;x<4096;x++) { getline (linestream, item, ','); my_questioned_document_features[row_index]->chaincode8order4_4096[x]=atof(item.c_str()); } } linenum++; } //comparing all the documents of the test set with all the documents of the training set for(int document_index1=0;document_index1<53;document_index1++) { double maxprob=0; for(int document_index2=0;document_index2<108;document_index2++) { double NumberOfConnectedComponents_1=0,NumberOfHoles_1=0,SpatialMoment00=0,SpatialMoment10=0,SpatialMoment20=0,SpatialMoment30=0,SpatialMoment01=0,SpatialMoment11=0,SpatialMoment21=0,SpatialMoment02=0,SpatialMoment12=0,SpatialMoment03=0,CentralMoment00=0,CentralMoment10=0,CentralMoment20=0,CentralMoment30=0,CentralMoment01=0,CentralMoment11=0,CentralMoment21=0,CentralMoment02=0,CentralMoment12=0,CentralMoment03=0,NormalizedCentralMoment00=0,NormalizedCentralMoment10=0,NormalizedCentralMoment20=0,NormalizedCentralMoment30=0,NormalizedCentralMoment01=0,NormalizedCentralMoment11=0,NormalizedCentralMoment21=0,NormalizedCentralMoment02=0,NormalizedCentralMoment12=0,NormalizedCentralMoment03=0,HuMoment1=0,HuMoment2=0,HuMoment3=0,HuMoment4=0,HuMoment5=0,HuMoment6=0,HuMoment7=0,ThicknessLengthsCircleHist30=0,XProjectionHist10=0,YProjectionHist10=0,Distribution10x10_100=0,tortuosityHist10=0,tortuosityDirectionHist10=0,tortuosityDerivateHist10=0,tortuosityDerivateDirectionHist10=0,nbbranches_1=0,LengthsOfBranchesHist_10=0,numberOfPixels1=0,XFilledProjectionHist10=0,YFilledProjectionHist10=0,Barycenter2=0,DirectionPerpendicular5Hist10=0,CurvaturePerpendicular5Hist100=0,luminanceHist256=0,CurvatureAli5Hist100=0,CurvaturesDerivateAli5Hist100=0,CurvatureAli10Hist100=0,CurvaturesDerivateAli10Hist100=0,CurvatureAli15Hist100=0,CurvaturesDerivateAli15Hist100=0,CurvatureAli20Hist100=0,CurvaturesDerivateAli20Hist100=0,chaincodeHist_4=0,chaincodeHist_8=0,chaincode8order2_64=0,chaincode4order2_16=0,chaincode4order3_64=0,chaincode8order3_512=0,chaincode4order4_256=0,chaincode8order4_4096=0,Fourier_1=0,Fourier_5=0,Fourier_9=0,Fourier_15=0; NumberOfConnectedComponents_1+=fabs(my_questioned_document_features[document_index1]->NumberOfConnectedComponents_1[0]-my_document_features[document_index2]->NumberOfConnectedComponents_1[0]); NumberOfHoles_1+=fabs(my_questioned_document_features[document_index1]->NumberOfHoles_1[0]-my_document_features[document_index2]->NumberOfHoles_1[0]); SpatialMoment00+=fabs(my_questioned_document_features[document_index1]->SpatialMoment00-my_document_features[document_index2]->SpatialMoment00); 
SpatialMoment10+=fabs(my_questioned_document_features[document_index1]->SpatialMoment10-my_document_features[document_index2]->SpatialMoment10); SpatialMoment20+=fabs(my_questioned_document_features[document_index1]->SpatialMoment20-my_document_features[document_index2]->SpatialMoment20); SpatialMoment30+=fabs(my_questioned_document_features[document_index1]->SpatialMoment30-my_document_features[document_index2]->SpatialMoment30); SpatialMoment01+=fabs(my_questioned_document_features[document_index1]->SpatialMoment01-my_document_features[document_index2]->SpatialMoment01); SpatialMoment11+=fabs(my_questioned_document_features[document_index1]->SpatialMoment11-my_document_features[document_index2]->SpatialMoment11); SpatialMoment21+=fabs(my_questioned_document_features[document_index1]->SpatialMoment21-my_document_features[document_index2]->SpatialMoment21); SpatialMoment02+=fabs(my_questioned_document_features[document_index1]->SpatialMoment02-my_document_features[document_index2]->SpatialMoment02); SpatialMoment12+=fabs(my_questioned_document_features[document_index1]->SpatialMoment12-my_document_features[document_index2]->SpatialMoment12); SpatialMoment03+=fabs(my_questioned_document_features[document_index1]->SpatialMoment03-my_document_features[document_index2]->SpatialMoment03); CentralMoment00+=fabs(my_questioned_document_features[document_index1]->CentralMoment00-my_document_features[document_index2]->CentralMoment00); CentralMoment10+=fabs(my_questioned_document_features[document_index1]->CentralMoment10-my_document_features[document_index2]->CentralMoment10); CentralMoment20+=fabs(my_questioned_document_features[document_index1]->CentralMoment20-my_document_features[document_index2]->CentralMoment20); CentralMoment30+=fabs(my_questioned_document_features[document_index1]->CentralMoment30-my_document_features[document_index2]->CentralMoment30); CentralMoment01+=fabs(my_questioned_document_features[document_index1]->CentralMoment01-my_document_features[document_index2]->CentralMoment01); CentralMoment11+=fabs(my_questioned_document_features[document_index1]->CentralMoment11-my_document_features[document_index2]->CentralMoment11); CentralMoment21+=fabs(my_questioned_document_features[document_index1]->CentralMoment21-my_document_features[document_index2]->CentralMoment21); CentralMoment02+=fabs(my_questioned_document_features[document_index1]->CentralMoment02-my_document_features[document_index2]->CentralMoment02); CentralMoment12+=fabs(my_questioned_document_features[document_index1]->CentralMoment12-my_document_features[document_index2]->CentralMoment12); CentralMoment03+=fabs(my_questioned_document_features[document_index1]->CentralMoment03-my_document_features[document_index2]->CentralMoment03); NormalizedCentralMoment00+=fabs(my_questioned_document_features[document_index1]->NormalizedCentralMoment00-my_document_features[document_index2]->NormalizedCentralMoment00); NormalizedCentralMoment10+=fabs(my_questioned_document_features[document_index1]->NormalizedCentralMoment10-my_document_features[document_index2]->NormalizedCentralMoment10); NormalizedCentralMoment20+=fabs(my_questioned_document_features[document_index1]->NormalizedCentralMoment20-my_document_features[document_index2]->NormalizedCentralMoment20); NormalizedCentralMoment30+=fabs(my_questioned_document_features[document_index1]->NormalizedCentralMoment30-my_document_features[document_index2]->NormalizedCentralMoment30); 
NormalizedCentralMoment01+=fabs(my_questioned_document_features[document_index1]->NormalizedCentralMoment01-my_document_features[document_index2]->NormalizedCentralMoment01); NormalizedCentralMoment11+=fabs(my_questioned_document_features[document_index1]->NormalizedCentralMoment11-my_document_features[document_index2]->NormalizedCentralMoment11); NormalizedCentralMoment21+=fabs(my_questioned_document_features[document_index1]->NormalizedCentralMoment21-my_document_features[document_index2]->NormalizedCentralMoment21); NormalizedCentralMoment02+=fabs(my_questioned_document_features[document_index1]->NormalizedCentralMoment02-my_document_features[document_index2]->NormalizedCentralMoment02); NormalizedCentralMoment12+=fabs(my_questioned_document_features[document_index1]->NormalizedCentralMoment12-my_document_features[document_index2]->NormalizedCentralMoment12); NormalizedCentralMoment03+=fabs(my_questioned_document_features[document_index1]->NormalizedCentralMoment03-my_document_features[document_index2]->NormalizedCentralMoment03); HuMoment1+=fabs(my_questioned_document_features[document_index1]->HuMoment1-my_document_features[document_index2]->HuMoment1); HuMoment2+=fabs(my_questioned_document_features[document_index1]->HuMoment2-my_document_features[document_index2]->HuMoment2); HuMoment3+=fabs(my_questioned_document_features[document_index1]->HuMoment3-my_document_features[document_index2]->HuMoment3); HuMoment4+=fabs(my_questioned_document_features[document_index1]->HuMoment4-my_document_features[document_index2]->HuMoment4); HuMoment5+=fabs(my_questioned_document_features[document_index1]->HuMoment5-my_document_features[document_index2]->HuMoment5); HuMoment6+=fabs(my_questioned_document_features[document_index1]->HuMoment6-my_document_features[document_index2]->HuMoment6); HuMoment7+=fabs(my_questioned_document_features[document_index1]->HuMoment7-my_document_features[document_index2]->HuMoment7); for(int x=0;x<10;x++) XProjectionHist10+=fabs(my_questioned_document_features[document_index1]->XProjectionHist10[x]-my_document_features[document_index2]->XProjectionHist10[x]); for(int x=0;x<10;x++) YProjectionHist10+=fabs(my_questioned_document_features[document_index1]->YProjectionHist10[x]-my_document_features[document_index2]->YProjectionHist10[x]); for(int x=0;x<10;x++) XFilledProjectionHist10+=fabs(my_questioned_document_features[document_index1]->XFilledProjectionHist10[x]-my_document_features[document_index2]->XFilledProjectionHist10[x]); for(int x=0;x<10;x++) YFilledProjectionHist10+=fabs(my_questioned_document_features[document_index1]->YFilledProjectionHist10[x]-my_document_features[document_index2]->YFilledProjectionHist10[x]); for(int x=0;x<10;x++) Distribution10x10_100+=fabs(my_questioned_document_features[document_index1]->Distribution10x10_100[x]-my_document_features[document_index2]->Distribution10x10_100[x]); for(int x=0;x<2;x++) Barycenter2+=fabs(my_questioned_document_features[document_index1]->Barycenter2[x]-my_document_features[document_index2]->Barycenter2[x]); numberOfPixels1+=fabs(my_questioned_document_features[document_index1]->numberOfPixels1[0]-my_document_features[document_index2]->numberOfPixels1[0]); for(int x=0;x<1;x++) Fourier_1+=fabs(my_questioned_document_features[document_index1]->Fourier_1[x]-my_document_features[document_index2]->Fourier_1[x]); for(int x=0;x<5;x++) Fourier_5+=fabs(my_questioned_document_features[document_index1]->Fourier_5[x]-my_document_features[document_index2]->Fourier_5[x]); for(int x=0;x<9;x++) 
Fourier_9+=fabs(my_questioned_document_features[document_index1]->Fourier_9[x]-my_document_features[document_index2]->Fourier_9[x]); for(int x=0;x<15;x++) Fourier_15+=fabs(my_questioned_document_features[document_index1]->Fourier_15[x]-my_document_features[document_index2]->Fourier_15[x]); nbbranches_1+=fabs(my_questioned_document_features[document_index1]->nbbranches_1[0]-my_document_features[document_index2]->nbbranches_1[0]); for(int x=0;x<10;x++) LengthsOfBranchesHist_10+=fabs(my_questioned_document_features[document_index1]->LengthsOfBranchesHist_10[x]-my_document_features[document_index2]->LengthsOfBranchesHist_10[x]); for(int x=0;x<30;x++) ThicknessLengthsCircleHist30+=fabs(my_questioned_document_features[document_index1]->ThicknessLengthsCircleHist30[x]-my_document_features[document_index2]->ThicknessLengthsCircleHist30[x]); for(int x=0;x<10;x++) tortuosityHist10+=fabs(my_questioned_document_features[document_index1]->tortuosityHist10[x]-my_document_features[document_index2]->tortuosityHist10[x]); for(int x=0;x<10;x++) tortuosityDirectionHist10+=fabs(my_questioned_document_features[document_index1]->tortuosityDirectionHist10[x]-my_document_features[document_index2]->tortuosityDirectionHist10[x]); for(int x=0;x<10;x++) tortuosityDerivateHist10+=fabs(my_questioned_document_features[document_index1]->tortuosityDerivateHist10[x]-my_document_features[document_index2]->tortuosityDerivateHist10[x]); for(int x=0;x<10;x++) tortuosityDerivateDirectionHist10+=fabs(my_questioned_document_features[document_index1]->tortuosityDerivateDirectionHist10[x]-my_document_features[document_index2]->tortuosityDerivateDirectionHist10[x]); for(int x=0;x<10;x++) DirectionPerpendicular5Hist10+=fabs(my_questioned_document_features[document_index1]->DirectionPerpendicular5Hist10[x]-my_document_features[document_index2]->DirectionPerpendicular5Hist10[x]); for(int x=0;x<100;x++) CurvaturePerpendicular5Hist100+=fabs(my_questioned_document_features[document_index1]->CurvaturePerpendicular5Hist100[x]-my_document_features[document_index2]->CurvaturePerpendicular5Hist100[x]); for(int x=0;x<256;x++) luminanceHist256+=fabs(my_questioned_document_features[document_index1]->luminanceHist256[x]-my_document_features[document_index2]->luminanceHist256[x]); for(int x=0;x<100;x++) CurvatureAli5Hist100+=fabs(my_questioned_document_features[document_index1]->CurvatureAli5Hist100[x]-my_document_features[document_index2]->CurvatureAli5Hist100[x]); for(int x=0;x<100;x++) CurvaturesDerivateAli5Hist100+=fabs(my_questioned_document_features[document_index1]->CurvaturesDerivateAli5Hist100[x]-my_document_features[document_index2]->CurvaturesDerivateAli5Hist100[x]); for(int x=0;x<100;x++) CurvatureAli10Hist100+=fabs(my_questioned_document_features[document_index1]->CurvatureAli10Hist100[x]-my_document_features[document_index2]->CurvatureAli10Hist100[x]); for(int x=0;x<100;x++) CurvaturesDerivateAli10Hist100+=fabs(my_questioned_document_features[document_index1]->CurvaturesDerivateAli10Hist100[x]-my_document_features[document_index2]->CurvaturesDerivateAli10Hist100[x]); for(int x=0;x<100;x++) CurvatureAli15Hist100+=fabs(my_questioned_document_features[document_index1]->CurvatureAli15Hist100[x]-my_document_features[document_index2]->CurvatureAli15Hist100[x]); for(int x=0;x<100;x++) CurvaturesDerivateAli15Hist100+=fabs(my_questioned_document_features[document_index1]->CurvaturesDerivateAli15Hist100[x]-my_document_features[document_index2]->CurvaturesDerivateAli15Hist100[x]); for(int x=0;x<100;x++) 
CurvatureAli20Hist100+=fabs(my_questioned_document_features[document_index1]->CurvatureAli20Hist100[x]-my_document_features[document_index2]->CurvatureAli20Hist100[x]); for(int x=0;x<100;x++) CurvaturesDerivateAli20Hist100+=fabs(my_questioned_document_features[document_index1]->CurvaturesDerivateAli20Hist100[x]-my_document_features[document_index2]->CurvaturesDerivateAli20Hist100[x]); for(int x=0;x<4;x++) chaincodeHist_4+=fabs(my_questioned_document_features[document_index1]->chaincodeHist_4[x]-my_document_features[document_index2]->chaincodeHist_4[x]); for(int x=0;x<8;x++) chaincodeHist_8+=fabs(my_questioned_document_features[document_index1]->chaincodeHist_8[x]-my_document_features[document_index2]->chaincodeHist_8[x]); for(int x=0;x<64;x++) chaincode8order2_64+=fabs(my_questioned_document_features[document_index1]->chaincode8order2_64[x]-my_document_features[document_index2]->chaincode8order2_64[x]); for(int x=0;x<16;x++) chaincode4order2_16+=fabs(my_questioned_document_features[document_index1]->chaincode4order2_16[x]-my_document_features[document_index2]->chaincode4order2_16[x]); for(int x=0;x<64;x++) chaincode4order3_64+=fabs(my_questioned_document_features[document_index1]->chaincode4order3_64[x]-my_document_features[document_index2]->chaincode4order3_64[x]); for(int x=0;x<512;x++) chaincode8order3_512+=fabs(my_questioned_document_features[document_index1]->chaincode8order3_512[x]-my_document_features[document_index2]->chaincode8order3_512[x]); for(int x=0;x<256;x++) chaincode4order4_256+=fabs(my_questioned_document_features[document_index1]->chaincode4order4_256[x]-my_document_features[document_index2]->chaincode4order4_256[x]); for(int x=0;x<4096;x++) chaincode8order4_4096+=fabs(my_questioned_document_features[document_index1]->chaincode8order4_4096[x]-my_document_features[document_index2]->chaincode8order4_4096[x]); //using the model previously built double linear=(-10.667528249636) + -0.0667090106640684 * NumberOfConnectedComponents_1 +9.34910163750235 * ThicknessLengthsCircleHist30 + 30.4471654988069 *CurvaturePerpendicular5Hist100 + 30.5600351874986 * luminanceHist256 +33.6853635484793 * chaincode8order3_512; double prob_same_writer=1. / (1. 
+ exp(linear)); //the most probable writer if(prob_same_writer>maxprob) { maxprob=prob_same_writer; my_questioned_document_features[document_index1]->writer_id=my_document_features[document_index2]->writer_id; } } } //creating the file to submit //this benchmark does not provide a way of detecting unknown writers std::ofstream submission_file(""new_benchmark.csv""); submission_file<<"",031,034,035,036,037,038,039,041,042,046,047,048,049,050,051,052,053,054,055,056,057,058,059,060,061,062,063,064,065,066,067,068,069,070,071,072,073,074,077,078,080,081,082,083,084,087,088,090,091,092,093,095,096,unknown\n""; int writers[]={31,34,35,36,37,38,39,41,42,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,77,78,80,81,82,83,84,87,88,90,91,92,93,95,96}; string questioned_writers[]={""AA"",""AB"",""AC"",""AD"",""AE"",""AF"",""AG"",""AH"",""AI"",""AJ"",""AK"",""AM"",""AN"",""AO"",""AP"",""AQ"",""AR"",""AS"",""AT"",""AU"",""AV"",""AW"",""AX"",""AY"",""AZ"",""BA"",""BB"",""BC"",""BD"",""BE"",""BF"",""BG"",""BH"",""BI"",""BK"",""BL"",""BM"",""BN"",""BO"",""BP"",""BQ"",""BR"",""BS"",""BT"",""BU"",""BV"",""BW"",""BX"",""BY"",""BZ"",""CA"",""CB"",""CC""}; for(int document_index1=0;document_index1<53;document_index1++) { submission_file<<questioned_writers[document_index1]; bool at_least_one=false; for(int writer_index=0;writer_index<53;writer_index++) { if(my_questioned_document_features[document_index1]->writer_id==writers[writer_index]) { at_least_one=true; submission_file<<"",1""; } else submission_file<<"",0""; } //unknown writer if(at_least_one)submission_file<<"",0\n""; else submission_file<<"",1\n""; } submission_file.close(); std::cout<<""Press a key to exit...""; _getch();}",0,None,2 ,Sun Mar 13 2011 18:45:09 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/333,/competitions/WIC2011,7th /wcukierski,Luck,"With St. Patrick's Day coming up, I thought it might be interesting to bring up the topic of luck. How much of this contest will be determined by luck?
In my testing so far, I've seen two effects which cause a disparity between the training and test scores. The first is overfitting. The second is the unpredictable ""randomness"" of a method when trained on a small sample.
Even a robust method will have variable performance when trained on a small sample. Some data points are highly representative of their class and easy to classify; some are close to the margin and therefore harder. However, different methods value training points in different ways. One classifier may work well when trained with atypical points, while another might become completely unstable and useless.
In this contest, we are given the first 250 points for training, with no choice but to use these points. I've run a few cross-validation experiments with target_practice to see just how much the AUC changes when presenting the same algorithms with different subsets of the data. The variance is large, sometimes large enough to be the difference between 1st and 20th on the leaderboard. This effect is very hard to predict and is not addressed by the usual measures to prevent overfitting.
In short, the fewer points you select to train on, the more variable the underlying ""quality"" of these points for training will be. (You can convince yourself of this by considering the limiting case where one randomly draws only members of one class for training, in which case it doesn't matter what you do to prevent overfitting.)
So, will the determining factor of this contest be the algorithm(s) that perform well using the 250 sample points, or the algorithm(s) that best guard against overfitting?
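To make that concrete, here is a minimal R sketch of the kind of subsampling experiment I mean (X, y, and the plain logistic fit are illustrative placeholders, not my actual features or method):
# Sketch: spread of test AUC when the same method is trained on
# repeated random 250-point subsamples. X is a data.frame of features,
# y a 0/1 vector; both are assumed to be loaded already.
set.seed(1)
auc <- function(scores, truth) {            # rank-sum (Mann-Whitney) AUC
  r <- rank(scores)
  n1 <- sum(truth == 1); n0 <- sum(truth == 0)
  (sum(r[truth == 1]) - n1 * (n1 + 1) / 2) / (n1 * n0)
}
aucs <- replicate(200, {
  train <- sample(nrow(X), 250)             # contest-sized training set
  fit <- glm(y[train] ~ ., data = X[train, ], family = binomial)
  auc(predict(fit, newdata = X[-train, ], type = ""response""), y[-train])
})
sd(aucs)  # the spread, not the mean, is the point here
The standard deviation you get back is the ""luck"" term: it tells you how much of a leaderboard difference can come purely from which 250 points you happened to be given.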
If you have thoughts, chime in below.",0,None,4 ,Wed Mar 16 2011 20:01:03 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/335,/competitions/overfitting,5th /dalewong,Newbie question about ROC and AUC,"Sorry to bother you guys with a basic question, but I'm new to statistical algorithms. I've done a lot of algorithmic coding, but never in this field.
I'm confused about how you generate an ROC curve and its AUC from the submitted solution. My limited understanding (gleaned only from Wikipedia) is that a single solution, consisting of the classification of 19750 cases, would yield a single point in the ROC space. My impression is that a curve is generated by varying some threshold parameter (that trades off false-positives versus false-negatives) and generating multiple solutions. But for this contest, we are only submitting a single solution, so I'm confused.
I have seen examples online where the classifier returns a probability (rather than a class), and this is used as the threshold parameter to generate a curve. But the solutions for this contest are supposed to be binary 0 or 1, correct?
As a follow-up question, is there some industry-standard open-source program that calculates the ROC curve and its AUC?
Thank you in advance for helping me understand this basic issue.",0,None,4 ,Sun Mar 20 2011 20:59:32 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/336,/competitions/overfitting,None /jeffsonas,Documentation of Methodology (Glicko),"I have been meaning to post an example of a well-documented chess rating methodology, so that people can understand what I would like to see from prizewinners at the end when it is time to reveal your methodology. I finally found a little time to assemble the documentation for one such system (I chose Glicko), and hopefully I will be able to do this for others as well (my next two targets will be Elo and Chessmetrics).
There is clearly great value if these research efforts are repeatable, and since we are all using different technologies or different ways to write up our methodology, it can be very challenging to take someone else's work and try to repeat it. How do you know if you got it right? One obvious approach is to have a common set of sample data, and for the documentation of methodology to include output files that result from running the code against the sample data. It is not necessary for this to be a huge amount of sample data, just enough to exercise the system a bit. And enough that someone else can feel confident they implemented the system correctly if they get the same output data at the end. So I picked twelve top grandmasters from the 1990's, and pulled five years' worth of games among just those twelve players (from Chessbase historical game databases), to constitute the training data (852 games). And I also picked the next five months' worth of games among those twelve players to be the test data (69 games total). There is also a list of initial ratings, and player names, and the solution set (telling you the results from the test set). These five files, along with a readme file, are included in the attached file ""sample_datasets.zip"". My intention is that this can constitute useful example data that people can utilize during the documentation of their system, both for explaining detailed examples and also by providing output files.
It certainly is not a large enough dataset to draw any conclusions about predictive power, but of course that is not the point here.
A fully documented chess rating methodology would include a prose description of the algorithm, along with any necessary references, accompanied by code that implements the methodology. It should also contain the sample data, the values of any system constants, and the output files that you get from running the code against the sample data. I have done all this for Glicko, and it is included in the attached file ""glicko_documentation.zip"". I implemented it in Microsoft SQL Server, so my code consists of database SQL scripts, but if I had done it in some other technology, my code would be C# files or R scripts or whatever. I suppose that if different people implement the same system in different technologies, we could add the code to the zip files; that would be a very useful resource!
So anyway, I hope this helps. If you are eligible to win a prize and need to document your system in order to receive the prize, this is the kind of documentation I would like to see. Please note that these writeups will be made publicly available. And in fact I would love to see documentation of anyone's system, not just the prizewinners'. I understand that people may be reluctant to post their methodology while the contest is running, but you might be surprised at how it can benefit you to have others comment on your approach, and I hope many of you will at least post your methodology after the contest is complete. Documentation should include a single zip file consisting of:
(1) A PDF file containing a prose description of your methodology, although you can just refer to the references where appropriate.
(2) A folder named ""references"" that includes any papers, writeups, URLs, etc. that you used within your methodology.
(3) A folder named ""sample_datasets"" that includes the standard set of sample datafiles (this should always be the same).
(4) A folder named ""implementation"" that includes your distributable source code.
(5) A folder named ""sample_output"" that includes any log files and final output files that result from running your methodology against the sample dataset.
(6) Note that each of those four folders should include a readme text file that describes the contents of the folder.",0,None,2 ,Tue Mar 22 2011 02:06:34 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/337,/competitions/ChessRatings2,None /jeffsonas,Article #1 about FIDE ratings,"I recently wrote Part 1 in a series of articles about the FIDE Elo rating system. Here it is in PDF format. There is some overlap in the graphs with the contest dataset, but the graphs only reflect aggregate data and don't give anything away. You might have noticed in my writeup about the benchmarks how I often needed to apply a ""compression factor"" to the rating differences when making predictions, in order to get better accuracy. This only proved unnecessary for Glicko and Chessmetrics, but was always necessary for Elo ratings. The article provides a graphical illustration of this problem.
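(To make the idea concrete: a compression factor simply shrinks the rating difference before it goes into the usual Elo expectancy. A small R sketch, where the 0.8 value is purely illustrative and not a number from the article:)
# Elo logistic expectancy with a multiplicative compression factor
# applied to the rating difference (the 0.8 is made up for illustration).
expected_score <- function(rating_diff, compression = 0.8) {
  1 / (1 + 10 ^ (-compression * rating_diff / 400))
}
expected_score(200)       # compressed prediction, pulled towards 0.5
expected_score(200, 1)    # standard uncompressed Elo expectancy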
There is also discussion about whether the proper relationship between rating difference and expected score should be a linear model or a logistic model.",0,None,3 ,Tue Mar 22 2011 06:42:57 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/338,/competitions/ChessRatings2,None /jeffsonas,Anyone trying for the FIDE prize?,"Hi everyone, I would really like to get a sense for whether people are trying to win the FIDE prize. The only way for me to know this is if you tell me, either publicly on this forum or in a message to me. Since there are ten spots available, and the winner among those ten will be based on FIDE reviewing people's writeups about their system (rather than just who was slightly more accurate than others), I don't think the competition for the FIDE prize needs to be particularly cutthroat. But of course it is up to you to decide how much to reveal. Another point is that you might wish to verify whether your approach does indeed meet the conditions for the prize, in time for you to correct any violations. Ideally I would like to know your best score so far among entries that seem to be eligible for the FIDE prize, along with how many rating parameters you are maintaining for each player (maximum is ten), and maybe a brief phrase describing your approach, such as ""Elo with more parameters"" or ""variation of Glicko"" or ""novel approach"". Even if you are not planning to compete for the FIDE prize, I would love to know that as well. Basically as much, or as little, as you are willing to share about your participation in the FIDE prize category would be great. Thanks!",0,None,24 ,Tue Mar 22 2011 08:57:39 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/339,/competitions/ChessRatings2,None /imannavidi,My Analysis,"These are the results I got from trying different approaches.
At first I worked with the Glicko model. As I read in Mark Glickman's Ph.D. thesis and papers, it is based on a normal distribution with different standard deviations. Using my implementation I got a score of about 0.26 (with the primary dataset only and without a white-advantage term). After this I tried to build a model based on a normal distribution but with the same standard deviation for everyone; without optimizing parameters, I got a score of 0.275 (I think even with optimization, the best score might be 0.27).
In these two models, I tried to find the ""chaos"" games. By my definition, a game between two players is chaotic if (prediction - 0.5) * (result - 0.5) < 0. For example, if a game's prediction is 0.7 (more than 0.5) but the game's result is 0.2 (less than 0.5), it is considered a chaos game. Based on this definition, there were about 20k chaos games (out of 105k) in the Glicko model and 21k chaos games in the constant-standard-deviation normal model. As you might know, a chaos game contributes a loss of more than 0.30103 to the score. So I decided to try another approach which is more uncertain about the result: a trapezoidal distribution. What I got were extremely conservative results, and even the number of chaos games did not fall. So I think any model will probably have about 20% chaos.
Another approach, which I'm currently working on, is using an extreme value distribution (or semi-Pareto distributions), because I think we should define ""skill"" again. In semi-normal distributions, we suppose that a player can play above his skill, and I think this is not true for chess skill, because in other sports you can use doping to gain more power, but is there anything that gives you more brain power? So I think chess skill is the maximum (and most probable) performance.
By the way, I used another approach for predicting results. Under this loss function, the best score = Pw + 0.5*Pd, in which Pw = probability of a win and Pd = probability of a draw. And I supposed the game would be a Win if S1 - S2 > DrawRange, and a Draw if abs(S1 - S2) < DrawRange, where DrawRange is a system constant and abs is the absolute value function.
This is all my analysis until now. Sorry for my bad English. If anyone else has done the same job I'd like to work with them as a teammate.",0,None,4 ,Tue Mar 22 2011 10:43:18 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/340,/competitions/ChessRatings2,89th /jeffsonas,What programming language are you using?,"Hi everyone, I am curious what programming language people are using to develop their solutions. For the last contest, which had far less data, we took a survey during the first month (on the forum) and had the following results:
5 answers: Java
3 answers: C
2 answers: BASIC, C#, Matlab, PHP, Python
1 answer: C++, GNU Octave, Perl, SQL
By the way, that SQL answer is me. I am interested to know whether people were forced to use a database due to the size of the dataset, or if the other technologies are still working well. I just had someone ask me about this, because R was choking on the size of the data. I didn't realize that might happen - sorry! Unfortunately SQL is painfully slow on this dataset for anything where you need to evaluate it one record at a time (such as TrueSkill) but works great for set-based solutions such as Chessmetrics, Glicko, or Elo. I would guess that C works really well for the one-at-a-time approaches. Is Java still the language of choice? Because of third-party libraries?",0,None,10 ,Wed Mar 23 2011 06:15:30 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/341,/competitions/ChessRatings2,None /jeffsonas,Using the secondary and tertiary training data?,"As long as I am already spamming everyone with questions, why not ask one more: are people finding the secondary and tertiary training datasets to be of use? If you don't know what I am referring to, you can look for the ""Additional Training Datasets"" page (you will need to go to the ""Information"" page first in order to get to the link). I am not quite sure why there have been no questions asked about these so far; hopefully the reason is that the documentation was easy to understand.
My general sense was that the additional training datasets were very useful for the forward-only updating algorithms that need all the data they can get in order to gradually build ratings for everyone (such as Elo), but for something that looks backward in order to reinterpret games from the past (such as Chessmetrics) there was only marginal benefit to using the secondary and tertiary training datasets. Or it is possible that Chessmetrics is just strange since it never maintains ratings for everyone, and that almost all approaches can benefit from the extra data.
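(For anyone unsure what I mean by forward-only: each game is processed once, in date order, so every extra early-period game feeds directly into the ratings. A toy R sketch; the column names, starting rating, and K-factor are made up for illustration:)
# Toy forward-only (Elo-style) pass over a game list ordered by date.
# games: data.frame with columns white, black, score (1/0.5/0 for white).
elo_forward <- function(games, k = 16, start = 1500) {
  ratings <- numeric(0)                       # named vector, grown as players appear
  get_r <- function(p) if (is.na(ratings[p])) start else ratings[p]
  for (i in seq_len(nrow(games))) {
    w <- as.character(games$white[i]); b <- as.character(games$black[i])
    e <- 1 / (1 + 10 ^ ((get_r(b) - get_r(w)) / 400))   # expected score for white
    d <- k * (games$score[i] - e)
    ratings[w] <- get_r(w) + d
    ratings[b] <- get_r(b) - d
  }
  ratings
}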
I know they probably add a level of undesirable complexity, but on the other hand, since so many people were going to be grinding away at this problem, I did want everyone to have as much data as possible.
The other reason I hesitated to include the secondary and tertiary training datasets was that they are of limited practical use going forward, since there will never be any future game results generated in these formats; they are only an artifact of the way data was collected in the past and the fact that we don't have good game-by-game results for the first several years of the training period. So I don't necessarily want to reward people for sophisticated use of the additional training datasets, as this sophistication is only useful within the bounds of this particular contest and has no real-world value.
Anyway, I guess I am just wondering what other people's thoughts are on these additional training datasets...",0,None,4 ,Wed Mar 23 2011 22:35:36 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/342,/competitions/ChessRatings2,None /shenss,"""Leaderboard"" variable amount","Hi all, since there will be a second part of the competition but no leaderboard provided for it (for obvious reasons), maybe making the size of the variable subset public on the forum could further stimulate the competition!? I'll start: for my current public AUC of 0.889959 I worked on 120 variables.",0,None,1 Comment,Thu Mar 24 2011 18:59:04 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/343,/competitions/overfitting,90th /lvdmaaten,Leaderboard errors?,"Is it just me, or are weird things happening on the leaderboard since the recent Kaggle updates? The MAEs of the same submissions appear to have changed twice over the last 24 hours.",0,None,8 ,Mon Mar 28 2011 10:50:17 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/346,/competitions/WIC2011,3rd /yinghaoh85,Happy?,"Ok. Fine. You guys want fair? Post all of your IP addresses and accounts and let's do this formally. Let's review the regulations of the competition, sentence by sentence.",0,None,1 Comment,Tue Mar 29 2011 23:14:04 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/348,/competitions/stayalert,None /antgoldbloom,External Data,"Entrants are welcome to use other data to develop and test their algorithms and entries until 11:59:59 UTC on April 4, 2012 if the data are (i) freely available to all other Entrants and (ii) published (or a link to the data provided) in the “External Data” topic on this Forum within one (1) week of an entry submission using the other data. Entrants may not use any data other than the Data Sets after 11:59:59 UTC on April 4, 2012 without prior approval.",0,None,45 ,Sat Apr 02 2011 18:23:34 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/349,/competitions/hhp,None /karansarao,Variable Transformations,"I was wondering how others are going about discovering the best variable transformations. I remember reading in Olivia Parr Rud about the 20-odd transformations (Log, Inverse, Square, Cube, Roots, Exp, Sin, Cos, the works), where you retain the transformation with the highest Wald chi-square value. One could write an R routine which, for each of the 200 variables, tries out 10 transformations individually and retains the best, and then rebuild using GLMNET.
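Something along these lines is what I have in mind (a sketch only: it scores each candidate by single-variable model deviance rather than a Wald statistic, and X / y stand for your predictor data.frame and binary response):
# For each predictor, fit a one-variable logistic model under each
# candidate transformation and keep the transformation that fits best
# (lowest deviance).
transforms <- list(identity = identity,
                   log      = function(x) log(x + 1),
                   sqrt     = function(x) sqrt(abs(x)),
                   square   = function(x) x^2,
                   inverse  = function(x) 1 / (x + 1))
best_transform <- function(x, y) {
  dev <- sapply(transforms, function(f) deviance(glm(y ~ f(x), family = binomial)))
  names(which.min(dev))
}
# picked <- sapply(X, best_transform, y = y)   # one winning transformation per variable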
Any other approaches...thoughts?",0,None,7 ,Mon Apr 04 2011 16:22:41 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/350,/competitions/overfitting,63rd /davec6371,Drip Feed of Data,"Hi everyone. Does anyone know why the data is being drip-fed to us over several weeks? Is it just because the organisers aren't quite ready yet? Or is there another reason - perhaps related to why the Accuracy Threshold won't be decided for some time either? Cheers, Dave",0,None,7 ,Mon Apr 04 2011 23:54:14 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/351,/competitions/hhp,313th /chefele,Initial Questions about the Rules & Dataset,"First, thanks to all the folks at Kaggle for running this contest. I'm sure today (""launch day"") has been a busy one. I've got a few questions about the rules & datasets --- can any of the organizers provide some answers? Thanks.
RULES QUESTIONS
Leaderboard: On the ""Evaluation"" page, it says entrants can submit beginning April 18th... but on the ""Rules"" page, section 8 (& elsewhere) it says submissions can begin on May 4th. Is the April 18th date a typo? Or can we submit then, but the leaderboard is just not active until May 4th? (I'm assuming a typo, but want to verify.)
Outside Data Use: The rules say outside data is permitted until April 4, 2012, as long as the source is publicly declared in the forums. Can you clarify what happens after that date? Is the intent that after 4/4/2012, we can only use the data sources that others have already declared in the forums? Or is the use of outside data after that date forbidden? (I'm assuming the former, but want to verify.)
Milestone prize winner: In section 13 of the rules, it says that Milestone prize candidates will provide their algorithm code and documentation and that the sponsors (Kaggle?) will ""post the information on the Website for review and testing by other Entrants."" What does ""information"" mean? Does it mean that all the CODE of the winners will be made available to all competitors, or just the high-level descriptions of their algorithms?
DATASET QUESTIONS
Y5: Dataset ""Y5"" is mentioned on the evaluation page, in the FAQ, and in the Rules (section 12). However, it's not described fully. In the Evaluation tab, for example, it mentions ""Y4 (or if applicable, Y5)."" Can you elaborate on Y5? When would competitors use it instead of Y4? Why use the phrase ""if applicable""?
Sampling: Is the set of patients a random sample of all the patients, or was some selection criterion applied? (i.e. were non-emergency hospitalizations, like childbirths, excluded?)
Releases of Data: Any particular reason why the data tables are not all being released at the same time? (e.g. is it being made available as soon as it becomes available from its source?) Just to state the obvious, competing is harder without the full dataset in hand! ;)",0,None,3 ,Tue Apr 05 2011 04:14:53 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/353,/competitions/hhp,84th /jhoward,"Use your real name when signing up, and only sign up once!","Nearly everyone who is signing up seems to be using their real name; however, there are still a few that are very obviously fake. Please folks, realise this: if you don't provide that information accurately now, and in 2 years' time you are at the top of the leaderboard, and your real name does not match what you signed up as, then you will not be eligible for the prize!
This will be audited for the winner, and for progress prize winners. So please double-check now - if you made a mistake, or didn't consider this when entering, please contact us using the link on the site and let us know, and we'll update it for you. (Be sure to do this ASAP - if you wait, it might be too late!) Also, please be very aware of the rule that you can only sign up once - again, please don't get yourself in a situation where you are not eligible to win a prize because in prize-winner auditing it turns out you've got a 2nd account! Don't risk it - there are lots of ways, both obvious and not-so-obvious, that multiple entries can be identified. With 2 years to analyse this data, there's no point risking being disqualified. (The advisory board for this comp includes a Netflix prize winner, a 3-times KDD cup winner, a de-anonymization expert, and so forth - there's a lot of real-world experience there in running and competing in big comps, so they know all the tricks and how to spot them!)",0,None,16 ,Tue Apr 05 2011 04:23:15 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/354,/competitions/hhp,None /warrenblackwell,Any chance the initial data set still comes out today?,"Any chance the initial data set still comes out today? If not, I can stop refreshing! Thanks, WB",0,None,2 ,Tue Apr 05 2011 05:11:54 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/355,/competitions/hhp,None /jhoward,Media coverage of the HPN launch,"It's been great to see some good media coverage of the HPN launch, including information about some of Kaggle's past competition winners. If you spot any interesting coverage, let us know in this thread! I'll start off with this: here's an article from Australia's main business newspaper, which even includes a picture of me in my mum's beautiful garden! http://jhoward.fastmail.fm/media/kaggle/Data%20analysis%20hits%20the%20big%20time.pdf",0,None,2 ,Tue Apr 05 2011 05:28:34 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/356,/competitions/hhp,None /diegoorofino,The Heritage Health Prize - New York City Group,"If you live in New York, and you want to be part of a group of smart engineers and data analysts looking to crack the code behind unnecessary hospitalizations, please become a fan: http://www.facebook.com/pages/The-Heritage-Health-Prize-New-York-City-Group/179749388740137",0,None,1 Comment,Tue Apr 05 2011 05:40:12 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/357,/competitions/hhp,None /ogenex,Missing values in DayInHospital_Y2,I'm wondering why there would be missing values for some patients in the DayInHospital_Y2 table? Wouldn't a missing value imply zero days in hospital?,0,None,11 ,Tue Apr 05 2011 06:17:16 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/358,/competitions/hhp,516th /ejlok1,Questions on Members_Y1 data,"Hi, I'm creating a post for any questions relating to the Members_Y1 data, so anyone with questions relating to this data can just slot them in here. Hope that helps manage the number of posts in the forum.
I'll start off with a dumb question... for AgeAtFirstClaim, I'm getting some values of 1/10/2019 (Oct-19). Is this correct and what does it mean?
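(One likely culprit, worth checking before anything else: a spreadsheet import auto-converting the age band ""10-19"" into the date Oct-19. Reading the raw file with every column kept as text sidesteps that; a small R sketch, assuming the file is named Members_Y1.csv:)
# Keep every column as plain text so age bands like ""10-19"" cannot be
# auto-converted into dates by a spreadsheet-style import.
members <- read.csv(""Members_Y1.csv"", colClasses = ""character"")
table(members$AgeAtFirstClaim)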
Thanks EJ",0,None,2 ,Tue Apr 05 2011 06:45:22 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/359,/competitions/hhp,9th /ferozdsilva,Dates in claims data,The claims data does not contain date field. Can we get the date field also.,0,None,11 ,Tue Apr 05 2011 09:18:01 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/360,/competitions/hhp,None /medgle,External clinical informatics data and Intellectual Property,"First of all, this competition is amazing. :) However, before agreeing to the rules/etc, we had a few questions. 1: Is it the correct interpretation that kaggle/heritagehealthprise has royalty-free ownership of algortihms developed for the competition and as such can develop/license the algorithm to third parties? If no, then please do explain. 2: Over the last 4 years, we have collected ~100 million + data points cross-connecting symptoms, diagnoses, age, gender, duration, tests, and more to analyze clinical data. The data was collected via a large meta-analysis of information from the CDC, NIH, etc. and reviewed by our physicians. For more details about the data/algorithm please visit http://www.medgle.com/front.jsp. Can we use this data and algorithms to help us analyze the data you have provided in this competition? Thanks so much Ash",0,None,4 ,Tue Apr 05 2011 09:49:26 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/361,/competitions/hhp,None /patternengine,Publishing in academic journals methods developed for HHP?,"Hi all, Part of my interest in the HHP is to publish in an academic journal any novel methods I develop. As I read Rule 22, in order to publish a paper that contained any results from the HHP data set, I would need to get explicit written consent from the Sponsor. So, question to the powers-that-be: In general, is permission likely to be granted if/when someone wishes to publish an academic paper that contains results from a method run on the HHP data set (but not the data themselves)? And would one still be allowed to use a such a method to submit entries into the competition? (esp given the ""not previously published"" part of Rule 20). Thanks! Rich",0,None,2 ,Tue Apr 05 2011 14:15:40 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/363,/competitions/hhp,105th /innovative,Ambiguity in Intellectual Property section,"I precieved that, the rights to use the algorithem in any way, will be granted to the sponser, but not transfered. It means that the participant team also may use them in scientific publications or commercial products as it is not denied explicitly. Is this right?",0,None,23 ,Tue Apr 05 2011 14:22:10 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/364,/competitions/hhp,None /innovative,Using of priori knowledge,"I would like to know if it is permitted to use priori knowledge in designing of the algorithem? Some of the computational algorithems, allow to embed priori knowledge (e.g. known medical logic) into the algorithem structure or execution (e.g. Knowledge based Neural Networks) and infact most of the time, these two are not seprable in an algorithem. 
So the algorithem will be applyable with the same level of accuracy to new datasets as well.",0,None,1 Comment,Tue Apr 05 2011 14:30:55 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/365,/competitions/hhp,None /cybaea,Scoring formula for missing actual (a_i) values?,"How is the scoring formula to be implemented for missing values in the actual data set? (We are seeing missing value in the Y2 set, so presumably they can also be present in the Y4 set.] Is the sum i=1..n done over the not-missing values a_i, or Are missing values treated as zero (a_i = 0)? I am guessing the latter?",0,None,3 ,Tue Apr 05 2011 14:58:14 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/366,/competitions/hhp,109th /solorzano,Primary Condition Codes,Are the primary condition codes based on a standard? They don't appear to be ICD-10 codes. It would be difficult to augment the dataset if they are ad-hoc.,0,None,1 Comment,Tue Apr 05 2011 15:18:23 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/367,/competitions/hhp,114th /rkaanozbayrak,Actual location info,"It would be very relevant to the task to know the actual whereabouts of the members, or at least the primary care physicians. Could we get a zip code table for the primary care physicians?",0,None,3 ,Tue Apr 05 2011 15:31:12 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/368,/competitions/hhp,675th /newmom,"what's the difference of Y1,Y2,Y3 and y4daysinhospital?","y1is the year of claim? y2 is the second year daysinhospital? y3 the third year? or y2,y3,y4 are days of hospitals for different parts of the member ids in second year?",0,None,1 Comment,Tue Apr 05 2011 16:08:46 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/369,/competitions/hhp,None /eyesoftx,"Prediction Accuracy Formula ... which ""log"" will be used?","The prediction accuracy formula involves the use of logs. Which ""log"" is being used? Natural log? Log to base 10? Or something else?",0,None,6 ,Tue Apr 05 2011 17:03:03 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/370,/competitions/hhp,None /sandracarrico,Mapping of Yn to actual year,"Can we map each of Y1...Y4 to actual years? Can we assume Y4 is 2010 and that makes Y1 2007? Regardless, can we assume these years are sequential?",0,None,1 Comment,Tue Apr 05 2011 17:11:59 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/371,/competitions/hhp,None /igor47690,Sequence in claims data,"Is it possible to include a claim sequence, describing the order of the claims, in the claims data (as in SAF (standard analytical files) CMS data from Medicare) + month or quarter of the event? (If Heritage privacy officer does not feel comfortable with the actual date) The information that providing a date is not possible due to privacy concern is not actually true. The HIPAA rules do not prohibit the actual dates in the data: [Link]:http://privacyruleandresearch.nih.gov/pr_02.asp Moreover, multiple data vendors that sell APLD (anonymous patient-level data) provide HIPAA compliant dates. For example, all these vendor of patient-level data have the actual date of a visit/Rx: i3/Ingenix/United Healthcare, IMSHealth/SDI/Verispan, Wolters Kluwer, Thomson/MedStat, etc. 
How can we predict hospitalization if we do not know whether it happens before or after the ER visit, or whether the ER visit happens at the beginning or the end of the year? All predictive models based on patient-level data include the sequence of events; otherwise they are useless and do not ""contribute to generalizable knowledge"" and therefore violate the HIPAA requirement that patient-level data must be used only for the purpose of “a systematic investigation, including research development, testing, and evaluation, designed to develop or contribute to generalizable knowledge.” See 45 CFR 164.501. [Link]:http://www.hhs.gov/ocr/privacy/hipaa/understanding/special/research/index.html",6,bronze,3 ,Tue Apr 05 2011 17:24:55 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/372,/competitions/hhp,None /trezza,Can one be at home and hospitalized at the same time?,"In certain claims, a member has services provided at ""HOME"" while at the same time being hospitalized for 1-2 weeks. Can someone clarify what this means? Thanks, -Cathy",0,None,6 ,Tue Apr 05 2011 18:48:26 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/373,/competitions/hhp,504th /goodi1342,Field description,"Does anybody know if there is a field description available? Fields such as: Provider versus vendor, pcp, Year, specialty, paydelay, dsfs, PrimaryConditionGroup, CharlsonIndex. Thanks",0,None,1 Comment,Tue Apr 05 2011 19:49:47 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/374,/competitions/hhp,None /makagan,Patient Selection,"What were the criteria for a patient to be added to the dataset? Were there certain criteria that the patient or the patient's information needed to satisfy to be added to the dataset? Was the sampling random from some pool of patients? I am trying to understand what kind of selection bias might be present in the dataset. For testing and validation at later stages, are we guaranteed that the patients in the testing datasets are selected with the same criteria?",2,bronze,14 ,Tue Apr 05 2011 20:05:22 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/375,/competitions/hhp,None /chefele,Is any more claims data coming?,"Right now, we have claims data for only year 1 (Y1). Eventually, we'll have to make predictions about a patient's hospital stays in year 4 (Y4). So right now, given what we have, it looks like we'd have to make predictions across a 2- or 3-year gap (predicting Y4 hospitalizations based on Y1 claims). That seems much more difficult than, say, predicting Y2 hospitalizations using Y1 claims. So will there eventually be any more claims data provided for Y2 & Y3? (say, in the May 4th release of data?) I know we'll get daysInHospital for Y2 & Y3, and that will help, but claims data for Y2 & Y3 would be even better!",0,None,2 ,Tue Apr 05 2011 20:38:38 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/376,/competitions/hhp,84th /igor47690,Data Problems: Inpatient Hospital stays w/o LengthOfStay & Outpatient LOS,"The ""Claims_Y1"" file contains around 18K inpatient records ([placesvc]=""Inpatient Hospital"") w/o a LengthOfStay (LOS). The standard definition of an inpatient record is that the patient stayed in the hospital overnight (of course, there are exceptions coming from ER admissions, etc., but those could easily be resolved), i.e.
the LengthOfStay for every inpatient record should be at least 1 day. Unfortunately, there are thousands of records where inpatient stays do not have any LengthOfStay (LOS). Moreover, there are thousands of records where patients have a LengthOfStay in the Outpatient Hospital, Physician Office, etc., which is illegal in the US because those facilities do not have the proper JCAHO/TJC certification or other legally required certifications for patient overnight stays (i.e. for LengthOfStay>0). ( [Link]:http://www.jointcommission.org/ ) Usually, ""days in hospital"" is defined as the sum of all LengthOfStay values for inpatient visits. How did you define ""days in hospital"" in the ""DayInHospital_Y2"" file, and did you count days in the Outpatient Hospital as ""days in hospital"" (especially because many Outpatient Hospital records have a non-zero LengthOfStay)? If days in the Outpatient Hospital were counted in ""DaysInHospital_Y2"", then we are not predicting actual hospitalizations but something different instead. What about LTC (long-term care)? Did you include LTC stays (which could be months) in ""DaysInHospital_Y2""?",7,silver,10 ,Tue Apr 05 2011 21:39:19 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/377,/competitions/hhp,None /jonlapointe,Is this considered a contest or are Quebec's residents allowed to participate?,"I know that the province of Quebec in Canada is not allowed in many contests online because of some regulations. (Same thing for Arizona, I think, from what I see online.) As a Quebecer, can I participate? What if I join a team based elsewhere? Jonathan",1,None,2 ,Tue Apr 05 2011 22:31:37 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/378,/competitions/hhp,None /rickr7766,Intellectual Property Rights,"Several coworkers and I would like to form a team. We plan to leverage some proprietary software from our company to help learn a model that can make the necessary predictions. This software is general-purpose (domain-independent) learning software that will learn a predictive model (algorithm) that will then be used to assess each exemplar and predict some number of inpatient days. The question concerns the licensing agreement. If we submit entries, would we be required to give you all of the learning software, or just the predictive model that is used to make predictions in this specific domain? We can't give away our proprietary software, so this would be a showstopper for us...",0,None,8 ,Wed Apr 06 2011 01:38:24 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/379,/competitions/hhp,None /ejlok1,Organising topics in the forum?,"Hi all, I'm struggling to keep up with all the topics in the forum at the moment. They are all important, but there are already 29 topics in the forum only just 2 days into the competition. I can't speak for the rest, but would it be possible to organise the topics into groups? Say, for example, a group for any questions relating to the competition rules, one for the Claims data, one for the Members data, etc. I think this would be really helpful moving forward, especially for those who join the competition at a later stage. It should make it easier to look for relevant information.
Thanks - Eu Jin",0,None,3 ,Wed Apr 06 2011 03:04:24 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/380,/competitions/hhp,9th /igor47690,DaysInHospital outside of HPN (Heritage Network IDN): Leakage Issues,"Quick question: in this data, did you include patients who went to hospitals outside of HPN after seeing physicians in HPN? (I.e. patients who went to an HPN physician office, for example, at the end of Y1 and the beginning of Y2 and then went to a hospital outside of HPN.) For example, patients who went to the Cleveland Clinic for a complicated heart surgery after seeing HPN physicians in Y1 and Y2. Are those patients included in the database? Moreover, what if these patients continued going to non-HPN inpatient hospitals in Y3 and Y4 (i.e. their HospitalDays were outside of the HPN network)? Local California patients could potentially go to Kaiser or other California hospitals after seeing a physician from HPN. If there is leakage in the database (i.e. patients do go to hospitals outside of the database), is it possible to know the size of the leakage (i.e. the % of cases where patients go to hospitals outside of HPN)? Thank you for the clarification, Yours truly, Igor",1,None,7 ,Wed Apr 06 2011 07:02:35 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/381,/competitions/hhp,None /del=9147e4df7f188a91,DaysInHospital do not add up,"Hi, the DaysInHospital values do not add up. Example:
memberid = 929358906
claim_count = 24
sum(lengthofstay) = 18
daysinhospital = 15
Another issue: what if somebody has 5 claims, each 1-2 weeks? What is the predicted length of stay? 5 weeks? 10 weeks? 4-8 weeks? 8-12 weeks? Thanks",0,None,1 Comment,Wed Apr 06 2011 07:27:57 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/382,/competitions/hhp,None /michaelbenjamin,"Is it just me, or should we get better data?","I was hoping for ICD-9 codes and CPT codes, but it looks like we got diagnoses lumped by category. Does anybody else wonder what the claims are ""for""? When I submit a claim to an insurance company, I have to tell them a procedure code. Then they look up the code in their database and reimburse me based on the pay scale for that code. I understand CPT is licensed by the AMA, but shoot, you're putting up $3m for a contest; can't you at least put claim info in your claims table? I had this notion that I would be sitting around a kitchen table with other docs trying to classify cases based on actual patient information, but it looks like we are getting far lower-quality info than that here.",1,None,1 Comment,Wed Apr 06 2011 07:43:36 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/383,/competitions/hhp,None /mgomari,Can you provide days in Hospital in Y1?,Is it possible to add to the data the number of days in hospital in Y1? Thanks,1,bronze,18 ,Wed Apr 06 2011 08:34:33 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/384,/competitions/hhp,None /bkadry,Other Data Fields Needed: Can you release the following data?,"This data set is extremely limited, and creating any clinically relevant algorithm to predict hospital readmissions is virtually impossible. I think there is value in simply identifying which data fields would be necessary to actually help answer this challenge. So maybe the forum can help identify these fields and the organizers can help release this data.
Some things that come to mind: 1) Procedures (e.g. CPT Codes) 2) Lab Values (e.g. Cr, INR, BUN, Hgb, Alb, etc.) 3) Diagnostic Studies (CXR, MRI, CT, etc.) 4) Social History (Poly Substance Abuse, EtOH, Smoking, etc.) 5) Medical History (Obesity, CAD, COPD, CHF, DM, SCD, ICD9-10, SNOMED, etc.) 6) Vital Sign Data (HR, BP, Saturation, weight). I can think of at least 20 more, but I'm curious to hear other people's thoughts.",3,bronze,31 ,Wed Apr 06 2011 09:07:05 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/385,/competitions/hhp,None /fordprefect0,Versioning datasets,"I'd like to suggest that dataset filenames be versioned explicitly in future, to prevent confusion in case some other problem like the missing DaysInHospital values shows up, either with this or the next data release. A simple system like adding a consistent version number in (all) the filenames would be enough to prevent tragic mistakes.",3,bronze,1 Comment,Wed Apr 06 2011 09:28:07 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/386,/competitions/hhp,557th /linnemann1,"Missing data in ""Paydelay"" in dataset Claims_Y1","Out of 644,706 claims in Y1, 44,623 claims have a missing value for PAYDELAY (the delay between the claim and the day the claim was paid for). There are 157 claims with a 0-day delay, the rest somewhere between 1 and 161 days. How should the missing values in PAYDELAY be interpreted? Is this an error like the missing values in Daysinhospital_Y2?",0,None,1 Comment,Wed Apr 06 2011 09:29:59 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/387,/competitions/hhp,None /ashasho,Residency issue,"Hi, I have a talented colleague on my team, originally from an ineligible country but resident (living and studying) in an eligible country; is it possible for her to contribute? Also, a genius student of mine is living in a bad(?) country; may we have him as a member of our team? I think that residency is subject to interpretation based on different countries' rules. Regards",0,None,20 ,Wed Apr 06 2011 11:28:01 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/388,/competitions/hhp,None /cybaea,Why are even DaysInHospital_Y2 favoured over odd?,"Does anybody have a theory as to why DaysInHospital_Y2 favours stays with an even number of days so much over the odd? Maybe some rounding artifact, or is there anything intrinsic to the US health system that would favour, say, 4 days over both 3 and 5?
#!/usr/bin/Rscript
dih.Y2 <- read.csv(file = ""HHP_release1/DayInHospital_Y2.csv"", colClasses = c(""factor"", ""integer""), comment.char = """")
names(dih.Y2)[1] <- ""MemberID"" # Fix broken file
t(table(dih.Y2$DaysInHospital_Y2))
gives something like
|     0 |    1 |    2 |   3 |    4 |   5 |   6 |   7 |   8 |   9 |  10 | 11 |  12 | 13 |  14 |  15 |
|-------+------+------+-----+------+-----+-----+-----+-----+-----+-----+----+-----+----+-----+-----|
| 64361 | 5493 | 1382 | 778 | 1607 | 531 | 692 | 280 | 472 | 176 | 259 | 99 | 179 | 79 | 119 | 782 |
Note that the 4 frequency is bigger than both 3 and 5; 6 is bigger than both 5 and 7; etc. (The 15 is special because it encodes all values ≥15.)
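A quick way to quantify the parity skew, as a hedged sketch reusing dih.Y2 from the snippet above (the cut-offs are my own choice):
d <- dih.Y2$DaysInHospital_Y2
d <- d[d > 0 & d < 15]  # drop the zeros and the censored 15 bucket
table(d %% 2 == 0)      # TRUE = stays with an even number of days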
At the risk of a bad pun, it seems a little odd….",4,bronze,13 ,Wed Apr 06 2011 13:49:46 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/389,/competitions/hhp,109th /friendlyandhelpfulconsultants,"Solution idea for limited data problems, reidentification attacks","Proposal: Have competitors execute full-blown HIPAA-compliant NDAs and the rest of the (legal) needful to get access to the raw data. Still replace names and SSNs so identification is not trivial. Then again, I'm not in the industry so I don't know how much of a nightmare that might be for an individual hobbyist programmer. Thoughts? Has this already been discussed?",0,None,2 ,Wed Apr 06 2011 14:35:34 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/390,/competitions/hhp,None /ejlok1,Male Pregnancy?,"Hi, I discovered quite a few Male patients who I believe were pregnant (PRGNCY), with ages from 0-9 up to 80+. For instance, memberID 11832375. I googled male pregnancies to check and, so far, this is still an unachievable feat. Perhaps I've misinterpreted the information? If so, can someone please clarify? Thanks EJ",1,None,20 ,Wed Apr 06 2011 14:39:52 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/391,/competitions/hhp,9th /byang1,submission limit for multi-person teams,"If I understand the rules correctly, each person can be on only one team, and each team can make only 1 submission per day. This doesn't encourage formation of teams, because when you form teams, you lose submission slots. It may not be a big deal for small 2-person teams, but for larger teams it may be a significant issue, especially during times when you tend to work feverishly (like the days before deadlines). Therefore, I suggest changing the daily submission limit to the number of persons on the team, with a maximum cap of 4 submissions per day, even though teams may have up to 8 people.",1,None,1 Comment,Wed Apr 06 2011 19:17:41 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/393,/competitions/hhp,2nd /tristanz,Call to Boycott Heritage Health Prize,"Researchers, Heritage recently changed the license terms to demand complete exclusivity: By registering for the Competition, each Entrant (a) grants to Sponsor and its designees a worldwide, exclusive (except with respect to Entrant), sub-licensable (through multiple tiers), transferable, fully paid-up, royalty-free, perpetual, irrevocable right to use, not use, reproduce, distribute (through multiple tiers), create derivative works of, publicly perform, publicly display, digitally perform, make, have made, sell, offer for sale and import the entry and the algorithm used to produce the entry, as well as any other algorithm, data or other information whatsoever developed or produced at any time using the data provided to Entrant in this Competition (collectively, the ""Licensed Materials""), in any media now known or hereafter developed, for any purpose whatsoever, commercial or otherwise, without further approval by or payment to Entrant (the ""License"") and (b) represents that he/she/it has the unrestricted right to grant the License.
Entrant understands and agrees that the License is exclusive except with respect to Entrant: Entrant may use the Licensed Materials solely for his/her/its own patient management and other internal business purposes but may not grant or otherwise transfer to any third party any rights to or interests in the Licensed Materials whatsoever. Academics should also note that they cannot freely publish their results, even if journals accept publishing proprietary algorithms: Rule 20: ""entry (i) was not previously published"" Rule 22: ""The Data Sets may not be used for any purpose other than participation in the Competition without Sponsor's prior written approval. If you wish to use the Data Sets for research purposes, please contact Sponsor via the Website's ""Contact Us"" form, including a reasonably detailed description of the proposed research. All such requests will be given careful consideration."" This competition is now a shortsighted R&D effort for Heritage. I cannot see how any company or academic can submit results under terms remotely like these. Companies are bought and sold and have many assets that intermix, and academics require ownership of their creations, both for publications and to build future work. I urge Heritage to quickly change these rules and to follow the Netflix guidelines. Entries should require no license except being described in enough detail that a competent user can recreate the solution. Researchers should be free to publish results as they see fit. This will benefit Heritage commercially by ensuring the algorithms that are developed are far better than they would be otherwise, while protecting the interests of all involved. It is sad that this prize, which has significant potential, will have little impact on academic research or public health under the current terms. I encourage anybody who feels similarly to post their support.",9,silver,66 ,Wed Apr 06 2011 20:34:20 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/394,/competitions/hhp,None /finelinesysdes1,Conflicting UIDs,"While this isn't an extreme problem, I have found at least one instance where a ProviderID and a MemberID are equal. Good software development principles will overcome the issue, but I felt I should make it known to the competition committee.",0,None,1 Comment,Wed Apr 06 2011 22:34:20 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/395,/competitions/hhp,None /del=ca464592981e770f,Username and e-mail update?,"Hi, Is there a way to update the username and e-mail of one's profile? I can't seem to find the options to do so anywhere. Thanks",1,None,2 ,Wed Apr 06 2011 23:47:15 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/396,None,None /davidweiss,Why I won't be participating. [Team Ensemble Member],"Hi all, I am a 3rd-year graduate student in Machine Learning. I was a competitor in the Netflix Prize who finished in 2nd/3rd place each year of the competition. I was a member of the team ""The Ensemble"", which finished in 2nd place overall after a tied score that was submitted 20 minutes after the winning team's. I have also published a paper at the International Conference on Machine Learning (ICML) on collaborative filtering, based on an algorithm developed with a teammate during the Netflix Prize. So I do have quite a bit of expertise both in the general research area and also in statistical competitions like these.
As a graduate student in Computer Science, scientific papers are my life's blood, and crucial to my career. The Netflix Prize was set up in such a way that it was mutually beneficial for both competitors and Netflix -- although only a few researchers got to share the $1 million, many other competitors wrote many papers that completely changed the field of collaborative filtering for the better in a very short time. The reason it worked was that the data was easy to access, BUT also because it was clear that Netflix was not trying to exploit the community with an extremely restrictive licensing agreement; you had to share your code with them if you won, but the only license you gave them was to use it themselves. If you qualified, you also agreed that they could keep whatever ""residual"" information they obtained during the examination process, which is reasonable -- they didn't want anyone suing them for copyright infringement. But Netflix was not going to be selling your work to others, nor were they going to keep you from publishing. The Rules of this competition go far beyond simply protecting themselves from lawsuits, to a seeming exploitation of the community they hope to inspire. From my reading of the terms of service, if I participate (and I was fully planning to) in this competition, I have to: 1. Not use any of my previously published research in the prize. (I don't understand what they mean by ""previously published"" -- all basic statistical methods have been published at one point or another. Nobody is going to invent anything that doesn't use some sort of probabilistic reasoning or optimization principles that haven't already been published.) 2. Give the Sponsor a license to sell whatever work I just SUBMIT to them, even if they pay me nothing or I get nothing in return. 3. If I do end up developing something that works, I cannot publish it without the Sponsor's consent. I.e., they may decide that they would rather sell my work to others and keep it secret, rather than allow me to further my academic career and publish my algorithms at a scientific conference (which competing healthcare providers would be able to see and use for their own profit). 4. I am not even sure if the IP agreements I signed as a graduate student would allow me to agree to these conditions. So I support the Boycott, and I plan on discouraging any other CS grad students I find who are interested in the problem (and I have found many, who feel the same as I do). The rules completely violate the stated purpose of the competition, which was to inspire new research, and instead seem like a grab for free R&D by dangling a $3 million carrot. Sincerely, David Weiss P.S. The views expressed here are solely my own and do not represent the Ensemble, Grand Prize Team, Dinosaur Planet or any of the other teams that I was affiliated with in the Netflix Prize, nor my employer.",10,silver,47 ,Thu Apr 07 2011 00:10:38 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/397,/competitions/hhp,None /ravi13878,PCPs with large number of members,"It appears that some primary care physicians have a large number of members associated with them - is this normal? For example, pcp id 842615 is associated with 20292 members! I'm not from the US. My understanding is that PCPs are doctors assigned to a member as his/her first level of contact, and that the pcp refers the member to other specialists if there is a need.
Based on this understanding, I'd expect that PCPs cater to perhaps a few hundred members or so. 20292 members is just way beyond what I'd expect. Or is it that a common PCP id is assigned to an organization with a number of doctors, or a hospital that can cater to a large number of members?",1,None,2 ,Thu Apr 07 2011 05:37:11 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/398,/competitions/hhp,None /mgomari,Interpretation of missing values?,"Can you please clarify what missing values mean for: 1. paydelay (e.g. is it no delay (i.e. 0), or not known?) 2. LengthofStay (About 85% of inpatient hospital claims have missing LengthofStay, which violates being an inpatient claim unless it means the start and end service dates were the same, i.e. 0 or 1 day (depending on interpretation). Or do missing values simply mean no information, which is hard to believe for 85% of inpatient claims?) In general, it is a good idea to include this info in the data page after each column definition, or use meaningful globally defined values for interpreting the missing values. Thanks",3,bronze,1 Comment,Thu Apr 07 2011 07:22:12 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/399,/competitions/hhp,None /toulouse,Commercial tools,I have a very simple question: Can I use a commercial tool to participate in the competition? Thanks!,0,None,1 Comment,Thu Apr 07 2011 13:10:22 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/400,/competitions/hhp,168th /gregs1,Leaderboard?,"I apologize in advance for asking such a basic question, but I can't seem to find any link to the Leaderboard. The rules state: ""A public leader board (""Leaderboard"") will be displayed on the Website throughout the Competition beginning on May 4, 2011."" OK, but I don't see a link anywhere on any page for it. I tried searching messages for the term ""Leaderboard"" but it came up dry, which seems to indicate that kaggle search might be broken. So how does one get to the Leaderboard?",0,None,4 ,Thu Apr 07 2011 15:30:35 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/401,/competitions/hhp,863rd /aeoliana,Rules Confusion,"So I don't really understand where I stand legally with this competition... When I signed up I agreed to one set of rules, a set of rules that has since been changed. Yet since the alteration I have not been prompted to accept them as part of my continued participation; I did not have to agree to the new terms as a part of downloading the data. So where does that put me? Which set am I bound by? It's just kind of ridiculous that apparently by submitting an algorithm I am signing over the rights to my research for a chance at being considered for the prize. A consideration which as of right now is still undefined. So the following is possible?: I submit an algorithm that predicts the correct values in every case but one. HHP says that it does not meet their standards for precision and refuses to pay me. They keep my algorithm and use it anyway. Is my interpretation incorrect? Also, is it just me or is this some incredibly weak data? How am I to correlate gender with anything whatsoever when the gender isn't really the gender of the individual being considered, but the gender of the primary for the care plan?
Someone please enlighten me :(",2,None,1 Comment,Thu Apr 07 2011 15:57:08 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/402,/competitions/hhp,None /matthewclark,Prediction baseline,"I get a score of 0.279, using the published function, by using the mean days of hospitalization per patient, 0.665, as the prediction for each patient, and comparing to the data in DayInHospital_Y2.csv. (This assumes that ""log"" is log base 10.) So that is the ""null"" score, and the predictive methods can be compared to that. Matthew",3,bronze,68 ,Thu Apr 07 2011 18:17:45 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/403,/competitions/hhp,None /rafael0,How do I submit entries?,"""Each entry must be uploaded to the Website in the manner and format specified on the Website."" Where? Do algorithms need to be in one specific programming language?",0,None,1 Comment,Thu Apr 07 2011 19:15:43 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/404,/competitions/hhp,None /smartersoft,Accuracy Threshold set?,"Did I miss something regarding the threshold to become eligible to win this competition? They say that you need to reach a certain accuracy threshold, but I can't find that number. It seems they might be moving the bar later in the game. What if the data is so limited that providing anything better than 40% accuracy is impossible? Will the bar be at 60%? My point is that if the bar isn't set now, we don't know if we are just wasting our time chasing an impossible goal. --alex",0,None,2 ,Thu Apr 07 2011 19:35:57 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/405,/competitions/hhp,796th /scottp,are predictions required in real time? size of real database/set?,"I am going to assume that the data sample is only a very small sample of the larger database. I also am going to assume that the real data set is much larger - say in the terabyte range. 1. Can you give us an idea of the size of the ""real"" database/set? 2. Would any algorithm be required to make its ""predictions"" in real time? Note: If this is the case, I would assume that this knocks out just about all ""classic"" neural net algorithms because, one, the ""training"" time required would be unacceptable, and two, you could not update the nets in real time.",0,None,3 ,Thu Apr 07 2011 20:07:02 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/406,/competitions/hhp,None /antgoldbloom,IP Issue - Behind the Scenes,"I want to reassure everyone that HPN is working hard behind the scenes to clarify the IP issue. It is not their intention to prevent people from using standard tools, nor to discourage anyone from applying their innovative ideas to this problem. For background, at Monday's launch event, Dr Richard Merkin, the man behind the prize, spoke of the long tradition of innovation that has resulted from past prizes. He spoke of: the Longitude Prize (http://en.wikipedia.org/wiki/Longitude_prize) - apparently Newton and Galileo had attempted to solve this problem but the winner was a self-educated clockmaker from Yorkshire; Napoleon's food preservation prize - won by a confectioner and resulting in the invention of canned food; the Orteig Prize to fly non-stop from New York to Paris (http://en.wikipedia.org/wiki/Orteig_Prize) - won by the unlikely Charles Lindbergh.
It is his hope that this prize will spur similar innovation to solve one of America's most vexing problems. We appreciate your patience while we await clarification. Kind Regards, Anthony",4,bronze,4 ,Thu Apr 07 2011 20:33:45 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/408,/competitions/hhp,None /igor47690,Evaluation Metric Issues: Solutions where 100% of patients went to hospital could have higher score!,"I am afraid that the Evaluation Metric might have another issue: solutions where 100% of patients went to the hospital could have a higher score than solutions with the right number of patients (around 17%) who went to the hospital. In the first case, even if DaysInHospital are predicted better (based on the Evaluation Metric epsilon: [Link]:http://www.heritagehealthprize.com/c/hhp/Details/Evaluation ), the predictive model is not very useful for HPN because it predicts a completely wrong number of patients. I.e., the issue with the Evaluation Metric epsilon is that it does not take into consideration that the % of patients who go to the hospital stays practically the same (i.e. it is conserved, minus a slow national trend towards fewer hospitalizations, which could be really small in HPN's case and which we will see by comparing DayInHospital_Y2 with DayInHospital_Y3). And a solution where every patient goes to the hospital (which is completely wrong) could have a better score than a solution predicting the right number of patients in the hospital. In this case the predictive model will be useless for HPN, because it predicts a completely wrong number of patients in the hospital while predicting the ""right"" total number of days in the hospital. Do you think it is possible to add another metric which evaluates the prediction based on whether a patient went to the hospital or not (i.e. binary - 0 or 1)? Then the 2 scores (DaysInHospital & PatientInHospital (0/1)) could be combined and the winner identified based on the combined score. For example, the second score is equal to 0 if one can predict for each patient whether this patient went to the hospital or not, regardless of DaysInHospital.",0,None,36 ,Thu Apr 07 2011 23:23:35 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/409,/competitions/hhp,None /john17,Reading List Suggestions,Does anyone have reading list suggestions? Specifically about epidemiology/hospitalization work that has been done. I know it may not be in the spirit of such a cut-throat crew but I thought I’d ask.,1,bronze,7 ,Fri Apr 08 2011 05:41:39 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/410,/competitions/hhp,1092nd /igor47690,Should p be an integer in the evaluation formula?,Should p be an integer in the evaluation formula? [Link]:http://www.heritagehealthprize.com/c/hhp/Details/Evaluation Or can we use any real value for p?,0,None,3 ,Fri Apr 08 2011 05:53:06 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/411,/competitions/hhp,None /diogoff,"""you can only select 2 submissions""","There have been some changes to the website and I'm not sure whether the difficulty I'm experiencing is related to that. Basically, when I select my submissions I get the message ""You can only select 2 submissions"". I've checked the rules again and they do not seem to have changed; it still says 5 entries.
Anyone with the same problem?",0,None,1 Comment,Fri Apr 08 2011 12:01:19 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/412,/competitions/ChessRatings2,18th /darragh0,Understanding Claims_Y1.csv Headings,"Hi, Just a quick question about the heading DSFS (Days since first claim). The column for dsfs in the Claims_Y1 data shows 0-1 month, for example; does this mean that the days since first claim for that year were within the last month? Or when the column shows 0-7 months, for example, does this mean that the first claim for that year was 7 months ago, or within the last 7 months? Can anybody confirm this, or explain what is actually the case? Thanking you, Jim",0,None,3 ,Fri Apr 08 2011 14:50:07 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/413,/competitions/hhp,855th /doc555,Since you put the data in there...,"Since you put the data in there, does a null value in the pay delay field mean that the patient paid right away, or that you don't have the data, or that sometimes the patient paid right away and sometimes you don't have the data? My working and testable hypothesis is that the longer a patient lets a bill go, the longer they might let a medical condition go, and the more likely they might be to end up in the hospital. Perhaps it is what you were thinking when you added the data in the first place?",0,None,1 Comment,Fri Apr 08 2011 15:35:57 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/414,/competitions/hhp,418th /sciolist,Milestones & Publishing Algorithms,"I didn't see this in the FAQ - are teams required to publish their algorithms once they make an entry to a milestone? I know they have to submit it to the organisers obviously, but will it then be published publicly?",0,None,8 ,Fri Apr 08 2011 17:58:43 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/415,/competitions/hhp,None /aeoliana,Claims table oddities,So how come I have records for outpatients with 26 weeks for the length of stay? Isn't that like... not what outpatient means?,0,None,3 ,Fri Apr 08 2011 21:23:30 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/416,/competitions/hhp,None /pthinker,How to determine the initial rating,"Hi, Really new here and to the chess game. I have a question I am not clear on: For those players whose ratings are not in the initial rating list, how can we determine their rating? Can we just assume their initial ratings are 0? Thanks!",0,None,3 ,Fri Apr 08 2011 22:31:49 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/417,/competitions/ChessRatings2,111th /aeoliana,Basic Metrics,"So right now I am just messing around compiling some basic comparison metrics in the hopes that maybe I'll notice some sort of trend or stimulate some ideas. Here is a visualization of one of them: ""Average DIH by CCI by Condition Code"" (colored by CCI, 5+ darkest). So I dunno if there are any things like this other people would like to see; if you have a request just post it here.
I am doing this as much for brain exercise as I am trying to win :) I can also provide the datasets along with the visuals.",10,silver,14 ,Fri Apr 08 2011 22:33:48 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/418,/competitions/hhp,None /novakoff,Follow-Up Items,"HHP has been asked for the following, for which replies are pending. 1. Corrected DayInHospital_Y2 data/data set quality. There was an abnormality in the ETL process resulting in an odd distribution of data, which would be critical. A third-party audit of the data files could help prevent additional questions of this sort. 2. Review of data disclosure restrictions. The disclosure of information for this contest should be reviewed by an IRB whose members are expert in the release of data for research purposes and the associated waivers specified in 45 CFR Part 46 and 21 CFR Parts 50 and 56. 3. Review of Foreign Assets Control Regulations. There have been complaints that the scope of the restrictions on foreign nationals exceeds those required by 31 CFR Parts 500 through 598. 4. Review of IP restrictions. There have been complaints that the IP restrictions for the HHP unnecessarily exceed those of the Netflix prize, or there is some confusion. 5. Review of statistical requirements. There is discussion of whether to use Real or Integer numbers in projecting hospital days, for example. Again, a third-party review of the contest rules could help resolve these questions. Resolved Questions. 1. Inconsistent data. There have been complaints that there are missing or inconsistent data items (such as pregnant men). HHP responded that such is typical of healthcare data, and that has been my experience as well. Comments on this list would be most appreciated. What did I miss?",2,bronze,8 ,Sat Apr 09 2011 00:18:42 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/419,/competitions/hhp,None /zaccaksolutions,Kaggle and HPN Employees,Is it possible to mark posts by Kaggle or HPN employees with a different colour? Or maybe just put some kind of logo under your name so it's easier to tell which are official employee posts. Thanks! -H,0,None,15 ,Sat Apr 09 2011 02:16:13 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/420,/competitions/hhp,544th /del=2f182a57cb6ffbc0,I think I smell a mouse...,"When I read that the IPR rules had been changed to require ALL participants to grant an exclusive license to HPN, I wondered what was behind it. There will probably be 10,000 entries. Surely HPN is not interested in the 2,349th algorithm? Or is it? After thinking about this more, I have the sinking feeling that we are in Douglas Adams' Universe and HPN is the equivalent of the white mouse. Most of the algorithms that have won recent contests are variations of the Random Forests algorithm. The theory behind the random forests approach is that a sufficiently large collection of weak models will outperform any single best model. Hence, the problem reduces to one of finding a good feature space on which to apply the random forests algorithm. So why not apply the approach recursively? Generate a large number of models on a large number of feature spaces. Unfortunately, there is no known automatic method for generating all possible feature spaces for any given problem. The best you can do is ask 10,000 researchers to each create a feature space.
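(As a toy R sketch of the blending step this implies - the prediction vectors below are made-up stand-ins for entrants' submissions, not anything derived from HHP data:
set.seed(1)
preds <- replicate(8, runif(100), simplify = FALSE) # pretend entries from 8 entrants
blend <- Reduce(`+`, preds) / length(preds)         # uniform model averaging
head(blend)
The same averaging generalizes to 10,000 entries, which is the point.)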
Then one can apply a super random forests algorithm over the space of all possible models over all possible feature spaces... In other words, after collecting all 10,000 algorithms, HPN will be in the position to create a combined algorithm which will be better than the best model produced by any one entrant. In fact, if the threshold is set high enough, then HPN may not need to pay out the 3 million at all, but by combining all the entries still have a model powerful enough to cross the threshold. I have noticed several other threads in this Forum have also complained about the rule changes; however, no one seems to have thought the ramifications through to the end. As it stands, the rules are written in a ""heads I win, tails you lose"" legalese which is not at all conducive to progress in the science of predictive modeling, not to mention improvement in clinical health care. Or is all this just the paranoid ramblings of one who has read too many volumes of the Hitchhiker's Trilogy?",3,bronze,4 ,Sat Apr 09 2011 12:12:05 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/421,/competitions/hhp,None /wcukierski,Final Submission,Can we be allowed more than one final guess on target_evaluate? Maybe 3? It's just so easy to make a sign error or lose by some other silly slip-up.,1,None,8 ,Sat Apr 09 2011 21:55:08 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/422,/competitions/overfitting,5th /rbovard,Interpreting zero days in Y2,"I compared hospitalization in Y1 (# members with inpatient or outpatient claims in Y1) against the same in Y2 (# members with 1 or more days) and got 15,369 members in Y1 and 12,928 members in Y2. Could the zero-day assignments include those who could not make claims in Y2 because they were deceased or dropped out for other reasons?",0,None,6 ,Sat Apr 09 2011 22:31:31 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/423,/competitions/hhp,562nd /domcastro,Do we have to pay to take part later on?,"Hi, Before Kaggle was involved in the competition, I preregistered for the event. When I read the rules it said that entrants would be expected to pay a ""modest registration fee"". I can't see this mentioned anymore - has this changed? I'm a bit worried that in 2012, at the final stage, we will be charged for entering. Thanks",0,None,1 Comment,Sun Apr 10 2011 00:39:11 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/424,/competitions/hhp,306th /chaseshaw,9 year old pregnant boys,what exactly is primaryconditiongroup?
because it has 5 admittees ages 0-9 two of whom are male.,0,None,3 ,Sun Apr 10 2011 06:38:44 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/425,/competitions/hhp,None /davec6371,What constitutes 'External Data'?,"Hi, Regarding the rules: Entrants must use the Data Sets provided to them solely to prepare their entries and develop and test their algorithms for accurately predicting the number of days, and Entrants may use data other than the Data Sets to develop and test their algorithms and entries provided that (i) such data are freely available to all other Entrants and (ii) the data and/or a link to the data are published in the ""External Data"" topic in the Forums section of the Website within one (1) week of the date on which an entry that uses such data is submitted to the Website. What constitutes 'external data' for the purposes of this competition? I'm specifically thinking about data that might be obvious to a clinician, but not necessarily to a non-medical statistician. For instance, suppose that I decide that I'm going to give a weighting in some aspect of my algorithm of 1.0 to condition code 'SEIZURE', and a weighting of 2.0 to 'STROKE', because I know/think that strokes should cause more hospitalization than seizures. Do I need to declare my data source for this? Note that I'm supposing that these weightings haven't been gleaned from the data; they are 'common knowledge' (or not-so-common knowledge that only a medical specialist might know). If the answer to this question is 'yes, you must declare all your priors and put them in the ""External Data"" section of the website', then I foresee that we will be inundated with external data, as each of the 3000+ competitors publishes any data assumptions that they might make (to ensure that they do not later get disqualified for non-disclosure of their priors). Thanks for any clarification.",2,bronze,2 ,Sun Apr 10 2011 22:48:43 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/426,/competitions/hhp,313th /ejlok1,Final Results,Hi Just wondering when will the final results be announced?,0,None,2 ,Mon Apr 11 2011 01:39:32 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/427,/competitions/WIC2011,5th /markwaddle,Misc. data questions,"Hello Kaggle staff, In my investigations I noticed a few oddities in the data that haven't been asked about yet. Why are ""ingestions"" and ""benign tumors"" lumped together in one PrimaryConditionGroup of ""Ingestions and benign tumors""? These seem like very different conditions. What type of cancer is ""Cancer A""? What type of cancer is ""Cancer B""? Is there any insight into what the differences might be between the ""Miscellaneous 1"", ""Miscellaneous 2"" and ""Miscellaneous 3"" groups? There are 171 members whose first claim has a ""dsfs"" other than ""0- 1 months"". Shouldn't the first claim for each member be 0-1 months? Are these anomalies due to recording errors or data cleansing errors?
First DSFS / Member count:
0- 1 month: 77118
1- 2 months: 102
2- 3 months: 33
3- 4 months: 19
4- 5 months: 8
5- 6 months: 4
6- 7 months: 5
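A hedged way to reproduce that count in R, assuming a Claims_Y1.csv with MemberID and dsfs columns as described on the data page; a plain string sort is enough here because ""0- 1 month"" sorts before every other bucket (the table's bucket order is merely alphabetical):
claims <- read.csv(""Claims_Y1.csv"", stringsAsFactors = FALSE)
first.dsfs <- tapply(claims$dsfs, claims$MemberID, function(x) sort(x)[1])
table(first.dsfs)                 # members per earliest dsfs bucket
sum(first.dsfs != ""0- 1 month"")   # should come to 171 if my reading is right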
Thanks for your help. Mark",1,bronze,1 Comment,Mon Apr 11 2011 02:49:02 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/428,/competitions/hhp,218th /augustocallejas,license requirements for open source libraries?,"Hi - after reading the rules and searching this forum, it's not clear to me if we're allowed to use open-source libraries in our software. Specifically, what licenses are allowed? Is it sufficient for the license(s) we choose to allow ""Link with code using a different license"": http://en.wikipedia.org/wiki/Comparison_of_free_software_licenses Thanks, augusto.",0,None,1 Comment,Mon Apr 11 2011 07:46:27 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/429,/competitions/hhp,None /ahassaine,Methods description,"Dear all, Many thanks for participating in this contest. We would be very grateful if you could send us your name, affiliation and a description of your method, along with references to publications (if available). We will be interested in hearing about what you tried, what didn’t work, and what ended up working. This will be included in the competition article that will be published in the ICDAR2011 proceedings and probably in an extended journal article. Also, if you have used the features we provided, please mention this in your description. Finally, we will be happy to hear your comments for improving future editions of this contest. You might post this either directly on this forum or by sending an email to hassaine (at) qu.edu.qa Thanks again for participating, Best regards, Ali",0,None,6 ,Mon Apr 11 2011 08:19:34 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/430,/competitions/WIC2011,7th /cybaea,Aim is HIGHEST prediction score??,"The Evaluation page says: ""The eligible Grand Prize Entry that the judges determine produces the highest prediction score..."" but I think you mean the smallest? (Otherwise my submission is p_i = +Inf for all i, for an \epsilon of +Inf regardless of the actual values - can I have my money now? :-))",0,None,4 ,Mon Apr 11 2011 12:31:33 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/431,/competitions/hhp,109th /toulouse,LengthOfStay,"Hello! In the Claims Table, what does the variable LengthOfStay mean exactly? Thanks a lot!",0,None,4 ,Mon Apr 11 2011 13:51:48 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/432,/competitions/hhp,168th /daveime,Initial Impressions ?,"So what are your initial impressions of the dataset (in its partially released state)? For me, the major problem I see is trying to build a coherent model using many variables when so much data is blank, unknown or downright nonsense. Just looking at the ClaimsY1, what are we to assume where paydelay is blank? Has it not yet been paid? Has the claim been denied outright? These missing data might play a part in the final result (off the top of my head, if a patient makes many claims and has many denied, it may be suspected he/she is a hypochondriac and is less likely to be admitted)... but with an empty value, and no indication what it represents (or rather fails to represent), it's not much use, is it? LengthOfStay is another one with mostly blank values, and while the consensus view is that a blank value means the patient did not stay at all, it is again not clear.
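A minimal census of the blanks, as a sketch - this assumes read.csv will map empty fields to NA once told to via na.strings, and that the file and column layout match the data page:
claims <- read.csv(""Claims_Y1.csv"", na.strings = """", stringsAsFactors = FALSE)
sapply(claims, function(col) sum(is.na(col)))  # blank count per column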
Are we going to get an official statement on what values NULL implies in each column of these tables? There are so many nonsensical data points, I suspect this really is going to be a crapshoot. On TWO separate occasions, patient 911633904 spent 3 DAYS in an Ambulance? WTF? There are instances of 0-9 year old boys being pregnant? And I'm sure many other strange aspects will come to light. Seriously, I understand the need for randomizing and anonymizing the data, but unless they have some way to unrandomize it afterwards, any algorithms we create will serve no real-world application. This project is all about finding correlations and links between historical conditions, claims, medicines etc ... if the data is garbage, the result will be overfitted garbage suited for this dataset only and no other. So far, I'm very disappointed ... I competed in the Netflix Prize, and IMHO that dataset was of far higher quality.",0,None,4 ,Mon Apr 11 2011 16:22:32 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/433,/competitions/hhp,717th /mgomari,How did you address these two points when counting Days in Hospital?,"1. Assuming the number of days is based on the difference between a service_start_date and service_end_date, how many days did you assign when service_start_date was say 1/1/2009 and service_end_date was 1/2/2009? Did you assign 1 or 2 days? Similarly, we will use the definition to figure out what happens if both dates are the same, i.e. 0 or 1 day. 2. Going by medical claims to compute days, a patient may have multiple claims for one inpatient day; did you take care of these potential overlaps so that the same day in hospital is not counted more than once? Thanks",0,None,1 Comment,Mon Apr 11 2011 18:35:34 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/434,/competitions/hhp,None /domcastro,American Health System,"I'm from the UK and we are very lucky to have a health system that is free for all. From just reading someone's post, it seems that time in hospital is linked to the insurance company and is not a decision made by doctors. Could someone confirm this? Are doctors' decisions overruled by insurance companies? Very surprised by this. The UK health system RULES!",0,None,1 Comment,Mon Apr 11 2011 22:27:01 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/435,/competitions/hhp,306th /botm2123,Is most of the leaderboard overfitting?,"I am asking to get a feel for how others are training and validating. My validation methodology trains on the 250 indicated, selecting optimal parameters using 10x10-fold CV for maximum accuracy. These parameters are validated on the remaining data points. This methodology is recommended by Kohavi (1995). What methods are you using? LOO? RSV? Holdout?",0,None,41 ,Mon Apr 11 2011 23:00:10 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/436,/competitions/overfitting,136th /floydnelson,The prediction values,"Nothing is said about whether the Pi need to be integers. Certainly the common log of Pi + 1 will NOT (generally) be an integer. It would seem that in the real (BILLING) world, the Pi would have to be integers, but in the world of mathematical forecasts, continuous variables exist, and make some kind of sense. DOES ANYONE KNOW if the Pi are constrained to be integers?
FLOYD NELSON",0,None,1 Comment,Mon Apr 11 2011 23:45:27 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/437,/competitions/hhp,None /floydnelson,The prediction values are required to be integers ?,"Nothing is said about whether the Pi need to be integers. Certainly the common log of Pi + 1 will NOT (generally) be an integer. It would seem that in the real (BILLING) world, the Pi would have to be integers, but in the world of mathematical forecasts, continuous variables exist, and make some kind of sense. DOES ANYONE KNOW if the Pi are constrained to be integers? FLOYD NELSON",0,None,2 ,Mon Apr 11 2011 23:46:49 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/438,/competitions/hhp,None /blonchar,Disappointed by Category Data,"I'm disappointed that much of the data has been pre-categorized, primarily PrimaryConditionGroup. I read in another post that the reason behind that was patient privacy. The problem is that the biggest innovation that could have come from this contest is in HOW TO categorize the data for procedure and diagnosis codes in a way that optimizes data mining. Categorizing that data down to a few dozen possibilities dumbs down the intelligence that can be derived from the data, and will result in a solution only as good as that categorization. If privacy really is the issue here, then I am very disappointed that our privacy laws protect people at the cost of limiting innovation, which ultimately would reduce costs and save lives. Please let me know if I'm missing something or interpreting the data incorrectly. Thanks.",0,None,9 ,Tue Apr 12 2011 05:19:13 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/439,/competitions/hhp,1073rd /jjjjjj,Can DaysInHospital_2/3 data sets be input to DaysInHospital_4 prediction?,"I have read through the rules twice and I think they are ambiguous on this. Is it within the rules to use DaysInHospital_2 and DaysInHospital_3 as inputs for the model used to generate DaysInHospital_4 for submission as an entry? Or are DaysInHospital_2/3 solely intended for training and validation? It is a reasonable assumption that a patient's hospitalization length in the past 2 years (Y2 and Y3) may help predict Y4, and the rules should be clear on this. If this is not allowed, the obvious follow-up question is: can DaysInHospital_2/3 be declared a ""USE OF OTHER DATA"" as described in section 7 of the rules? Thanks",0,None,3 ,Tue Apr 12 2011 05:49:10 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/440,/competitions/hhp,113th /mgomari,Partitioning the data into Training and Validation sets?,"I wasn't able to find an answer to this in the rules. Is it fair to assume that Entrants can partition the available data into Training and Validation sets as they wish? Further, are Entrants allowed to fine-tune these data sets if they see fit, e.g. removal of junk data?
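(For what it's worth, a minimal R sketch of the kind of split I have in mind - members here is a hypothetical data frame standing in for whichever table is used for training:
set.seed(42)
idx <- sample(nrow(members), size = round(0.8 * nrow(members)))
train <- members[idx, ]   # 80% for training
valid <- members[-idx, ]  # 20% held out for validation
)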
Thanks",0,None,1 Comment,Tue Apr 12 2011 06:33:20 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/441,/competitions/hhp,None /ionic4313,Test set release,Just wondering if the classes of the test set will be released?,0,None,2 ,Wed Apr 13 2011 04:10:47 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/442,/competitions/WIC2011,None /cooliomcdude,How to parallelise R [code included],"Hi All, This is my first data mining competition, so since my chance of winning is infinitesimal I may as well contribute some R knowledge (I have a statistics background). A common criticism of R is its poor performance on iterative looping, but this can be ameliorated by writing explicitly parallel code. Make sure you're using a single-threaded BLAS or this will be inefficient. The following code runs the latest glmnet R benchmark in parallel. There are many packages that do this, but ""snowfall"" is the one I've had most success with. If you use Linux it's quite easy to also parallelise over multiple machines; you just need passwordless SSH between the nodes and the master (I can provide more details if anyone's interested). On a single machine you don't need any additional setup (as far as I can tell - I only tested this briefly on Windows and it worked, but you need to make a firewall exception). Let us know if you have any problems or I've made a mistake.
############################################
mydata <- read.csv(""overfitting.csv"", header=TRUE)
colnames(mydata)
trainset = mydata[mydata$train == 1,]
testset = mydata[mydata$train == 0,]
# set the targets
targettrain <- trainset$Target_Leaderboard
# remove redundant columns
trainset$case_id = NULL
trainset$train = NULL
trainset$Target_Evaluate = NULL
trainset$Target_Practice = NULL
trainset$Target_Leaderboard = NULL
testID <- testset$case_id
testset$case_id = NULL
testset$train = NULL
testset$Target_Evaluate = NULL
testset$Target_Practice = NULL
testset$Target_Leaderboard = NULL
###################################################
# Implement the benchmark (parallel).
num <- 1000 # the number of lambda values to generate
wid <- 50 # the number each side of the median to include in the ensemble
# Function to be parallelised - takes loop index as argument.
fi <- function(i){
  # if (i %% 50 == 0) print(i)
  mylambda <- cv.glmnet(as.matrix(trainset), targettrain, family=""binomial"", type=""auc"", nfolds=10)
  return(mylambda$lambda.min)
}
library(snowfall)
# Initialise ""cluster""
sfInit(parallel = TRUE, cpus = 2, type = ""SOCK"")
# Example for running on multiple machines
# sfInit(parallel = TRUE, socketHosts = c(rep(""serverNode"", 4), ""localhost"", ""localhost""), cpus = 6, type = ""SOCK"")
# Make data available to other R instances / nodes
sfExport(list = c(""trainset"", ""targettrain""))
# To load a library on each R instance / node
sfClusterEval(library(glmnet))
# Use a parallel RNG to avoid correlated random numbers
# Requires library(rlecuyer) installed on all nodes
sfClusterSetupRNG()
system.time(lambdas <- sfClusterApplyLB(1:num, fi))
# Using 4 threads on server, 2 on desktop:
# user system elapsed
# 0.468 0.050 619.891
sfStop()
# Change results from list to vector.
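# (Editor's aside, an assumption not stated in the original post: snowfall also
# provides sfSapply(), which simplifies the result to a vector, so
#   lambdas2 <- sfSapply(1:num, fi)
# would let you skip the unlist() below. sfClusterApplyLB() is kept here
# because it load-balances tasks of uneven duration across the workers.)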
# There are other tricks for returning a matrix, NULL, etc.
lambdas2 <- unlist(lambdas)
# sort the lambda values
lambdavals <- lambdas2[order(lambdas2, decreasing = TRUE)]
# get the 'middle' lambda values
lambdamedians = lambdavals[((num/2) - wid):((num/2) + wid)]
# build the models using these lambda values
library(glmnet) # needed on the master too, not just the worker nodes
glmnet_model <- glmnet(as.matrix(trainset), targettrain, family=""binomial"", lambda=lambdamedians)
# average the ensemble
predictions <- rowMeans(predict(glmnet_model, as.matrix(testset), type=""response""))",6,silver,2 ,Wed Apr 13 2011 06:22:53 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/443,/competitions/overfitting,84th /informationman,Proprietary software and competition rules,"So do I understand right that 1) you have to use ""open source"" software or ""open source"" algorithms, or write your own algorithms, to be able to win this prize, and 2) it is not allowed to use tools like SPSS Statistics, SPSS Modeler, SAS or other tools (because the algos are closed source and copyrighted)?",0,None,9 ,Wed Apr 13 2011 12:51:26 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/444,/competitions/hhp,1096th /floydnelson,Release of competitive Results - GAMING,"Section 13. Milestone Prize Entries (ii): Other entrants will have the opportunity to submit comments/complaints relating to conditional winners' methodology... This implies that the methodology of the intermediate winners will be published. Though this may intensify the competition and probably increase the accuracy, it will probably set off a type of GAMING. WHY would I help my competitors? It seems I would hold off until the end to give my full, complete, and best results... so as not to give them to my competitors while they still have time to act on my information. In fact, this even gives way to fake results and decoys in intermediate results. EVER bid on EBAY? You know the real bids are held off until the last few seconds on an item that several people want ... not to be placing intermediate bids that will just raise the cost of what you pay. How will KAGGLE deal with this type of behavior?",0,None,3 ,Wed Apr 13 2011 17:19:06 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/446,/competitions/hhp,None /newamerica9710,Accessing hospital names,Would it be possible to access the names and/or locations of the inpatient hospitals that the patients were admitted to?,0,None,1 Comment,Wed Apr 13 2011 23:19:57 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/447,/competitions/hhp,None /markhays,Will HPN / Kaggle own our work?,"Will HPN and/or Kaggle own the right to sell any software and IP we use or develop as part of our entry in this competition? It appears that the ""License"" section of the agreement would give HPN and/or Kaggle the unlimited right to sell any ""algorithm"" or software used by any competitor who joins the Heritage Health Prize competition, whether they win a prize or not. In other words, if we participate in good faith in this competition -- to improve healthcare services nationwide -- the sponsors will own all of the intellectual property developed by every competitor? I can see granting a license to HPN for their internal use, as the sponsor who funded the competition. Asking every competitor to grant a free license that would allow HPN (and maybe Kaggle) to sell our work worldwide -- with no royalties -- is something else.
Was this the intent of the agreement? If not, it needs to be clarified. Anyone who has done serious work in predictive analytics knows the value of their IP -- and will refuse to participate.",0,None,1 Comment,Thu Apr 14 2011 21:27:56 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/449,/competitions/hhp,None /zachmayer,Another way to parallelize R [Code Included],"Here is my take on replicating the benchmark in parallel. I use the 'multicore' and 'caret' packages, which I think simplify things a lot. It also makes it very easy to try different models using the same process. For example, you could change my code to use an SVM by changing method='svmRadial', deleting the ""family='binomial'"" argument, and deleting the ""tuneGrid=MyGrid"" argument. Multicore automatically detects the number of processors you have and spawns new processes using 'fork', so it requires very little setup. I like to run it on Amazon EC2 instances with lots of cores... =)
#############################
# 1. Setup
#############################
rm(list = ls(all = TRUE)) # CLEAR WORKSPACE
mydata <- read.csv(""overfitting.csv"", header=TRUE)
trainset = mydata[mydata$train == 1,]
testset = mydata[mydata$train == 0,]
# set the targets
targettrain <- trainset$Target_Leaderboard
# remove redundant columns
trainset$case_id = NULL
trainset$train = NULL
trainset$Target_Evaluate = NULL
trainset$Target_Practice = NULL
trainset$Target_Leaderboard = NULL
testID <- testset$case_id
testset$case_id = NULL
testset$train = NULL
testset$Target_Evaluate = NULL
testset$Target_Practice = NULL
testset$Target_Leaderboard = NULL
# Define Model Controls
library(caret)
library(multicore)
MultiControl <- trainControl(workers = 2, # 2 cores
  method = 'repeatedcv',
  number = 10, # 10 Folds
  repeats = 25, # 25 Repeats
  classProbs = TRUE,
  returnResamp = ""all"",
  summaryFunction = twoClassSummary, # Use 2-class summary function to get AUC
  computeFunction = mclapply) # Use the parallel apply function
#############################
# 2. Run Model
#############################
library(glmnet)
MyGrid <- createGrid('glmnet', len=10) # Define a tune grid with alpha=1 or 0
MyGrid$.alpha <- rep(c(0,1), (dim(MyGrid)[1])/2)
MyGrid$.lambda <- MyGrid$.lambda - .1 # Allow lambda to equal 0
MyGrid <- MyGrid[!duplicated(MyGrid),] # Remove duplicated alpha/lambda combinations
targettrain <- as.factor(paste('X', targettrain, sep='')) # Bug in caret: class levels must be valid R names
model <- train(trainset, as.factor(targettrain), method='glmnet', family=""binomial"",
  metric=""ROC"", tuneGrid=MyGrid, trControl=MultiControl)
finalprediction <- predict(model, testset, type=""prob"")
submit_file = cbind(testID, finalprediction)
write.csv(submit_file, file=""Benchmark.csv"", row.names = FALSE)",8,bronze,3 ,Fri Apr 15 2011 18:21:27 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/454,/competitions/overfitting,59th /ihbicmu,Timeline for the 2nd Milestone,"I'm assuming the date for the second milestone should be 2012, not 2011 as stated.",0,None,1 Comment,Fri Apr 15 2011 19:21:56 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/455,/competitions/hhp,None /salimali,modelling algorithms in R,"I'm encouraged to see people are beginning to share R code. The problem with R is that you don't know what you don't know - I've already been introduced to flexgrid, glmnet & parallel processing.
Below is some code that I hope some will find useful - a collection of algorithms that all run in the same simple framework. Hopefully this list can be expanded... please feel free to add any other algorithms you use!
#####################################################
# Collection of Examples of the different algorithms
# that are available to build classification models
# in R.
#
# includes:
#
# Logistic Regression
# Linear Regression
# RLM
# Support Vector Machine
# Decision Tree
# Random Forests
# Gradient Boosting Machine
# Multivariate Adaptive Regression Splines
#####################################################

#####################################################
# 1. SETUP DATA
#####################################################
# clear workspace
rm(list = ls(all = TRUE))
# set working directory
setwd(""C:/wherever"")
# load the data
mydata <- read.csv(""overfitting.csv"", header=TRUE)
colnames(mydata)
# create train and test sets
trainset = mydata[mydata$train == 1,]
testset = mydata[mydata$train == 0,]
# eliminate unwanted columns from train set
trainset$case_id = NULL
trainset$train = NULL
trainset$Target_Evaluate = NULL
#trainset$Target_Practice = NULL
trainset$Target_Leaderboard = NULL
#####################################################
# 2. set the formula
#####################################################
theTarget <- ""Target_Practice""
theFormula <- as.formula(paste(""as.factor("", theTarget, "") ~ . ""))
theFormula1 <- as.formula(paste(theTarget, "" ~ . ""))
trainTarget = trainset[, which(names(trainset) == theTarget)]
testTarget = testset[, which(names(testset) == theTarget)]
library(caTools) # required for AUC calc
#####################################################
display_results <- function(){
  train_AUC <- colAUC(train_pred, trainTarget)
  test_AUC <- colAUC(test_pred, testTarget)
  cat(""\n\n***"", what, ""***\ntraining:"", train_AUC, ""\ntesting:"", test_AUC, ""\n*****************************\n"")
}
#####################################################
# 3. Now just apply the algorithms
#####################################################
# Logistic Regression
#####################################################
what <- ""Logistic Regression""
LOGISTIC_model <- glm(theFormula, data=trainset, family=binomial(link=""logit""))
train_pred <- predict(LOGISTIC_model, type=""response"", trainset)
test_pred <- predict(LOGISTIC_model, type=""response"", testset)
display_results()

#####################################################
# Linear Regression
#####################################################
what <- ""Linear Regression""
LINEAR_model <- lm(theFormula1, data=trainset)
train_pred <- predict(LINEAR_model, type=""response"", trainset)
test_pred <- predict(LINEAR_model, type=""response"", testset)
display_results()

#####################################################
# Robust Fitting of Linear Models
#####################################################
library(MASS)
what <- ""RLM""
RLM_model <- rlm(theFormula1, data=trainset)
train_pred <- predict(RLM_model, type=""response"", trainset)
test_pred <- predict(RLM_model, type=""response"", testset)
display_results()

#####################################################
# SVM
#####################################################
library('e1071')
what <- ""SVM""
SVM_model <- svm(theFormula, data=trainset, type='C', kernel='linear', probability = TRUE)
outTrain <- predict(SVM_model, trainset, probability = TRUE)
outTest <- predict(SVM_model, testset, probability = TRUE)
train_pred <- attr(outTrain, ""probabilities"")[,2]
test_pred <- attr(outTest, ""probabilities"")[,2]
display_results()

#####################################################
# Tree
#####################################################
library(rpart)
what <- ""TREE""
TREE_model <- rpart(theFormula, data=trainset, method=""class"")
train_pred <- predict(TREE_model, trainset)[,2]
test_pred <- predict(TREE_model, testset)[,2]
display_results()

#####################################################
# Random Forest
#####################################################
library(randomForest)
what <- ""Random Forest""
FOREST_model <- randomForest(theFormula, data=trainset, ntree=50)
train_pred <- predict(FOREST_model, trainset, type=""prob"")[,2]
test_pred <- predict(FOREST_model, testset, type=""prob"")[,2]
display_results()

#####################################################
# Gradient Boosting Machine
#####################################################
library(gbm)
what <- ""GBM""
GBM_model = gbm(theFormula1, data=trainset, n.trees=50, shrinkage=0.005, cv.folds=10)
best.iter <- gbm.perf(GBM_model, method=""cv"")
train_pred <- predict.gbm(GBM_model, trainset, best.iter)
test_pred <- predict.gbm(GBM_model, testset, best.iter)
display_results()

#####################################################
# Multivariate Adaptive Regression Splines
#####################################################
library(earth)
what <- ""MARS (earth)""
EARTH_model <- earth(theFormula, data=trainset)
train_pred <- predict(EARTH_model, trainset)
test_pred <- predict(EARTH_model, testset)
display_results()",12,bronze,8 ,Sat Apr 16 2011 01:57:38 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/456,/competitions/overfitting,98th
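Taking up salimali's invitation above to extend the list, a minimal sketch of one more algorithm in the same framework (it assumes the trainset/testset, theFormula, what and display_results objects defined above); naive Bayes from e1071 fits in the same few lines, though its probability estimates are often poorly calibrated:

#####################################################
# Naive Bayes (sketch; assumes the framework above)
#####################################################
library(e1071)
what <- 'Naive Bayes'
NB_model <- naiveBayes(theFormula, data=trainset)
#type='raw' returns class probabilities; column 2 is the probability of class 1
train_pred <- predict(NB_model, trainset, type='raw')[,2]
test_pred <- predict(NB_model, testset, type='raw')[,2]
display_results()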
/gregs1,Scope of model - local or national?,"Is a single model that is trained once intended to work anywhere, or is the idea that the model will always be trained with data specific to a particular location? For example, will the same patient, doctor, and location codes typically be seen again and again when new data is presented to the model?",1,bronze,1 Comment,Sat Apr 16 2011 04:45:29 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/457,/competitions/hhp,863rd /washtell,Correlation coefficients,"Apologies for posting this again. It seemed like it would probably suit a new thread. I see a few folks quoting scores. Does anybody have correlation coefficients (linear or rank) to report? These are arguably more useful when trying to get a feel for what's possible with the data. Minimizing the actual score function can be considered an additional calibration step. My preliminary values for Pearson's R come close to but do not yet exceed 0.1. I've not tried seeing how these translate into actual scores yet - I'll do that if and when I start getting something a bit more respectable. Any more encouraging results out there?",1,bronze,15 ,Sun Apr 17 2011 00:35:23 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/458,/competitions/hhp,None /makagan,order of claims,"Not sure if this was answered previously; it concerns claims in the same month since first claim (i.e. claims with the same dsfs). If a member has multiple claims in the same dsfs (say 4 claims with dsfs = 0- 1 month), were the claims entered into the data in a time-ordered fashion, or is the order of the claims random?",0,None,2 ,Sun Apr 17 2011 04:19:57 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/459,/competitions/hhp,None /dslate,Team membership problem,"I recently entered this competition, and when I made my first submission I changed my team name to ""Old Dogs With New Tricks"", with the intention of adding Peter Frey to my team as I did for the first Elo contest. I believe I clicked on his name, but as far as I can tell he is not listed as part of ""Old Dogs With New Tricks"", and I don't know why this happened or how to fix it. Does Peter have to do something to confirm team membership? Any ideas on how to add him to my team after having already made 3 submissions? Thanks, -- Dave Slate",0,None,2 ,Sun Apr 17 2011 20:33:00 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/460,/competitions/ChessRatings2,11th
/aeoliana,More data would be cool n' stuff,"So I am finding there are 24,327 unique condition combinations in the data. And there are ~77k points of data for Days in Hospital... Best case, that's like a whopping 3 unique points of information for any given set of conditions. (In actuality it's a lot of single entries, with more common condition combinations reaching as many as 4,145.) So I know it is unlikely, but is there any way we could get like... 50 times as much info as we have here? :) Kind of hard to make any statistically significant observations with only one point of data, even if we may get 2 more with Y2/3 data.",0,None,18 ,Mon Apr 18 2011 17:24:15 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/461,/competitions/hhp,None /informationman,RStudio code optimization for newbies,"I am totally new to R (I have experience with SPSS, SPSS Modeler and Matlab). I tried Weka on my computer but it failed to do any good with 60,000 records because of lack of RAM (a heap-size error or whatever). Now I'm trying to use the randomForest implementation in R with the simple dataset: memberid, Number_of_Claims, DaysInHospital_Y2

#######################
library(caret)

## load csv
data <- read.csv(file = ""data.csv"", sep = "";"",
 colClasses = c(rep(""integer"", 3)), comment.char = """")

mdl <- train(data, data$DaysInHospital_Y2, ""rf"")
#######################

RStudio works for some minutes and fails:
Fitting: mtry=2
Error: can't allocate 294.8 MB #(translated)
1: In rfout$nodestatus <- rfout$nodestatus[1:max.nodes, , drop = FALSE] : Reached total allocation of 4087Mb: see help(memory.size)
2: In rfout$nodestatus <- rfout$nodestatus[1:max.nodes, , drop = FALSE] : Reached total allocation of 4087Mb: see help(memory.size)
3: In rfout$nodestatus <- rfout$nodestatus[1:max.nodes, , drop = FALSE] : Reached total allocation of 4087Mb: see help(memory.size)
4: In rfout$nodestatus <- rfout$nodestatus[1:max.nodes, , drop = FALSE] : Reached total allocation of 4087Mb: see help(memory.size)",0,None,9 ,Mon Apr 18 2011 19:12:19 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/462,/competitions/hhp,1096th /zaccaksolutions,HHP: Evaluation Algo in Groovy,"Hi all, Here is a quick Groovy class I did to run the evaluation. Feel free to use it, abuse it... etc. Just cite me if you post it. Let me know if you see any mistakes. Thanks, Henry

/*
 * Heritage Health Prize: Evaluation of predicted results to actual results.
 * http://www.heritagehealthprize.com/c/hhp/Details/Evaluation
 *
 * By Henry Zaccak
 * henry AT zaccak.com
 *
 */
class Evaluation {

 static main(args) {
  // prediction data
  def pFile = 'DayInHospital_Y2_no_header.csv'
  def p = []
  double pAvg = 0
  double pErr = 0
  importCSV(pFile, p)

  // actual data
  def aFile = 'DayInHospital_Y2_no_header.csv'
  def a = []
  double aAvg = 0
  double aErr = 0
  importCSV(aFile, a)

  // assume they are in the same order, but make sure they are the same size
  if (a.size != p.size) {
   println ""File sizes don't match!""
   return
  }

  // evaluate
  double e = 0
  for (i in 0..a.size-1) {
   //p[i].los = 0.664984667934635 // Mean prediction
   //p[i].los = 0.2243310244221811 // Minimal error prediction
   pAvg += p[i].los
   aAvg += a[i].los
   double pln = Math.log(p[i].los + 1)
   double aln = Math.log(a[i].los + 1)
   pErr += pln
   aErr += aln
   e += Math.pow(pln - aln, 2)
   //println ""P: "" + p[i] + "" A: "" + a[i]
  }
  pErr = pErr / p.size
  aErr = aErr / a.size
  pAvg = pAvg / p.size
  aAvg = aAvg / a.size
  e = Math.sqrt(e / p.size)

  println ""Err Prediction: "" + pErr
  println ""Err Actual: "" + aErr
  println ""Avg Prediction: "" + pAvg
  println ""Avg Actual: "" + aAvg
  println ""Final e: "" + e
 }

 private static importCSV(String file, List list) {
  new File(file).splitEachLine(','){ fields ->
   // id: MemberID, los: length of stay
   list.add(id: fields[0], los: fields[1].toDouble())
  }
 }
}",0,None,2 ,Tue Apr 19 2011 02:49:14 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/463,/competitions/hhp,544th
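For anyone who would rather not run Groovy, the ε metric implemented in the evaluation class above reduces to a few lines of R; a minimal sketch (the toy vectors below are made up, not competition data):

#ε (RMSLE) metric, mirroring the Groovy evaluation above
rmsle <- function(predicted, actual) {
 sqrt(mean((log1p(predicted) - log1p(actual))^2))
}

#toy example: a constant prediction of 0.2 days against made-up actuals
actual <- c(0, 0, 3, 0, 1, 0, 0, 7)
predicted <- rep(0.2, length(actual))
rmsle(predicted, actual)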
/dslate,Test set filter ambiguity,"The Data page explains the utility of the WhitePlayerPrev and BlackPlayerPrev variables as follows: So in the creation of the test set for months 133-135, there was a filter applied. Instead of including all games played during months 133-135, the test dataset only includes games in which both players had at least 12 games played during the final 24 months of the primary training dataset (i.e. months 109-132). ... For instance, if your validation test set was drawn from the games of month 130 only, and you only selected games from that month having WhitePlayerPrev > 12 and BlackPlayerPrev > 12, then you would only be getting games in which both players had at least 12 games played in the previous 24 months of the primary training set (i.e. months 106-129). And this filter is analogous to what was done to filter the games for the test set. It seems to me that the test for ""at least 12 games played"" should be: WhitePlayerPrev >= 12 and BlackPlayerPrev >= 12, not: WhitePlayerPrev > 12 and BlackPlayerPrev > 12. Am I missing something? Thanks, -- Dave Slate",0,None,1 Comment,Tue Apr 19 2011 06:50:55 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/464,/competitions/ChessRatings2,11th /dyakonov,Can I use the description of this problem?,Can I use the description of the Social Network Challenge problem and the reference to this site in my scientific article? Thank you.,0,None,1 Comment,Tue Apr 19 2011 17:10:59 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/465,/competitions/socialNetwork,7th /aeoliana,Good scores?,"So my algorithm is only like halfway through munching, but it tells me the value of that squiggly e as it goes... I was just wondering what a good score is, and where you guys are at, just so I can sort of gauge my progress. Thanks! (Oh, my score is wavering on .49-.5?)",0,None,4 ,Tue Apr 19 2011 23:09:03 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/466,/competitions/hhp,None
/del=38030232b02450fa,Data Set Question?,"I am wondering whether the data set will include info such as date of birth, city of birth, ... and other personal data such as weight and height?",0,None,3 ,Wed Apr 20 2011 02:37:45 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/467,/competitions/hhp,None /kvougas,On predicting Year_4 values,Can somebody explain how one is able to train a model to predict Year-4 values if the training set does not contain any Year-4 values...? I cannot see how this can be done. Shouldn't the Kaggle team release a partial set of Year-4 values that can actually be used as a training set??? Thanks in advance,0,None,2 ,Wed Apr 20 2011 10:40:50 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/468,/competitions/hhp,None /sarmal,Length of Stay,"Is it fair to assume that a LengthOfStay value (in Claims_Y1.csv) is a hospital stay (of the type pi or ai in the ε metric) only if the placesvc (Place of Service) is Inpatient or Outpatient Hospital? Also, do the DaysInHospital in the DaysInHospital_Y2.csv file also refer only to Inpatient and Outpatient stays? What is the interpretation of 2-4 weeks for LengthOfStay with placesvc set at home or other?",0,None,5 ,Wed Apr 20 2011 19:22:40 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/469,/competitions/hhp,817th /karstenw,Year 2 hospitalization at most 15 days?,It is a little bit surprising that patients stay at most 15 days in hospital in year 2. Is there an explanation for this?,0,None,7 ,Thu Apr 21 2011 23:53:53 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/471,/competitions/hhp,None /salimali,More prizes up for grabs!,"Well Kagglers, I'm highly impressed so far. Before the comp started I was expecting 0.86 would have been a very good attempt at this problem, but your efforts have far exceeded this (that is, if you have not overfit to the leaderboard!). Just for fun, three more prizes of $100 each are up for grabs: 1) The contestant whose contributions to the forum are judged most valuable by the other contestants. In order to judge this we will be looking at how many 'thanks' each contestant gets in the forum, and we will also get each entrant in the final submission to nominate their top 3 contributors. 2) The contestant who can best predict the top 5 final standings in the AUC part of the competition. When you make your final submissions via email, we will also ask you to give a prediction of which teams will eventually finish in the top 5 places and in what order. 3) The contestant with the lowest aggregate ranking when the results of the AUC and Variable Selection entries are combined. The judges' decision is final, and if there are ties we will donate the money to charity. Just a recap of what is expected at the end of the competition... 1) The leaderboard will change to reflect the scores on the unseen 90% of the data. You will also be able to then see the 90% scores on each of your individual submissions. 2) You need to prepare two final submission files. The first is for your model scores for predicting 'Target_Evaluate' in the dataset.
Prepare this in the same way as normal submission files, but have your team name as the header field in your prediction column. The second file is a list of the 200 variables, with a 1/0 against each to indicate if you think they were used to generate the target. Again, please also have your team name as the header in the 2nd column. 3) Email these two files, including in your email the details of your team name and team members' real names, votes for 1) and predictions for 2) as mentioned above. Details of the email address to send the predictions to will be given later. 4) The top 3 placed teams in each section will be announced in the forum - but without revealing the finishing order. Each of these teams will then be asked to briefly describe their technique in the Kaggle blog over the next 7 days. The winners will then be announced - but you are only eligible for the prize money if you reveal your technique to all. As everyone is probably very busy and otherwise engaged, there is going to be a window of 8 days between the contest finishing and the deadline for me receiving the final email submissions. This ensures everyone at least gets a weekend to work over. The initial rules said 24hrs, but I would prefer everyone to get a chance to submit something. Have fun! Phil, Tiberius Data Mining",1,bronze,7 ,Sat Apr 23 2011 08:46:04 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/474,/competitions/overfitting,98th /jeffsonas,Best FIDE Prize Scores?,"Hi everyone, with less than two weeks left, I would like to know what our FIDE Prize leaderboard looks like. Currently I believe that team Reversi, in 15th place, has the best-performing public score among entries that meet the restrictions of the FIDE Prize, but remember that there is room for ten finalists. I expect that the top ten for the FIDE Prize will stretch down to the #50 spot or so in the final standings. So, please post the public score of your best prospect for the FIDE prize, or send me email privately (at jeff@chessmetrics.com) if you would like to keep your identity secret until the end of the contest. Also remember that your most recent entry, out of the five selected entries on your Submissions page, will be the default one eligible for consideration for the FIDE prize, unless you let me know via email (jeff@chessmetrics.com) which specific one is your candidate. Thanks!
-- Jeff",0,None,3 ,Sat Apr 23 2011 19:57:01 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/475,/competitions/ChessRatings2,None /ockham,Try these variables,var_8 var_10 var_11 var_14 var_15 var_20 var_21 var_22 var_26 var_27 var_30 var_32 var_33 var_35 var_36 var_37 var_39 var_41 var_43 var_44 var_45 var_48 var_49 var_50 var_51 var_53 var_54 var_56 var_58 var_59 var_61 var_62 var_63 var_64 var_67 var_69 var_70 var_71 var_72 var_76 var_77 var_79 var_82 var_84 var_86 var_88 var_89 var_90 var_91 var_92 var_94 var_95 var_96 var_98 var_100 var_101 var_102 var_103 var_105 var_107 var_110 var_111 var_112 var_114 var_115 var_116 var_117 var_122 var_127 var_129 var_132 var_133 var_134 var_136 var_137 var_143 var_145 var_146 var_150 var_151 var_154 var_155 var_158 var_159 var_160 var_161 var_162 var_163 var_167 var_168 var_170 var_174 var_178 var_179 var_180 var_181 var_182 var_183 var_185 var_187 var_188 var_191 var_193 var_194 var_196 var_197 var_199 var_200,5,bronze,29 ,Tue Apr 26 2011 01:52:35 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/477,/competitions/overfitting,44th /del=37478cf4f027318a,giving help to other teams,"Since this competition is ultimately about the health of patients, is it okay, if you think you have a good algorithm but are not good enough at statistics or programming to develop it properly, to let everyone else on this forum know what your algorithm is so they can take it further if they want to? I am someone with limited knowledge of statistics and programming who has already thought of an algorithm that may work well. Suppose my algorithm could predict 75 per cent of patient days in hospital correctly and it was top of the leaderboard next month, but I cannot improve it. Isn't there a moral duty to pass the algorithm on to other competitors who are more able to improve it? Suppose after three months of the competition the algorithm is 98 per cent effective at predicting days in hospital - do you really let the competition go on for two more years when patients could be helped in three months!",0,None,4 ,Tue Apr 26 2011 11:09:53 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/478,/competitions/hhp,1081st /cybaea,Does missing values of paydelay mean unknown or not applicable?,"There are 44,623 missing values of paydelay in Claims_Y1.csv. Are these claims where (1) you didn't record this metric for some reason (!?) or (2) no payment was due for this claim in isolation? If #2 (which I assume), is this because (a) the payment was rolled up to another of the claims (e.g. a series of tests paid as one) and/or (b) are there genuine free treatments?",0,None,6 ,Tue Apr 26 2011 12:31:36 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/479,/competitions/hhp,109th /del=37478cf4f027318a,best software for analysis,Is MS Excel better for this task than say MySQL or PHP?,0,None,30 ,Thu Apr 28 2011 00:13:37 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/482,/competitions/hhp,1081st /jeffmoser,Tips for posting code,"It's been great to see several people posting code in these forums. My plan is to eventually add a syntax highlighter for code snippets as well as supporting attachments. In the mean time, if you'd like to post code, consider putting an HTML
<pre> tag
around it. You can do this by clicking on the ""HTML"" button in the second toolbar row, or you can use the ""Preformatted"" style dropdown (it's the second dropdown on the top toolbar row). For example:

public static void SayHello()
{
    Console.WriteLine(""Hello World!"");
}

To keep things looking nice, try not to make lines longer than 80 characters. If you obey this style, I'll be sure that your code imports well when I upgrade to syntax highlighting. Feel free to post any further ideas for posting code here as well.",0,None,3 ,Fri Apr 29 2011 00:08:51 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/483,None,None /salimali,Ensembling,"Here are some ensembles of models that compare a single model built with all variables against what happens when you just average lots of models built on random sub-populations of the data but with the same model parameters. The models were built on the 250 cases, and the AUC in the plots is for the other 19,750 using Target_Practice. The baseline is the model built on all data, and each sub-population model used data randomly generated from 50-100% of variables and 50-100% of cases. This demonstrates nicely for this data set that if you don't know what settings to use, then an ensemble will do well. It also demonstrates something that can be counter-intuitive - that an average of lots of poor models is a lot better than the best individual model. glmnet and pls stand out as not benefitting a great deal from ensembling - the algorithm does the regularisation itself, although for alpha = 1 it would appear ensembling may be of benefit. Vanilla logistic and linear regression show that there is a lot of overfitting and the ensemble reduces this effect, essentially by reducing extreme weights. Hope this is of interest and that this post works OK! The End!",4,bronze,2 ,Fri Apr 29 2011 04:23:34 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/484,/competitions/overfitting,98th /rks138062,Entry specifications,"Rule 8 states that the entries must be uploaded in the manner and format specified on the website, but I've rummaged around on the website everywhere I can think of and I can't find the specifications. The picture at the bottom of the Data page seems to suggest that the interim entries should be in the form of a vector of predicted values for the members, and that the prizes for the milestones will be based on predicting the Y2, Y3, & part of Y4 days, but I can't find a clear statement of that either. Could someone point me to the specs?",0,None,2 ,Fri Apr 29 2011 05:25:11 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/485,/competitions/hhp,436th /cq887003,Question about definitions of dsfs and day_in_hospital_Y2 ,"1. dsfs: the majority of dsfs are with value ""0-1 month"", but some are different. If its definition is ""days since first claim made that year"", we could have two explanations: a. ""0-1 month"" is the first claim time. Q: everyone should have at least one value of ""0-1 month"" for the first claim that year, which is not true from the data. Why? b. The claim data is not the first one. Q: where is the first one? 2. day_in_hospital_Y2: How is it presented if a person made claims in Y1 but none in Y2?
This should be different from day_in_hospital_Y2 = 0, meaning ""no stay"".",0,None,1 Comment,Fri Apr 29 2011 13:24:27 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/486,/competitions/hhp,1012th /zachmayer,Feature selection using SVM?,"We've already seen tks implement feature selection using a glmnet. How would you implement something similar, using e1071 or kernlab in R, to do feature selection using a support vector machine?",1,None,14 ,Fri Apr 29 2011 15:59:03 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/487,/competitions/overfitting,59th
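One standard answer to the SVM feature-selection question above, sketched under the assumption of a linear kernel: rank features by the magnitude of the separating hyperplane's weight vector, and optionally repeat while dropping the weakest features (recursive feature elimination). The X and y below are toy placeholders, not the competition data:

library(e1071)
X <- matrix(rnorm(250 * 20), nrow = 250) #toy stand-in for the real features
y <- factor(sample(c(0, 1), 250, replace = TRUE))
fit <- svm(X, y, kernel = 'linear', type = 'C-classification', scale = TRUE)
w <- t(fit$coefs) %*% fit$SV #weight vector of the separating hyperplane
ranking <- order(abs(w), decreasing = TRUE)
head(ranking) #most influential features first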
/zachmayer,"Parallelizing, cross-validating, and testing tks' feature selection method","Here is some code I wrote to parallelize and cross-validate tks' glmnet feature selection. It hasn't improved my ranking on the leaderboard, but the code was fun to write, and it can easily be extended to test other feature selection methods. Please let me know if you spot any room for improvement. edit: it seems that I can't embed gists from github (that would be a nice feature to have...) so here's a link to my blog, where you can view the code, complete with R syntax highlighting! [Link]:http://moderntoolmaking.blogspot.com/2011/04/parallelizing-and-cross-validating.html",3,bronze,5 ,Fri Apr 29 2011 18:57:45 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/488,/competitions/overfitting,59th /ahassaine,Future editions of the contest?,"Hi there! Just wondering if you are planning to organize future editions of this contest. Thanks! Ali",0,None,19 ,Sun May 01 2011 13:03:42 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/489,/competitions/ChessRatings2,None /trezza,Visualizing the data,"I don't know what people are using to visualize the claims data, if anything, but since I'm a visual person I made up some icons to try to see the spectrum of the pcg data in a different way: It really is enlightening to see that the courses of illnesses are as varied as the people who have them. As I was looking through the hip fracture data, I found that looking at the data this way helped to better illustrate transient conditions that you could eliminate from your statistical ratios if you could identify them in a generic way. I think it turns the data back into people too. If I find a good way to visualize the other data, I'll probably add it too. Anyway... if anyone wants the icons or has suggestions, let me know and I'll put them up somewhere. -Cathy",0,None,2 ,Mon May 02 2011 06:22:29 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/490,/competitions/hhp,504th /byang1,"Some basic tallies, for verification","Hi, I took a first real look at the released data today. Here're some basic numbers I tallied - can anyone confirm them? I just want to make sure I didn't make any stupid mistake in basic processing. I can generate more numbers if anyone wants to see them. Thanks

-------------------------------------------
In Claims_Y1.csv, there're 644706 entries.
Total members: 77289
Total days in hospital in Y2: 51396
-------------------------------------------
Some of the missing data:
ProviderID: 3903 //entries with no ProviderID, and so on
Vendor: 6492
PCP: 1619
PayDelay: 44623
LengthOfStay: 617827
-------------------------------------------
Claims by Specialty:
Anesthesiology 7499
Diagnostic Imaging 66641
Emergency 31988
General Practice 129284
Internal 170642
Laboratory 124325
Obstetrics and Gynecology 10419
Other 20983
Pathology 4194
Pediatrics 17675
Rehabilitation 7282
Surgery 53774
-------------------------------------------
Top 5 Members with most claims:
MemberID Claims
643099505 37
182716400 36
373012540 36
461240127 36
894669103 36
-------------------------------------------
A few claims by paydelay:
missing 44623
0 157
50 8604
100 1293
161 522 //161 is the max paydelay in file
-------------------------------------------
Member count and days in hospital by first claim age group:
0 6892 1833 //0 to 9, 6892 members, 1833 days in hospital
1 7397 1736 //10 to 19, ...
2 4916 2863
3 7818 3639
4 10838 3628
5 9258 4255
6 9570 6884
7 13254 14441
8 7346 12117
-------------------------------------------
3778 members have paydelay>150, together they have 3893 days in hospital.",4,bronze,1 Comment,Mon May 02 2011 07:25:34 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/491,/competitions/hhp,2nd /trezza,Data set question - a year's worth of data?,"I have a question about the claims data set. Is it true that the data represents a snapshot of what was happening in a given 365 days, or is each member given a year's worth of claims; that is, does the hospitalization data represent what happened to a member at least 365 days after the member's first claim? I believe it's relevant to determining whether any given condition is being managed, is temporary, or is resolved. Thanks for clarifying. -Cathy",0,None,5 ,Mon May 02 2011 14:46:20 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/492,/competitions/hhp,504th /jeffsonas,Follow-up Dataset,"In accordance with the contest rules, the top four finishers in the main contest (and top ten in the FIDE prize contest) are required to run their systems against a new set of data, within a week of the end of the contest. Hopefully this will let us assess the robustness of the winning systems against a similar (but definitely different) dataset, which I am calling the ""follow-up"" dataset. I had already prepared the data files in advance, but given the discussions of the past 24 hours, I decided to add a lot more spurious games to the test set before distributing it. It will be available right after the contest ends. The follow-up dataset has a few differences: most importantly the player ID#'s have been randomized again, thousands of additional players have been added, and the test period has been moved three months later (so the training period will cover months 1-135 and the test period will cover months 136-138). We have decided to make this dataset available to everyone, not just the top finishers, so you will be able to find it on the Data page within the first day after the contest ends. There is no way to submit predictions automatically for the ""follow-up"" dataset, but I am happy to score the submissions manually against my database if people would like to know their relative performance against this new dataset that hasn't been chewed up as thoroughly as the contest dataset.
More details to follow later, but I was envisioning that the winners would have to make their submission manually to me within the first week (by Wednesday May 11) in accordance with the rules, but anyone who wants to can send me one or two tries by Friday May 13, and I can post those results over the weekend. It won't affect the prize allocation (unless something suspicious is revealed by this process) but it will be interesting to see, I think. I will also encourage people to make a second set of predictions, one that makes no use of the future data from the test set.",0,None,5 ,Tue May 03 2011 10:50:37 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/493,/competitions/ChessRatings2,None /jeffsonas,FIDE Prize top ten (based on public score),"Hi everybody, I have finally been able to assemble a preliminary listing of the top ten for the FIDE prize. I have recently conducted a survey of who is competing for the FIDE prize, and I have identified the following ten teams: AFC, chessnuts, Nirtak, Real Glicko, Reversi, Sam Burer, Stalemate, True Grit, uqwn, Uri Blass. I don't know for sure that all ten of these teams intend to document their methodology and compete for the FIDE prize, but I think they are going to. If there is anyone else who plans to compete for the FIDE prize and is not mentioned in the above listing of ten teams, please let me know by sending email to jeff@chessmetrics.com ASAP. Even just letting me know that you are NOT competing for the FIDE prize is very helpful, if you haven't told me already. If you are trying for the FIDE prize, I will need to know the date of submission and public score for your entry, as only one submission from each team is considered for the FIDE prize. And please note that only the top ten performing entries (based on final private leaderboard score) will be finalists, out of all eligible teams. So if you did better with your FIDE prize entry than one of the above ten teams did with theirs, you could knock them off the list and take your place as one of the ten finalists, assuming you meet the other conditions.",0,None,15 ,Tue May 03 2011 12:54:35 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/494,/competitions/ChessRatings2,None /vsu1664,Main prize,"To address some recent discussions, this is just a quick confirmation that the team ""uqwn"" did not use any future scheduling in order to produce a result below 0.249. Also, it appears to be more logical if not the top 4 but the top 10 teams participate in the final exercise against an independent dataset (months 136-138).",0,None,39 ,Tue May 03 2011 23:19:46 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/496,/competitions/ChessRatings2,5th /cybaea,Missing first claims? [Release 1 data],"If DSFS is the time since that customer's first claim, then I would expect every MemberID in the Claims file to have a ""0- 1 month"" entry. And 77,118 of them do, but there are 171 members with (much) later first claims. Question: Are these errors, or patients with a scrubbed earlier claim?
> table(claims.Y1[, list(first.claim = min(dsfs)), by = list(MemberID)][, first.claim])

  0- 1 month  1- 2 months  2- 3 months  3- 4 months  4- 5 months  5- 6 months
       77118          102           33           19            8            4
 6- 7 months  7- 8 months  8- 9 months  9-10 months 10-11 months 11-12 months
           5            0            0            0            0            0

> head(claims.Y1[, list(first.claim = min(dsfs)), by = list(MemberID)][first.claim != ""0- 1 month"", MemberID])
[1] 100932559 113011586 113190879 124180853 130424132 130896244
77289 Levels: 100021596 10005398 100059282 100063319 100074925 ... 999999313

OT: What happened to preview on this forum?",0,None,3 ,Wed May 04 2011 17:58:02 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/497,/competitions/hhp,109th /wcukierski,"The ""Real"" Leaderboard","Is anybody above 0.92 or thereabouts not using Ockham's variable list? I haven't been able to come up with a feature selection method that works as well as his. If my exploration with target_practice has shown me anything, it's that better variable selection is much more important than better models/parameters. I created a ""ground truth"" variable importance metric by peeking at all 20000 labels of target_practice. Instead of trying to classify on samples, I tried instead to classify on variable importance. The best method I've found is the bootstrap lasso (""bolasso""). It gets about 0.9 AUC on my ground truth using just the first 250 points. I suspect Ockham's method is closer to 0.95 (but I can't say for sure, because I don't know what his predictions for target_practice would be). My attempts to mix my own variable estimations with Ockham's list have increased my error, indicating his list is much, much better than mine. So what does the ""real"" leaderboard look like now? Is Ockham going to release his method?",0,None,4 ,Wed May 04 2011 17:59:27 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/498,/competitions/overfitting,5th
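On the bootstrap lasso ('bolasso') wcukierski mentions above: the idea is to refit an L1-penalised model on bootstrap resamples and keep only the variables selected in (nearly) every resample. A minimal sketch with glmnet (X and y are toy placeholders, and the 90% agreement threshold is an arbitrary choice):

library(glmnet)
set.seed(42)
X <- matrix(rnorm(250 * 200), nrow = 250) #toy stand-in for 250 cases x 200 variables
y <- rbinom(250, 1, plogis(X[, 1] - X[, 2]))
B <- 50 #number of bootstrap resamples
selected <- matrix(0, nrow = B, ncol = ncol(X))
for (b in 1:B) {
 idx <- sample(nrow(X), replace = TRUE) #bootstrap resample
 cv <- cv.glmnet(X[idx, ], y[idx], family = 'binomial', alpha = 1)
 selected[b, ] <- as.vector(coef(cv, s = 'lambda.min'))[-1] != 0 #drop the intercept
}
keep <- which(colMeans(selected) >= 0.9) #variables chosen in at least 90% of resamples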
/tristanz,Open source and IP,"HGN seems like a very confused group of lawyers. Today's statement is absurd. There now seems to be some exception with respect to open source licenses, although how this matches up with the restriction on previously published research is unclear. What does ""previously published"" even mean? If I standardize my variables before using them, have I used a previously published result? Can you confirm that, under the new rules, if I release my prediction algorithms as open source that is fine? It seems like one way to get some freedom would be to release all the algorithms first under a very liberal license, and then you can incorporate these into future work just like anybody else.",1,bronze,3 ,Wed May 04 2011 18:09:59 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/499,/competitions/hhp,None /domcastro,Law confuses me so some basic questions,"Hi, I'm getting confused by the laws, so I am going to ask questions that are relevant to me that will just require YES or NO answers: 1. Can I use R? 2. Can I use Weka? 3. Can I use Excel? 4. If I organise the data in a novel way and just use a standard processing algorithm, such as Naive Bayes, is this OK? Many thanks, this is all I need to know",0,None,30 ,Wed May 04 2011 19:34:51 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/500,/competitions/hhp,306th /jeffsonas,FIDE Prize final standings,"Hi everyone, here at last are the final standings for the FIDE prize. I am including the top 11 here because I think we should have an ""alternate"" in case one of the top ten turns out not to have qualified under the rules. Team Reversi has the most accurate submission, but please remember that this does not mean team Reversi has won the FIDE prize. This contest is a blend between objective performance and subjective appeal, and the final winner is not necessarily the most accurate, if another's methodology turns out to be most simple or most appealing to FIDE. By virtue of having performed in the top ten, the following teams (Reversi, Uri Blass, uqwn, JAQ, Real Glicko, TrueGrit, Stalemate, chessnuts, Nirtak, and AFC) have apparently qualified as the ten finalists. The next stage of this FIDE Prize competition will be having the top ten document their methodology over the next week and re-run their methodology against an independent dataset. The alternate (Dave Poet) is also welcome to do this as well, in case one of the top ten turns out not to meet the conditions.

Rank: Private score (Public score, Submission date): Team name
#1: 0.256683 (0.256237, 04/25/2011 03:50): Reversi
#2: 0.257354 (0.257094, 05/04/2011 14:17): Uri Blass
#3: 0.257435 (0.257001, 05/04/2011 07:31): uqwn
#4: 0.257608 (0.257411, 04/22/2011 20:42): JAQ
#5: 0.257622 (0.257287, 05/04/2011 01:17): Real Glicko
#6: 0.257723 (0.257482, 05/04/2011 13:11): TrueGrit
--- Glicko Benchmark (using c=15.8) scored 0.257834 ---
#7: 0.258554 (0.258238, 04/11/2011 00:10): Stalemate
#8: 0.258950 (0.258358, 04/19/2011 11:36): chessnuts
--- Actual FIDE Ratings Benchmark scored 0.259751 ---
#9: 0.259901 (0.259560, 04/07/2011 05:24): Nirtak
#10: 0.259947 (0.259794, 05/03/2011 12:29): AFC
--------------------------------------------------
#11: 0.260296 (0.260350, 05/03/2011 14:45): Dave Poet

According to the rules, you now have one week to run your same algorithm (the one identified in the listing above) against an independent dataset (known as the follow-up dataset, available on the Data page of the contest website) and submit a new set of predictions to me. You will also need to document your methodology. In the rules I also stated that you needed to provide a full log of player rating vectors, but I think this is too burdensome so I am going to make it optional. Here are the next steps:
Already done: Follow-up dataset made available to everyone - see the Data page
By May 11th (3pm UTC): Submit full documentation of your method to me, via email (jeff@chessmetrics.com)
By May 11th (3pm UTC): Re-run your algorithm against the follow-up dataset and send me a new set of predictions for the test set, via email (jeff@chessmetrics.com)
Optional, by May 11th (3pm UTC): Send a full log of player rating vectors from the follow-up run, across all months and all players, via email (jeff@chessmetrics.com)
-- Jeff",0,None,20 ,Wed May 04 2011 21:24:16 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/501,/competitions/ChessRatings2,None
/jeffsonas,Best Submissions When Ignoring Future Information From Test Set?,"The techniques developed by some contest participants to extract useful information out of the test dataset were very useful for winning the contest, but are not useful in identifying the most accurate chess prediction algorithm. And that (identifying the most accurate chess prediction algorithm) was the main purpose for me of spending the time to set up and run this contest. Therefore I would really like to identify the best submissions that did not use any ""future information"" from the test set. This has no bearing on the contest standings but is of great interest to me (and I am sure, to others). A complex algorithm is not as useful to FIDE, but they still have expressed some interest, and for other applications such as chess servers or my own calculation of historical chess ratings going back to the mid-19th century, even a complex algorithm is fine. I realize that some people, those who focused heavily on how best to use the test set data for improving predictions, might not have made submissions that completely ignored the future; if you need me to score a few submissions from the contest test set, I am happy to do that. Obviously some very spectacular results have been achieved, even without extracting useful information from the test set, and I hope people are willing to share these numbers and also hopefully document their methodologies. The main prizewinners need to do this in order to qualify for prizes, but I would love it if at least the top ten could produce some level of documentation, and also hopefully help the assessment of the ""pure"" competition (i.e. the one that doesn't use the future information from the test set). I am interested in knowing: #1 Which submission was your best one that didn't use the future information from the test set? #2 Did the follow-up dataset do a better job of defeating efforts to use future information from the test set? In order to answer #1, I would either need you to identify your most promising entry by its score, or you can send me a submission set (since people can no longer submit automatically) and I can score it for you. You can email me at jeff@chessmetrics.com In order to answer #2, I would need you to prepare two sets of submissions against the follow-up dataset, using your most promising approaches: one set that used the future information and one that didn't. Please post your numbers on this forum topic, and feel free to send me submissions against either the regular contest dataset or the follow-up dataset, so I can score them for you. I am happy to make myself available over the next couple of weeks to support this process. After that, I might just take a break from chess prediction contests for a while!! Thanks, -- Jeff",0,None,12 ,Wed May 04 2011 23:56:58 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/502,/competitions/ChessRatings2,None /rkaanozbayrak,Taking the time line seriously,"It is already May 5th, 2011 UTC and we are still waiting for the new data and illuminating information regarding the threshold, leaderboard, etc. Not that a one-day delay is significant in the course of a 2-year event, but it takes away from the professionalism of the whole endeavour. You should have been ready to go forward by midnight UTC yesterday. If you had too little time to get things ready, you should have announced the day as May 5, 2011, not May 4, 2011.
Just my humble opinion, if you are interested in how things look from a participant's point of view.",0,None,1 Comment,Thu May 05 2011 01:29:50 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/503,/competitions/hhp,675th /rsankula,where is SampleEntry.csv,I couldn't find the SampleEntry.csv. It is not in the HHP_release2.zip. Did anyone else find it?,0,None,1 Comment,Thu May 05 2011 02:50:19 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/504,/competitions/hhp,298th /breakfastpirate,Do the members need to be in the same order as Target.csv?,Do the members need to be in the same order as Target.csv when we submit our entries? Or can they be in any order?,0,None,8 ,Thu May 05 2011 03:06:08 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/506,/competitions/hhp,60th /cdubois,"Y2, Y3 correlation","Hi all, I wanted to get a sense of the correlation between Y2 and Y3 for those memberIDs that are available. Below I've included a table of the number of members who spent a given # of days in hospital in Y2 against a given # of days in Y3 (only for those memberIDs that we observe in both years). Anybody else have similar findings? (Using R...)

m <- intersect(DaysInHospital.Y2$MemberID, DaysInHospital.Y3$MemberID)
ix <- match(m, DaysInHospital.Y2$MemberID)
jx <- match(m, DaysInHospital.Y3$MemberID)
table(DaysInHospital.Y2$DaysInHospital[ix], DaysInHospital.Y3$DaysInHospital[jx])

       0    1    2   3   4   5   6   7  8  9 10 11
0  38106 2374 1112 767 433 254 132 104 64 49 48 33
1   2591  372  217 130  65  39  26  14 14  9  8  6
2   1254  181  107  54  46  33  12  16  5 13  8  3
3    743   94   72  51  24  22   5  14  6  3  7  3
4    457   64   47  27  16  14   9  10  3  2  3  1
5    226   37   46  27  14   9   3   6  1  2  1  0
6    155   31   18  10  11   3   6   2  3  2  2  1
7     90   20   15   5   6   3   5   1  4  4  1  0
8     75    7    3   5   3   3   4   5  0  2  2  0
9     64    2    2   4   3   3   1   0  0  1  1  1
10    32    5    6   8   4   4   1   2  1  1  0  1
11    27    4    3   2   2   2   1   1  1  1  1  0
12    23    1    3   3   1   4   2   1  4  0  0  0
13    16    4    3   0   2   2   0   1  3  1  1  1
14    16    2    0   1   1   1   0   1  0  0  0  1",1,bronze,5 ,Thu May 05 2011 03:06:15 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/507,/competitions/hhp,515th
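Following on from cdubois's table above, the same matched vectors give a one-line correlation estimate; a minimal sketch reusing his ix/jx indices (Spearman chosen because the counts are heavily zero-inflated):

#reuses the ix/jx indices from cdubois's snippet above
y2 <- DaysInHospital.Y2$DaysInHospital[ix]
y3 <- DaysInHospital.Y3$DaysInHospital[jx]
cor(y2, y3, method = 'spearman') #rank correlation
cor(log1p(y2), log1p(y3)) #Pearson on the log scale used by the ε metric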
/salimali,Competitions Resulting in Publications - thoughts please...,"This competition is coming to an end, and before it does I would like to gather thoughts on the idea that similar competitions could be run where the top x competitors were rewarded with an invitation to write a paper on their method for publication in a journal. The peer review would be your fellow competitors via the leaderboard. Would publication in a journal be an incentive to enter? What types of data sets would you like to see? Please reply if you have any thoughts…",0,None,6 ,Thu May 05 2011 03:23:07 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/508,/competitions/overfitting,98th /fordprefect0,Does Release2.zip supersede Release1.zip?,"Is the data contained in Release1.zip still relevant, or are all files superseded by Release2.zip? Should we ignore Release1.zip completely from now on? For example, in Release1.zip there's a file DayInHospital_Y2.csv which a priori should contain similar information as the file DaysInHospital_Y2.csv from Release2.zip. However, they have different formats and don't seem directly related: the memberid 60481 from DayInHospital_Y2.csv doesn't show up in DaysInHospital_Y2.csv at all.",0,None,4 ,Thu May 05 2011 03:40:38 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/509,/competitions/hhp,557th /trezza,Is submission available?,"Is submission available yet? If so, where? Thanks! -Cathy",0,None,3 ,Thu May 05 2011 04:20:15 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/510,/competitions/hhp,504th /chrisraimondi,Have the data sets been renumbered?,"I was trying to make sure I didn't miss any members, but there seems to be a huge lack of overlap between the member numbers from the first data set (HHP_release1.zip) and the second dataset. So huge that I am guessing they were renumbered. Can someone confirm so I can go to sleep :)",0,None,1 Comment,Thu May 05 2011 05:59:26 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/511,/competitions/hhp,20th /cybaea,Prediction Error Threshold - where is it?,"The Prediction Error Threshold aka Accuracy Threshold was supposed to be released yesterday. Maybe I am going blind, but I can't find it?",0,None,2 ,Thu May 05 2011 08:12:26 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/514,/competitions/hhp,109th /nimararora,Missing members in DaysInHospital_Y2,"I am a bit confused by the fact that certain members have claims in year 1 but no days-in-hospital entry for year 2. Does this mean that these members have 0 hospital days in year 2? For example:

mysql> select Year, count(*) from Claims where MemberID=24027423 group by 1;
+------+----------+
| Year | count(*) |
+------+----------+
| Y1   |        5 |
| Y2   |        3 |
+------+----------+

mysql> select * from DaysInHospital where MemberID=24027423;
+----------+-----------------+----------------+------+
| MemberID | ClaimsTruncated | DaysInHospital | Year |
+----------+-----------------+----------------+------+
| 24027423 |               0 |              0 | Y3   |
+----------+-----------------+----------------+------+

So, is it safe to assume that MemberID 24027423 was never in the hospital for Y2?",0,None,7 ,Thu May 05 2011 09:54:22 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/515,/competitions/hhp,83rd /ssrc9486,SupLOS,"I understand that when SupLOS is 1 rather than 0, the associated LengthOfStay has been suppressed. What does this mean? What determines whether an LOS would be suppressed, and what does this involve? Does this mean that those patients with a blank LengthOfStay and a 0 in SupLOS have spent 0 days in the unit (hospital if the claim is from inpatient/ER) and that their LOS is at least 1 day if SupLOS = 1? As LOS options range from 1 day to +26 weeks, I am confused as to why an LOS would need to be suppressed rather than falling into one of these broad LOS options. Thanks!",0,None,5 ,Thu May 05 2011 11:00:51 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/516,/competitions/hhp,1333rd /uriblass,missing information and different type of information in the new data,"I compared the information in Members.csv of the new data and the old data and I found the following differences (maybe there are more differences that I still did not see):
1) memberID is not increasing in every line in the new data (unlike the old data).
2) There are memberIDs that begin with 0 in the new data (unlike the old data).
3) There are memberIDs that have no information about their age or their sex, and I wonder how this is possible because I expected full information on these details (there is full information on these details in the old data).
I also wonder what the reason is for the changes that now force people to change their programs if those programs are based on assumptions 1, 2, 3 and other assumptions that they checked were right for the old tables but are not right for the new tables (fortunately I still did not have time to do much programming on the tables, but the little that I did about saving part of the data in memory is something that I need to change).",0,None,1 Comment,Thu May 05 2011 12:14:15 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/517,/competitions/hhp,340th /mkwan7977,Team formation,"Is the interface for setting up teams - the ""Team Wizard"" - available yet? I can't find it. If I make a personal submission now, does that affect my ability to form a team later? If not, my team-mates and I would be better off working individually, since we can make more submissions that way.",0,None,3 ,Thu May 05 2011 13:01:55 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/518,/competitions/hhp,17th /chrisraimondi,"Was the 30% ""Feedback Data Set"" randomly chosen?","We do not know (for good reason) which members are in the 30% of the Y4 data that is being used to compute the public scoreboard. In other contests attempts have been made to make splits such as these: even - like in the HIV contest, to make each category equally represented; biased towards interesting cases - I believe I read somewhere the chess contest was like that; random - I am guessing the overfit contest is like that. Can you confirm that the 30/70 split was totally random (notwithstanding the normal ""there is no such thing as a 'truly random' number"") with no attempts to make them even, representative, or pick out interesting cases?",0,None,1 Comment,Thu May 05 2011 13:03:16 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/519,/competitions/hhp,20th /toulouse,ClaimsTruncated,"Hello Maybe I have missed something obvious, but what does ""ClaimsTruncated"" mean in the entries we have to submit? Thanks!",0,None,20 ,Thu May 05 2011 13:48:40 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/520,/competitions/hhp,168th /jasontigg,Combining Different Team Predictions,"Jeff I was wondering, if it was not too onerous, whether you would be able to produce the symmetric 5x5 matrix of the binomial deviance obtained by taking the average of leaderboard team i's best submission and leaderboard team j's best submission. I think this matrix would be really interesting in terms of seeing how correlated the top 5 teams' best submissions were.",0,None,3 ,Thu May 05 2011 14:52:30 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/521,/competitions/ChessRatings2,4th /zachmayer,Missing values of ordered or numeric variables,"How are people dealing with missing values of numeric or ordered variables, such as DSFS or Length of Stay? For now, I am recoding those missing values to zero, but I was wondering if there was a better solution.",0,None,4 ,Thu May 05 2011 16:40:09 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/522,/competitions/hhp,9th
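On the missing-values question above, one alternative to recoding to zero is to map the ordered dsfs strings to integers and keep missingness as its own indicator; a minimal sketch (the level labels are an assumption based on the values quoted in this forum, and the median imputation is just one option):

#map the ordered dsfs strings to integers; keep 'missing' as an explicit flag
dsfs_levels <- c('0- 1 month','1- 2 months','2- 3 months','3- 4 months',
 '4- 5 months','5- 6 months','6- 7 months','7- 8 months',
 '8- 9 months','9-10 months','10-11 months','11-12 months')
dsfs <- c('0- 1 month', NA, '3- 4 months') #toy example
dsfs_num <- as.integer(factor(dsfs, levels = dsfs_levels, ordered = TRUE))
dsfs_missing <- as.integer(is.na(dsfs_num)) #separate missing-value indicator
dsfs_num[is.na(dsfs_num)] <- median(dsfs_num, na.rm = TRUE) #or any other imputation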
/cybaea,Interesting submissions with scores?,"Did anybody do any interesting submissions they want to share? I submitted $p_i = 0.18584427052136$ for all $i$, giving a public score of 0.486849. If anybody has submitted all zeros, then we can calculate the mean of $a_i$ for the sample.",1,bronze,21 ,Thu May 05 2011 17:18:20 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/523,/competitions/hhp,109th /del=37478cf4f027318a,blank rows of data,Do blank rows of data in data sets released by HHP mean the value is unknown or the value is zero? How do we know the data inputter did not forget to put in values? What is the average error rate for a data inputter - how often do they get data wrong? How often do they input a wrong number or word string? Do doctors input data or staff employed by them or both?,1,bronze,3 ,Thu May 05 2011 20:06:05 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/524,/competitions/hhp,1081st /zachmayer,What methods are you using?,"I'm having good results with an SVM on a sample of the data, but it's been very difficult to fit on the whole dataset. I've also tried simple linear regression, but it's less than ideal. What method are you using?",0,None,16 ,Fri May 06 2011 00:01:43 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/526,/competitions/hhp,9th /onemillionmonkeys,Minor parsing issue,"Member IDs may have leading zeroes in the members and claims files. For example: 02759427,40-49,M But the leading zeroes are suppressed in the ""days in hospital"" files. For example: 2759427,0,0 This could cause problems for people who treat member IDs as strings or who aren't careful in parsing those strings as integers (e.g., the C routine sscanf with %i format does the wrong thing). Kaggle folks may want to clean this up with the next data release.",8,silver,5 ,Fri May 06 2011 00:32:05 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/527,/competitions/hhp,127th /hyperdose,Members Table - Why did the data get worse?,"Members.CSV Question: In the first set of data we received, all 77290 members had a Sex and AgeAtFirstClaim. In this second set of data we now have 113001 members, and 17552 do not have a sex and 5300 do not have a value for AgeAtFirstClaim. Is it possible to get Sex and AgeAtFirstClaim for all the members? I find it very odd that with the new set of data we get 35711 new members, but 17552 of them come incomplete (almost 50% of the new members are incomplete?). Thanks! lana",0,None,2 ,Fri May 06 2011 00:57:05 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/528,/competitions/hhp,1098th /rkaanozbayrak,"""Input string was not in a correct format""","I keep getting this error message when I try to submit, even though my entry meets all the listed criteria:
Your entry must:
be in CSV format
have your prediction in column 3
have exactly 70,943 rows
Each predicted value must be:
Total number of days spent in the hospital. That is, a real-valued number in the interval [0, 15].
Where am I going wrong?",0,None,5 ,Fri May 06 2011 02:09:22 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/529,/competitions/hhp,675th /del=37478cf4f027318a,what qualifications do you need to be good at doing this task?,Maths and statistics are obvious answers, as well as programming and software skills. But could someone with a degree in ancient history win this competition? Or perhaps someone who has one of the illnesses listed and who has had to think about medical matters more than others? What exercises should we do to get fit for this competition?,0,None,12 ,Fri May 06 2011 15:02:52 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/530,/competitions/hhp,1081st /zaccaksolutions,Submissions: Number of Digits,"""Entries will be judged based on the degree of accuracy of their predictions of DaysInHospital for Y4 (or if applicable Y5), carried to six (6) decimal places"" What happens if we submit entries that have a higher degree of accuracy? (Rounding? Ceiling or floor?) Thanks! -H",1,None,4 ,Sat May 07 2011 01:14:36 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/532,/competitions/hhp,544th /rkaanozbayrak,The Order of Memberid in Target,"I would appreciate it if someone from Kaggle clarified whether keeping this particular order in our submissions matters. It is quite difficult to rearrange the output to match this particular order. If it turns out that the submission really has to be made in this particular order, it would be a great help for one team to post their R code for it. Thank you.",0,None,12 ,Sat May 07 2011 01:52:19 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/533,/competitions/hhp,675th /markwaddle,Forum needs search feature,"I apologize if someone already posted this, but I can't search to find out. :) The forum badly needs a search feature. Or I just missed it and someone needs to show me where it is. Thanks, Mark",0,None,4 ,Sat May 07 2011 08:17:39 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/534,/competitions/hhp,218th /tsfhaines,Exclusivity & Publication,"Ok, people have been discussing it in other threads, but I just want to ask two questions and get two clean answers from the people running this competition: 1) Is the exclusivity requirement for the code going to be dropped? I need to be able to use any code I develop quite arbitrarily, for any other purpose I see fit, not to mention what effect this has on my existing code base, which I would inevitably build upon. I typically release my code under an open source license - you are effectively preventing that, to give just one example. 2) Are the publication rules going to be adjusted? I don't mind sensible restrictions on publishing results dependent on the data, as I can always find another problem to apply an algorithm to, but your rules currently prevent publishing the algorithm, and I cannot risk being unable to publish my work. As an academic it's quite simple: unless these two problems are fixed in the terms (the assurances of the organisers on this forum are legally meaningless), I cannot participate, and I would like to know now, so I can stop checking this website to find out if things have changed if the answer is no.
I would also mention that the terms currently contain a fair few contradictions, my favorite of which is the implication that after each milestone everybody who has submitted an entry has to start their codebase again, from scratch. There are also interesting potential interactions with open source licenses - that term has clearly not been thought through. I get the distinct feeling that the goals of this competition have not been clearly laid out, or that the various parties coming together to make the competition happen are ignoring them - right now there is a definite disconnect between the stated goals and what the competition actually is. Also, whilst I am here, where is the error rate threshold that was meant to be published to the website a couple of days ago? I can't find it.",0,None,11 ,Sat May 07 2011 11:33:21 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/536,/competitions/hhp,None /chrisraimondi,Grrr - If you have 0.509697 as your score - here is what you screwed up....,"You forgot to suppress the rownames in the file - so instead of submitting what you thought was: 20820036 0 0.70276 14625274 1 0.026924 99227820 0 0.416635 74486714 0 0.58793 92341995 0 0.883569 7127539 0 0.993286 79094292 1 0.787652 99239152 0 0.090042 What you really submitted was: 1 20820036 0 2 14625274 1 3 99227820 0 4 74486714 0 5 92341995 0 6 7127539 0 7 79094292 1 8 99239152 0 Make sure you use the row.names=FALSE flag as in: write.csv(submission.official, file=""submission.official.csv"", row.names=FALSE)",2,bronze,2 ,Sun May 08 2011 17:30:32 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/538,/competitions/hhp,20th /zachmayer,Benchmarks,"Is kaggle planning to submit some benchmark entries, as has been done in other competitions? Elsewhere in the forums is discussion of a zero benchmark, a 15 benchmark, and a constant-value benchmark. Personally, I think Jeremy Howard's best shot at this competition would be an excellent benchmark, and since he works for kaggle he could eventually release his code for all of our benefit.",0,None,8 ,Sun May 08 2011 22:55:17 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/539,/competitions/hhp,9th /del=37478cf4f027318a,paydelay and poverty and time spent in hospital,As someone who is not a US citizen I would like to ask: If you take longer to pay, are you poorer? If you are poorer you are more likely to be ill (I would think yes). Does more time in hospital mean they fix a richer patient (who presumably can afford the cost of a longer stay in hospital) better, or that the patient was more ill? Who gets blamed most in the media for poor health care: vendors or doctors?,0,None,11 ,Sun May 08 2011 23:53:15 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/540,/competitions/hhp,1081st /tatianamcclintock,30-day readmissions and unplanned admissions,"It is interesting to me that the algorithm does not address the chance of patients being readmitted within 30 days. Medicare does not pay for patient admissions readmitted within 30 days. I understand that The Heritage Provider Network wants to predict the number of days.
But how do we identify which of those are unplanned admissions? I assume the unplanned admissions would be the ones to focus on. I do not see any data field that would differentiate planned from unplanned admissions. I guess emergency room admissions would be unplanned by definition, but that does not have to be the case. A less urgent condition that a patient has not foreseen can be handled through a normal hospital inpatient admission process or a clinic.",0,None,3 ,Mon May 09 2011 00:43:23 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/541,/competitions/hhp,None /markrothfuss,Variable Selection Routine,"The code posted below attempts to weight variable importance by how frequently a glmnet object made from a random selection of vars can return an AUC of 1 on the training data. I have not had luck extending this variable selection to an improvement on the test data, but perhaps someone else will with some tweaks. A chart during the run will start showing a separation after 2500 iterations, which will be very clear by 5000 iterations. It identifies ~65 vars plus or minus for the Leaderboard targets, e.g.: > var.best.names [1] ""var_7"" ""var_10"" ""var_14"" ""var_15"" ""var_20"" ""var_22"" ""var_30"" [8] ""var_33"" ""var_36"" ""var_37"" ""var_38"" ""var_39"" ""var_40"" ""var_42"" [15] ""var_46"" ""var_50"" ""var_51"" ""var_53"" ""var_54"" ""var_56"" ""var_60"" [22] ""var_65"" ""var_67"" ""var_68"" ""var_70"" ""var_82"" ""var_83"" ""var_86"" [29] ""var_87"" ""var_90"" ""var_91"" ""var_93"" ""var_96"" ""var_97"" ""var_99"" [36] ""var_102"" ""var_105"" ""var_107"" ""var_110"" ""var_115"" ""var_117"" ""var_125"" [43] ""var_127"" ""var_136"" ""var_142"" ""var_145"" ""var_146"" ""var_149"" ""var_150"" [50] ""var_157"" ""var_159"" ""var_161"" ""var_162"" ""var_163"" ""var_174"" ""var_178"" [57] ""var_179"" ""var_183"" ""var_185"" ""var_187"" ""var_188"" ""var_193"" ""var_196"" [64] ""var_200"" Here is the code; while it is running, just hit escape to stop it, since it uses an infinite repeat.
# LOAD LIBRARIES
library(glmnet)
library(caTools)

# SET WORKING DIRECTORY & LOAD DATA
setwd(""C:\\Users\\user\\Desktop\\Overfit"")
d.raw <- read.csv(file=""overfitting.csv"", header=T)

# DEFINE __TRAINING__ DATA & TARGETS
d.train = d.raw[d.raw$train == 1,]
d.train.target <- d.train$Target_Leaderboard
d.train$case_id = NULL
d.train$train = NULL
d.train$Target_Evaluate = NULL
d.train$Target_Practice = NULL
d.train$Target_Leaderboard = NULL

# DEFINE __TEST__ DATA & TARGETS
d.test = d.raw[d.raw$train == 0,]
d.test.id <- d.test$case_id
d.test.target <- d.test$Target_Leaderboard
d.test$case_id = NULL
d.test$train = NULL
d.test$Target_Evaluate = NULL
d.test$Target_Practice = NULL
d.test$Target_Leaderboard = NULL

# Constants
k.alpha <- 0                  # Glmnet alpha parameter
k.vars <- 200                 # vars in data set
k.var.min <- k.vars/2         # Min # of random vars to use, suggested range: 0 to k.vars/2
k.scale <- 2                  # Exponent to scale importance weights
var.count.min <- .8 * k.vars  # Count of vars for glmnet models, too high doesn't discriminate, too low can't make good models
var.importance <- NULL        # Store variable importance weights
var.importance[1:k.vars] <- 0 # initialize weights to 0
auc.best <- 0                 # init model selection parameter, useful for when perfect models are not attainable early
iter <- 0

repeat {
  iter <- iter + 1
  # Select a random num of vars around k.var.min
  var.current.count <- max(k.var.min, round(var.count.min - 3 + order(runif(12))[1]))
  var.current <- order(var.importance^k.scale + runif(k.vars), decreasing=T)[1:var.current.count]
  # Run glmnet (or some other package), calculate train preds & AUC
  go <- glmnet(as.matrix(d.train[var.current]), d.train.target, family=""binomial"", alpha=k.alpha, standardize=FALSE)
  preds <- predict(go, as.matrix(d.train[var.current]), type=""response"")
  auc <- max(colAUC(preds, d.train.target))
  if (auc >= auc.best^2) {
    auc.best <- auc
    # Keep track of min var count required to get 1 AUC or best.auc
    if (var.current.count < var.count.min) {var.count.min <- var.current.count}
    # Use EMA to keep track of variable importance
    var.importance[var.current] <- 0.99*var.importance[var.current] + 0.01
    var.importance[-var.current] <- 0.99*var.importance[-var.current]
    # Count and generate list of most important vars
    var.best.count <- length(var.importance[var.importance > 0.9])
    var.best <- sort(order(var.importance, decreasing=T)[0:var.best.count])
    var.best.names <- names(d.train[var.best])
    # Output some info and plot the sorted var weights
    cat(""I:"", iter, "" AUC:"", auc.best, "" Var Min:"", var.count.min, "" Var Current:"", var.current.count, "" Var Best:"", var.best.count, "" Vars:"", var.best, ""\n"")
    flush.console()
    plot(sort(var.importance, decreasing=T), ylim=c(0,1))
  }
}",0,None,5 ,Mon May 09 2011 05:32:54 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/542,/competitions/overfitting,87th /pmajek,patients with nonzero DaysInHospital_Y2.csv but no Y2 entries in Claims.csv,"I'm not sure I understand the data properly. How is it possible that there are patients who have nonzero DaysInHospital_Y2.csv entries but not a single Y2 entry in Claims.csv? For example, the patient with ID 42286978 has the entry ""42286978,0,2"" in DaysInHospital_Y2.csv, but in Claims.csv all entries for patient 42286978 are from Y1.
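In case it helps anyone reproduce this, here is roughly the check I ran in R (a sketch; I read the files as released, with the column names as they appear in the file headers):
# find members with DaysInHospital_Y2 > 0 but no Y2 claims (sketch)
dih <- read.csv(""DaysInHospital_Y2.csv"")
claims <- read.csv(""Claims.csv"")
y2.members <- unique(claims$MemberID[claims$Year == ""Y2""])
odd <- dih[dih$DaysInHospital > 0 & !(dih$MemberID %in% y2.members), ]
nrow(odd) # number of such patients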
Thanks in advance for clarification, Peter",0,None,1 Comment,Mon May 09 2011 13:35:54 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/543,/competitions/hhp,64th /del=37478cf4f027318a,problem uploading submission,"What does ""key already added"" mean when my submission is rejected? Do we include our own column names for the 70942 data rows?",0,None,6 ,Mon May 09 2011 17:05:41 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/544,/competitions/hhp,1081st /karansarao,Length of Stay interpretation,"Would be grateful if someone can explain the LOS interpretation. E.g., consider member 78865731 for Y1: there are 38 claim entries, with LOS populated against many of them. Obviously these are part of the same hospitalization incident, else the total sum exceeds 365. My query is how do we differentiate between separate hospitalization incidents, i.e. when to add up and when to just take the max, if I want to find out for a given member simply the number of hospitalization days in that year.",0,None,1 Comment,Mon May 09 2011 18:08:50 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/545,/competitions/hhp,219th /boegel,"Missing values in DaysInHospitalY4, and how they are used in scoring","In both the DaysInHospital_Y2 and DaysInHospital_Y3 datasets, several members included in Target.csv have no value: there are 27,705 target members for which DaysInHospital_Y2 is missing, and 21,260 target members which have DaysInHospital_Y3 missing. That's roughly 40% and 30% of the members missing a value for DaysInHospital Y2 and Y3, respectively. I can only assume that DaysInHospital_Y4, i.e. the value we need to predict, has a similar distribution (if you will). This brings me to my question. We need to predict how many days each member in the Target.csv list will be hospitalized in Y4, and our prediction is compared to the real value only known by HPN. How are our predicted values scored for members which don't have a value for DaysInHospital_Y4? Since both the Y2 and Y3 dimensions have a significant number of missing values, I can hardly imagine this is not the case for Y4. The ""Evaluation"" page on the HHP website doesn't mention anything about missing values. Any comments on this are highly appreciated; maybe I'm missing something obvious.",4,bronze,11 ,Mon May 09 2011 21:49:08 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/546,/competitions/hhp,639th /salimali,Final Submission Instructions,"The competition ends in a few days, but remember - the leaderboard counts for nothing! Here are the instructions on what you need to do to submit your final predictions on Target_Evaluate. 1) Check on the leaderboard to see if you have beaten the benchmark. If you have then proceed... 2) The final AUC submission should be 1 column only, ordered by case_id. There should be 19,750 rows in this file, plus a header row. The header row should be the name of your team. Please name the file AUC_your_team_name.txt 3) The variable prediction file should be 1 column with 200 rows plus a header. Each row should contain a 1 or 0. The first row represents var_1 and the last row var_200. Put a 1 if you think the variable was used and a 0 if you don't think it was used. The header row should be the name of your team.
Please name the file VAR_your_team_name.txt 4) These two files should be emailed in the same email to dontoverfit@gmail.com 5) Please put your team name as the subject of the email. You should receive an automated acknowledgement saying the email was received. 6) Also in the email you need to include: your real name and those of your team members (you won't win any money if you don't supply your real name); the team names of the 3 contestants who you think contributed most to the forum; and the names of the teams that you think will finish in the top 5, winner first, 5th place last. Your email should look something like this... OUR TEAM: TEAM A Mr ADAM APPLE, Mrs JENNY JONES CONTRIBUTORS: TEAM A TEAM B TEAM C WINNERS: TEAM A TEAM B TEAM C TEAM D TEAM E The submissions should arrive by Monday 23rd May. I live in Australia and will be opening the emails on Tuesday 24th; anything not in by then won't get scored. Please only send 1 submission. The first one received will be the one used. And now for some news you may find useful in predicting the winners: Ockham, who kindly gave everyone a list of variables to try, actually had information that these were the actual variables used. As we don't like insider trading, Ockham has been banned from entering part 2. Ockham was impressed though by your efforts. Sali Mali ;-)",0,None,28 ,Tue May 10 2011 13:04:16 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/547,/competitions/overfitting,98th /informationman,patients leaving hospital against medical advice,"I don't know if this is possible in the US, but in Germany you can leave the hospital earlier than the doctors would like you to stay (against medical advice). Maybe this correlates with follow-up hospitalisations. You should consider adding this bit of information to the data. A recent study shows that 1.65% of patients in Germany leave early, and even more in some PrimaryConditionGroups. If a patient shows a habit of leaving earlier in some years ...
this could be an indicator for later years and/or have an impact on DaysInHospital. source: [Link]:http://www.apotheke-adhoc.de/Nachrichten/Panorama/15052.html google translated source: [Link]:http://translate.google.com/translate?hl=en&sl=de&tl=en&u=http://www.apotheke-adhoc.de/Nachrichten/Panorama/15052.html definition: [Link]:http://en.wikipedia.org/wiki/Against_medical_advice Think about adding this information to the data,0,None,1 Comment,Tue May 10 2011 16:08:10 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/548,/competitions/hhp,1096th /mitchmaltenfort,what hardware are people using?,"I'm beginning to wonder if my 16 Gig RAM, 64 bit Windows (work) box is a bit underpowered.",0,None,7 ,Tue May 10 2011 17:04:59 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/549,/competitions/hhp,355th /del=37478cf4f027318a,What is the difference between days in hospital and length of stay?,What is the difference between days in hospital and length of stay? What does each of these terms mean?,0,None,9 ,Tue May 10 2011 17:11:49 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/550,/competitions/hhp,1081st /tatianamcclintock,IBM Business Analytics,[Link]:http://insurancenewsnet.com/article.aspx?id=259014&type=newswires Did anyone use this IBM business analytics technology?,0,None,1 Comment,Tue May 10 2011 17:46:54 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/551,/competitions/hhp,None /doc555,How to predict Year 2 Hospitalizations from Year 2 data?,"When you sum the data for year 2 patients and compare them to DaysInHospital_Y2: 1. You get 71,435 summed patient records, but only 51,965 matching patient records from DaysInHospital_Y2. Check1: (SELECT ClaimsY2.MemberID, Sum(ClaimsY2.LengthOfStay) AS SumOfLengthOfStay, Sum(ClaimsY2.SupLOS) AS SumOfSupLOS FROM ClaimsY2 GROUP BY ClaimsY2.MemberID;) - 71,435 records Check2: (SELECT DaysInHospital_Y2.*, Check1.SumOfLengthOfStay, Check1.SumOfSupLOS FROM DaysInHospital_Y2 INNER JOIN Check1 ON DaysInHospital_Y2.MemberID=Check1.MemberID;) - 51,935 records 2.
697 of those records have a DaysInHospital value of 1 or above with a summed LengthOfStay of 0, a summed SupLOS of 0, and ClaimsTruncated of 0: (SELECT DaysInHospital_Y2.*, Check1.SumOfLengthOfStay, Check1.SumOfSupLOS FROM DaysInHospital_Y2 INNER JOIN Check1 ON DaysInHospital_Y2.MemberID = Check1.MemberID WHERE (((Check1.SumOfLengthOfStay)=0) AND ((Check1.SumOfSupLOS)=0) AND ((DaysInHospital_Y2.DaysInHospital)>0) AND ((DaysInHospital_Y2.ClaimsTruncated)=0));) - 697 records 3. 4747 records have a length of stay of 1 or above with 0 days in hospital and 0 claims truncated: (SELECT DaysInHospital_Y2.*, Check1.SumOfLengthOfStay, Check1.SumOfSupLOS FROM DaysInHospital_Y2 INNER JOIN Check1 ON DaysInHospital_Y2.MemberID = Check1.MemberID WHERE (((Check1.SumOfLengthOfStay)>0) AND ((DaysInHospital_Y2.DaysInHospital)=0) AND ((DaysInHospital_Y2.ClaimsTruncated)=0));) - 4747 records. So how do you predict Y2 days in hospital from Y2 claims?!? This is not by any means a trivial portion of the data for hospitalized patients. Do other people get similar results, or did I have some kind of import error?",0,None,7 ,Tue May 10 2011 20:52:06 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/552,/competitions/hhp,418th /teamsmrt,Supercomputer/Cluster use & lots of side notes,"My main question is this: How many of you have gotten good results because you have a computer that can do a lot of CPU cycles in a short amount of time? My motivation for asking: I'm currently trying an iterated feature selection routine that is just murdering my little laptop right now. I don't really know if it will produce good results or not, but while I'm waiting to see how it turns out I can't try any other techniques. I'm wondering if it might be worth it to spend a few bucks for a few hours of computing time on Amazon EC2 (for those of you who don't know what that is, http://aws.amazon.com/ec2/). How many of you feel like you have gotten better results because you had the resources available to try any idea you wanted, regardless of how many CPU cycles it eats up? Since I didn't feel like starting a thread for all of these other ideas I wanted to ask about, here's a bunch of side notes that I had been meaning to get on the forum but just didn't for one reason or another. Side note #1: Have any of you used CRdata.org before? It looks like it would be really helpful for this site since everyone seems to be an R junkie here (no, I'm not affiliated with the site, I just thought I'd get some opinions before trying it out). Side note #2: The variable selection technique I'm trying is a blend of SVMs and forward/backward passing. Basically I use the caret and e1071 packages to fit a model for each individual variable, pick the best one, and then fit another set of models including that one ""best"" variable to see which would be the next best variable to add to it. After there are two or more variables in the model, it not only checks to see if there would be any benefit to adding a variable, but it also looks to see if there is benefit in removing a variable. In this way it will hopefully approach a near-optimal variable set. If you'd like to look at the code, just ask. I figured I wouldn't post it unless someone actually wants it. Like I said, it's a monster that will render your computer unusable while running it (maybe not if you have more than one CPU core), and you may not actually get any results from it before the contest is over.
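For the curious, here is a heavily stripped-down sketch of the idea (not the real monster; I am scoring by 5-fold cross-validated accuracy here just to keep it short, and the function is my own naming):
library(e1071)
# greedy forward/backward variable search around an SVM (sketch)
greedy.select <- function(x, y, max.vars = 25) {
  y <- as.factor(y)
  score <- function(vars) svm(x[, vars, drop = FALSE], y, cross = 5)$tot.accuracy
  selected <- character(0)
  best <- -Inf
  repeat {
    improved <- FALSE
    for (v in setdiff(names(x), selected)) {        # forward pass: try adding each unused variable
      s <- score(c(selected, v))
      if (s > best) { best <- s; selected <- c(selected, v); improved <- TRUE }
    }
    if (length(selected) > 1) for (v in selected) { # backward pass: try dropping each selected variable
      s <- score(setdiff(selected, v))
      if (s > best) { best <- s; selected <- setdiff(selected, v); improved <- TRUE }
    }
    if (!improved || length(selected) >= max.vars) break
  }
  selected
}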
Side note #3: This has been the most fun I've had thinking about stuff since my days doing quiz bowl in high school. Thanks for sharing all your ideas and techniques on the forums; it really made this competition interesting. I know that I'll definitely be trying more of these competitions out in the future. Side note #4: Anybody interested in joining up to make a team for the Heritage Health Prize? If you're looking for someone to work with, TeamSMRT could use seven additional members. I haven't looked at the data sets yet, but I'll bet the people who do well in this competition could do pretty well in that one. I'm also confident we could figure out a way to divide $3,000,000 in a way that makes everyone happy. Thanks, Harris (TeamSMRT's lone team member since none of my friends ended up joining)",0,None,4 ,Tue May 10 2011 23:03:19 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/553,/competitions/overfitting,57th /salimali,Inserting R code into a blog with syntax highlighting,"A cry for help: Zach used GitHub to post some R code on his blog with what looks like syntax highlighting, but I am not sure if it is true R highlighting. [Link]:http://moderntoolmaking.blogspot.com/2011/04/parallelizing-and-cross-validating.html I've also found this site that generates HTML. You can specify the language, but I can't see R on the list. [Link]:http://tohtml.com/auto/ Does anyone know of any easy-to-use alternatives? Cheers, Sali Mali",0,None,6 ,Wed May 11 2011 02:40:27 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/555,/competitions/overfitting,98th /dansbecker,Merging claims to members,"When I merged the claims table and the members table, I didn't find any members without claims. Are others finding the same thing? If so, why is that? Surely there were members who didn't use medical care. Thanks, Dan",0,None,11 ,Wed May 11 2011 04:07:32 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/556,/competitions/hhp,2nd /del=37478cf4f027318a,leaderboard scores and technique,Do people with very similar leaderboard scores use very similar techniques of analysis? There are 12 people with 0.46. What can we conclude about how they are doing this competition? If you have participated in a competition like this one before: in a few weeks' time how much lower do you expect the leader's score to get?,0,None,10 ,Wed May 11 2011 15:18:21 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/557,/competitions/hhp,1081st /del=37478cf4f027318a,"Using Tableau 6.0 to ""see"" data",How useful are the visualisations given by the Tableau 6.0 software for this competition? Will many people be using Tableau 6.0?,0,None,1 Comment,Wed May 11 2011 15:28:42 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/558,/competitions/hhp,1081st /zachmayer,Feature selection,"We're coming down to the wire here, and I've yet to find a good feature selection routine. Anyone willing to share some code, or am I on my own here?",0,None,15 ,Wed May 11 2011 22:11:19 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/559,/competitions/overfitting,59th /zaccaksolutions,Claims: Length of Stay missing values?,"Anyone else notice there are no length of stay values of ""8-12 weeks"" or ""12-26 weeks"" in the release 2 claims table? Is this right?
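For reference, this is the tabulation I ran (a minimal R sketch, reading the file as released):
# count each LengthOfStay category, including missing values (sketch)
claims <- read.csv(""Claims.csv"")
table(claims$LengthOfStay, useNA = ""ifany"")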
Seems odd.",1,bronze,11 ,Thu May 12 2011 05:35:50 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/560,/competitions/hhp,544th /yassertabandeh,Instance selection,"Removing “bad” instances from the training set may help classifiers (especially SVMs) to avoid overfitting. (Take a look at the blue and red points on the picture of this competition’s logo.) I did a trial-and-error effort on the practice set and found that removing some instances from the training set can improve AUC by more than 0.02 on the test set, but the problem is how to detect these instances. Like feature selection, a supervised or semi-supervised method is required. I thought similarity could be a good measure for instance selection, but when I used Euclidean distance (based on Ockham’s variables) and excluded the 10 instances with the least similarity to the test set, the AUC dropped. Is there anyone else who tried instance selection?",2,None,4 ,Fri May 13 2011 09:58:32 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/563,/competitions/overfitting,4th /salimali,what's the chance of this?,Just noticed a bit of congestion in the middle of the leaderboard...,0,None,2 ,Fri May 13 2011 11:58:09 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/564,/competitions/overfitting,98th /toulouse,Does DaysInHospital only concern the claims made the year before?,"I ask this question because the following sentence can be confusing: DaysInHospital_Y : Days in hospital, the main outcome, for members with claims in Y1",0,None,3 ,Fri May 13 2011 17:20:06 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/565,/competitions/hhp,168th /zaccaksolutions,Members: Missing Members,"Just curious, why are there members in the DIH but not in the Member data? 124472 unique members in DIH tables (Y2,Y3,Y4) 113000 unique members in Member table Even if they didn't have an age and gender, I would have still expected them to exist in the member table. Thoughts?",0,None,11 ,Sat May 14 2011 03:03:09 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/566,/competitions/hhp,544th /tansoei,what software packages are allowed?,"I have just registered, and am still feeling my way around. What software packages are allowed? E.g. Rattle, Microsoft Bayesian Network, Google Refine: can these be used for the competition? Can we use any function from the CRAN project? How do we know if, say, an algorithm like evolutionary computation has won a competition in the past somewhere, and cannot be used here?",0,None,1 Comment,Sat May 14 2011 04:58:17 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/567,/competitions/hhp,608th /jeffsonas,Main Prizewinner Documentation,"Hi everyone, here are the writeups that I got from the top five finishers regarding their methodology. First a few comments, though: Tim Salimans provided a web version [Link]:http://people.few.eur.nl/salimans/chess.html of his writeup, including lots of useful hyperlinks to other references and also providing his code. So you probably should go there for a more interactive experience, or to see his Matlab code, but I also wanted to have everyone's writeups in PDF format here, so I created a PDF out of it that was just the methodology description without including the code. I changed his opening paragraph accordingly in the PDF.
Sami (Shang Tsung) declined to participate in this last stage of documentation and running against the follow-up datasets, so he is not eligible for his 2nd place prize (and thus the prizes reach down to #5 uqwn instead). He did email me a couple of paragraphs about his approach and so I built a PDF out of that text. It appears he actually didn't do too much future scheduling, so it's a shame we don't have more details, or a submission for either the contest or the follow-up dataset that has all mining of the test set removed. Andy Cotter (Team George) was kind enough to adhere to my suggested format for documentation, as I had constructed using the Glicko documentation as an example. On the other hand that led to a large file, and so just as with Tim, I am only providing here the writeup about methodology while I try to figure out how to present the remaining parts of the file. Jason Tigg and David Clague (PlanetThanet) were able to provide several pages of detail about their methodology but I think they are planning to provide additional detail soon as well. So I am including their first writeup here, but we may update the PDF later. Special thanks to Vladimir Nikulin (Team uqwn) for providing a writeup and follow-up submissions for both his main prize entry, and also his FIDE prize entry. I hadn't anticipated that someone would qualify in both categories and have to do both tasks within the same week after the contest ended, so I appreciate Vladimir completing all that. He was also the only person to win a prize in the main competition of this contest and also the previous contest. Apologies to anyone if I am not presenting your documentation effectively, but I do want to get these out to the larger audience while people are still somewhat engaged with the contest. Please let me know if there is anything you would like me to change, or if you have a new version of your writeup for me to post. Here are the five PDF files: [Link]:http://www.chessmetrics.com/KaggleComp/1-TimSalimans.pdf [Link]:http://www.chessmetrics.com/KaggleComp/2-ShangTsung.pdf [Link]:http://www.chessmetrics.com/KaggleComp/3-George.pdf [Link]:http://www.chessmetrics.com/KaggleComp/4-PlanetThanet.pdf [Link]:http://www.chessmetrics.com/KaggleComp/5-uqwn.pdf",0,None,2 ,Sat May 14 2011 13:51:12 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/568,/competitions/ChessRatings2,None /uriblass,translating strings to numbers,"I hate strings, and I wonder if there is a program that simply translates all the strings we have in the data to integers, with different strings getting different integers (and the program treating both 0234 and 234 as the same integer, 234). A missing value in a column could be translated to -1 (or to a different number that is not in the column, if the column happens to include -1; the program should tell me which number means a missing value). The program should also generate files that explain the meaning of the numbers in every column (except columns that include only numbers); for example, for the 6th column of claims.csv it may generate a file with the following content: Anesthesiology=0,Diagnostic Imaging=1,Emergency=2,...
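Something along these lines would already do most of it in R (a sketch; the function names are my own invention):
# map each distinct string in a column to an integer; missing values become -1 (sketch)
encode.column <- function(x) {
  codes <- as.integer(factor(x)) - 1   # different strings get different integers
  codes[is.na(codes)] <- -1            # missing values become -1
  codes
}
# the legend for a column, one name=number pair per level
column.legend <- function(x) paste(levels(factor(x)), seq_along(levels(factor(x))) - 1, sep = ""="")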
I think it is going to be easier if people who participate in this competition do not need to deal with strings; the need to deal with strings is part of the reason that so far I have not made a submission in this contest.",0,None,6 ,Sun May 15 2011 11:24:31 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/569,/competitions/hhp,340th /wmestrom,Trainingset cross-validation scores,"Hi everyone, I'm having some strange results, perhaps due to my own mistakes, but perhaps others are seeing the same thing... In order to find the right learning parameters for my models I'm currently using a very simple cross-validation setup. For a given set of parameters I train my model on Y1 claim data and Y2 days in hospital and then make predictions for Y3 days in hospital from the Y2 claim data. In a separate run I train my model on Y2 claim data and Y3 days in hospital and then make predictions for Y2 days in hospital from the Y1 claim data. I optimize my learning parameters in order to minimize the sum of the RMSLEs of both predictions. The problem is that somehow this seems to have hardly any correlation with the final public scores of the resulting submissions (submissions are made by training on one year only for now; which year doesn't really matter, although year 1 seems to give slightly better scores). Some results (CV score, then leaderboard score): 0.462, 0.467; 0.460, 0.469; 0.455, 0.470. This makes it very hard to make any improvements, since you never know whether it is going to be an improvement on the leaderboard. Does anyone recognize this? Or perhaps does anyone have a better setup where there is good correlation between a trainingset cross-validation score and the public leaderboard score? Willem",0,None,7 ,Sun May 15 2011 15:32:55 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/570,/competitions/hhp,1st /torneus,An issue with the scoring code,"I seem to have uncovered a bug in the code that scores entries. It completely ignores the member IDs in column 1, and just assumes that the predictions in column 3 go in the same order as the members in Target.csv (the file published with the data). The reason I know this is because I submitted one and the same entry yesterday and today. In yesterday's file, the members were ordered by increasing member ID, and today I used the same ordering as in Target.csv. Today's submission got a better score than yesterday's (0.477 vs 0.500). Kaggle - is this something that you guys are going to fix? If not, you should at least clearly spell this out on the submissions page (""The patients in your entry must go in the same order as in the sample entry"", etc. etc.) If you don't, you are giving an unfair advantage to competitors who know to order their predictions correctly.",0,None,1 Comment,Mon May 16 2011 03:40:21 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/571,/competitions/hhp,547th /del=37478cf4f027318a,leaving competition - can't query large amounts of data on my server,I am quitting the competition because it is so difficult to query large amounts of data on the server I use. I haven't got one entry in yet because I spend all my time breaking up files into smaller units and piecing them together again.
I can't even export the entry form from my database in one piece.,0,None,7 ,Mon May 16 2011 16:15:45 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/572,/competitions/hhp,1081st /jeffmoser,Any interest in a SQLite version of the dataset?,"We thought about releasing the second dataset as a single compressed SQLite database instead of a set of CSV files. I ultimately decided against this, thinking that it might add too much complexity to importing the data into your favorite set of tools. However, after seeing some discussion from competitors, it seems that many people are just importing the CSV data into a database as their first step. Therefore, I'd like to get your feedback for future data drops on these questions: 1.) Would a SQLite version of the dataset have been more convenient for you than the CSV files? and a similar question: 2.) Would you have been able to read a SQLite version of the data just as easily as the CSV files? My main concern is that I didn't want to prevent anyone from reading the data. Thanks in advance for your feedback!",0,None,19 ,Mon May 16 2011 16:34:26 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/573,/competitions/hhp,None /launeric,question regarding days_in_hospital_y2,"For memberid=76307074: in the DaysInHospital_Y2 table, DaysInHospital=3; in the Claims table, all entries have Year=Y1, and the only entry with a LengthOfStay for 76307074 is 1 day. 1) So does it mean memberid=76307074 made no claims in Y2? 2) If a member did not make any claim in Y2 but stayed in hospital in Y2, does it mean the member most likely made the claims in, say, December and stayed in the hospital in January, or something like that?",0,None,1 Comment,Mon May 16 2011 21:50:54 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/574,/competitions/hhp,1278th /jeffsonas,Follow-up Solution Set,"Hi everyone, I am starting to wrap up the final phase here. I supported the process of scoring follow-up solutions for a couple of weeks so that we could double-check and investigate the performance of prizewinners' methodologies on a similar (but different) dataset. The files for that (except the solution) can be found on the Data page (see writeup at the bottom of that page) and the solution set can now be found here: [Link]:http://www.chessmetrics.com/KaggleComp/follow_up_solution.zip Note that this file only lists the real games; the spurious games (which you can see make up 75% of the games) are omitted from this solution file. Upon learning about the ""future scheduling"" trick a couple of weeks ago, I saw that there was a strong correlation in the actual contest test set between a player's average quantity [player rating - opponent rating] and their average quantity [actual pct score - predicted pct score]. I tried to defeat this via my introduction of additional spurious games targeted at breaking this correlation, and I think it was pretty effective at defeating participants' use of future scheduling to improve their score. A bit late, of course, but still effective.
UPDATE: The follow-up solution is now attached [Link]:https://storage.googleapis.com/kaggle-forum-message-attachments/1113/follow_up_solution.zip",0,None,1 Comment,Tue May 17 2011 21:37:15 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/575,/competitions/ChessRatings2,None /tcash21,Length of Stay from claims table to predict Days in Hospital?,Doesn't this feel like cheating to anyone else? The Length of Stay from the claims data had to be collected after the patient was discharged and a hospital would not have that information when a patient presents. I'm assuming we cannot use Length of Stay to predict Days in Hospital for the target members?,0,None,3 ,Tue May 17 2011 22:08:00 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/576,/competitions/hhp,None /ihbicmu,Interpretation of DSFC,"Does a null value in the DSFC indicate the first claim of a given patient in any year? We find a number of patients with null values suggesting there must be some rule for the assigned values; however, many patients have a value in each of their claims in the dataset. Is every patient assigned a DSFC for the first claim in a year?",0,None,1 Comment,Wed May 18 2011 13:30:28 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/577,/competitions/hhp,None /ihbicmu,Age at First Claim -- How is it handled over time and category?,"Age at first claim – Is this value set within the three year timeframe of the data or does it precede this dataset? For example, if a patient had their first claim 3 years before the start of this dataset (Y1) and they were 19, will the value for age at first claim in our data set be 10-19 or 20-29?",0,None,1 Comment,Wed May 18 2011 13:32:13 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/578,/competitions/hhp,None /ihbicmu,Resubmissions -- are they in the data set two times?,"Are resubmissions in the dataset? For example, if payment on a claim is rejected and it is resubmitted with some modification, will both claims appear in the dataset?",0,None,1 Comment,Wed May 18 2011 13:33:26 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/579,/competitions/hhp,None /adamhurwitz,key for claims data,Is there a primary key for the claims data? There is no claim id and it doesn't seem like a simple combination of fields is unique.,0,None,3 ,Wed May 18 2011 17:02:28 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/580,/competitions/hhp,None /jeffmoser,Tips for beautiful math posts,"I added support for [Link]:http://www.mathjax.org/ in these forums. MathJax is a JavaScript plugin that converts specially formatted text into nice looking math in most browsers. MathJax supports two major modes: The first is ""inline mode"" which allows you to put math right in line with normal text. For example, you can say that \( \pi \approx 3.14 \) and \( e \approx 2.71 \). To achieve ""inline mode"", you need to prefix your math with the three characters \ \ ( and then suffix the math with the three characters \ \ ) The second is ""display mode"" which will put the math on a separate line for display purposes like this: \[ J_\alpha(x) = \sum_{m=0}^{\infty} \frac{(-1)^m}{m!\,\Gamma(m+\alpha+1)} {\left(\frac{x}{2}\right)}^{2m+\alpha} \] or \[ \binom{n}{r} = \frac{n!}{r!(n-r)!} \] You can prefix ""display mode"" math with the two characters \ [ and then suffix with \ ] or alternatively use a double dollar sign $ $ for both prefix and suffix.
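For example, the binomial coefficient above can be produced with the following display-mode source (delimiters spaced out here, as elsewhere in this post, so they don't render):
\ [ \binom{n}{r} = \frac{n!}{r!(n-r)!} \ ]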
Inside either mode you can use LaTeX math mode notation. More details on this notation can be found [Link]:http://en.wikibooks.org/wiki/LaTeX/Mathematics. One benefit of using JavaScript to render the math is that you can simply look at the HTML source of a post to see exactly how the math was written (i.e. use your browser's ""view source"" feature on this page to see how the above equation was typeset). I picked the prefix and suffix characters so that they wouldn't cause any confusion with other text (i.e. accidentally turning code into math) but I'm open to feedback to tweak them if something else would be easier.",41,bronze,27 ,Thu May 19 2011 16:30:38 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/581,None,None /launeric,dsfs null question?,"When dsfs is null, does it mean the info is missing? Or does it imply zero? -Thanks",0,None,1 Comment,Thu May 19 2011 23:49:37 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/582,/competitions/hhp,1278th /cybaea,Submission blues….,"Has anybody else seen this error message when submitting and figured out what it means? Field index must be included in [0, FieldCount[. Specified field index was : '2'. Parameter name: field Actual value was 2. Am I the only one who thinks the message is more than a little cryptic?",0,None,2 ,Fri May 20 2011 11:04:02 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/583,/competitions/hhp,109th /jwdatagirl,Submission App problem?,"I have successfully submitted several entries; however, this evening I am unable to submit. I get an error that says I have a format problem, but I don't believe I do--exact same format as other submissions, I confirmed it. Is something broken with the app perhaps? Anybody else have trouble recently?",0,None,7 ,Sun May 22 2011 00:59:11 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/584,/competitions/hhp,425th /del=37478cf4f027318a,Is my score luck or skill?,What score would I need to get to know I am getting the right patients by skill rather than luck? What score do I need to prove I am getting any patients right at all? Can someone who is not top of the 30 per cent leaderboard be in reality winning by a large margin? If this can be true, what is the point of the leaderboard?,0,None,2 ,Sun May 22 2011 19:18:40 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/585,/competitions/hhp,1081st /del=37478cf4f027318a,Which data columns are most important for a high score?,"Which data columns do people think are most important for this competition? For me the PrimaryConditionGroup, AgeAtFirstClaim and CharlsonIndex columns matter most, and the days in hospital for year three too. These allow many basic calculations to be made that seem to correspond to common sense. I find the other columns a bit cryptic, e.g. DSFS and Vendor. How many columns would I need to win this competition? Presumably the fewer I use, the better my insight into the data would have to be.",0,None,1 Comment,Sun May 22 2011 19:27:28 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/586,/competitions/hhp,1081st /lesley,Lesley's Offer,"To all my colleagues, For 25 years I have been a computer programmer/analyst working with hospital admissions data; however, I have decided to leave my job and sail around the Pacific Ocean on a small sailing boat.
The boat will be too small to carry computers and I expect I will have limited further opportunity to work on my algorithm. My score is poor (74th after 11 submissions with a score of [Link]:http://www.heritagehealthprize.com/c/hhpLeaderboard#0.485629). I believe this is due to my poor statistical analysis skills rather than a lack of understanding of admission patterns. I am not sure if by supplying the links (below) you will gain access to my score data, but give it a go. Submission File Public Score Fri, 20 May 2011 05:56:43 [Link]:http://www.kaggle.com/c/27597/DownloadSubmission/27597.zip 0.485629 Tue, 17 May 2011 01:08:24 [Link]:http://www.kaggle.com/c/27422/DownloadSubmission/27422.zip 0.485629 Mon, 16 May 2011 03:23:18 [Link]:http://www.kaggle.com/c/27383/DownloadSubmission/27383.zip 0.486176 Sun, 15 May 2011 23:49:05 [Link]:http://www.kaggle.com/c/27367/DownloadSubmission/27367.zip 0.486329 Sat, 14 May 2011 23:49:52 [Link]:http://www.kaggle.com/c/27322/DownloadSubmission/27322.zip 0.486004 Fri, 13 May 2011 07:14:11 [Link]:http://www.kaggle.com/c/27020/DownloadSubmission/27020.zip 0.492027 Thu, 12 May 2011 01:31:24 [Link]:http://www.kaggle.com/c/26808/DownloadSubmission/26808.zip 0.505451 Wed, 11 May 2011 04:45:29 [Link]:http://www.kaggle.com/c/26689/DownloadSubmission/26689.zip 0.490508 Mon, 09 May 2011 22:57:27 [Link]:http://www.kaggle.com/c/26492/DownloadSubmission/26492.zip 0.522226 Sun, 08 May 2011 04:46:37 [Link]:http://www.kaggle.com/c/26312/DownloadSubmission/26312.zip 0.497909 Sat, 07 May 2011 04:06:47 [Link]:http://www.kaggle.com/c/26242/DownloadSubmission/26242.zip 0.846610 What I am offering is my 'burden of disease' index. It is my best estimate of the degree of sickness of each person. The number is between 0 and 1300 and does not yet incorporate sex, DOB, or CharlsonIndex, but I think that is the next step. I will provide a 'burden of disease.csv' file with 3 fields: MemberID, DiseaseBurden (integer), PredictedDaysInHospitalY4 (real). If you incorporate my work into your solution, I require academic acknowledgement by you, and also 1% of any winnings. If you think it is shit, please don't flame me, but rather be polite and let me down gently. Lesley [Link]:mailto:Lesley@moobaa.com",0,None,1 Comment,Sun May 22 2011 23:28:07 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/587,/competitions/hhp,896th /turbo11361,Decrease time interval for submissions,"Since you have other (hidden) data for the final results, you shouldn't care too much about exploration of the test data. I am just experimenting for now and I don't want to wait 12 hours to test another solution. I think a time interval of 10-30 minutes would be a good compromise. Thank you. UPD: A second option is to just provide some small subset from the fourth year. In this case participants could write their own scoring system to check the effectiveness of a solution at home.",0,None,12 ,Mon May 23 2011 14:05:15 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/588,/competitions/hhp,1349th /teamsmrt,Just to save everyone some time:,"Here's a package to read all the .png image files into R: [Link]:http://cran.r-project.org/web/packages/png/index.html I'm not sure what to do from there, but I suspect that I will estimate the PSF from the star image, use it to deconvolve the galaxy image, and create some way to measure ellipticity from there. PCA comes to mind, as the first two components should fall along a galaxy's axes.
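To make that concrete, here is a rough sketch of the moments/PCA idea (my own naming throughout; img is the intensity matrix returned by readPNG):
# intensity-weighted second moments; the eigenvectors give the galaxy's axes (sketch)
estimate.axes <- function(img) {
  w <- img / sum(img)                    # normalize intensities to weights
  xs <- row(img); ys <- col(img)
  cx <- sum(w * xs); cy <- sum(w * ys)   # weighted centroid
  cxx <- sum(w * (xs - cx)^2)
  cyy <- sum(w * (ys - cy)^2)
  cxy <- sum(w * (xs - cx) * (ys - cy))
  eigen(matrix(c(cxx, cxy, cxy, cyy), 2, 2))
}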
The two associated eigenvalues could be used to estimate ""a"" and ""b."" At least that is where I will start. I don't have any good ideas for how to correct for the gravitational lensing of the dark matter. From my interpretation of the problem, deconvolving the galaxy image only reduces the blur and noise, but not the gravitational lensing of dark matter. Is there a way to correct for something unobservable that we know nothing about? Are we supposed to assume that the gravitational lensing is constant for every galaxy? That just doesn't seem reasonable to me.",1,bronze,18 ,Mon May 23 2011 21:53:46 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/590,/competitions/mdm,46th /dansbecker,Including previous year's days in hospital in model,"Hi, I had a model where I used the previous year's claims table to predict days in hospital. Adding the previous year's ""days in hospital"" to the model significantly improved the fit in the training data (from year 3), and it improved the fit in year 3 data I set aside for cross validation. But it significantly worsened the fit in the year 4 target. I of course used year 2's days in hospital to forecast year 3, and used year 3's days in hospital when forecasting year 4. I imputed 0's for individuals who did not appear in the previous year's days in hospital table. Would anyone care to speculate if there's some sort of data issue explaining this surprising pattern? Thanks, Dan",0,None,4 ,Tue May 24 2011 01:28:51 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/591,/competitions/hhp,2nd /michael9274,What is the relation between LengthOfStay and DaysInHospital?,"Hello, I hope I am not repeating a question. Please refer me to the topic where it was dealt with if I am. Looking, as an example, at member ID 75590378, it appears that there are two hospitalizations during Y2 according to the claims file. When trying to confirm that data in the DaysInHospital_Y2 file I can't find that memberID at all. The inverse mapping also doesn't seem to work. I would appreciate an explanation. Thanks Michael",0,None,2 ,Tue May 24 2011 02:26:51 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/592,/competitions/hhp,767th /salimali,Results - AUC,"Congratulations to everyone - hope you all enjoyed this competition and got something useful out of it. Here are the results of the AUC part. The top 3 teams (in alphabetical order) are Jose_Solorzano SEES (Roger Guimerà, Marta Sales-Pardo, Gonzalo Guillén-Gosálbez) Tim.Salimans If these teams would be so kind as to prepare a description of the techniques you used and post in this forum thread (then subsequently on the Kaggle blog), I'll then announce the order of the top 3. The scores for everyone else who submitted are... cole_harris 0.9293 Brian_Elwell 0.9264 Outis 0.9248 PRIM 0.9240 tks 0.9184 grandprix 0.9134 statovic 0.9072 Zach 0.9031 IKEF 0.8953 Eu.Jin.Lok 0.8917 D.yakonov.Alexander 0.8898 GSC 0.8868 Yasser.Tabandeh 0.8867 E.T.
0.8864 William.Cukierski 0.8709 NSchneider 0.8678 nadiavor 0.8677 Shea_Parkes 0.8471 OilPainter 0.8388 mkozine 0.8272 Forbin 0.8251 Bourbaki 0.7388 Bernhard.Pfahringer 0.6966 Vrici 0.5956 Jason_Noriega 0.5279 FINE 0.5143 Suhendar_Gunawan 0.5080 TeamSMRT 0.5007",1,None,20 ,Tue May 24 2011 12:11:00 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/593,/competitions/overfitting,98th /salimali,Results - Variable Selection,"In the evaluation equation, there were only 55 variables used, far fewer than in the leaderboard (108) or practice (118) equations. With the scoring scheme, the maximum score is 200 for correctly identifying all the variables, and the minimum is -200 for incorrectly identifying all the variables. And the winners are... Team Score VariablesChosen Jose_Solorzano ??? 50 SEES ??? 59 Tim Salimans ??? 51 (again - you might as well detail your variable selection methods in the same write-up for the AUC part, and then I'll let you know the final order) And the scores for the other competitors are, Team Score No. Variables Chosen grandprix 130 52 tks 130 40 IKEF 104 65 D'yakonov Alexander 100 77 OilPainter 100 29 Shea_Parkes 100 85 Yasser Tabandeh 94 80 cole_harris 82 96 PRIM 80 99 statovic 74 100 Jason_Noriega 70 12 Outis 62 108 E.T. 58 100 nadiavor 58 104 mkozine 58 104 Brian_Elwell 46 116 GSC 40 121 NSchneider 8 113 Eu Jin Lok 6 140 fine -10 110 Bourbaki -14 106 Suhendar_Gunawan -22 114 Vrici -36 127 Bernhard Pfahringer -38 132 Forbin -60 91 TeamSMRT -126 140 William Cukierski -134 148",0,None,6 ,Tue May 24 2011 13:11:16 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/594,/competitions/overfitting,98th /paulprice,"Multiple entries, different methods?","Hi Tom. We're testing multiple shape measurement algorithms for the LSST/HSC pipeline (I think we've got 5 or 6 so far). Can we submit each of them independently? Or do we have to simply choose the best? Thanks, Paul.",0,None,5 ,Tue May 24 2011 16:05:48 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/595,/competitions/mdm,21st /patternengine,Exchange of Good Ideas,"So, I figure we should have a thread for sharing thoughts/ideas about how we're getting good prediction results. Of course, no one wants to give away the secret edge that's going to win them the prizes :-) But there are clearly also going to be a range of 'standard' ideas that everyone will end up figuring out and using. If we pool them here on the forum, we can all benefit and get on with working on cleverer/sneakier approaches. To put my money where my mouth is, here are some things I've learned so far: Generating informative sets of features seems pretty important, straight off the bat. I've found the following features to be informative: Sex, Age, nDaysInHospital (previous year). And from the claims data for the previous year: total nClaims, nCharlsonIndex of each category, counts of primary conditions, counts of procedures, counts of placeSvc, counts of speciality. There may also be a benefit from using the same features from two years before the target values, but the effect seems pretty small. (I feel like there's more one could do with the Claims data, but there are issues with large numbers of features.) Method-wise, I've started with simple linear regression (with stepwise feature selection). I'm pretty sure this is too restrictive to be useful, but it's very handy for data exploration.
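For anyone who wants the one-liner version, this is all I mean by that (an R sketch; the features data frame and its columns are hypothetical):
# linear regression with stepwise feature selection via AIC (sketch)
fit0 <- lm(DaysInHospital ~ 1, data = features)
fit <- step(fit0, scope = ~ Sex + Age + nClaims + nDaysInHospital, direction = ""both"", trace = 0)
summary(fit)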
I'll be trying out some more interesting models in the near future. I hope this is useful to people. If you would like to reciprocate, that would be awesome :-) And if this thread gets going, I'm happy to keep contributing my thoughts to it, as I think we'll all benefit from it. *braces for deluge of useful responses*",2,bronze,14 ,Tue May 24 2011 17:07:38 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/596,/competitions/hhp,105th /ijvaughn,Noise distributions,"Have the noise distributions been characterized more than ""Poisson with some Gaussian and bad pixels""? What is the ratio of Gaussian to Poisson? What is the ratio of (bad pixels) to (total pixels)? Is the telescope PRF (point response function) linear over the pupil? If so, is it the typical ""airy disc"" scalar response for a circular pupil? What are the atmosphere assumptions? A non-linear (spatially) varying index of refraction? Some constant Gaussian blur? Cheers",0,None,8 ,Tue May 24 2011 23:30:29 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/597,/competitions/mdm,None /markhays,Who owns the rights to our HPN Competition entries?,"Who owns the rights to every entry in the HPN Competition? I asked this question at the start of the HPN / Kaggle competition. According to the rules on the website, it appears that HPN and Kaggle will own full rights to every entry in the competition -- including worldwide sales, with no compensation to the developer(s). On 15 April, I received an email from Anthony Goldbloom (Kaggle), saying that he was going to check with HPN. I did not receive any follow-up, however. Fundamentally, it appears that the ""License"" section of the agreement would give HPN and Kaggle the unlimited right to sell any ""algorithm"" or software used by any competitor who joins the Heritage Health Prize competition, whether they win a prize or not. I can see granting a license to HPN for their internal use, as the sponsor who funded the competition. Asking every competitor to grant a free license that would allow HPN (and Kaggle) to sell our work worldwide -- with no royalties -- is something else. Was this the intent of the agreement? If not, it needs to be clarified. Anyone who has done serious work in predictive analytics knows the value of their IP -- and will refuse to participate. Please let all of us know if this issue has been clarified. Mark Hays",0,None,10 ,Wed May 25 2011 03:24:51 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/598,/competitions/hhp,None /ftw11339,SupLOS=1 but LengthOfStay non-NULL,"Question to the organizers (or anyone who knows). The following line is taken from Claims.csv: 00529616,1727574,340953,11593,Y2,Internal,Outpatient Hospital,23,1 day,8- 9 months,METAB3,3-4,SCS,1 The last column says that LengthOfStay is NULL because of suppression, but the LengthOfStay field says ""1 day"". This is the only such inconsistency in the entire file. Is there a chance the SupLOS field has been shifted or otherwise misplaced? If I understand correctly, this field didn't exist in the first data set.",1,bronze,3 ,Wed May 25 2011 06:48:15 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/599,/competitions/hhp,457th
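The inconsistency reported in the SupLOS post just above is easy to hunt for programmatically; a minimal sketch in R, assuming Claims.csv is read with the column names from the data dictionary:

claims <- read.csv("Claims.csv", stringsAsFactors = FALSE)
# rows where the suppression flag is set yet a LengthOfStay value survived
bad <- claims[claims$SupLOS == 1 & claims$LengthOfStay != "", ]
nrow(bad)  # the post above reports exactly one such row (member 00529616)
bad        # inspect the offending row(s)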
/salimali,And The Winners Are...,"I am pleased to announce the winners. The same three teams were at the top in each part, with Tim Salimans the AUC winner and Jose Solorzano the variable selection winner, with SEES not always the bridesmaid, as they were confident enough to back themselves and win the contest for predicting the winners! Tim just about takes the overall title, with only 1 variable in it - otherwise it could have been a 3-way tie! Zach and TKS were the people's choice for contributing most to the forum - thank you both for your efforts. Hope you all enjoyed this - I certainly did. And if you want to discover what the secret formula was in the data, read the winners' posts on how they did it; there is no hiding anything from good data scientists!
Team AUC
Tim Salimans 0.94298
SEES 0.94079
Jose Solorzano 0.93954
Team Var Selection Score
Jose Solorzano 138
SEES 132
Tim Salimans 132",5,None,13 ,Wed May 25 2011 14:09:33 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/600,/competitions/overfitting,98th /astrotom,Prize Update,"Some more details on the prize (the prize fund was recently increased from $1000 to $3000). The costs cover travel to the meeting, accommodation and reasonable local expenses. The meeting is at NASA JPL, Pasadena and runs from the 26th to the 29th of September 2011. A website for general registration for this conference will be available from 13th June onwards.",0,None,11 ,Wed May 25 2011 16:35:26 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/602,/competitions/mdm,None /zachmayer,Getting started,"This is a very different competition from the other ones I've participated in on Kaggle. Does anyone have any advice for getting started with the analysis? It seems like some people have made constant value predictions, but not much beyond that. Will traditional machine learning techniques work on this problem? We're trying to predict ellipticity, given some data with distortion and noise, but it seems like there's no ""true"" data to use to train an algorithm. So far, I can use the png package TeamSMRT provided to turn an image into an R matrix, but I'm stuck at this point.",0,None,18 ,Wed May 25 2011 19:32:24 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/603,/competitions/mdm,None /davec6371,$3M or $500K?,"Hi. Until recently, I've been thinking 'wow, there is a US$3M prize for first place!' However now I'm telling myself to revise my enthusiasm downwards... since it seems to me likely that no-one is going to beat the required 0.4 accuracy threshold. Hence the prize is really just US$500k. Not bad.... but not enough to retire on :( After nearly a month of submissions, the rate of improvement is already heading towards an asymptote that isn't 0.4. The best today is 0.461113. Admittedly we are still to get some more data supplied. But realistically I don't think that we're going to be able to get down to 0.4. That target is just too hard... Any other opinions? Personally I'd be happy to bet at 5-1 odds today that by the end of the competition no-one exceeds 0.4. Any takers? (e.g., I put up $50, you put up $10) Dave",0,None,17 ,Thu May 26 2011 01:41:51 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/604,/competitions/hhp,313th /aristotle137,mdm_training_solution.csv file,"Does the mdm_training_solution.csv file contain the correct ellipticity (i.e. ground truth) for the training set or just an example of how the solution should be formatted?
I obviously guess the former, but in this case what is the difference between mdm_example_entry.csv and mdm_example_training.csv? They both seem to exemplify the same thing. Thanks, Marius",0,None,27 ,Thu May 26 2011 04:41:36 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/605,/competitions/mdm,5th /smark11436,"Will the large numbers of days in hospital be generalized into a group of ""15+ days""?","The question is addressed to the organizers; it's about the Scoring Data Set (as well as the Feedback Data Set). Does it have only ""15"" for large numbers, or does it have the actual numbers of days spent in hospital?",0,None,1 Comment,Thu May 26 2011 15:42:27 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/606,/competitions/hhp,561st /uriblass,R questions,"1) What is the best tutorial to learn the relevant parts of R for this competition? 2) Is there a function in R to do binary search? (Note that I found I can use order to reorder the lines so that one vector is in non-decreasing order:
Members<-read.csv(file=""Members.csv"",head=TRUE,sep="","")
OrderMembers<-Members[order(Members$MemberID),]
Now the question is, if I want to find the place of MemberID 78832045 in this file by binary search, how do I do it in R? I need to find 22222 in this example, because OrderMembers$MemberID[22222]=78832045, but I want to do a binary search and use the fact that OrderMembers$MemberID is an increasing sequence. (See the sketch just below this post.)",0,None,48 ,Thu May 26 2011 19:25:14 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/607,/competitions/hhp,340th
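A minimal sketch in R answering the binary-search question above: base R's findInterval performs a binary search over a non-decreasing vector, and match avoids the need to sort at all. OrderMembers and the example MemberID are taken from the post itself.

# binary search on the sorted MemberID column;
# findInterval returns the last index whose value is <= the query
pos <- findInterval(78832045, OrderMembers$MemberID)
if (OrderMembers$MemberID[pos] == 78832045) pos else NA  # 22222 in the example above

# alternatively, match() returns the first matching index with no sorting needed
match(78832045, Members$MemberID)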
/nem2511679,Team creation,We are a group of people wishing to participate in the competition as a team. Do we first 'Enter the Competition' individually and then create the team or must we create it before clicking the 'Enter the competition' button? I could not find the Team Wizard anywhere.,0,None,9 ,Thu May 26 2011 23:55:02 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/608,/competitions/hhp,None /ejlok1,Some resources for newbies to get started,"Hi, A lot of us have absolutely no knowledge in this field whatsoever, but I believe great discoveries can come from all walks of life. I'd love to give this a go and may I ask if anyone knows of any good articles to get me started? Here's one I found: [Link]:http://www.eclipse.net/~cmmiller/DM/ Thanks Eu Jin",0,None,2 ,Fri May 27 2011 05:52:01 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/609,/competitions/mdm,3rd /ahassaine,matlab code for computing UWQM,"Hi there! Here is some matlab code for computing the UWQM. For mdm_galaxy_training_1.png I am getting e1=-0.0032 and e2=-0.0026. However, according to mdm_example_training.csv, I am supposed to get e1=-0.193511 and e2=0.142878. Have you guys been able to replicate these values?
% read the image and compute the flux-weighted centroid
im=imread('mdm_galaxy_training_1.png');
im=double(im);
width=size(im,1);
height=size(im,2);
x_average=0; y_average=0; sumI=0;
for y=1:height
    for x=1:width
        x_average=x_average+x*im(y,x);
        y_average=y_average+y*im(y,x);
        sumI=sumI+im(y,x);
    end
end
x_average=x_average/sumI;
y_average=y_average/sumI;
% unweighted quadrupole moments about the centroid
q11=0; q12=0; q22=0;
for y=1:height
    for x=1:width
        q11=q11+im(y,x)*(x_average-x)*(x_average-x);
        q12=q12+im(y,x)*(x_average-x)*(y_average-y);
        q22=q22+im(y,x)*(y_average-y)*(y_average-y);
    end
end
q11=q11/sumI;
q12=q12/sumI;
q22=q22/sumI;
% ellipticity components from the moments
e1=(q11-q22)/(q11+q22)
e2=(2*q12)/(q11+q22)",0,None,11 ,Fri May 27 2011 09:31:35 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/610,/competitions/mdm,3rd /wcukierski,Are we **sure** the ground truth is correct?,"Apologies, but I'm still hung up on whether the ground truth is correct. When I make a set of predictions, the error is (somewhat) invariant to permutations of the answers, i.e. either my guesses are complete junk or the ground truth is complete junk. I've tried 3 methods of deconvolution and 3 ways to estimate ellipticity. When you made the answers, did you ensure the files were sorted numerically (1,2,10) as opposed to the standard file system sorting (1,10,2)?",0,None,6 ,Fri May 27 2011 19:33:48 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/611,/competitions/mdm,45th /stephennerhodes,distribution of light from the galaxy,"Are we to assume that one Sersic function has been used for all the galaxy images, or is there some kind of random distribution? I presume when doing such an experiment for real one would select a suite of galaxies with similar Sersic models. Is this the case here?",0,None,6 ,Fri May 27 2011 21:19:08 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/612,/competitions/mdm,None /inference,Data checksums,"Can someone confirm the MD5 checksums of the data files for me? I'm not sure if I've got a corrupted download and also I don't know if I have the correctly updated version of the truth data. My MD5s are:
b12c2773bdb880b4025be28b987fcc93 great10.pdf
9721f2fa4b4e2ae812b64ec7985eebda mdm_example_entry.csv
54cc54a5026dacc60d5289bbbd5632d2 mdm_example_training.csv
2431420c3c0f12a903d5e8c810e08d5d mdm_images.zip
ce3858c09243b4fd203b862b7488e585 mdm_training_solution.csv
Do these match yours? Thanks.",0,None,9 ,Fri May 27 2011 23:22:12 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/613,/competitions/mdm,None /jeffmoser,Updated training solution and rescore,"Thanks to [Link]:/users/3258/william-cukierski's statistical analysis (and others too) in a previous [Link]:/c/mdm/forums/t/611/are-we-sure-the-ground-truth-is-correct, we discovered that the training solution and the solution used for the leaderboard calculation were incorrect. Thomas sent me an updated solution set and I updated the solution used to make leaderboard calculations. (Note: the example UWQM files seem to have been in the correct order all along.) In addition, I re-scored all previous submissions and updated the leaderboard. Any submissions from now on will use the updated solution for scoring. I also uploaded the new training solution as ""mdm_training_solution_sorted.csv"". Due to the timezone differences, Thomas won't be able to double check things for several hours, but I wanted to go ahead and post the preliminary updates now because they give a very interesting update to the competition.
As you can see from the leaderboard, team ""Fire on Wires"" is clearly leading the pack! It appears that getting below ""0.05"" requires analysis of the images, whereas above ""0.15"" is achievable by just doing some basic statistical analysis of the training solution. As I mentioned, Thomas will double check things before we make it official, but you're all welcome to check things against the latest training solution file to see if things seem better now. Sorry about the confusion and inconvenience this has caused.",0,None,4 ,Sat May 28 2011 04:18:21 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/614,/competitions/mdm,None /darragh0,Waiting Times To Go into Hospital,"I'm from Australia. We have waiting times for elective surgery. Does America have such a policy? If so, this could have an impact on when patients are hospitalized. Any response from HHP Admin would be appreciated. Thanking you, Jim",0,None,2 ,Sun May 29 2011 13:47:59 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/615,/competitions/hhp,855th /karansarao,Building models on the cloud,"I am facing serious problems running R on my IBM 64 bit with only 4 GB RAM; I run out of memory very soon, which is getting frustrating as I know I can extract just that little bit more if I can get the computation done without worrying about memory or CPU usage. Is it possible to run R on Amazon's cloud service, i.e. rent a windows/linux instance (preferably 64 bit) with much higher memory? Has anybody done this? More importantly, what would be the cost of doing, say, 12-hour modeling runs a few times a week? Can Kaggle wangle us a discount? It should be good publicity for Amazon (better than using the cloud to hack into Sony!)",0,None,25 ,Mon May 30 2011 14:43:04 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/616,/competitions/hhp,219th /ryanmorrisroe,Clarification Request on Data Dictionary,"Is there any way we can get a clarification on the definition of CATAST? It's the only diagnosis group that doesn't have any codes associated with it, and ""multiple codes"" isn't very helpful. Thanks, Ryan Edit: As a followup, in the original paper the groupings are no clearer than what is given by Kaggle, so I no longer expect any clarification on this front.",0,None,2 ,Mon May 30 2011 22:37:14 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/617,/competitions/hhp,481st /cybaea,Missing DaysInHospital_Y2 for MemberID 24027423,"We seem to be missing member 24027423 in DaysInHospital_Y2.csv (he has claims that year, so should be there) -- can you give us the days for that member in that year?",0,None,6 ,Tue May 31 2011 15:47:38 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/620,/competitions/hhp,109th /del=92525096498f3bbd,Is it okay to use mySQL to store the data? ,"I was planning on storing the data on a local hard drive. Thanks, Aniket",0,None,6 ,Tue May 31 2011 18:07:30 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/621,/competitions/hhp,None /jjjjjj,GNU GENERAL PUBLIC LICENSE,"Since R is released under the GNU GPL and lots of people are using R for this contest, I assume, in general, using (free) software released under the GPL is probably ok?
Link: http://www.gnu.org/copyleft/gpl.html",0,None,3 ,Tue May 31 2011 22:46:58 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/622,/competitions/hhp,113th /del=92525096498f3bbd,Existing Algorithms,"What are some existing algorithms that predict number of days in hospital? I have used the WEKA software before and know the process of data mining, but am completely clueless when it comes to this particular domain.",0,None,2 ,Wed Jun 01 2011 05:37:46 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/623,/competitions/hhp,None /darragh0,Submission Data ?,"When uploading submission data in the form laid out in ""SampleEntry.csv"", what year is the data compared to when assessing the accuracy using the error formula? HPN advice appreciated. Thanking you, Jim",0,None,1 Comment,Wed Jun 01 2011 15:06:25 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/624,/competitions/hhp,855th /martinoleary,Strange pixels,"I've noticed that in some of the images (e.g. training galaxy 28, attached), there are central pixels which have a value of zero, while their surroundings have much higher values. Am I right in thinking that these are the result of a wrap-around error, and that they should have a value of 256? [Link]:https://storage.googleapis.com/kaggle-forum-message-attachments/1121/mdm_galaxy_training_28.png",0,None,5 ,Wed Jun 01 2011 19:11:59 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/625,/competitions/mdm,4th /zachmayer,My code,"I did a writeup of the code I used and my results on my blog, if anyone is interested. Everything's written in R, so it will be easy to replicate. [Link]:http://moderntoolmaking.blogspot.com/2011/06/kaggle-competition-walkthrough-wrapup.html",6,bronze,2 ,Wed Jun 01 2011 23:42:03 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/626,/competitions/overfitting,59th /timcotten,Do the images have a consistent center coordinate?,"Are all images (stars and galaxies) centered on a specific pixel coordinate? I should qualify this and say ""before distortion"" - in other words, are all stars *supposed* to have their brightest points at a given constant coordinate, or are we supposed to infer a potential center for each star and its associated galaxy?",0,None,6 ,Thu Jun 02 2011 01:25:40 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/627,/competitions/mdm,None /sgerber,Image Analysis vs. Machine Learning,"I was wondering if anybody would be willing to share whether they obtain their results through: 1. Purely image-based analysis, i.e. denoising and fitting some sort of ellipse 2. A purely learning-based approach on raw images. 3. A combination of the two. Thanks for any feedback. Sam",1,None,17 ,Thu Jun 02 2011 17:58:47 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/628,/competitions/mdm,10th /mitchmaltenfort,Lab and Rx: are you kidding?,"the big reveal... # of distinct tests, and # of distinct prescriptions. Not even a breakdown by type. Oh dear. I'm gonna play with this a bit more next week then stick to my National Inpatient Sample.
Sheesh.",2,bronze,13 ,Sat Jun 04 2011 04:41:23 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/631,/competitions/hhp,355th /fordprefect0,Dataset release 3 issues,"Here's a list of issues I've discovered so far with the new dataset: 1) In the Claims.csv file, some of the ID numbers have extra zeros in front compared with release 2, but there are no changes to any of the values. 2) DaysInHospital_Y2.csv has one extra row, for MemberID 24027423. 3) The new DrugCount.csv file contains some entries which don't correspond with Claims.csv:
210,Y3,7- 8 months,1
210,Y3,8- 9 months,1
210,Y1,4- 5 months,1
210,Y3,5- 6 months,2
As I understand it, there should be claims for MemberID 210 appearing in Claims.csv for those particular year and DSFC combinations, but they are missing. The other rows of DrugCount.csv have a corresponding claim in Claims.csv. (Addendum) That last sentence isn't actually true. I've just looked at the data again, and there are a lot more MemberIDs where 3) applies. It turns out that every one of those MemberIDs has ClaimsTruncated=1 in the DaysInHospital csv files. I suppose this means that the claims were anonymized, but the drug count data associated with them wasn't. This raises a few questions: Is the drug count data complete for all members, or did DrugCount.csv get anonymized as well? Either way, the missing claims can be used to obtain a better estimate of the true number of claims for those members :-)",1,bronze,11 ,Sat Jun 04 2011 08:03:10 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/632,/competitions/hhp,557th /misiek,corrupted ground truth??,"In the mdm_training_solution_sorted.csv file (which is the ground truth for the e1 and e2 parameters) there is a description of the galaxy with id=11. This galaxy is referenced by the following files: mdm_star_training_11.png (which describes a PSF function) and mdm_galaxy_training_11.png, which is the simulated galaxy (blurred and noisy). Based on the definition of ellipticity given on this site, after some basic equation manipulation, I can find the true theta angle. In particular: tan(2 * theta) = e2/e1. This means that for the 11th galaxy, theta should be approx. -35.3 degrees (minus 35.3 degrees). Ok, now take a look at the galaxy image. The theta angle obviously is not -35 degrees - it is about +48 degrees. This value was obtained manually after some more or less advanced image processing (the galaxy is pretty nice :) ) Something is wrong here - either the images are mistaken, or the ground truth (the ""...sorted.csv"" file), or the equations for ellipticity... or I made a mistake. Can anybody confirm that the ground truth file is correct and the images are correct? I just want to know... Anyhow, if you take a look at the 11th galaxy image, even without processing, it is clear that the theta angle is positive. To be clear - the theta angle is the angle from the ""ox"" axis to the long ellipse (galaxy) diagonal. Ok, I will attach the processed galaxy file. [Link]:https://storage.googleapis.com/kaggle-forum-message-attachments/1129/mdm_galaxy_training_11_restored.png",0,None,2 ,Sun Jun 05 2011 14:46:39 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/633,/competitions/mdm,44th
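One detail worth checking in the angle calculation above: tan(2*theta) = e2/e1 only determines 2*theta up to 180 degrees, so theta is ambiguous by 90 degrees unless the signs of e1 and e2 are used to pick the quadrant. A minimal sketch in R, using the ground-truth values for galaxy 11 quoted in the next post (an assumption that those are the relevant values):

e1 <- -0.10228
e2 <- 0.290302
# principal arctan branch: reproduces the -35.3 degrees reported above
(180 / pi) * 0.5 * atan(e2 / e1)
# quadrant-aware version: atan2 uses the signs of e1 and e2
(180 / pi) * 0.5 * atan2(e2, e1)  # about +54.7 degrees, on the same side as the observed +48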
/vanushvaswani,Naive method not working?,"Hello, I am an undergrad but this competition sounds interesting as I want to learn about image processing. Well, firstly, I decided I wanted to get some answers that are at least close to the training solution so I would know whether I'm in the correct ballpark. What I did:
Load training galaxy
Load training star
Crop training star and subtract 55
Remove noise from training star using medfilt2
mat2gray training star
Lucy deconvolution of (galaxy minus 90) and the training star (now a PSF)
Remove noise from deconvolved image
Calculate UWQM.
The ellipticity values I get for training #11 are e1 = -0.1602 and e2 = -0.4566. This is way off the actual values of e1 = -0.10228, e2 = 0.290302, which give an ellipse in a completely different direction. I'm just wondering where I have made an incorrect assumption? Please be kind :P [Link]:https://storage.googleapis.com/kaggle-forum-message-attachments/1130/11_test.png [Link]:https://storage.googleapis.com/kaggle-forum-message-attachments/1131/11_psf.png",0,None,6 ,Sun Jun 05 2011 16:49:40 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/634,/competitions/mdm,47th /katardin,What if...,Someone won by doing randint for all of the predictions. That would blow all of your little statistician minds.,0,None,1 Comment,Sun Jun 05 2011 23:48:26 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/635,/competitions/hhp,None /salimali,plotting the leaderboard in R,I cobbled together some R code that will plot the live leaderboard and show you where you are. If you can enhance this or make it more efficient then please let us know. [Link]:http://anotherdataminingblog.blogspot.com/2011/06/scraping-up-leaderboard.html,0,None,9 ,Mon Jun 06 2011 23:58:42 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/636,/competitions/hhp,1st /peterkorinis,Neural Network Software Packages,"I apologize in advance if this is a stupid question. It seems that a neural network would be ideally suited to this type of problem, though it might not provide the accuracy needed to win. I have read all the posts in the forum, many centered around sophisticated analytical tools like R ... but I've seen nothing on the use of neural network packages. What am I missing? Is anyone using NN software? Why not?",0,None,7 ,Tue Jun 07 2011 00:59:13 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/637,/competitions/hhp,None /jeremya,"As far as we can tell, the early leaders don't have a background in healthcare...","Looking to connect with any others working on this prize who do have a background in healthcare. My profile: http://www.heritagehealthprize.com/users/8752/jeremya 12+ years of healthcare performance management and accountability decision support using financial, statistical and clinical data. Any folk with similar experiences working on this problem? Please post a link to your HHP profile here. Thx Jeremy",0,None,6 ,Wed Jun 08 2011 03:04:17 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/638,/competitions/hhp,1184th /seyhan,Target Attribute of Training dataset?,"Hi, I have to say at the outset that I am coming from the machine learning side of data mining and have never been involved in image mining before - apart from taking a theoretical multimedia data mining subject at uni, covering things such as image processing to convert image(s) into tabular form and then apply them to a ML model. I find the competition very interesting and would like to participate in it. But I have a problem.
The issue for me is to identify the target attribute in the training dataset. They are all images. I read some explanations of e1, e2 (I think they are eigenvalues of the given images). But what if I miscalculate e1 and e2 in the first place? Then my model would predict the values of e1, e2 incorrectly, since the target attributes for training were calculated incorrectly. Best Regards, Seyhan",0,None,2 ,Wed Jun 08 2011 05:18:58 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/640,/competitions/mdm,None /andydm,"Submission status - ""Pending"", score not calculated","Hi, Yesterday I made my third submission and am still waiting for the score results. With submissions #1 and #2 the score was calculated immediately, but now the status is ""Pending"" and not changing. Today I re-submitted my solution, but with the same result. I have downloaded the files back and compared them with the previously uploaded solutions - nothing is different (except the values, of course). The separator is "","", the decimal symbol is ""."", 60001 rows. What can be wrong? AndyDm.",0,None,2 ,Wed Jun 08 2011 07:25:59 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/641,/competitions/mdm,52nd /ssrc9486,"Are the members table, claims table, DIH tables different in Release 3?","It is not just the labs table and prescriptions/drugs table that have been released, but also all the other tables. Is this because the other tables have been updated? Should I only use the claims table from release 3, or can I continue using the one from release 2, to which I have added all my analyses? Thank you! Sam",0,None,3 ,Wed Jun 08 2011 13:09:15 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/642,/competitions/hhp,1333rd /ssrc9486,DrugCount? count of drugs or prescriptions?,"Is the drug count referring to 1. the count of drug types taken over the year (e.g. 3 = aspirin and inhaler and hay fever tablets), or 2. the number of prescriptions over the year (e.g. 3 = 2 paracetamol prescriptions and 1 inhaler prescription)? The HPN data files page says the RxTable (which I assume is the DrugCount table) contains ""certain details of prescriptions filled by members"" - is there not an automatic record of their count of prescriptions/drugs? If it is 2. then it seems a bit much to ask patients to recall how many prescriptions they had over the past year. Thank you!",0,None,2 ,Wed Jun 08 2011 13:18:16 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/643,/competitions/hhp,1333rd /vanushvaswani,A Primer on Machine Learning,"I notice people are getting good scores by incorporating 'machine learning' in this problem. Unfortunately, I have no experience in this field. Would anyone be so kind as to recommend a good primer on it?",0,None,3 ,Wed Jun 08 2011 15:07:57 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/644,/competitions/mdm,47th /daggerfs,Convolution kernel types?,"Do we have any principled knowledge about the convolution kernel used to blur the images, other than the pixelized star images?
I am thinking of an isometric Gaussian (or a skewed Gaussian), but am wondering if this is the right assumption.",0,None,4 ,Wed Jun 08 2011 19:44:43 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/645,/competitions/mdm,30th /jldml7810,Using Weka on large data-sets,"Hi, Could someone share ideas on how to train the algorithms implemented in WEKA on large data-sets and not have it run out of memory? Is it possible? Right now, the largest training data-set size I have managed is 2000 instances, and that's not serving very well. Any help or suggestions in this regard would be appreciated. Thanks in advance!",0,None,5 ,Wed Jun 08 2011 21:28:35 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/646,/competitions/hhp,837th /pabloruggia1,Forum Rank Feature,"Hi ! I noticed that some posts show the rank of the user in the competition. For example, for Chris Raimondi, it shows that he is in second place in the hhp competition. But for others, there is no rank even if they have made submissions; for example Zach, who is currently 112, doesn't show any rank in his forum posts. Is this a bug or a feature (do you only show the rank for the top X people)? Thanks !",0,None,3 ,Wed Jun 08 2011 23:38:38 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/647,None,None /mattfornari,Cases missing Sex and Age code in release 3,"Just started working on this, so sorry if this has been addressed already. Why are so many cases missing Sex codes? Around 15% are missing, which seems odd for such an important and easily discerned variable. Is there a procedural reason they are missing? Additionally, is there any reason behind cases missing Age codes? Missing cases for both seem like significant predictors. [Link]:https://storage.googleapis.com/kaggle-forum-message-attachments/1132/agebysex.jpeg",0,None,26 ,Thu Jun 09 2011 22:57:32 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/648,/competitions/hhp,61st /rickyjames,Bad Data ???,"I have waited until the Release 3 data set was out to start working on this contest. I haven't made a submission yet because I have instead been taking a deep dive into the R3 data set to check it out and get a feel for it. Now I am at a halt because I am convinced the R3 data set has serious inconsistency problems. Try this. Open DaysInHospital_Y3 in Excel and do a sort on MemberID, smallest to largest. Verify that MemberIDs 14552 and 68150 are not even listed as valid entries for Y3, that MemberIDs 18190 and 55259 spent zero days in the hospital, and that MemberID 55920 spent 5 days in the hospital. Designate these MemberIDs as Exhibit A. Next verify that HHP is saying that MemberID 416310 spent a single day in the hospital during Y3. Designate this as Exhibit B. Now verify that HHP is saying that MemberIDs 158589, 314883, 320038, and 463091 all spent a single day in the hospital as well during Y3. Designate these MemberIDs as Exhibit C. Now let's look at the claims data for these ten MemberIDs (just how to accomplish this is up to you...)
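A minimal sketch in R of one way to run the cross-check described above - column names follow the data dictionary, and collapsing the LengthOfStay strings into day counts is simplified here to just flagging whether any hospital stay was claimed:

claims <- read.csv("Claims.csv", stringsAsFactors = FALSE)
dih3   <- read.csv("DaysInHospital_Y3.csv", stringsAsFactors = FALSE)

# members with at least one Y3 claim reporting a length of stay
y3stay <- unique(claims$MemberID[claims$Year == "Y3" & claims$LengthOfStay != ""])

# members the DaysInHospital_Y3 file says were hospitalized
y3dih <- dih3$MemberID[dih3$DaysInHospital > 0]

# discrepancies in both directions
length(setdiff(y3stay, y3dih))  # claims report a stay, DIH file says none (or member missing)
length(setdiff(y3dih, y3stay))  # DIH file reports a stay, but no matching claim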
In the total set of all claims made by Exhibit A MemberIDs, all five show a single hospitalization claim for one day in Y3:
14552 2206422 505451 18175 Y3 Internal Urgent Care 30 1 day 1- 2 months GIBLEED 0 SDS 0
18190 3584092 593413 93067 Y3 Surgery Ambulance 73 1 day 2- 3 months NEUMENT 1-2 SDS 0
55259 9311197 168707 30569 Y3 Surgery Ambulance 0 1 day 8- 9 months ARTHSPIN 0 SDS 0
55920 9121540 523791 58880 Y3 Emergency Urgent Care 65 1 day 1- 2 months HEART2 0 SDS 0
68150 7520858 441329 821 Y3 Internal Outpatient Hospital 29 1 day 0- 1 month ARTHSPIN 0 SRS 0
For MemberID 416310 in Exhibit B, there are two separate claims in Y3 showing one day of hospitalization each, for a total of two days (note one claim is SDS, the other is SCS):
416310 8253892 258154 97143 Y3 Internal Urgent Care 31 1 day 0- 1 month TRAUMA 0 SDS 0
416310 8253892 258154 97143 Y3 Internal Urgent Care 31 1 day 0- 1 month TRAUMA 0 SCS 0
Finally, for the MemberIDs of Exhibit C, there are no claims at all that show any hospitalization in Y3. In fact, for these four MemberIDs there are no Y3 claims of any kind, only Y1 and Y2 claims. Bottom line: for none of these MemberIDs do the Claims data and the DaysInHospital data match up right. This is only ten examples; I logged over 1300 such discrepancies in Y3 alone before I quit counting. Can somebody from Heritage Health please verify this mishmash is what we are really truly supposed to be analyzing? Or am I TOTALLY off base here somehow?",1,bronze,6 ,Mon Jun 13 2011 22:53:37 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/651,/competitions/hhp,861st /doc555,Making a Submission - Order Important?,"I have made several submissions and gotten what I perceive to be very puzzling results. Do you have to make your submissions in the same order as the sample? I figured that if you entered a submission and had the patient ID number in the first column, claims truncated in the 2nd column and your prediction in the 3rd column, it didn't matter what order your IDs were in. If you have to make your submissions in the same order as the sample, my results begin to make sense to me. It could be that I am hopelessly lost, but after very discouraging initial results, I have vastly simplified my model, and I really can't imagine it is less predictive than a constant value. If it is, c'est la guerre...",0,None,5 ,Wed Jun 15 2011 15:20:02 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/652,/competitions/hhp,418th /jeffmoser,Forum Suggestions,"The competition forums have been really important for sharing ideas, but I realize there are some bugs/issues/annoyances people are facing with the forums, as well as a lack of certain features. I'd like to consolidate all the suggestion discussion into this topic to make sure I haven't lost track of things. Here's what I have so far:
Make sure text never overflows in a forum (Fixed: 16 Jun 2011)
Better copy and paste, especially from Excel (Update: Seems to be a Chrome issue as IE works ok)
Improve formatting of quick reply (Fixed: 16 Jun 2011)
Indicate unread posts (Partial: 17 Jun 2011)
Make it easier to include images in posts
Improve R syntax highlighting
Better search features
Ability to ignore topics
Are there other ones you think are important? Please feel free to add them here.
Thanks!",1,None,4 ,Wed Jun 15 2011 19:04:55 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/653,None,None /chrisraimondi,Why so quiet?,"Why so quiet - what is everyone up to? I am starting to get annoyed at my CPU fan and hard drive noise. I can sort of tell where in the code my CPU is based on the CPU fan noise. Also I think I am going to get an SSD drive. [Would probably also help if it wasn't located three feet from my head] Other than that - trying to find new features and better algos and trying to clean up and organize code. Just really getting started with RStudio - I think I like it - it should be much neater than the 16,000-line disorganized TextPad file I am working with now. I want to try doing some multicore R stuff - so far I have just been manually launching multiple instances. And then maybe even give the Amazon EC2 stuff a try. How about the rest of you - any goals/objectives/frustrations you care to share?",0,None,13 ,Mon Jun 20 2011 20:04:50 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/654,/competitions/hhp,20th /jldml7810,LengthOfStay values observation,"Hi, So far I haven't been able to locate any data instance with a LengthOfStay value of 8-12 weeks or 12-26 weeks. Could someone tell me if this is a correct observation? And if it isn't, could you please point me to a data instance (as in, with values for all the Claims attributes or whichever additional attributes you may choose to include - it helps in making searching easier :-) )? Thank you.",0,None,3 ,Tue Jun 21 2011 18:43:29 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/655,/competitions/hhp,837th /sja5779,Publication and Usage of the Dataset,"It seems that this competition has very restrictive license terms. Can anyone answer the following questions? 1. Can one publish or give a talk on the method developed for or used in the competition? 2. Can one use the dataset for other research work in academia apart from the competition? Thank you.",0,None,1 Comment,Tue Jun 21 2011 21:47:00 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/656,/competitions/hhp,138th /blacksou,Can we use information from Y4?,Each time you make a submission the prediction error rate gives you useful information about Y4 so is it ok to use this information?,0,None,1 Comment,Wed Jun 22 2011 11:46:01 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/657,/competitions/hhp,622nd /chrisraimondi,Getting pretty fancy with the new features....,Cool stuff. I like the notes on the leaderboard! FWIW - a little trick I learned to prevent the CSS caching issue... Just rename the CSS every time you update it - that way it forces the browser to download the new version.,0,None,3 ,Fri Jun 24 2011 00:25:29 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/659,/competitions/hhp,20th /jeffmoser,Benchmark Suggestions?,"Today I updated the site to allow for multiple benchmarks on the leaderboard to help give you an idea of where your submissions rank relative to some basic ideas (i.e. submitting the example entry, submitting an entry with all zeros, etc.). In addition, each benchmark has a special graphical designation (to stand out) and a brief description of how you can get that benchmark score. Do you find these benchmarks to be helpful?
If so, can you think of additional ones that I should add? I'm definitely not looking for anything that would give away a big discovery of yours, but just some basic techniques to fill a few more spots on the leaderboard. This question is slightly related to the "" [Link]:http://www.heritagehealthprize.com/c/hhp/forums/t/523/interesting-submissions-with-scores/3195#post3195"" forum topic, but in this case I'm looking for basic techniques that can be completely described in a sentence or two and are easily reproducible using a variety of tools (Excel, R, etc.).",0,None,15 ,Fri Jun 24 2011 01:44:07 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/660,/competitions/hhp,None /jeffmoser,"The ""Optimized Constant Value"" Benchmark","(NOTE: The description below is not new insight, but rather is an expansion upon an [Link]:/c/hhp/forums/t/523/interesting-submissions-with-scores/3695#post3695 by [Link]:http://www.kaggle.com/users/971/allan-engelhardt. This post is effectively notes I took while reading his post. All of the credit goes to him for this technique. If I've made a mistake, all the errors within belong to me.) I've just added an ""Optimized Constant Value Benchmark"" to the leaderboard which represents approximately the best score you can get with a constant value of approximately 0.209179. This approach has been used by several competitors and many have exceeded it already. To understand this benchmark, we need to look at the evaluation metric for this competition, which is the Root Mean Squared Logarithmic Error (RMSLE):

$\epsilon = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\log(p_i+1)-\log(a_i+1)\right)^2}$

Where:
$\epsilon$ is the RMSLE value (score)
$n$ is the total number of members
$p_i$ is your predicted DaysInHospital value for member $i$
$a_i$ is the actual DaysInHospital value for member $i$
$\log(x)$ is the natural logarithm of $x$

We'd like to know an optimal constant value $p$ such that if we make a submission where all $p_i = p$, we'll get a decent score on the leaderboard for the RMSLE evaluation. This would roughly be the best possible score you could get without looking at any individual member data, because this approach tries to identify an ""average"" member. One good place to start in the process of finding an optimal $p$ is to try $p = p_i = 0$. This is exactly what the ""All Zeros Benchmark"" does. I'll call this score $\epsilon_0$. If we look at how it's calculated, and use the fact that $\log(0+1) = 0$, we see:

$\epsilon_0 = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\log(0+1)-\log(a_i+1)\right)^2} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\log(a_i+1)^2}$

To keep things compact, I'll use the ""bar"" notation to denote the mean (average). That is, $\bar{P} = \frac{1}{n}\sum_{i=1}^{n} p_i$. This gives us:

$\epsilon_0 = \sqrt{\overline{\log(a_i+1)^2}}$, i.e. $\epsilon_0^2 = \overline{\log(a_i+1)^2}$

By looking at the public leaderboard, we see that $\epsilon_0 \approx 0.522226$:

$\epsilon_0^2 = 0.522226^2 \approx 0.272720 = \overline{\log(a_i+1)^2}$

Thus, we can figure out the average squared logarithm value (plus 1) for the public leaderboard's $a_i$'s. Things get interesting if we compare another constant submission. The leaderboard also has an ""All 15's Benchmark"", which is the score obtained by submitting a constant value of $p = p_i = 15$ for each member.
Let's call this score $\epsilon_{15}$:

$\epsilon_{15} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\log(16)-\log(a_i+1)\right)^2}$

Expanding the square inside the sum gives:

$\epsilon_{15} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\log(16)^2 - 2\log(16)\log(a_i+1) + \log(a_i+1)^2\right)}$

We can square both sides and use our ""bar"" notation for averages:

$\epsilon_{15}^2 = \overline{\log(16)^2} - \overline{2\log(16)\log(a_i+1)} + \overline{\log(a_i+1)^2}$

We know that the average of a constant value is the constant itself, and we learned earlier that $\epsilon_0^2 = \overline{\log(a_i+1)^2}$, so this simplifies to:

$\epsilon_{15}^2 = \log(16)^2 - 2\log(16)\overline{\log(a_i+1)} + \epsilon_0^2$

Rearranging terms and dividing each side by $2\log(16)$:

$\overline{\log(a_i+1)} = \frac{\log(16)^2 + \epsilon_0^2 - \epsilon_{15}^2}{2\log(16)}$

We'd really like to solve for $\bar{a}_i$. This is where we step into some dangerous territory from a mathematical perspective. We can apply $\exp(x) = e^x$ to each side, but this approach is misleading because it won't really tell us the true $\bar{a}_i$ but rather a somewhat optimal value for an average based on our error metric. We'll blissfully ignore this for now and press on:

$\bar{a}_i = \exp\left(\frac{\log(16)^2 + \epsilon_0^2 - \epsilon_{15}^2}{2\log(16)}\right) - 1$

Again, by looking at the leaderboard, we know that $\epsilon_0 \approx 0.522226$ and $\epsilon_{15} \approx 2.628062$. Thus, we can calculate $\bar{a}_i$:

$\bar{a}_i = \exp\left(\frac{\log(16)^2 + 0.522226^2 - 2.628062^2}{2\log(16)}\right) - 1 \approx 0.209178645003481$

What's really interesting about this approach is that you can use any other non-zero constant value ($p$) submission along with its public leaderboard error ($\epsilon_p$) and you should get the same ""optimized"" constant value by calculating:

$\bar{a}_i = \exp\left(\frac{\log(p+1)^2 + \epsilon_0^2 - \epsilon_p^2}{2\log(p+1)}\right) - 1$

Now, a few words of caution: It's critical to realize that the calculated $\bar{a}_i \approx 0.209179$ value is not the real mean value for DaysInHospital. Instead, it is a constant value that is useful only in the context of this contest's evaluation metric. A better approximation for the real mean value can be obtained by finding the mean of Y2 and Y3. Note that both $\epsilon_0$ and $\epsilon_{15}$ came from the public leaderboard values, which represent approximately 30% of the solution set. The other 70% is used in the private leaderboard and thus will be used to calculate your real final scores. Thus, there is a chance that this ""optimized"" constant would be different had we calculated it based on the complete solution set. Just by looking at the leaderboard, you can tell that a constant value submission will not put you in the running for any prizes, so you'll actually have to look at the data :) Regardless, I thought this technique was a clever approach to the data. Thanks again to Allan for [Link]:/c/hhp/forums/t/523/interesting-submissions-with-scores/3695#post3695.",1,None,5 ,Fri Jun 24 2011 23:29:05 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/661,/competitions/hhp,None
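The closed-form result at the end of the post above is easy to verify numerically; a minimal sketch in R, plugging in the two public benchmark scores quoted in the post:

eps0  <- 0.522226   # "All Zeros Benchmark" score
eps15 <- 2.628062   # "All 15's Benchmark" score
p <- 15
a_bar <- exp((log(p + 1)^2 + eps0^2 - eps15^2) / (2 * log(p + 1))) - 1
a_bar  # approximately 0.209179, the constant used by the benchmark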
/ccccat,GREAT10,"I am wondering if anybody decided to participate in GREAT10 directly. The data set looks so big that I am not sure I have the hardware to handle it.",0,None,6 ,Sun Jun 26 2011 18:01:59 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/663,/competitions/mdm,2nd /boooeee,Cross Validation Discrepancies,"I'm finding that there seems to be a consistent gap between what I would expect my leaderboard score to be (using cross validation) and my actual score. For example, my most recent (and best) score is from a fairly straightforward Random Forest model. The leaderboard score is 0.468123. However, when I use the default cross-validation approach from that Random Forest model (average error on the OOB samples), I get an expected score of 0.4506299. I'm still a bit of a novice when it comes to machine learning, but I was wondering if anybody else was experiencing a similar gap. Or is this just systemic to cross-validation? Anybody getting better scores than what they expect?",0,None,17 ,Sun Jun 26 2011 21:45:48 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/664,/competitions/hhp,16th /chrisraimondi,Contribute an R Function,"I thought it might be nice to have a way to share R functions that are useful for this contest. So I will start with a simple one... This is a function you can use before submitting your prediction - it corrects any prediction less than 0 or more than 15. You can also change it as needed.
cutOff <- function(x, y=0, z=15){
  x <- ifelse(x < y, y, x)
  x <- ifelse(x > z, z, x)
  x
}
> -15:20
 [1] -15 -14 -13 -12 -11 -10  -9  -8  -7  -6  -5  -4  -3  -2  -1   0   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16
[33]  17  18  19  20
> cutOff(-15:20)
 [1]  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 15 15 15 15 15
> cutOff(-15:20,0.01,14.99)
 [1]  0.01  0.01  0.01  0.01  0.01  0.01  0.01  0.01  0.01  0.01  0.01  0.01  0.01  0.01  0.01  0.01  1.00  2.00  3.00  4.00  5.00
[22]  6.00  7.00  8.00  9.00 10.00 11.00 12.00 13.00 14.00 14.99 14.99 14.99 14.99 14.99 14.99",1,bronze,39 ,Mon Jun 27 2011 22:51:41 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/666,/competitions/hhp,20th /gestaltgeber,1.47708,"Under Information you find the following: Our own internally developed prediction model scores an RMSLE of 1.47708. Your submission should at a minimum beat this prediction to be eligible (see our Rules as well). On the Leaderboard you can find: Optimized Constant Value Benchmark - This entry represents the best possible score you can probably get using a constant value of 1.750998229. This constant value was derived by analyzing some characteristics of the evaluation metric. It represents the best possible score you're likely to get without actually analyzing the data. ... with an RMSLE of 1.47708. Does this mean Wikipedia's internal prediction model is a constant value? Or are there some numbers mixed up?",0,None,1 Comment,Tue Jun 28 2011 19:11:36 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/667,/competitions/wikichallenge,None /jeffmoser,Importing the data into SQL Server,"While preparing the dataset, we used both MongoDB and SQL Server to get a good feel for the types of tools people might use to store the data for this competition.
Here's an example of my SQL Server schema:
USE [wikichallenge]
GO
CREATE TABLE [categories](
  [category_id] [tinyint] NOT NULL,
  [category] [varchar](17) NOT NULL
);
CREATE TABLE [comments](
  [revision_id] [int] NOT NULL,
  [comment] [nvarchar](257) NOT NULL
);
CREATE TABLE [namespaces](
  [namespace_id] [tinyint] NOT NULL,
  [namespace] [varchar](14) NOT NULL
);
CREATE TABLE [titles](
  [article_id] [int] NOT NULL,
  [category] [tinyint] NOT NULL,
  [timestamp] [datetime] NOT NULL,
  [namespace] [tinyint] NOT NULL,
  [redirect] [bit] NOT NULL,
  [title] [nvarchar](247) NULL,
  [related_page] [int] NULL
);
CREATE TABLE [training](
  [user_id] [int] NOT NULL,
  [article_id] [int] NOT NULL,
  [revision_id] [int] NOT NULL,
  [namespace] [tinyint] NOT NULL,
  [timestamp] [datetime] NOT NULL,
  [md5] [varchar](32) NULL,
  [reverted] [bit] NOT NULL,
  [reverted_user_id] [int] NULL,
  [reverted_revision_id] [int] NULL,
  [delta] [int] NOT NULL,
  [cur_size] [int] NOT NULL
);
GO
-- Make implied NULLs actual NULLs
UPDATE titles SET related_page = NULL WHERE related_page = -1;
UPDATE training SET md5 = NULL WHERE md5 = '-1';
UPDATE training SET reverted_user_id = NULL WHERE reverted_user_id = -1;
UPDATE training SET reverted_revision_id = NULL WHERE reverted_revision_id = -1;
GO
-- Optionally add primary keys
ALTER TABLE categories ADD CONSTRAINT PK_categories PRIMARY KEY CLUSTERED (category_id);
ALTER TABLE comments ADD CONSTRAINT PK_comments PRIMARY KEY CLUSTERED (revision_id);
ALTER TABLE namespaces ADD CONSTRAINT PK_namespaces PRIMARY KEY CLUSTERED (namespace_id);
ALTER TABLE titles ADD CONSTRAINT PK_titles PRIMARY KEY CLUSTERED (article_id);
ALTER TABLE training ADD CONSTRAINT PK_training PRIMARY KEY CLUSTERED (revision_id);
GO
To actually perform the import, I just used a simple right click on the database name followed by ""Import Data..."" and followed the wizard steps for a flat, tab-delimited file. If you use a different SQL database, you'll probably have to follow similar steps. My hope is that the provided schema might give you a starting point. Feel free to ask any follow-up questions.",5,bronze,14 ,Tue Jun 28 2011 20:23:40 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/668,/competitions/wikichallenge,None /dansbecker,Submission error,"Anyone else getting an error ""Field index must be included in [0, FieldCount[. Specified field index was : '2'. Parameter name: field Actual value was 2.""? Looking at my submission file, I don't see anything that looks obviously in error (e.g. extra quotation marks, mislabeled fields, etc.)",0,None,1 Comment,Wed Jun 29 2011 14:49:02 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/669,/competitions/hhp,2nd /dzafarsadik,Error evaluation ,"Could you please give a formula for Root Mean Squared Logarithmic Error. Thanks",0,None,1 Comment,Wed Jun 29 2011 22:05:01 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/670,/competitions/wikichallenge,None
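For the formula requested just above: it is the same RMSLE defined in ""The Optimized Constant Value Benchmark"" post earlier in this thread, applied here to edit counts rather than days in hospital:

$\epsilon = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\log(p_i+1)-\log(a_i+1)\right)^2}$

where $p_i$ is the predicted value and $a_i$ the actual value for user $i$.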
/byang1,Useless data columns ?,"First of all, I suggest breaking the large training.tsv into 5 or 10 smaller files. Extracting a 2GB file, loading it into an editor, and jumping around to random lines aren't exactly easy on most computers today. Can I assume MD5 will be useless for prediction and therefore a big waste of space? Is reverted_user_id -1 only when reverted is 0? If so, the reverted flag can be removed to save space too.",0,None,9 ,Wed Jun 29 2011 23:22:48 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/671,/competitions/wikichallenge,None /starakaj,Other data,"We're thinking that much of what motivates people to edit wikipedia comes from external factors, like world news events and major media releases. What external data are we allowed to take into account in building our model?",0,None,1 Comment,Thu Jun 30 2011 07:11:43 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/672,/competitions/wikichallenge,None /pwfrey42,Supreme Court Ruling,Doesn't the recent Supreme Court decision in respect to free speech and the release of pharmacy data raise questions on whether HHP needs to take such draconian measures in respect to releasing pharmacy data for the contest? The pharmacy data would be highly predictive and would improve the forecasts significantly.,0,None,7 ,Thu Jun 30 2011 18:07:43 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/673,/competitions/hhp,10th /mebeid,Sampling approach,"Would it be possible to provide details about the sampling approach? Empirically, it would appear that the sampled editor population reflects ""survivorship bias"". Two observations that support this possibility: 1. The number of editors whose first edit date is a recent date far surpasses those whose first edit date is a more distant (or very distant) date. For example, of the total sample of 44,514 editors: 17,524 have a first edit date in the included 8 months of 2010, while only 11,625 have a first edit date in all 12 months of 2009. So unless the true number of new editors is increasing substantially, it would appear that the sample may over-represent more recently enrolled editors. 2. For the 6 months from Nov. 2009 to April 2010, the mean number of edits in the subsequent 5 months trends lower every month for ""eligible"" editors (i.e., editors with a first edit date prior to the month of analysis). It seems likely that this is an artifact of the sampling approach rather than a true trend. See results below. In other words, it seems likely that the reason the average # of subsequent 5-month edits for eligible editors as of 11/1/2009 is much higher than for eligible editors as of 4/1/2010 (87 vs 61) is that the 4/1/2010 population includes more newly enrolled editors than does the 11/1/2009 population. Information on the sampling approach would likely help competitors make proper use of the data. Thanks for your consideration.
As-of-date, Eligible-editors, Avg-edits-next-5-months
4/1/2010, 33839, 60.62
3/1/2010, 31457, 65.89
2/1/2010, 29287, 70.11
1/1/2010, 26990, 76.49
12/1/2009, 24987, 80.45
11/1/2009, 22804, 86.77",2,bronze,14 ,Fri Jul 01 2011 07:18:08 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/674,/competitions/wikichallenge,None /sashikanthdareddy,Validation dataset?,It would be great if the organisers could make available a small validation dataset.,4,bronze,12 ,Sun Jul 03 2011 10:52:55 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/675,/competitions/wikichallenge,31st /uriblass,what is the minimal difference in the leaderboard that is significant?,In other words what is the minimal difference such that you can be sure with 95% certainty that if A is better than B in the leaderboard then A is also better than B in the real table. It would be nice if the organizers answered this.,0,None,1 Comment,Sun Jul 03 2011 16:06:01 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/676,/competitions/hhp,340th /ccccat,RMSE precision,Is it possible to add one or two extra digits to the displayed RMSE?,0,None,5 ,Sun Jul 03 2011 21:34:16 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/677,/competitions/mdm,2nd /mkwan7977,AgeAtFirstClaim definition,"Is the AgeAtFirstClaim variable in Members.csv the age of the member in Y1, or their age in the year of their first claim? In other words, say you had two members, both with AgeAtFirstClaim = 20-29. The first member has claims in Y1, Y2 and Y3, so they are presumably aged 23-32 in Y4. The second member only has Y3 claims. In Y4 are they aged 23-32 or 21-30?",0,None,2 ,Mon Jul 04 2011 05:51:33 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/678,/competitions/hhp,17th /sabbiryousufsanny,Data Download,"Is it possible to make the data downloadable using torrents? Downloading a 1GB file is troublesome for slow connections, especially when an interruption means I have to re-download. If torrents are not an option, you could at least make 5-10 smaller segments of the whole data.",0,None,6 ,Mon Jul 04 2011 08:25:32 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/679,/competitions/wikichallenge,None /frandom,"Submission Error ""an item with the same key...""","Hi - trying to make a submission tonight but no luck; I am getting an error message ""an item with the same key has already been added"". The submission is in an identical format to my previous submissions, and I have checked that the filename is different from all previous submissions. Jeff M - please investigate/resolve",0,None,3 ,Thu Jul 07 2011 01:48:40 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/681,/competitions/hhp,144th /titatum,Data on user enrollment time,"Hi, Is it possible to have data on user enrollment times? Or is it safe to assume that all sampled users in the training set joined Wikipedia before September 2009? Thanks!",0,None,10 ,Thu Jul 07 2011 04:28:13 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/682,/competitions/wikichallenge,84th /lipiji,Data in Chinese ,"hi, I am a student from China.
There are some unusual data here. For instance, in Members.csv there are many values like “10月19日” (“October 19”) in the ""AgeAtFirstClaim"" column; it is probably meant to be ""10-19"", but I have no idea why, because all the other data is normal. In Claims.csv, in the ""CharlsonIndex"" column, all the data except ""0"" look like ""1月2日"" (“January 2”) or “3月4日” (“March 4”); do those mean ""1-2"" and ""3-4""? What is the format of these cells in the .csv? Thanks a lot.",0,None,3 ,Fri Jul 08 2011 02:58:39 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/683,/competitions/hhp,1041st /astrotom,Quadrupoles,"There was a slight inconsistency in the normalisation of the quadrupole moments equation (e1,e2) on this webpage http://www.kaggle.com/c/mdm/Details/Ellipticity (5th equation down), which has now been updated. This does not change any results on the leaderboard, but anyone using unweighted quadrupole moments based purely on the equations on this page should now take this into account.",0,None,1 Comment,Fri Jul 08 2011 18:01:42 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/684,/competitions/mdm,None /baoqiang,what is an edit?,"Sorry to ask such a trivial question about predicting how many edits a user makes in 5 months: what are the criteria for something to be counted as one edit? If a user visits an article he/she edited, would it be counted as one edit regardless of whether it is a revision, a new entry, or simply viewing the article? Is that the right interpretation? Thanks!",0,None,1 Comment,Fri Jul 08 2011 22:38:57 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/685,/competitions/wikichallenge,48th /woshialex,has anybody really improved the score by using the Star file ?,I found that the star file is totally useless. Has somebody used the star file and gotten a higher score than without it? Thanks!,0,None,20 ,Sat Jul 09 2011 03:45:33 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/687,/competitions/mdm,6th /roobs5218,row ordering for submissions?,"Hi kaggle admins, I'm assuming that submitted predictions must have their rows ordered by increasing user id. Is this correct? If so, it might be worth explicitly mentioning it in a few places, e.g. on the info page for submissions, and again on the submission upload page! Initially I assumed row order would not matter since we are uploading (user, prediction) pairs, but when my rows are not sorted I see about +0.8 error!",2,bronze,3 ,Sat Jul 09 2011 09:50:33 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/688,/competitions/wikichallenge,5th /sashikanthdareddy,comments table,"Hi, do all revision_ids in the training dataset have an associated comment in comments.tsv? If not, why not?",0,None,7 ,Sat Jul 09 2011 21:05:42 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/689,/competitions/wikichallenge,31st /ahassaine,Publication,"Hello, in addition to the method description we are supposed to provide, will the top-ranked participants be invited to publish in a special issue of a journal? Thanks, Ali",0,None,2 ,Sat Jul 09 2011 21:34:10 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/690,/competitions/mdm,3rd /pwfrey42,Data Privacy,"The objective for participants in the HHP contest is to forecast how many days persons in the database will spend in the hospital in the next year.
This outcome measure would seem to depend on how healthy the person is and on the person’s propensity for seeking medical attention for a perceived illness. The data provided by the sponsors include age and gender, but there is no information on family medical history, smoking history, dietary preferences, liquor consumption or exercise activity. Standard data from annual checkups such as blood test information (providing 40 or so measures), a list of drugs the person is taking, and physical measurements such as height, weight and waist circumference are also missing. Information on the individual's past history in seeking medical help is also very limited. Some individuals seek medical attention when they feel a minor twinge; others only seek medical assistance if they are incapacitated. One would think that the measures described above would be readily available to medical practitioners. If the participants had access to this information, their forecasts would be more accurate. One of the objectives of the contest is to determine if modern predictive analytical techniques can make useful medical predictions. Given this, why have the sponsors organized a contest that handicaps the participants by not providing relevant data? The sponsors have also mangled the data. Drug count and lab count are truncated, and length of stay (the outcome measure) has been converted into a non-linear numeric. These conversions degrade the estimate of the cost of future hospitalizations and ignore the value of methodologies that are effective with outliers. Much has been said about protecting the patients’ privacy, but research efforts in other areas, such as financial services, in which highly sensitive personal data is used, have been subject to a less draconian privacy stance. The person’s name, address, phone number and social security number have been removed from each record. In theory, an individual’s pattern of financial activity might be used to identify that person. However, the probability that a single record among several hundred thousand records could be linked to one of the several hundred million people in this country is extremely small. A cost-benefit analysis would surely indicate that improvements in healthcare and reductions in healthcare costs that could result from more sophisticated medical data processing would outweigh by many orders of magnitude the negative impact of an occasional identification of a person in the database.",0,None,10 ,Sat Jul 09 2011 22:05:38 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/691,/competitions/hhp,10th /analyticsguy,Algorithms and Problem Statements,"Hi Folks, This year a lot of you are participating in multiple competitions across different business domains. The Heritage is in healthcare, the Hearst Challenge is in publishing, and KDD was in music. I was wondering, based on your insights, do you think certain approaches/algorithms work better for certain domains, or do you feel there is no difference in the science? I personally belong to the second category and feel the differences come not from the appropriate science but more from the business context and data availability. Appreciate your thoughts. Regards, analyticsguy",0,None,2 ,Sun Jul 10 2011 21:48:21 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/692,/competitions/hhp,None /andywocky,feedback vs scoring data sets?,"I'm seeking some clarification regarding the Feedback Data Set (FDS) vs.
the Scoring Data Set (SDS) described in the rules, and I hope a fellow forum member or admin can help: Are the members in Target.csv the complete set of members in the FDS? Is FDS a subset of SDS? If so, and |FDS| = 70942, then this implies that SDS contains 70942/0.3 ≈ 236473 unique members, correct? Is the Milestone Prize awarded based on scoring using SDS, or SDS \ FDS (SDS excluding the FDS members)? The description on the Data page on the competition website suggests the latter: ""30% of the Y4 data is used to calculate the public scoreboard. The other 70% of the Y4 data is used to judge the final placements"" Thanks for the help! Andy",0,None,2 ,Mon Jul 11 2011 06:49:07 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/694,/competitions/hhp,265th /launeric,newbie R question,"Hi, I am new to R and I just want to load a package/library so I can use the function moments(). I am using RStudio via the Amazon Bioconductor AMI. Any help please? Thanks sincerely",0,None,3 ,Tue Jul 12 2011 01:34:11 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/697,/competitions/hhp,1278th /jeffmoser,Compressed Submissions,"I just added support for compressed submissions across all Kaggle competitions. This means that you can now take your .CSV submission file and optionally compress it using:
GZip - This is the standard on Linux/Unix (i.e. ""gzip mysubmission.csv""). The uploaded file extension must end with "".gz"" as in ""mysubmission.csv.gz""
ZIP - You can create a ZIP file that has only your submission CSV file inside of it. The uploaded file extension must end with "".zip"" as in ""mysubmission.zip""
In addition, I added ""sniffing"" code that looks at your submitted CSV (whether inside a compressed file or not) to see if you only submitted an ""essential"" column. For example, if a competition only uses one column for predictions (i.e. column 3), you have traditionally had to submit a file with 3 columns for it to be accepted (even if the first two columns are always ignored). The new ""sniffing"" code will sniff around to see if you only have one column (i.e. your file has no commas in it). If this is the case, then the submission processor will assume that that single column contains your predictions. Lastly, the sniffer will look to see if the first row contains headers. If the first row column values are all floating point numbers, then it will assume that you didn't submit a header row and will press on with those values. My hope is that these changes help people (especially those with slower connections) participate more easily in competitions as we continue to grow and have competitions with larger submissions. As always, let me know if you have any questions with these changes.",10,bronze,4 ,Wed Jul 13 2011 04:51:18 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/699,None,None
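As an illustration of the feature (a sketch, not official instructions), a submission can be gzipped straight from R; the data frame name pred is hypothetical:

con <- gzfile("mysubmission.csv.gz", "w")   # upload the resulting .csv.gz as-is
write.csv(pred, con, row.names = FALSE)
close(con)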
/jeffmoser,"Tip: consider using ""compressed submissions"" for this competition","Due to the relatively large submission files associated with this competition, you're all encouraged to use the new ""compressed submissions"" feature described in this post: [Link]:/forums/t/699/compressed-submissions/4559#post4559 I included two example submissions: a normal one and a ""compressed"" one. Using the compressed submissions should notably decrease your upload times.",4,bronze,4 ,Wed Jul 13 2011 05:37:45 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/700,/competitions/ClaimPredictionChallenge,None /chrisraimondi,Gini vs AUC,"I know I could probably look this up, but what is the practical difference (if any) between this and the AUC method used in the overfit competition?",0,None,3 ,Wed Jul 13 2011 07:07:28 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/702,/competitions/ClaimPredictionChallenge,48th /jeffmoser,Code to calculate NormalizedGini,"Since this is a new metric to Kaggle, I thought I'd share the code we use to calculate it (in C#):
using System;
using System.Collections.Generic;
using System.Linq;

public static class GiniMetric
{
    public static double Gini(this IList<double> a, IList<double> p)
    {
        if (a.Count != p.Count)
        {
            throw new ArgumentException();
        }

        var all = p.Zip(a, (pred, actual) => new { actualValue = actual, predictedValue = pred }) // (actual, prediction)
                   .Zip(Enumerable.Range(1, a.Count), (ap, i) => new { ap.actualValue, ap.predictedValue, originalIndex = i })
                   .OrderByDescending(ap => ap.predictedValue) // important to sort descending by prediction
                   .ThenBy(ap => ap.originalIndex); // secondary sort to ensure an unambiguous order

        var totalActualLosses = a.Sum();
        double populationDelta = 1.0 / a.Count;
        double accumulatedPopulationPercentageSum = 0;
        double accumulatedLossPercentageSum = 0;
        double giniSum = 0.0;

        foreach (var currentPair in all)
        {
            accumulatedLossPercentageSum += (currentPair.actualValue / totalActualLosses);
            accumulatedPopulationPercentageSum += populationDelta;
            giniSum += accumulatedLossPercentageSum - accumulatedPopulationPercentageSum;
        }

        var gini = giniSum / a.Count;
        return gini;
    }

    public static double GiniNormalized(this IList<double> a, IList<double> p)
    {
        return a.Gini(p) / a.Gini(a);
    }
}
Let me know if you have any questions about the code. Feel free to post your own ports of it to the languages of your choice.",35,bronze,24 ,Wed Jul 13 2011 15:49:34 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/703,/competitions/ClaimPredictionChallenge,None
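For those who want an R port of the C# above, here is one possible translation (an illustration, not the official scoring code):

gini <- function(a, p) {
  stopifnot(length(a) == length(p))
  ord <- order(-p, seq_along(p))    # descending by prediction; ties keep original order
  a <- a[ord]
  n <- length(a)
  cum.loss <- cumsum(a) / sum(a)    # accumulated loss percentage
  cum.pop <- seq_len(n) / n         # accumulated population percentage
  sum(cum.loss - cum.pop) / n
}
gini.normalized <- function(a, p) gini(a, p) / gini(a, a)

Note that the score depends only on the ordering of the predictions, so any strictly increasing transformation of p yields an identical normalized Gini.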
/lipiji,"CSV to Mysql errors","hi, because there are many values like ""A.1"" ""B.2"", MySQL may consider the ""A"" and ""B"" as table columns and I cannot import the data. How do I handle this problem? Thanks a lot. By the way, here is the SQL I used to create the table.
DROP TABLE IF EXISTS `train`;
CREATE TABLE `train` (
  `Id` int(11) NOT NULL auto_increment,
  `Household_ID` int(11) default NULL,
  `Vehicle` int(11) default NULL,
  `Calendar_Year` varchar(20) default NULL,
  `Model_Year` varchar(20) default NULL,
  `Blind_Make` varchar(11) default NULL,
  `Blind_Model` int(11) default NULL,
  `Blind_Submodel` varchar(20) default NULL,
  `Cat1` varchar(11) default NULL, `Cat2` varchar(11) default NULL, `Cat3` varchar(1) default NULL,
  `Cat4` varchar(11) default NULL, `Cat5` varchar(11) default NULL, `Cat6` varchar(11) default NULL,
  `Cat7` varchar(11) default NULL, `Cat8` varchar(11) default NULL, `Cat9` varchar(11) default NULL,
  `Cat10` varchar(11) default NULL, `Cat11` varchar(11) default NULL, `Cat12` varchar(11) default NULL,
  `OrdCat` varchar(20) default NULL,
  `Var1` float default NULL, `Var2` float default NULL, `Var3` float default NULL, `Var4` float default NULL,
  `Var5` float default NULL, `Var6` float default NULL, `Var7` float default NULL, `Var8` float default NULL,
  `NVCat` varchar(11) default NULL,
  `NVVar1` float default NULL, `NVVar2` float default NULL, `NVVar3` float default NULL, `NVVar4` float default NULL,
  `Claim_Amount` float default NULL,
  PRIMARY KEY (`Id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;",0,None,1 Comment,Thu Jul 14 2011 03:30:46 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/705,/competitions/ClaimPredictionChallenge,79th /cwilkes,"Are ""Categorical vehicle variables"" unique to each car type? ","Are the categories for Cat1->12 the same for all cars, or are they unique per vehicle type? To put it another way, is the first category always ""sedan or hatchback""? Or, if the car is a sedan, is the 2nd category the number of doors, while for hatchbacks it is the car color? The term ""Continuous vehicle variable"" to me implies that those are values based on the 12 categories, so you can't compare them one for one with another row unless it is the same make and model. That is to say, if the vehicle is a pickup truck the 1st column is the towing capacity, while if it is a motorcycle it is the size of the engine. But I can also see those columns varying. For example, the 2nd column could be used for the MPG if the 3rd column in the category was ""N"" for No Damage, but if there was damage to the car then it could be the dollar amount of that damage. And finally, for the ""Continuous non-vehicle variable"", are those the same across all rows? That is, is the first column always the number of children in the household?",0,None,1 Comment,Thu Jul 14 2011 08:17:44 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/706,/competitions/ClaimPredictionChallenge,None /dirknbr,Households,"Just in case someone has run this already, how many households from train are also in test, and how many are new?",1,None,1 Comment,Thu Jul 14 2011 11:49:31 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/708,/competitions/ClaimPredictionChallenge,52nd /dirknbr,submission,"The submission page says your entry must:
- be in CSV format
- have a header row as the first row
- have your prediction in column 2
Each predicted value must be a real number. That is, a real-valued number in the interval (-∞, ∞). But the example submission only has one column and no header - I assumed those are the row_ids.
Dirk",0,None,3 ,Thu Jul 14 2011 12:31:16 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/709,/competitions/ClaimPredictionChallenge,52nd /jeffmoser,Importing to SQL Server and Aggregate Statistics,"In case it helps others, I'm posting the SQL Server schema I used for this competition. I used this schema in conjunction with SQL Server's import data wizard (i.e. create a new database, then right click on that database and click 'Import Data'). This schema reflects a fairly compact representation of the data. In addition, I've included many aggregate statistics so that you can verify your import: USE ClaimPredictionChallenge; CREATE TABLE [test_set]( [Row_ID] [int] NOT NULL, [Household_ID] [int] NOT NULL, [Vehicle] [smallint] NOT NULL, [Calendar_Year] [smallint] NOT NULL, [Model_Year] [smallint] NOT NULL, [Blind_Make] [varchar](2) NULL, [Blind_Model] [varchar](6) NULL, [Blind_Submodel] [varchar](8) NULL, [Cat1] [char](1) NOT NULL, [Cat2] [char](1) NOT NULL, [Cat3] [char](1) NOT NULL, [Cat4] [char](1) NOT NULL, [Cat5] [char](1) NOT NULL, [Cat6] [char](1) NOT NULL, [Cat7] [char](1) NOT NULL, [Cat8] [char](1) NOT NULL, [Cat9] [char](1) NOT NULL, [Cat10] [char](1) NOT NULL, [Cat11] [char](1) NOT NULL, [Cat12] [char](1) NOT NULL, [OrdCat] [smallint] NOT NULL, [Var1] [real] NOT NULL, [Var2] [real] NOT NULL, [Var3] [real] NOT NULL, [Var4] [real] NOT NULL, [Var5] [real] NOT NULL, [Var6] [real] NOT NULL, [Var7] [real] NOT NULL, [Var8] [real] NOT NULL, [NVCat] [char](1) NOT NULL, [NVVar1] [real] NOT NULL, [NVVar2] [real] NOT NULL, [NVVar3] [real] NOT NULL, [NVVar4] [real] NOT NULL ); CREATE TABLE [train_set]( [Row_ID] [int] NOT NULL, [Household_ID] [int] NOT NULL, [Vehicle] [smallint] NOT NULL, [Calendar_Year] [smallint] NOT NULL, [Model_Year] [smallint] NOT NULL, [Blind_Make] [varchar](2) NULL, [Blind_Model] [varchar](6) NULL, [Blind_Submodel] [varchar](8) NULL, [Cat1] [char](1) NULL, [Cat2] [char](1) NULL, [Cat3] [char](1) NULL, [Cat4] [char](1) NULL, [Cat5] [char](1) NULL, [Cat6] [char](1) NULL, [Cat7] [char](1) NULL, [Cat8] [char](1) NULL, [Cat9] [char](1) NULL, [Cat10] [char](1) NULL, [Cat11] [char](1) NULL, [Cat12] [char](1) NULL, [OrdCat] [char](1) NULL, [Var1] [real] NOT NULL, [Var2] [real] NOT NULL, [Var3] [real] NOT NULL, [Var4] [real] NOT NULL, [Var5] [real] NOT NULL, [Var6] [real] NOT NULL, [Var7] [real] NOT NULL, [Var8] [real] NOT NULL, [NVCat] [char](1) NULL, [NVVar1] [real] NOT NULL, [NVVar2] [real] NOT NULL, [NVVar3] [real] NOT NULL, [NVVar4] [real] NOT NULL, [Claim_Amount] [real] NOT NULL ); -- Make implied NULLs actual NULLs UPDATE train_set SET Cat1 = NULL WHERE Cat1 = '?'; UPDATE train_set SET Cat2 = NULL WHERE Cat2 = '?'; UPDATE train_set SET Cat3 = NULL WHERE Cat3 = '?'; UPDATE train_set SET Cat4 = NULL WHERE Cat4 = '?'; UPDATE train_set SET Cat5 = NULL WHERE Cat5 = '?'; UPDATE train_set SET Cat6 = NULL WHERE Cat6 = '?'; UPDATE train_set SET Cat7 = NULL WHERE Cat7 = '?'; UPDATE train_set SET Cat8 = NULL WHERE Cat8 = '?'; UPDATE train_set SET Cat9 = NULL WHERE Cat9 = '?'; UPDATE train_set SET Cat10 = NULL WHERE Cat10 = '?'; UPDATE train_set SET Cat11 = NULL WHERE Cat11 = '?'; UPDATE train_set SET Cat12 = NULL WHERE Cat12 IN ('?', ''); UPDATE train_set SET OrdCat = NULL WHERE OrdCat = '?' 
-- Optionally add primary keys ALTER TABLE train_set ADD CONSTRAINT PK_train_set PRIMARY KEY CLUSTERED (Row_ID); ALTER TABLE test_set ADD CONSTRAINT PK_test_set PRIMARY KEY CLUSTERED (Row_ID); --------------------------> train_set table SELECT Calendar_Year, COUNT(*) Vehicles_Per_Year FROM train_set GROUP BY Calendar_Year ORDER BY Calendar_Year; -- Calendar_Year Vehicles_Per_Year -- 2005 4025672 -- 2006 4447730 -- 2007 4710888 SELECT Model_Year, COUNT(*) Vehicles_Per_Year FROM train_set GROUP BY Model_Year ORDER BY Model_Year; --Model_Year Vehicles_Per_Year --1981 20966 --1982 24868 --1983 33514 --1984 53449 --1985 70738 --1986 92397 --1987 104061 --1988 138717 --1989 177728 --1990 203242 --1991 235688 --1992 281716 --1993 382330 --1994 478067 --1995 627182 --1996 589347 --1997 732651 --1998 791734 --1999 887858 --2000 1004464 --2001 993400 --2002 1055299 --2003 1007184 --2004 1067981 --2005 1007447 --2006 704628 --2007 366845 --2008 50787 --2009 2 SELECT TOP 10 Blind_Make, COUNT(*) Make_Count FROM train_set GROUP BY Blind_Make ORDER BY COUNT(*) DESC --Blind_Make Make_Count --K 1657185 --AJ 1547886 --BW 1265861 --AU 1071883 --Y 848371 --X 807923 --BO 657257 --W 552217 --L 382047 --AO 381448 SELECT TOP 10 Blind_Make, COUNT(*) Make_Count FROM train_set GROUP BY Blind_Make ORDER BY COUNT(*) --Blind_Make Make_Count --AB 5 --C 7 --H 10 --CA 13 --A 17 --AK 18 --AS 30 --BK 40 --BQ 81 --F 132 SELECT TOP 10 Blind_Model, COUNT(*) Model_Count FROM train_set GROUP BY Blind_Model ORDER BY COUNT(*) DESC --Blind_Model Model_Count --K.7 597433 --AU.14 303444 --X.45 291959 --W.16 233343 --AU.11 203473 --BO.38 194172 --AO.7 169280 --AJ.58 159714 --AJ.52 159204 --AU.58 151439 SELECT TOP 10 Blind_Model, COUNT(*) Model_Count FROM train_set GROUP BY Blind_Model ORDER BY COUNT(*), Blind_Model --Blind_Model Model_Count --AH.114 1 --AH.117 1 --AH.13 1 --AJ.107 1 --AM.10 1 --BQ.3 1 --BQ.9 1 --BU.37 1 --BW.123 1 --BW.128 1 SELECT TOP 10 Blind_Submodel, COUNT(*) Vehicle_Count FROM train_set GROUP BY Blind_Submodel ORDER BY COUNT(*) DESC --Blind_Submodel Vehicle_Count --K.7.3 165298 --AU.58.0 150110 --AU.14.1 141627 --AU.14.0 137043 --W.16.3 136104 --AU.11.3 132679 --K.7.0 116310 --BW.3.0 112414 --BW.115.0 112275 --BW.95.0 106209 SELECT TOP 10 Blind_Submodel, COUNT(*) AS Vehicle_Count FROM train_set GROUP BY Blind_Submodel ORDER BY COUNT(*), Blind_Submodel --Blind_Submodel Vehicle_Count --AE.3.0 1 --AE.4.1 1 --AE.4.2 1 --AE.6.2 1 --AH.114.1 1 --AH.117.1 1 --AH.13.1 1 --AJ.107.1 1 --AM.10.1 1 --AN.3.1 1 SELECT Cat1, COUNT(*) AS Total FROM train_set GROUP BY Cat1 ORDER BY COUNT(*) DESC; --Cat1 Total --B 4017739 --I 2654532 --D 2487951 --F 1305108 --G 782602 --A 768871 --C 401355 --E 279699 --J 233968 --H 226484 --? 25981 SELECT Cat2, COUNT(*) AS Total FROM train_set GROUP BY Cat2 ORDER BY COUNT(*) DESC; --Cat2 Total --C 5895027 --? 4874164 --A 2191054 --B 224045 SELECT Cat3, COUNT(*) AS Total FROM train_set GROUP BY Cat3 ORDER BY COUNT(*) DESC; --Cat3 Total --A 7488029 --B 2256802 --C 1270889 --E 886816 --F 872031 --D 405724 --? 3999 SELECT Cat4, COUNT(*) AS Total FROM train_set GROUP BY Cat4 ORDER BY COUNT(*) DESC; --Cat4 Total --A 5723163 --? 5631649 --C 1454425 --B 375053 SELECT Cat5, COUNT(*) AS Total FROM train_set GROUP BY Cat5 ORDER BY COUNT(*) DESC; --Cat5 Total --A 6683980 --? 5637321 --C 779280 --B 83709 SELECT Cat6, COUNT(*) AS Total FROM train_set GROUP BY Cat6 ORDER BY COUNT(*) DESC; --Cat6 Total --B 4265208 --C 3677694 --D 3604486 --E 1173316 --F 437605 --? 
25981 SELECT Cat7, COUNT(*) AS Total FROM train_set GROUP BY Cat7 ORDER BY COUNT(*) DESC; --Cat7 Total --? 7167634 --C 4618653 --A 1050621 --B 233786 --D 113596 SELECT Cat8, COUNT(*) AS Total FROM train_set GROUP BY Cat8 ORDER BY COUNT(*) DESC; --Cat8 Total --A 8626513 --B 3673932 --C 880481 --? 3364 SELECT Cat9, COUNT(*) AS Total FROM train_set GROUP BY Cat9 ORDER BY COUNT(*) DESC; --Cat9 Total --B 10850782 --A 2333508 SELECT Cat10, COUNT(*) AS Total FROM train_set GROUP BY Cat10 ORDER BY COUNT(*) DESC; --Cat10 Total --A 8573092 --B 3969170 --C 638111 --? 3917 SELECT Cat11, COUNT(*) AS Total FROM train_set GROUP BY Cat11 ORDER BY COUNT(*) DESC; --Cat11 Total --A 6951038 --B 3174528 --C 1103640 --E 816595 --F 787998 --D 319022 --? 31469 SELECT Cat12, COUNT(*) AS Total FROM train_set GROUP BY Cat12 ORDER BY COUNT(*) DESC; --Cat12 Total --B 4348276 --C 3619974 --D 3525723 --E 1196458 --F 462388 -- 28882 (Note: probably should have been '?') --A 2589 SELECT OrdCat, COUNT(*) AS Total FROM train_set GROUP BY OrdCat ORDER BY COUNT(*) DESC; --OrdCat Total --4 5935475 --2 4146321 --5 2964704 --3 93976 --6 16198 --1 15835 --? 7546 --7 4235 SELECT MIN(Var1) FROM train_set; -- -2.578222 SELECT MAX(Var1) FROM train_set; -- 5.143392 SELECT SUM(Var1) FROM train_set; -- -133415.165078922 SELECT AVG(Var1) FROM train_set; -- -0.0101192529198707 SELECT MIN(Var2) FROM train_set; -- -2.493393 SELECT MAX(Var2) FROM train_set; -- 7.82942 SELECT SUM(Var2) FROM train_set; -- -858126.221982679 SELECT AVG(Var2) FROM train_set; -- -0.0650870256936611 SELECT MIN(Var3) FROM train_set; -- -2.790335 SELECT MAX(Var3) FROM train_set; -- 5.563325 SELECT SUM(Var3) FROM train_set; -- -335328.063572197 SELECT AVG(Var3) FROM train_set; -- -0.0254339113878864 SELECT MIN(Var4) FROM train_set; -- -2.508216 SELECT MAX(Var4) FROM train_set; -- 7.589262 SELECT SUM(Var4) FROM train_set; -- -719439.378401036 SELECT AVG(Var4) FROM train_set; -- -0.0545679273135707 SELECT MIN(Var5) FROM train_set; -- -3.350344 SELECT MAX(Var5) FROM train_set; -- 4.018167 SELECT SUM(Var5) FROM train_set; -- 50609.1422801865 SELECT AVG(Var5) FROM train_set; -- 0.00383859443930515 SELECT MIN(Var6) FROM train_set; -- -2.376657 SELECT MAX(Var6) FROM train_set; -- 4.584289 SELECT SUM(Var6) FROM train_set; -- -528989.514271023 SELECT AVG(Var6) FROM train_set; -- -0.0401227153127717 SELECT MIN(Var7) FROM train_set; -- -2.778491 SELECT MAX(Var7) FROM train_set; -- 4.127148 SELECT SUM(Var7) FROM train_set; -- -319229.602963614 SELECT AVG(Var7) FROM train_set; -- -0.024212877823805 SELECT MIN(Var8) FROM train_set; -- -2.163042 SELECT MAX(Var8) FROM train_set; -- 47.35074 SELECT SUM(Var8) FROM train_set; -- -772079.797664134 SELECT AVG(Var8) FROM train_set; -- -0.0585605897370381 SELECT NVCat, COUNT(*) AS NVCat_Count FROM train_set GROUP BY NVCat ORDER BY COUNT(*) DESC; --NVCat NVCat_Count --M 5767944 --O 3416948 --N 1328428 --L 804000 --J 559165 --E 401274 --F 325556 --B 173724 --H 134702 --K 119996 --C 64753 --A 45758 --I 19208 --G 16073 --D 6761 SELECT MIN(NVVar1) FROM train_set; -- -0.2315299 SELECT MAX(NVVar1) FROM train_set; -- 6.62711 SELECT SUM(NVVar1) FROM train_set; -- 193599.357427523 SELECT AVG(NVVar1) FROM train_set; -- 0.0146840942839943 SELECT MIN(NVVar2) FROM train_set; -- -0.2661168 SELECT MAX(NVVar2) FROM train_set; -- 8.883081 SELECT SUM(NVVar2) FROM train_set; -- 230879.161265343 SELECT AVG(NVVar2) FROM train_set; -- 0.0175116871113532 SELECT MIN(NVVar3) FROM train_set; -- -0.2723372 SELECT MAX(NVVar3) FROM train_set; -- 8.691144 SELECT 
SUM(NVVar3) FROM train_set; -- 178545.106917024 SELECT AVG(NVVar3) FROM train_set; -- 0.0135422618068188 SELECT MIN(NVVar4) FROM train_set; -- -0.2514189 SELECT MAX(NVVar4) FROM train_set; -- 6.388803 SELECT SUM(NVVar4) FROM train_set; -- 244090.89812161 SELECT AVG(NVVar4) FROM train_set; -- 0.0185137688962857 SELECT MIN(Claim_Amount) FROM train_set; -- 0 SELECT MAX(Claim_Amount) FROM train_set; -- 11440.75 SELECT SUM(Claim_Amount) FROM train_set; -- 17939315.1737089 SELECT AVG(Claim_Amount) FROM train_set; -- 1.36065841798905 SELECT CAST((SELECT COUNT(*) FROM train_set WHERE Claim_Amount = 0) AS real) / (SELECT COUNT(*) FROM train_set); -- 0.9927486 --------------------------> test_set table SELECT Calendar_Year, COUNT(*) Vehicles_Per_Year FROM test_set GROUP BY Calendar_Year ORDER BY Calendar_Year; --Calendar_Year Vehicles_Per_Year --2008 2118739 --2009 2196126 SELECT Model_Year, COUNT(*) Vehicles_Per_Year FROM test_set GROUP BY Model_Year ORDER BY Model_Year; --Model_Year Vehicles_Per_Year --1984 1 --1986 1 --1987 3 --1989 261 --1990 2 --1993 2 --1994 3110 --1995 9870 --1996 4486 --1997 1910 --1998 49561 --1999 300904 --2000 403069 --2001 374678 --2002 405638 --2003 395663 --2004 387160 --2005 459951 --2006 488643 --2007 471198 --2008 378999 --2009 153706 --2010 26049 SELECT TOP 100 Blind_Make, COUNT(*) Make_Count FROM test_set GROUP BY Blind_Make ORDER BY COUNT(*) DESC --Blind_Make Make_Count --K 554407 --AU 413009 --Y 362378 --AO 308624 --X 300597 --BF 161520 --L 161133 --BO 138123 --BU 135777 --AL 118783 SELECT TOP 10 Blind_Make, COUNT(*) Make_Count FROM test_set GROUP BY Blind_Make ORDER BY COUNT(*) --Blind_Make Make_Count --AS 31 --CB 36 --Z 42 --BQ 70 --AG 105 --T 286 --E 328 --BA 335 --AX 853 --BR 2796 SELECT TOP 10 Blind_Model, COUNT(*) Model_Count FROM test_set GROUP BY Blind_Model ORDER BY COUNT(*) DESC --Blind_Model Model_Count --K.7 328245 --AO.2 277386 --AU.14 205493 --X.45 129039 --Y.29 125310 --K.65 101414 --BO.38 94234 --Y.34 90861 --X.24 88772 --AU.11 86631 SELECT TOP 10 Blind_Model, COUNT(*) Model_Count FROM test_set GROUP BY Blind_Model ORDER BY COUNT(*), Blind_Model --Blind_Model Model_Count --AL.5 1 --BH.17 1 --BZ.18 1 --BZ.19 1 --AG.1 4 --E.18 4 --AY.68 5 --AG.5 6 --BG.24 7 --BT.68 7 SELECT TOP 10 Blind_Submodel, COUNT(*) Vehicle_Count FROM test_set GROUP BY Blind_Submodel ORDER BY COUNT(*) DESC --Blind_Submodel Vehicle_Count --AU.14.1 148261 --K.7.3 139655 --AU.58.0 77481 --AO.2.5 67926 --AO.2.13 66457 --K.7.1 63079 --X.45.8 57204 --K.7.2 56568 --AO.2.11 55175 --Y.29.0 54375 SELECT TOP 10 Blind_Submodel, COUNT(*) AS Vehicle_Count FROM test_set GROUP BY Blind_Submodel ORDER BY COUNT(*), Blind_Submodel --Blind_Submodel Vehicle_Count --AL.5.0 1 --AR.2.1 1 --AY.21.2 1 --AY.32.1 1 --AY.33.3 1 --AY.57.1 1 --AY.60.3 1 --AZ.27.18 1 --BH.17.0 1 --BZ.18.1 1 SELECT Cat1, COUNT(*) AS Total FROM test_set GROUP BY Cat1 ORDER BY COUNT(*) DESC; --Cat1 Total --B 2858631 --A 572074 --G 457935 --E 283406 --C 75064 --F 67755 SELECT Cat2, COUNT(*) AS Total FROM test_set GROUP BY Cat2 ORDER BY COUNT(*) DESC; --Cat2 Total --C 3302271 --A 891344 --B 121250 SELECT Cat3, COUNT(*) AS Total FROM test_set GROUP BY Cat3 ORDER BY COUNT(*) DESC; --Cat3 Total --B 2156278 --A 2007717 --F 148523 --D 1998 --E 260 --C 89 SELECT Cat4, COUNT(*) AS Total FROM test_set GROUP BY Cat4 ORDER BY COUNT(*) DESC; --A 4145055 --C 169810 SELECT Cat5, COUNT(*) AS Total FROM test_set GROUP BY Cat5 ORDER BY COUNT(*) DESC; --Cat5 Total --A 3804513 --C 458019 --B 52333 SELECT Cat6, COUNT(*) AS Total FROM test_set 
GROUP BY Cat6 ORDER BY COUNT(*) DESC; --Cat6 Total --B 1934060 --D 999375 --C 890884 --E 384563 --F 105697 --A 286 SELECT Cat7, COUNT(*) AS Total FROM test_set GROUP BY Cat7 ORDER BY COUNT(*) DESC; --Cat7 Total --C 3378427 --A 761065 --B 162285 --D 13088 SELECT Cat8, COUNT(*) AS Total FROM test_set GROUP BY Cat8 ORDER BY COUNT(*) DESC; --Cat8 Total --A 2238788 --B 1824053 --C 252024 SELECT Cat9, COUNT(*) AS Total FROM test_set GROUP BY Cat9 ORDER BY COUNT(*) DESC; --Cat9 Total --B 2508005 --A 1806860 SELECT Cat10, COUNT(*) AS Total FROM test_set GROUP BY Cat10 ORDER BY COUNT(*) DESC; --Cat10 Total --A 2807036 --B 1299078 --C 208751 SELECT Cat11, COUNT(*) AS Total FROM test_set GROUP BY Cat11 ORDER BY COUNT(*) DESC; --Cat11 Total --A 2279692 --B 1041526 --C 361464 --E 268073 --F 258831 --D 105279 SELECT Cat12, COUNT(*) AS Total FROM test_set GROUP BY Cat12 ORDER BY COUNT(*) DESC; --Cat12 Total --B 1426443 --C 1185201 --D 1157767 --E 393609 --F 150976 --A 869 SELECT OrdCat, COUNT(*) AS Total FROM test_set GROUP BY OrdCat ORDER BY COUNT(*) DESC; --OrdCat Total --2 2033886 --4 1950006 --5 258913 --3 69039 --7 1619 --6 1115 --1 287 SELECT MIN(Var1) FROM test_set; -- -3.09246 SELECT MAX(Var1) FROM test_set; -- 3.086437 SELECT SUM(Var1) FROM test_set; -- -1256558.88539529 SELECT AVG(Var1) FROM test_set; -- -0.291216268735011 SELECT MIN(Var2) FROM test_set; -- -2.14757 SELECT MAX(Var2) FROM test_set; -- 7.82942 SELECT SUM(Var2) FROM test_set; -- -228227.168241829 SELECT AVG(Var2) FROM test_set; -- -0.0528932349544723 SELECT MIN(Var3) FROM test_set; -- -2.46637 SELECT MAX(Var3) FROM test_set; -- 2.069135 SELECT SUM(Var3) FROM test_set; -- -900919.478849606 SELECT AVG(Var3) FROM test_set; -- -0.20879436062301 SELECT MIN(Var4) FROM test_set; -- -2.169942 SELECT MAX(Var4) FROM test_set; -- 7.94445 SELECT SUM(Var4) FROM test_set; -- -383728.20701796 SELECT AVG(Var4) FROM test_set; -- -0.0889316831506803 SELECT MIN(Var5) FROM test_set; -- -5.057174 SELECT MAX(Var5) FROM test_set; -- 2.876315 SELECT SUM(Var5) FROM test_set; -- -745186.335228794 SELECT AVG(Var5) FROM test_set; -- -0.17270212051334 SELECT MIN(Var6) FROM test_set; -- -2.029253 SELECT MAX(Var6) FROM test_set; -- 2.858966 SELECT SUM(Var6) FROM test_set; -- -1602296.64387677 SELECT AVG(Var6) FROM test_set; -- -0.371343400981669 SELECT MIN(Var7) FROM test_set; -- -2.21326 SELECT MAX(Var7) FROM test_set; -- 1.681913 SELECT SUM(Var7) FROM test_set; -- -2385324.70575756 SELECT AVG(Var7) FROM test_set; -- -0.552815605067032 SELECT MIN(Var8) FROM test_set; -- -1.484801 SELECT MAX(Var8) FROM test_set; -- 46.72172 SELECT SUM(Var8) FROM test_set; -- 378444.673914099 SELECT AVG(Var8) FROM test_set; -- 0.0877071875746051 SELECT NVCat, COUNT(*) AS NVCat_Count FROM test_set GROUP BY NVCat ORDER BY COUNT(*) DESC; --NVCat NVCat_Count --M 1861275 --O 1205736 --N 407912 --L 284602 --J 178823 --E 120624 --F 85545 --B 57172 --H 41202 --K 33773 --C 16057 --A 12100 --G 4408 --I 4159 --D 1477 SELECT MIN(NVVar1) FROM test_set; -- -0.2315299 SELECT MAX(NVVar1) FROM test_set; -- 6.62711 SELECT SUM(NVVar1) FROM test_set; -- -116860.566231743 SELECT AVG(NVVar1) FROM test_set; -- -0.0270832497034652 SELECT MIN(NVVar2) FROM test_set; -- -0.2661168 SELECT MAX(NVVar2) FROM test_set; -- 8.883081 SELECT SUM(NVVar2) FROM test_set; -- -43259.1929190159 SELECT AVG(NVVar2) FROM test_set; -- -0.0100256190909834 SELECT MIN(NVVar3) FROM test_set; -- -0.2723372 SELECT MAX(NVVar3) FROM test_set; -- 8.691144 SELECT SUM(NVVar3) FROM test_set; -- -195504.770742655 SELECT AVG(NVVar3) FROM 
test_set; -- -0.0453095915498294 SELECT MIN(NVVar4) FROM test_set; -- -0.2514189 SELECT MAX(NVVar4) FROM test_set; -- 6.388803 SELECT SUM(NVVar4) FROM test_set; -- 41278.680568099 SELECT AVG(NVVar4) FROM test_set; -- 0.00956662156709399 [Link]:https://storage.googleapis.com/kaggle-forum-message-attachments/1171/import_and_aggregate_stats.sql",0,None,1 Comment,Thu Jul 14 2011 16:09:06 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/711,/competitions/ClaimPredictionChallenge,None /mikhail1,new articles in test dataset ... just to clarify,"Maybe I missed something in the data description, but the question is whether the test dataset contains edit counts for ""new"" articles - I mean articles that were created after the training period.",0,None,6 ,Thu Jul 14 2011 18:53:43 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/712,/competitions/wikichallenge,None /tim69933,"Question about ""exclusive original work"" assertion","Hi all, As I understand it from section 13 of the rules, and from previous discussions in the forum, in order to win a milestone prize, one must publish their potentially winning algorithm for all other contestants to review. Presumably, other contestants may then incorporate the winning algorithms into their own work, used to produce future entries. However, section 20 of the rules requires an entrant to represent that the algorithm used to produce an entry is ""the exclusive original work of the entrant"". But is this really true if an entrant built their algorithm on a milestone prize winning algorithm posted by another contestant? So, is the intent really that contestants use and adapt the algorithms posted by other contestants as part of the milestone prize qualification process, and if so, how does this reconcile with section 20 of the rules? Thanks, Tim",4,bronze,2 ,Sat Jul 16 2011 22:58:15 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/714,/competitions/hhp,None /analyticsguy,Most Effective versus Most Innovative,"Folks, Another interesting question for the analytics competition community. Traditionally, all data mining competitions have focused on obtaining the best score and rewarding participants that do so. Do you think there is merit in also looking at creativity/innovation? How do you judge that effectively? The Hearst Analytics Challenge this year has teamed up with Wharton to offer an innovation award to be solely judged by some faculty members. Is that a good way to go, and are we going to see more of that in the future? I would appreciate your views. Regards, analyticsguy",0,None,1 Comment,Sun Jul 17 2011 15:53:09 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/716,/competitions/hhp,None /uriblass,Memory problems with reading the train set file with the R language.,"I tried to read the train_set with the commands
memory.limit(4095)
aaa <- read.csv(file=""train_set.csv"")
Unfortunately I get the error ""cannot allocate a vector of size 62.5 M"". The documentation says: ""If 32-bit R is run on most 64-bit versions of Windows the maximum value of obtainable memory is just under 4Gb. For a 64-bit version of R under 64-bit Windows the limit is currently 8Tb."" 4 Gb and 8 Tb is a big difference, and I wonder if people who use R for this competition use only 64-bit versions of R under 64-bit Windows.",0,None,13 ,Mon Jul 18 2011 16:05:02 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/718,/competitions/ClaimPredictionChallenge,None
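A standard way to cut read.csv's memory use (an illustrative sketch, not from the thread) is to infer the column classes from a small sample and then pass them explicitly, so R does not have to grow and re-type columns while reading:

peek <- read.csv("train_set.csv", nrows = 1000)   # small sample to infer types
classes <- sapply(peek, class)                    # one class per column
train <- read.csv("train_set.csv", colClasses = classes, comment.char = "")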
/twanvl,More training data,"In an attempt to get some data without the horrible selection bias, I have been collecting data from Wikipedia myself. This is explicitly allowed by the rules. What I did is download a list of Wikipedia users, and for each user (in random order), download a list of edits made by that user between 2001-01-01 and 2010-08-31. The only bias in this data is that it includes only users who have made at least one edit in this period. Because I am such a nice guy, I decided to share this data with all of you. The entire file, in a format similar to training.tsv, is a bit too large to share easily (263MB). If anyone knows of a good way to do this I will certainly make it available. In the meantime, here is a file with just a summary. The file more.octave.txt is a sparse matrix in octave format, where the rows are the users, and the columns are the days. Each row in the file (except for the 5 header lines) looks like: Userids are renumbered from 1 to 85641, and days are numbered 1 to 3310. The file contains 648829 nonzero user/day pairs. If a user/day pair is not mentioned in the file, then that user didn't make any edits on that day. You can download this file from [Link]:http://twan.home.fmf.nl/files/wikichallenge-2011-07-19.zip. I cannot guarantee that there are no errors, so use at your own risk. [Link]:https://storage.googleapis.com/kaggle-forum-message-attachments/1174/wikichallenge-2011-07-19.zip",1,bronze,5 ,Tue Jul 19 2011 13:22:30 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/719,/competitions/wikichallenge,13th /meliponemoody,Var2 and Var4,I see values for Var2 and Var4 that are above 1.0. I thought that the quantitative variables were normalized to have 0 mean and stdev 1. Am I missing something?,0,None,4 ,Tue Jul 19 2011 21:51:53 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/720,/competitions/ClaimPredictionChallenge,None /ccccat,Are we hitting a wall?,"Looks like there has been no visible progress for the last several weeks. The top 6 (or maybe even more) results are statistically even. It will be just a ""lottery"" at the end. Taking this into account, and the fact that the organizers want to see several top algorithms in any case, I am willing to start collaborating with other top participants. I really hope that we are using different methods and that a combination of them will result in statistically meaningful progress. As a start I can provide my solution for the training set in exchange for the same, just to play with. If it results in progress on the test set, then we will form a team for the rest of the competition. (I hope it is not against the rules).",0,None,43 ,Wed Jul 20 2011 17:32:54 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/721,/competitions/mdm,2nd /matthew3,Missing Data?,"I am still a bit confused as to the layout of the database that we have been given. Isn't the DaysInHospital_YEAR table supposed to tell us who is eligible for a given year to file claims?
I made a SQL call to pull members for Year 2 that had claims but nothing in the DaysInHospital_Y2 table (which I named LOSY2 to shorten it):
select c.MemberID, count(c.MemberID) as 'count'
from claims as c
where YEAR in ('y2')
and MemberID not in (select memberID from LOSY2)
group by c.MemberID
order by [count] desc
This call found 19469 members who were not eligible but had claims in Y2, like members 42758 and 86970. What is going on here?",0,None,1 Comment,Fri Jul 22 2011 01:20:18 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/722,/competitions/hhp,274th /jopisch,Claim_Amount variable does not appear,"Hi, Claim_Amount does not even appear in the headings when I import the test data (a small part of it, of course) into Excel. Thanks",0,None,1 Comment,Fri Jul 22 2011 10:58:50 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/724,/competitions/ClaimPredictionChallenge,None /dslate,Incorrect number of teams on leaderboard?,"This is a very minor point, but the number of teams shown at the bottom of the leaderboard always seems to be one greater than the rank of the last team. Is this a bug, or am I missing something? [Link]:https://storage.googleapis.com/kaggle-forum-message-attachments/1177/z",0,None,6 ,Sun Jul 24 2011 02:37:03 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/725,/competitions/hhp,10th /qubie7895,Goal is sorting?,Anyone who can help: from what I understand it seems that the actual predicted claim dollar amount does not matter (contrary to what the competition data page suggests). Instead it seems as if the goal is to sort the claims by amount (if I am understanding the Gini index correctly). Can anyone clarify? The Gini index seems only to care about ordering.,0,None,8 ,Sun Jul 24 2011 16:59:30 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/726,/competitions/ClaimPredictionChallenge,None /zaccaksolutions,Scoring of ordinal Variables,"I'm wondering how people are scoring their ordinal variables. For example:
> summary(members$AgeAtFirstClaim)
        0-9 10-19 20-29 30-39 40-49 50-59 60-69 70-79   80+
 5753 10791 11319  8505 12435 16111 13329 12622 14514  7621
(the first, unlabeled count is the blank/missing level) For numbers that have a start and end boundary I take the midpoint: 0-9 turns into 4.5, 10-19 turns into 14.5, ... How are people treating 80+ (6.7% of the data) and all the missing values (5% of the data) in order to get a meaningful sense of the data?",0,None,2 ,Mon Jul 25 2011 03:59:21 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/727,/competitions/hhp,544th
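One illustrative way to encode those bands in R (the 84.5 midpoint for "80+" is an arbitrary assumption, members is the data frame from the post, and the new column names are made up):

# map each band to its midpoint; unmatched values (missing ages) become NA
age.map <- c("0-9" = 4.5, "10-19" = 14.5, "20-29" = 24.5, "30-39" = 34.5,
             "40-49" = 44.5, "50-59" = 54.5, "60-69" = 64.5, "70-79" = 74.5,
             "80+" = 84.5)   # a guess for the open-ended band
members$AgeMid <- age.map[as.character(members$AgeAtFirstClaim)]
members$AgeMissing <- is.na(members$AgeMid)   # keep a flag; impute as you prefer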
/thomas1934,There was a problem with your submission...,"I've tried submitting my results several times... compressed, uncompressed, laid out like the example file and, most recently, with a two-column CSV file: column 1 is the RowID and column 2 contains my predictions. To the best of my knowledge I've done everything according to the guidelines, but it simply won't accept the file. What's going on? Thanks, Mike",0,None,6 ,Mon Jul 25 2011 16:56:23 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/728,/competitions/ClaimPredictionChallenge,70th /gurch13633,mistake on evaluation page,"The ""Evaluation"" page states: A contestant’s model should predict, for each editor from the dataset, the number of edits made in the first 5 namespaces of the English Wikipedia ... Namespaces 0 to 5 are included in the data. That's six namespaces in total, not five.",1,bronze,3 ,Mon Jul 25 2011 22:51:50 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/730,/competitions/wikichallenge,None /gurch13633,using other data?,"The rules state we can use any pre-September 2010 data that is suitably licensed, which means anything from Wikipedia's logs from before that date. I assume this means we are allowed to extract additional information about the users in the dataset. For example, let's say we decide that if a user has been permanently blocked from editing, we want to predict zero future edits from that user. The dataset lacks any information on whether the user is blocked, but if we obtained such information from the wiki we could use it in our predictions, provided we were careful only to consider blocks made before September 2010. However, since usernames are not provided and the supplied user IDs aren't the real ones, finding such information -- or anything else about the user that isn't in the data set -- is non-trivial. Since the revision IDs haven't been obfuscated, those can be used together with a data dump (or the API on the live site) to obtain the user name, and from that, block logs. But I'm not clear on whether this is against the rules; the obfuscation of user IDs seems to suggest our algorithm shouldn't know the user name or the real user ID. Is this allowed?",0,None,2 ,Mon Jul 25 2011 23:20:35 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/731,/competitions/wikichallenge,None /vishal17,Newbie - Data Question,"I am a little confused by the data - there are 3 sets: HHP_release1, HHP_release2, HHP_release3. Are HHP_release1 and HHP_release2 subsets of HHP_release3? So if I am just starting, should I just work with HHP_release3?",0,None,4 ,Tue Jul 26 2011 03:56:49 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/732,/competitions/hhp,250th /richardfraccaro,The Good and the Bad of Kaggle,"Hello to all Kaggle devotees, Let there be no doubt that I think Kaggle is a fantastic concept and I for one am glad to see it flourishing. However, in the mad rush to get the smallest RMSE or Gini coefficient, I hope that participants do not lose sight of some important principles about analytics that would appear to be contrary to the goals of Kaggle. If you are a follower of Analyst First (analystfirst.com), then you will be aware of the view in that movement that ""Analytics is an Intelligence activity"". Analytics is just part of solving problems and building solutions in an organisation. From understanding the broader business problem to considering how to manage change arising from the insights gained through modelling, it is more than just loading up data into a software package and running some cleverly-designed algorithms. Another consideration is that models (i.e. predictive models) need a robustness that makes them reliable regardless of noise and inherent fluctuations in the data. Seeing some of the highly specialised solutions being posited for the various competitions, I can't help but wonder whether the model is going to be the reliable rock upon which better business/domain understanding is based. I recall a leading banking luminary speaking at an IAPA conference some time ago (years? anyone remember who I am talking about? name is on the tip of my tongue...)
saying something along the lines of ""I have plenty of people advocating data mining algorithms that no doubt will do better than the logistic regression we use for loan default analysis, but the point is the logistic regression has long-term stability and I know it is robust against the bumps and anomalies that come along in the data now and again"". Kaggle is a great place to hone your data scientist skills, but let's remember the broader picture of where this analytics fits. Would be great to hear the opinions of others on this topic. Cheers, Richard P.S. So you see, I am not actually saying Kaggle is ""bad"" per se... that was just to get your attention :)",0,None,4 ,Tue Jul 26 2011 08:16:18 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/734,None,None /boooeee,Vendor,"Would it be possible to get some elaboration or high-level examples of what the ""Vendor"" field represents? I understand PCP and Provider ID, but I'm not clear on Vendor. For example, for a visit to a Specialist, the PCP identifies the member's Primary Care Provider (which is not, in general, claim-specific) and the Provider ID identifies the Specialist that the member visited (I'm assuming). What would the Vendor represent? Similarly, for an Inpatient Hospital stay, the Provider ID identifies the Hospital the member went to (I'm assuming). What does Vendor represent in this case?",0,None,3 ,Tue Jul 26 2011 08:44:37 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/735,/competitions/hhp,16th /deepak2,claim amount ,"Hi, do we need to predict values for cases where the claim amount equals zero, since generally we do not consider zero values for severity modeling?",0,None,1 Comment,Tue Jul 26 2011 11:03:39 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/736,/competitions/ClaimPredictionChallenge,26th /keiththerring,Evaluation/Test Set,"Apologies if this has already been stated. Can you provide more information on the set of users, T, that our algorithms will be evaluated against? Likely T comes in the form of a random X-element/user subset of U, chosen uniformly from all size-X subsets of U. Example universe sets U include:
- U = Set of all non-blocked users with at least one edit in the period Jan 1 2001 - Aug 31 2010.
- U = Set of all non-blocked users with at least one edit in the period Sep 1 2009 - Aug 31 2010.
- U = Set of 44514 users in the original training set.
If no information is provided, I will assume we should be tuning our algorithms to the most general case, i.e. U = set of all users. Thanks.",0,None,2 ,Wed Jul 27 2011 04:18:25 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/737,/competitions/wikichallenge,2nd /del=33e3fc4ba6629bb7,Important Clarification Question,"This is perhaps obvious, but after reading the FAQs, the Great10 handbook, the explanation on the kaggle website, and arguing several times with ourselves, my team is still confused on one very important point in this challenge: are we attempting to find the ellipticities of the galaxies post-lensing, OR are we trying to find the ellipticities of the galaxies pre-lensing? In other words, are we trying to simply find the ellipticity of the denoised/deconvolved galaxy in the image, OR are we trying to somehow model the effect of lensing and determine the ellipticity of the original ""simulated galaxy,"" when we only have the post-lensing images available to us.
Thanks for the help",0,None,17 ,Wed Jul 27 2011 04:41:26 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/738,/competitions/mdm,None /chrisraimondi,R Help - Machine Learning,"[quote]I fear I am hijacking this thread for R help, but here it goes. Thanks to Chris for answering the question I asked last time. The answer helped, but it brought up a new question. So I now have the predict function looking for my y2 data to predict with. However, I want to use my Y3 data. The columns are not named the same, so how do I fudge it to use the model from before? I came up with a cludge, but I would prefer something elegant and fast. Also, I when running a random forest, what is the expected resulting time to run. I am running it on my 70,000 x 50 data table, with 5 trees, and R goes into not responding mode... This seems similar to what was described on the Nabble forum: [Link]:http://r.789695.n4.nabble.com/Large-dataset-randomForest-td830768.html I am answering this here in order to split up the topics better.... [quote]...The columns are not named the same, so how do I fudge it to use the model from before? I came up with a cludge, but I would prefer something elegant and fast. IMHO there is no fast way to do this - you have to invest the time in naming, cleaning, organizing the data up front. Everyone hates this, but I have spent the VAST majority of my time on this part. Think ahead - what do you want to predict - and how should you name these in order to scale. As you have already discovered - naming them Y2XYZ and Y3XYZ doesn't work out real well. I would recomment making three seperate data frame - I call mine right.a, right.b, and right.c. right.a has claims from year one and DaysInHospital from year two right.b has claims from year two and DaysInHospital from year three right.c has claims from year three and DaysInHospital from year four(which is what you want to predict) I use the a, b, and c as it was hurting my brain to try and think of year numbers when you are using year one data to predict year 2 DIH. All three of these files have the same columns and same column names. If you do it this way - it makes it earier to train and predict. Also - put the column you want to predict as THE FIRST column - I will tell you why in the next section... [quote]70,000 x 50 data table, with 5 trees, and R goes into not responding mode A couple things - do you really mean 5 trees or mtry of 5? Some general rules of thumb I have found: 1) Doubling the rows takes around 3.5 times as long 2) The number of columns has little impact if you pick the same number of mtry (it is pulling that many columns whether you have 1000 or 50. 3) If you have factors - they will increase the amount of time - make sure you aren't coding stuff as factors you don't mean to. 4) Start off small and time how long it takes - try 50 trees with 100 rows. This should be pretty quick. If so - bump it up. 5) Don't use the formula interface. - instead of randomForest(DaysInHospital ~ A + B + C, ...) use randomForest(right.a[,-1], right.a[,1]) This is why I said to put the y value as your first column - it makes it easier to tring for when you can use [,-1] to train on everything except that column, and [,1] to choose that as the y value. No matter how many columns you add - you can always use this method. 
As far as the amount of time goes - I don't know about only 5 trees - I assume there is some overhead, but in general for the default setting - 500 trees - and 50 vars, I don't think you will get it to run on a system with less than 8 gigs, especially if you are using the formula interface. If you cut the mtry down to 2 or 3 and use 100 trees, my guess is somewhere between 20 minutes and an hour or so. I found using the formula interface would cramp my style at over ~30000 rows, even if I adjusted for a small number of trees.",2,bronze,11 ,Wed Jul 27 2011 04:44:50 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/739,/competitions/hhp,20th /pgraff,Submission error,"Hello, I've received the following error when trying to submit a new entry that I hadn't received before: ""String was not recognized as a valid DateTime."" Can any of the admins explain what's going on and/or fix it? Thanks. EDIT: I was just able to make a submission without getting the error. I'll assume it was a temporary issue that is now fixed.",0,None,1 Comment,Thu Jul 28 2011 12:12:04 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/741,/competitions/hhp,350th /vanushvaswani,Cross validation,"Just wondering, for the top players, do your models give similar RMSE (~0.015) when you use the training set as input?",0,None,2 ,Fri Jul 29 2011 09:44:43 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/742,/competitions/mdm,47th /dansbecker,Can I boost your score on my way out?,"I've moved on to new projects. As a last experiment, I want to see if I can significantly improve your score by incorporating my predictions. I've read about boosting, but haven't tried it yet. I want to send someone my predictions and see if it is useful to them. If this helped someone get in the money, I wouldn't want to ask for any part of it. ----- Overview of my algorithm: My algorithm was fairly straightforward, and I think it was different from what most people here are using. I created variables from the data that I thought would be predictive. I then ran an OLS regression on training data: DIH = beta * variables. I used the fitted values from that regression as an index of predicted health usage. I ran a very simple non-parametric estimator to map the index to predictions that minimize rmsle. ------- What I was going to do next (in case anyone cares): I'd like to include quite a few more variables (e.g. more dummy vars for specific vendors, more interaction terms), but I think I have a method to reduce overfitting when I do so. I would have included these variables in a multi-level estimation framework that shrinks imprecise estimates towards group means. I was going to use methods from Gelman and Hill's book. This incorporates ""regression to the mean"" to reduce overfitting. I was going to implement this in PyMC, but you could do it in R too. I thought this was a really good idea (and I thought it was the big advantage of using a regression in the first stage rather than random forests). I don't have time to follow it through, but hopefully the idea interests someone. ----- How to take me up on the offer: If Kaggle says I can make my predictions or my code publicly available, I'll do so. I cleaned the data in Stata and did estimation in Python. If I'm only allowed to give it to one team, I'd like to see if it helps someone that already has a better algorithm than mine. Drop me a line though. I'm out...
have fun predicting.",3,bronze,20 ,Fri Jul 29 2011 14:17:36 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/744,/competitions/hhp,2nd /jeffmoser,SQL Schema and Aggregate Stats,"In case anyone finds it useful, here was the schema I used for the import:
USE DunnhumbyChallenge;
CREATE TABLE [training](
  [customer_id] [int] NOT NULL,
  [visit_date] [date] NOT NULL,
  [visit_spend] [money] NOT NULL
);
CREATE TABLE [test](
  [customer_id] [int] NOT NULL,
  [visit_date] [date] NOT NULL,
  [visit_spend] [money] NOT NULL
);
SELECT COUNT(*) FROM training;                    -- 12146637
SELECT COUNT(DISTINCT customer_id) FROM training; -- 100000
SELECT SUM(visit_spend) FROM training;            -- 500500131.90
SELECT COUNT(*) FROM test;                        -- 1008142
SELECT COUNT(DISTINCT customer_id) FROM test;     -- 10000
SELECT SUM(visit_spend) FROM test;                -- 41101802.52
For previous forum discussion on importing data, see [Link]:http://www.kaggle.com/c/ClaimPredictionChallenge/forums/t/711/importing-to-sql-server-and-aggregate-statistics.",0,None,4 ,Fri Jul 29 2011 22:08:54 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/745,/competitions/dunnhumbychallenge,None /jeffmoser,Fun Fact: Training spend data follows Benford's law,"One fun aspect of working with real data is that you get to observe real-life phenomena. For example, [Link]:http://en.wikipedia.org/wiki/Benford's_law (also known as the ""first-digit law"") states: ""in lists of numbers from many (but not all) real-life sources of data, the leading digit is distributed in a specific, non-uniform way. According to this law, the first digit is 1 about 30% of the time, and larger digits occur as the leading digit with lower and lower frequency, to the point where 9 as a first digit occurs less than 5% of the time."" A simple SQL query on the training dataset of:
SELECT LEFT(CONVERT(VARCHAR(10), visit_spend), 1) AS leading_digit, COUNT(*) AS total_matches
FROM training
WHERE LEFT(CONVERT(VARCHAR(10), visit_spend), 1) != '0'
GROUP BY LEFT(CONVERT(VARCHAR(10), visit_spend), 1)
ORDER BY COUNT(*) DESC
gives us the raw data against which we can compare the law:
digit  count    actual_probability  benford_expected_probability  abs diff
1      3368866  27.9%               30.1%                         2.2%
2      1912850  15.8%               17.6%                         1.8%
3      1483366  12.3%               12.5%                         0.2%
4      1258157  10.4%                9.7%                         0.7%
5      1109766   9.2%                7.9%                         1.3%
6       933048   7.7%                6.7%                         1.0%
7       787636   6.5%                5.8%                         0.7%
8       668351   5.5%                5.1%                         0.4%
9       573359   4.7%                4.6%                         0.1%
Sure enough, the data from millions of shopping visits demonstrates the validity of this law. I just thought this was an interesting application of something you hear about all the time in statistics discussions.",9,silver,3 ,Fri Jul 29 2011 23:44:28 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/746,/competitions/dunnhumbychallenge,None /eyal10942,Problem with submission,"Sorry, but I am new here. I submitted a results file based on the specified format: header row, 10001 rows total, 3 columns with the last column being a number. And yet I got an error message saying: ""There was a problem with your submission"" with no explanation. What could be the problem? Thank you",0,None,3 ,Sat Jul 30 2011 11:16:38 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/748,/competitions/dunnhumbychallenge,135th /salimali,naive solution ,"Here is a solution done quickly in SQL.
The logic is that the customer will spend exactly the same as on the last visit, and there will be the same gap to the next visit as there was between the last two visits. This will give you 6.77, which appears to be not that good.
select *, Rank() over (Partition BY customer_id order by visit_date ASC) as visit_number
into #temp1
from test

select a.*, DATEDIFF(DD, b.visit_date, a.visit_date) as days_since_last_visit
into #temp2
from #temp1 a inner join #temp1 b
on a.customer_id = b.customer_id
and a.visit_number = b.visit_number + 1
order by a.customer_id, a.visit_date

select customer_id, max(visit_number) as max_visit
into #temp3
from #temp2
group by customer_id

select #temp2.*
into #temp4
from #temp2 inner join #temp3
on #temp2.customer_id = #temp3.customer_id
and #temp2.visit_number = #temp3.max_visit

select customer_id,
dateadd(dd, days_since_last_visit, visit_date) as visit_date,
visit_spend
from #temp4
order by customer_id",0,None,6 ,Sat Jul 30 2011 12:50:46 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/749,/competitions/dunnhumbychallenge,206th /salimali,IP Question,"This is a fantastic data set to play with and use to develop some interesting new techniques that could prove highly useful to Dunnhumby - it's great to see companies catching on to the usefulness of Kaggle now. My question is regarding the need to reveal your algorithm in order to collect the prize; there is no mention of this in the rules.",0,None,5 ,Sat Jul 30 2011 23:34:38 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/750,/competitions/dunnhumbychallenge,206th /sashikanthdareddy,Question about evaluation,""" For each test customer in the dataset your Model needs to provide: a prediction of the first date on or after 1 April 2011 that the test customer will next go shopping AND a prediction of the test customer’s dollar spend on that shopping visit. The prediction will be classified as correct if the date AND the dollar spend is correct (a “Correct Visit”). Dollar spend will be deemed to be correct if it is within $10 of the actual spend, an under or over prediction of equal to or more than $10.01 being incorrect. "" From the above, the predicted spend amount can be within $10 of the actual amount; what about the predicted date of the next visit? Can it be within x days of the actual visit date or does it have to be exact?",0,None,1 Comment,Sun Jul 31 2011 20:01:40 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/751,/competitions/dunnhumbychallenge,None /woshialex,training score and submission score,"Dividing all the training data into 70% for training and 30% for testing (to avoid overfitting), the score I get is much, much lower (better) than the submission score I get. Do other people have a similar situation? Thanks",0,None,5 ,Mon Aug 01 2011 07:43:35 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/753,/competitions/hhp,210th /del=fdab3e126c5082ed,Evaluation,1) Hi - it's quite unclear what the evaluation rules are going to be. For the test sample do we just predict a) if the test bed is likely to visit the store on 4-1-2011 and b) what the $ amount is? 2) Should we predict the visits and amounts for all the days from 4-1-2011 until 6-1-2011? 3) What is the criterion - is it MAPE? 4) What about the expected # visits and total $ for the complete test bed.
What I mean is that for the individual predictions you can be $10 off while the aggregate time series for the test bed is quite predictable.,0,None,20 ,Mon Aug 01 2011 16:18:29 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/754,/competitions/dunnhumbychallenge,None /jjjjjj,Clarification Request: How many entries do we select for the milestone prize?,I can see it either as 1 or 5 depending on how I interpret the rules. Thanks,0,None,1 Comment,Mon Aug 01 2011 18:42:35 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/755,/competitions/hhp,113th /arthur,Leaderboard test data,Is the 30% of test data used for calculating the leaderboard randomly selected? Or has it been selected with the intention of throwing off those who tweak their algorithms to approach 100% when the actual data could be a lot poorer? How many submissions can we make?,0,None,3 ,Mon Aug 01 2011 19:38:08 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/756,/competitions/dunnhumbychallenge,None /guyko81,Holidays,Do we know the state holidays of the country where the data was collected? Which country is it? Thanks,0,None,3 ,Mon Aug 01 2011 19:55:04 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/757,/competitions/dunnhumbychallenge,None /adamscruggs,SQL Server Denali Analytic Functions,"Hello everyone - I just downloaded and installed the CTP3 version of SQL Server 11, codename ""Denali"". Of potential relevance to this contest, it supports the LEAD() and LAG() analytic functions, which you previously needed to use Oracle XE (or a horrible workaround in SQL Server) to access. Example:
SELECT TOP 5 [customer_id], [visit_date], [visit_spend],
  LEAD(visit_spend) OVER (PARTITION BY customer_id ORDER BY visit_date) next_spend
FROM [Shopping].[dbo].[training]
Results:
CUSTOMER_ID  VISIT_DATE  VISIT_SPEND  NEXT_SPEND
100          2010-04-06  111.79       117.85
100          2010-04-13  117.85       21.54
100          2010-04-16  21.54        20.01
100          2010-04-18  20.01        116.52
100          2010-04-20  116.52       20.02
Hope this is helpful for someone. Download [Link]:http://www.microsoft.com/betaexperience/pd/SQLDCTP3CTA/enus/default.aspx.",5,bronze,3 ,Mon Aug 01 2011 21:00:08 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/758,/competitions/dunnhumbychallenge,None /skosian,Duplicate records in Claims,"There are about 38 thousand records in the Claims dataset that have one or more exact duplicates in all columns. Is this a data error? One possibility is, because we don't have the exact date, that the same procedure has been done on multiple days of the same DSFS. With the limited number of columns that we have in the Claims data, it would be impossible to tell apart one claim from another if they are less than a month apart and have everything else equal. Any thoughts? Which is it, an error in the data or legitimate duplicates? regards, Sassoon",0,None,6 ,Mon Aug 01 2011 23:47:47 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/759,/competitions/hhp,2nd /arthur,Customer 83,Is Customer 83 intentionally missing from training.csv?,0,None,5 ,Tue Aug 02 2011 01:12:32 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/760,/competitions/dunnhumbychallenge,None /jseetao,What constitutes a visit?,"There are a few observations where the customer is listed as spending $0.
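As a side note to the Denali post above, roughly the same LEAD()/LAG() bookkeeping can be done in R. This is a sketch using dplyr, under the assumption that training is a data frame with customer_id, visit_date (Date class), and visit_spend:

```r
# Rough R analogue of the LEAD() query in the Denali post above, assuming
# `training` has columns customer_id, visit_date (Date), and visit_spend.
library(dplyr)

training_gaps <- training %>%
  group_by(customer_id) %>%
  arrange(visit_date, .by_group = TRUE) %>%
  mutate(
    next_spend            = lead(visit_spend),                        # LEAD()
    days_since_last_visit = as.numeric(visit_date - lag(visit_date))  # LAG()
  ) %>%
  ungroup()
```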
What kind of observational units does this dataset cover? Is it any visit that involves a transaction, or is it just visits that involve purchases?",0,None,4 ,Tue Aug 02 2011 01:49:13 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/761,/competitions/dunnhumbychallenge,87th /roobs5218,an example simple solution,"Here's a pretty simple model which achieves 12.23% correct. For each customer:
predicted_amount := median of the most recent 17 amounts spent
predicted_date := max(1st April 2011, date of most recent visit + median number of days between successive visits)",0,None,4 ,Tue Aug 02 2011 14:19:17 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/762,/competitions/dunnhumbychallenge,147th /iamivo,Never returning customers,"Hi there! I have some simple questions regarding customers who never return after 2011-03-31. The training dataset contains 49 customers of this kind. Does the test dataset contain customers who never actually came back later to buy something? How can I mark a customer in the test dataset who did not shop again?",0,None,5 ,Tue Aug 02 2011 14:59:24 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/763,/competitions/dunnhumbychallenge,48th /analtiks,Model or business rules,"I am new to the competition, and I have some questions regarding it: 1. Can the prediction of the visit and spend be based on business rules, or does a specific model need to be built? 2. Does the data need to be analysed using any specific software, or is it software-neutral? Your response is appreciated",0,None,10 ,Tue Aug 02 2011 18:38:13 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/765,/competitions/dunnhumbychallenge,None /antipov,What business value will a good predictive model bring?,The task is clear from the technical point of view. But why is it valuable for a retailer to predict when and how much a buyer will purchase?,0,None,7 ,Wed Aug 03 2011 01:18:35 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/766,/competitions/dunnhumbychallenge,None /woshialex,what can I improve?,"On the training data, I got roughly 39% correct dates, 34% correct spends, and 13.5% for both. Which part should I improve - date or spend? Any ideas? Thanks.",0,None,4 ,Wed Aug 03 2011 19:16:57 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/767,/competitions/dunnhumbychallenge,55th /blacksou,Forecasting the future using future information...,"I had a quick look at the data provided and I really don't see the interest of this competition. You have to forecast the future behavior of some customers in April 2011, but you already know what other customers did in April 2011... It's like forecasting the price of silver knowing the price of gold: it will give you a great forecast on paper but nothing useful...",0,None,6 ,Thu Aug 04 2011 11:46:41 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/768,/competitions/dunnhumbychallenge,150th /frandom,Drug and lab count improvement,Hi all - would anybody be willing to volunteer information on how much improvement was attained with the inclusion of drug and lab data in your model?
Viktor,0,None,2 ,Thu Aug 04 2011 15:25:16 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/769,/competitions/hhp,144th /del=92525096498f3bbd,Open training.csv file (last row problem),I opened the csv file using Notepad++. How many rows should it have without the header? 12146637 rows?,0,None,5 ,Fri Aug 05 2011 07:00:24 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/770,/competitions/dunnhumbychallenge,None /bobby4834,Withdrawal,"Hello, Hypothetical question: If a contestant reaches first place and/or wins milestone prizes, is it legally binding that they must win the final prize if they remain in first place at the end of the competition? Thank you, Bobby",0,None,6 ,Fri Aug 05 2011 16:39:51 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/771,/competitions/hhp,None /erdman,downloading the data from kaggle to remote linux instance,How does one download the data to a remote (EC2) linux instance? Neither of these is working for me ... wget http://www.kaggle.com/c/ClaimPredictionChallenge/Download/test_set.7z wget --http-user=myusername --http-password=mypassword http://www.kaggle.com/c/ClaimPredictionChallenge/Download/test_set.7z My upload pipe is a little small to download here first and then upload via scp ... there should be a direct way to do it?,0,None,4 ,Fri Aug 05 2011 18:43:16 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/772,/competitions/ClaimPredictionChallenge,18th /zachmayer,Model building,"As I'm building my models, I've split the training set into 2 parts: Train: April 2010-March 2011; Test: April 2011. I've dropped all but the 1st visit from the test set. Does this seem like a reasonable way to partition the data to evaluate my models?",0,None,2 ,Fri Aug 05 2011 21:03:09 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/773,/competitions/dunnhumbychallenge,None /alexkromwell,Deleted Edits,"It appears there are several ways in which an edit may be ""deleted"" from Wikipedia: the corresponding user may be blocked or deleted, the edit may be reverted, etc. Further, it appears some ""deleted"" edits are still shown by Wikipedia (with a strike through them) and others are not visible at all. My question is: which, if any, of these ""deleted"" edits count when calculating the total edits for a user in the Sep10-Jan11 range? Will all edits be counted regardless of whether they were later deleted? Or maybe all edits so long as they were not deleted before Jan11? Or maybe just the strike-through/visible ""deletions"" are counted. In short, some clarification on this would be helpful. This is a relatively minor issue, but may make the difference in the final model tweaks/performance.",1,bronze,3 ,Sat Aug 06 2011 05:53:14 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/774,/competitions/wikichallenge,None /timwangstats,MemberID match between Claims and DaysInHospital table,"1) In the CLAIMS table, by selecting those MemberIDs with Year=Y2, the members are those who filed claims in Y2 (also see the Year definition). 2) DaysInHospital_Y2 contains only those eligible members in Y2. If the above understanding is correct, then the MemberIDs in 1) should match those in 2). But this is not true. Any explanation?
Tim",0,None,9 ,Sun Aug 07 2011 04:40:54 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/775,/competitions/hhp,None /jxiesd,why some members have non-zero days in hospital but no claims in the same year,I found many of these cases. I think it is strange.,0,None,2 ,Sun Aug 07 2011 09:50:06 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/776,/competitions/hhp,5th /wmestrom,Rules for the prize winners,"Hi, I have a question about the rules of the Heritage Health Prize. The (milestone) prize winners have to submit code and documentation to reproduce the results but some things are not entirely clear to me: to what extend the results have to be identical? (for example small differences in the random number generator may give different results although they should be similar) in how much time should the results be reproduce-able? (my current best result is a mix of many models each may take minutes to hours to generate) the algorithm should produce similar results on a new dataset, this doesn't sound very realistic: I don't think there is any way to win this competition without optimizing for this specific dataset. Results on other datasets may be very bad with the given optimizations. Probably very good results can be produced by the same algorithm after some tuning but this is a process that requires a lot of knowledge about the used algorithms and (a lot of) time and patience. I hope someone from Kaggle can give an answers to these questions since these answers may have a big inpact on what I will select as my final submission... Thanks, Willem Mestrom",0,None,7 ,Mon Aug 08 2011 06:56:13 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/777,/competitions/hhp,1st /arroyito,How to keep competitive after Milestone 1?,"To qualify for the Milestone 1 prize, the qualifying algorithm will be published in this website: ""conditional winners will have 21 days from receipt of notification to document their methodology as described in Rule 12 above. Sponsor will deliver the Prediction Algorithm and documentation to the judges and also post the information on the Website for review and testing by other Entrants. ..."" This means that any team can make a derived work from the algorithm, allowing such team to qualify for the final prize with less effort. This is not fair. Am I missing something?",0,None,6 ,Tue Aug 09 2011 03:07:16 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/778,/competitions/hhp,1159th /rodinleg,Is the Leaderboard evaluated on random custome ids?,Was just wondering if the leaderboard gets evaluated on random customer ids and the correct visits or is it like the first 3000 customers?,0,None,1 Comment,Tue Aug 09 2011 12:04:34 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/779,/competitions/dunnhumbychallenge,193rd /thedoors0,Bit confused with some info,"""Note: You can select up to 5 submissions that will be used to calculate your final leaderboard score. If you do not select them, up to 5 entries will be chosen for you based on your most recent submissions. Your final score will not be based on the same exact subset data as the public leaderboard, but rather a different private data subset of your full submission. Your public score is only a rough indication of what your final score might be. 
You should choose entries that will most likely be best overall, and not necessarily just on the public subset."" This is mentioned on the submissions page. Isn't your winning decided on the basis of your best score so far? Is it like your score is some average of the best 5? Can someone please clarify? Thank you.",0,None,1 Comment,Wed Aug 10 2011 00:03:51 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/780,/competitions/dunnhumbychallenge,179th /cnie9178,A question on ClaimsTruncated and SupLOS,"It's said in the data dictionary that for ClaimsTruncated: members with truncated claims in the year prior to the main outcome are assigned a value of 1, and 0 otherwise. I assume there is a correspondence between SupLOS in the Claims table and ClaimsTruncated in the DaysInHospital table. I found something interesting in the data set. In DaysInHospital_Y3.csv, we can find this patient:
MemberID  ClaimsTruncated  DaysInHospital
18253899  1                3
According to the dictionary, member 18253899 should have ""truncated claims in the year prior to the main outcome"", i.e. 18253899 should have truncated claims in Y2. However, when I search for this patient in the claims table I find all the SupLOS values are 0:
memberid  suplos
18253899  0
(the same row repeated for every one of this member's Y2 claims)
This means none of this member's claims in Y2 in the claims table is truncated at all. Is it a discrepancy in the data or just my misunderstanding of something? One more example goes the opposite way; you can find this patient in DaysInHospital_Y3.csv:
MemberID  ClaimsTruncated  DaysInHospital
4706710   0                0
This means 4706710 does not have truncated claims in Y2. However, the corresponding claims data for her in Y2 is:
memberid  suplos
4706710   1
4706710   0
4706710   0
It shows that this patient has one truncated claim in Y2. Any advice is highly appreciated.",0,None,1 Comment,Thu Aug 11 2011 17:16:26 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/781,/competitions/hhp,472nd /olegvasilyev,"Claims Truncated: Different definition in Y1, Y2 and Y3","This question was already asked in the forum, but I could not find a clear answer. It seems that the definition of ""ClaimsTruncated"" changed every year. The ""ClaimsTruncated=1"" samples have a different distribution of the number of claims in each year. In the first year the number of claims for such samples ranges from 3, 5, 6, etc. up to ... 40, 41, 42, 43. (The numbers of samples corresponding to the last four values are: 237, 217, 116, 581.) In the second year all ""ClaimsTruncated=1"" samples have the same number of claims: 43. And in the third year all ""ClaimsTruncated=1"" samples have a number of claims = 44. As I understand it, the best explanation given or assumed so far in the forum is that year 3 had a cut-off of 44; year 2 had a cut-off of 43 (why different?); and year 1 of course had a completely different definition, and somehow the difference is related to the ""ProcedureGroup"" data. Is there any better info on this, any clarification, or definitions of ""ClaimsTruncated"" that would be easy to understand?
Thanks",0,None,7 ,Fri Aug 12 2011 02:49:46 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/782,/competitions/hhp,13th /booksiberia,"customers in test set aren't ther same as the customers in train set, correct?","so, for example, customer id'ed as customer 40 in test set has nothing to do with the customer 40 in training set. according to my reading of the information given for this particular competition. On the other hand, customer 40 in the test set is exactly the customer to predict the next visit date/visit_spend for in the entry submission, correct? Please confirm or correct me on this matter, please?",0,None,5 ,Sat Aug 13 2011 22:57:22 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/783,/competitions/dunnhumbychallenge,None /cnie9178,Can I use the data for research?,"I mean, can I use the data to publish? Definitely, it is not allowed for the Health Prize. However, I did not see relevant information on whether we can use this data set for research like publishing a paper. Thanks",0,None,6 ,Sun Aug 14 2011 22:03:33 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/784,/competitions/dunnhumbychallenge,None /andywocky,Clarifying Rule #13 for Milestone 1 - Open Questions & Issues,"@Anthony There have been several concerns raised in the forum about the impact and interpretation of Rule 13 on the contest, which states that conditional milestone winners must disclose their ""Prediction Algorithm and documentation"" to the website for competitor review and commentary? In particular, there are unanswered questions with regard to inconsistencies and/or potentially unfair advantages arising from this rule. Can you comment on the following specific items so the community has firm, consistent and realistic expectations as we approach the Milestone 1 date? Is it inconsistent, as Sali Mali pointed out in another thread, to require documentation of the winning algorithms be publicly disclosed to all competitors given Rule 20, Entrant Representations? It seems that this disclosure will encourage other competitors to use aspects of the winning Prediction Algorithm which cause violation, directly or otherwise, of (i) - (iii) and possibly (iv) of that Rule. Can you clarify that code, libraries and software specifications are *not* required to be publicly disclosed to competitors? These materials and intellectual property appear to be referenced separately from ""Prediction Algorithm and documentation."" Will Kaggle or Heritage have a moderation or appeals process for handling competitor complaints? From the winning entrant's point-of-view, they would not want to be forced through the review process to allow back-door answers to code and libraries which accelerate a competitor's integration of the winning solution. Can you comment on the spirit and fairness of the public disclosure of the Prediction Algorithm documentation and it's impact on competitiveness? In particular, if the documentation truly does meet the requirement of enabling a skilled computer science practitioner to reproduce the winning result, then this places the winning team at an unfair disadavantage: all competitors will have access to their algorithms and research, in addition to the winning algorithm. Can you provide more detailed clarification on the level of documentation required by conditional milestone winners? 
The guideline provided by the rules would cover a range of details and description spanning from ""lecture notes"" to ""detailed tutorial"" to ""whitepaper"" to ""conference paper"", etc. Can you comment on the reproducibility requirement? For example, it is possible to construct algorithms with stochastic elements that may not be precisely reproducible, even using the same random seed -- is it sufficient for these algorithms to reproduce the submission approximately? What if they don't reproduce exactly, or reproduce at a prediction accuracy that is worse than the submission score, possibly worse than other competitor submissions? Thanks, Andy",2,bronze,33 ,Mon Aug 15 2011 04:49:44 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/785,/competitions/hhp,265th /rafisher1,Validation set sampling strategy,"How was the validation set generated? Diederik mentions it spans Jan 01 to Dec 07 - were users sampled and then their complete revision histories from this time window pulled, without any additional filters? Then do the validation_solutions counts correspond to the 5 months following Dec 07? Why are there many more user_ids in validation_solutions than in validation? Thanks.",1,bronze,9 ,Mon Aug 15 2011 09:14:42 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/786,/competitions/wikichallenge,43rd /uriblass,can the length of stay for claims overlap?,I ask because I see that there is a person who has 17 claims with 4-8 weeks length of stay in one year,0,None,1 Comment,Mon Aug 15 2011 19:58:43 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/787,/competitions/hhp,340th /jxiesd,How will the final scores be evaluated,"We can select up to 5 entries as final to be evaluated. Is the final score the average of the 5 selected entries, or the highest/lowest among the 5 entries? Can the Organizer specify this, or is there already a rule for this? Thanks",0,None,1 Comment,Tue Aug 16 2011 02:44:12 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/788,/competitions/hhp,5th /pgraff,Image processing,"Hi everyone, Now that the competition is just about over, I am curious as to how people processed the images to remove or minimise the noise effects. I tried simple filters in MATLAB and a simple implementation of the CLEAN algorithm ( [Link]:http://web.njit.edu/~gary/728/Lecture7.html), but only managed to obtain results as good as on the original images at best. Regards, Phil",1,None,12 ,Thu Aug 18 2011 01:52:34 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/789,/competitions/mdm,36th /ahassaine,Are the results correct??,"Hello, Is it normal that the wall has moved from about 0.015 to about 0.02?",0,None,31 ,Thu Aug 18 2011 02:03:52 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/790,/competitions/mdm,3rd /chrisraimondi,Anyway we can keep the leaderboard results archived?,"Well, I know we CAN in theory :) but ... I noticed the Dark Matter competition ended - and there was some talk about image_doctor's come-from-behind victory. However - I do not see the 30% leaderboard. I believe Kaggle used to show both for completed contests. I know it might be confusing to some, but I thought/think it was/is useful to have both.
I think it is especially useful to show overfitting and other issues that can occur (it gives people who aren't first on the leaderboard some hope, in that the leaderboard TRULY doesn't necessarily reflect the true results). I also think it is valuable/interesting for historical purposes. If confusion is the issue - you could put a link at the top of the ""Leaderboard"" and link to the ""Results"" page. Just a suggestion!",0,None,2 ,Thu Aug 18 2011 04:33:07 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/791,None,None /astrotom,Analysing Results ,"Dear All, Thank you all for an exciting and enlightening experience in this competition. In designing this competition we had to be careful to make it accessible, but such that it couldn't be overfitted, and so that the algorithms developed would be useful on real astronomical imaging. In real data we want algorithms that can accurately measure the ellipticities of galaxies, and this is the metric on which the leaderboard was scored. There is a secondary effect in that for real data dark matter acts (to first order on small areas) to add a very small mean value to the ellipticities of a population of galaxies (called ""shear"") - the more dark matter, the larger the mean. In real data we do not know what this is, and what we need are algorithms that can accurately determine this by measuring the ellipticities of galaxies without any assumption about it; we have no leaderboard feedback on real data. To test the ability of algorithms to do this, the smallest change we could make was to simulate this scenario in the challenge by having a zero mean for the public data and a non-zero mean in the private data. We could not reveal this during the challenge, unfortunately, but it was of paramount importance for the usability of the algorithms. This explains some of the change in the leaderboard. In post-challenge analysis of the results we are seeing that some methods have performed remarkably well in this secondary aspect, and we will be in contact with you. A further reason for the change in the leaderboard was the ""pick 5"" rule that Kaggle employs at the end of competitions. In scenarios where the public and private data are different this can cause discrepancies; this was an unforeseen issue and something that will be addressed in future Kaggle challenges. In fact DeepZot did have the best overall score but unfortunately did not select it in their chosen 5. To remedy this we would like in this case to also invite DeepZot to the workshop with exactly the same prize. There have been some notable and active members of the Mapping Dark Matter community. As a ""runners-up/notable performance prize"" we will be emailing you personally to invite you to the conference to talk to us about your ideas, or in the case that you cannot make it, we would like to develop your methods and ideas over email or in these forums with an aim to applying them to real astronomical data. Finally, there will be a scientific article written on the results of this challenge. The more information we have about methods (which worked and why, which failed and why) the better.
So please send as much information as you can on your methods to great10helpdesk@gmail.com or post on this forum.",2,bronze,36 ,Thu Aug 18 2011 20:23:55 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/792,/competitions/mdm,None /stephennerhodes,Still bemused,"I have held off commenting until the competition closed, but after reading everyone's contributions I still have qualms about the mathematical justification for the approach dictated. The assumption seems to be that the two parameters ε1 and ε2 are statistically independent. In the general case that is true, but in this specific instance we have pixelation, which I think completely invalidates this assumption. If we consider a bounding box having an aspect ratio X, with an ellipse just touching each of the sides, then a bit of math shows that the ratio a/b is given by $a/b = \frac{\sqrt{(\cos^2\theta - X^2 + X^2\cos^2\theta)(\cos^2\theta + X^2\cos^2\theta - 1)}}{\cos^2\theta - X^2 + X^2\cos^2\theta}$. We are only interested in the region where -π/2 < θ < π/2 and the positive root. It can be clearly seen that, depending upon the value of X, this equation has singularities and the result can be complex. Now considering a square field of pixels, say 4x4, the value X must apparently be drawn from the set $X_i \in \{1/4, 1/2, 3/4, 1, 4/3, 2, 4\}$. A similar set exists for all such arrays. X is not single-valued as the pixels are finite in size, so for a given angle θ there is some spread in a/b which becomes very non-linear for larger θ. These constraints mean that a/b and θ are linked for any image by a non-linear relationship controlled by a finite and small set of bounding boxes. To my mind, employing any form of regression analysis must be questioned, as there are large areas of the result plane which do not exist and others that are multi-populated. However, if the problem is restated to ask how we determine the lensing effect from such images, I suggest a different approach might work. Each image can be categorized into an aspect-ratio bin (such as 2-4 in the above example), generating a ""histogram-like"" structure, the theoretical content of which can be computed precisely by integration of the above function, choosing limits to ensure real results. Now when the observation of the ratio a/b is changed by lensing, this population function is also changed, as an offset constant is added into the integrations. By comparing the observed population function with the theoretical one, the lensing element can then be deduced. As this lensing constant is truly independent, regression methods now become appropriate. In a nutshell, I propose the problem might be handled by reducing the images into aspect-ratio ""bins"" (maybe with a modicum of scaling to normalize things) and fitting the result against the theoretical model to extract the lensing coefficient. Note there is symmetry in that the count in each bin matches its conjugate, e.g. 1/4 vs 4 in the above example, and this provides an orthogonal measure. Stephenne",0,None,3 ,Fri Aug 19 2011 11:16:25 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/794,/competitions/mdm,None /ahassaine,Piles of data for data lovers,"Hi all, I'd like to post here all my predictors.
I combined several methods: some are inspired by my PhD thesis (soundtrack restoration), others are taken from my current research (signature verification and writer identification), and there are also some methods which were developed specifically for this problem. Each method comes with hundreds of predictors. So in total, there are thousands of predictors. I only combined them using a linear fit, and I am sure that the predictive power of these predictors goes far beyond what I obtained. I contacted Eu Jin Lok 5 days before the end of the competition, and because of time issues we could not improve the results that much. For those of you who might be interested, you can download these predictors from the following links: [Link]:http://goo.gl/JbEBa [Link]:http://goo.gl/GkojD If you come up with interesting ways of combining them, I'll be happy to hear from you. Thanks, Ali",0,None,3 ,Fri Aug 19 2011 13:59:25 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/795,/competitions/mdm,3rd /jeffmoser,Solution,"Due to all of the interest in this competition, we've decided to make the solution public. I've attached two files to this post: mdm_solution_with_mappings.xlsx - This is an Excel spreadsheet that has a trove of information about the competition. It shows how I randomly mapped Tom's files to the files that were available for download. The files that you've been working with are named in columns G (GalaxyMappedName) and J (StarMappedName). Note how column E indicates the ""pairing"" that was present for the private dataset columns. There's a bit more going on, but notice the mean values for the paired galaxies. Column ""L"" indicates if the galaxy was in the public or private test set. Columns M and N indicate the actual solution values. Columns ""O"" and ""P"" are the example submission values. Note that you can paste in your own submission values and the calculated public and private RMSE will appear in U4 and U5 respectively. Finally, there are MD5 hashes of everything to make sure nothing got tampered with in the course of mapping files. mdm_solution.csv is a much more simplified version of the above and just represents the perfect submission. I hope that this helps and leads to even more interesting discussions! UPDATE: These solution files do not take into account the rescore. See the updated solution files later on in this thread. [Link]:https://storage.googleapis.com/kaggle-forum-message-attachments/1200/mdm_solution_with_mappings.xlsx [Link]:https://storage.googleapis.com/kaggle-forum-message-attachments/1201/mdm_solution.csv",0,None,7 ,Fri Aug 19 2011 17:07:17 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/796,/competitions/mdm,None /mlearn,Test set size,"I was wondering why the test set in this competition is so small? The leaderboard is being done on 3,000 examples (30% of 10,000). With the current top scores of around 17%, the 95% confidence interval on an algorithm's underlying accuracy is about 2.7% in width (formula from [Link]:http://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval). This interval covers all the entries from first down to twenty-third on the leaderboard. If the algorithms doing the entries are uncorrelated then we could see a pretty radical overhaul of the leaderboard when the competition closes. However, I would guess the entries are correlated, so perhaps things aren't so bad.
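For reference, the interval quoted above can be reproduced with the normal approximation from the linked Wikipedia page; a quick R check, using the 0.17 accuracy and 3,000-example figures from the post:

```r
# Normal-approximation binomial confidence interval for the figures above.
p <- 0.17   # current top leaderboard accuracy
n <- 3000   # public leaderboard size (30% of 10,000)
half_width <- 1.96 * sqrt(p * (1 - p) / n)
c(lower = p - half_width, upper = p + half_width, width = 2 * half_width)
#  lower   upper   width
# 0.1566  0.1834  0.0269   <- the ~2.7% width quoted above
```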
Have score changes in previous competitions been studied - what type of shift might we be expecting? Thanks!",0,None,4 ,Fri Aug 19 2011 23:47:03 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/797,/competitions/dunnhumbychallenge,8th /mkwan7977,Request for a submission API,"I was wondering whether it would be possible to create a submission API so that we could submit from software? It's fairly common for my team to pre-generate a few solutions then take a break. But we still have to log in every day to submit them. If there were an API we could just set up a cron job that uploads the solutions and e-mails us the result. Failing that, could you provide the ability to upload multiple solutions and auto-submit one every 24 hours?",1,bronze,28 ,Sat Aug 20 2011 12:07:46 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/798,/competitions/hhp,17th /jeffmoser,All submissions have been rescored which affected the final/private score,"After all of the forum discussion and Tom's private analysis of the results, it seems there was an error in the private test scores. Important note: this error only affected the private scores and did not affect the public leaderboard results during the competition. The specific error was that the solution e1 for galaxies/stars in the private leaderboard set was off by 0.02. Thus, the new solution e1 is the old solution e1 - 0.02. It seems there was a sign change somewhere along the line (a mean e1 of 0.01 in the catalogue should have been -0.01). After learning about this error, I updated the solution and rescored all 819 submissions. The current leaderboard shows the results of the rescore. In addition, further ""after the deadline"" submissions from now on will use the updated solution file. Sorry about the confusion this has caused. Additional details will follow.",3,bronze,14 ,Sat Aug 20 2011 19:07:44 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/799,/competitions/mdm,None /jhoward,Come see us at KDD!,"Hey gang, if any of you are going to KDD next week, be sure to come along to session K6 on Wednesday, where 4 of the HPN prize advisory group will be discussing ""Lessons learned from contests in data mining"". (The 4 members are Charles Elkan, Yehuda Koren, Claudia Perlich, and me; Tie-Yan Liu will also be on the panel, but is not part of the HPN advisory group). I will of course be thoroughly outclassed by the rest of the group (Charles was the first-ever KDD winner, Claudia is the only 3-time back-to-back KDD winner, and Yehuda is a Netflix prize winner!) so I plan to mainly keep quiet and listen... Also, be sure to come and say hi if you're at the conference. My email address is in the same format as everyone else's at Kaggle: firstname.lastname@kaggle.com - feel free to shoot me a message if you'd like to catch up.",0,None,1 Comment,Sun Aug 21 2011 02:19:18 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/800,/competitions/hhp,None /wbhumanoid,Input string was not in a correct format.,"Hi, I'm trying to submit a csv file for the dunnhumby challenge and I'm getting the error ""Input string was not in a correct format."" Anyone know why I might be getting this?
Is it the file or something else?",0,None,1 Comment,Sun Aug 21 2011 14:08:27 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/801,None,None /del=b7189cf475020b66,Sample data file,I don't understand why the days in hospital field (that we need to predict) in the sample data has continuous values instead of whole numbers from 0 to 15. Is it telling the probability?,0,None,2 ,Sun Aug 21 2011 14:10:15 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/803,/competitions/hhp,None /nitin14651,Generalized specialty,"Hi, Can anyone please explain whether the ""Generalized specialty"" listed in the Claims table is associated with the ""Charlson Index"", or is it only the Provider's specialty? The reference material doesn't indicate this clearly. Thanks,",0,None,1 Comment,Sun Aug 21 2011 20:19:14 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/804,/competitions/hhp,1261st /dansbecker,Project Management software for Data Analysis,"Are there any project management tools for data analysis (something that integrates a version control system, keeps track of relationships between data files and source code, etc.)? While I'm at it, what larger data analysis community forums are there to ask this sort of question?",0,None,10 ,Mon Aug 22 2011 00:50:01 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/805,/competitions/hhp,2nd /andywocky,Post-Milestone Mixer,"I've enjoyed this contest, and I was thinking I'd like to get to know other competitors. Would anyone else be interested in some sort of online networking chat session or virtual event after the milestone? The idea would just be to get to know who we are as a community, share stories, discover mutual interests, network professionally, etc. If this sounds like something you'd consider attending, please Thank this post, or respond in this thread. If there are enough responses we can talk about organizing the event.",0,None,4 ,Mon Aug 22 2011 23:13:26 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/806,/competitions/hhp,265th /sfin2015,You should not predict worse than my first submission,"Hello all, My first submission is a constant prediction (it is not the mean or median) for all records. The RMSLE for my method is 0.486600, whereas the best score currently is 0.456575. I find it a bit ""funny"" that there are over 100 teams scoring below ""my non-public ranking 258"" (and I bet some of those are more complicated algorithms). This is because I find my method to be somewhat of a ""starting point""/the worst possible one that one can do (that is, after a bit of thinking). My next move will be to convert my algorithm from a constant prediction to a conditional prediction (that is, to utilize the given variables); however, I think I will do a discretized prediction. But I am not sure I will manage to do this by the coming deadline. I have two questions in my mind: - Is anyone using prediction discretization for the daysinhospital value 0 (as it is pretty common)? I mean that one would, for example, first predict whether daysinhospital is zero or non-zero (using logistic regression or an MLP etc.), and if it is zero then assign the value 0, and otherwise assign the continuous predicted value. - Is anyone using principal component analysis or independent component analysis or any projection methods?
(I am thinking of the Year1, Year2, Year3 ""time series"") Finally, happy predictions & good luck in the challenge! :)",0,None,6 ,Tue Aug 23 2011 18:47:24 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/807,/competitions/hhp,757th /nogginhead,Is there a limit on the number of submissions?,"Otherwise, I believe you could game the evaluations to increase your score with no meaningful algorithm.",0,None,1 Comment,Wed Aug 24 2011 00:17:37 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/808,/competitions/dunnhumbychallenge,None /teamrender,error regarding reverts ?,"Hi, I have two questions regarding the ""reverts"" in the dataset: 1. How do you define a revert? This is stated nowhere explicitly. As I read between the lines, it seems to me that you defined it as any revision that was made between two other, identical revisions (the first one being the one reverted TO, the second one the REVERTER). Is this interpretation correct? 2. If the above is your definition of a revert (and even otherwise), I have found an inconsistency when actually comparing the diffs of the revisions at wikipedia.org: why is the article content of revision_id in the dataset identical to the article content of reverted_revision_id (if reverted = 1, of course)? This makes no sense, as revision_id should have been reverted by some other edit TO reverted_revision_id, hence they can't be identical or there would be nothing to revert. It rather seems that the edits you listed with revert = 1 are the reverting edits, not the reverted ones, and reverted_revision_id is where THEY revert to. This is something completely different from what you describe in the instructions.",1,bronze,2 ,Wed Aug 24 2011 15:34:06 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/809,/competitions/wikichallenge,38th /dpmcna,Text fields of the dataset,"Hey everyone, This competition has text fields present in the dataset as part of the titles and comments files. This makes it quite different from other Kaggle competitions. Would anyone be willing to share their general approach to handling these? I'd be interested to know whether people are using or ignoring them. Thanks!",0,None,2 ,Thu Aug 25 2011 08:30:06 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/810,/competitions/wikichallenge,33rd /kymhorsell1,submission fails with wrong number of lines,"Something seems to have changed with the submission process. It now keeps saying ""your submission [does not have] exactly 10,000 non-header lines"". My output format has not changed. Even the example csv file I uploaded before gets the same problem.",0,None,2 ,Fri Aug 26 2011 04:34:29 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/811,/competitions/dunnhumbychallenge,28th /pckben,Determine which claims are responsible for DIH?,"Dear all, We've just joined the competition recently and have a question on finding the claims associated with one's days in hospital in the same year. We've searched the forum a bit and understood that records having PlaceSvc='Inpatient Hospital', or records having PlaceSvc='Urgent Care' that use the emergency room facility, are those that should sum up into DIH, provided that the LOS values are given.
It seems right most of the time, but not in some cases - for example member 10009391, who has 2 emergency Urgent Care records in year 2 but whose y2.DIH=0. Another special case is id=45539346, who has a whopping 32 'Inpatient Hospital' records in year 2 but y2.DIH=0. Are these inaccurate data, or is our stated understanding wrong? Thanks, Ben",0,None,1 Comment,Fri Aug 26 2011 08:43:59 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/812,/competitions/hhp,1079th /byang1,Hidden Rules for Team Mergers ?,"In section 4 of the rules, it says ""In Sponsor's discretion, Teams may be permitted to merge."". I'd like to know what the current rules or factors are, if any, that would prevent a team merge. Through contact with other teams in this contest, I was surprised to learn that there's a hidden, previously unpublicized rule which has prevented at least one team merger: the total number of submissions of the merged team cannot exceed the number of submissions available since the start of the contest, in order to be ""fair"" to other, unmerged teams. If this is true, I think it is a bad rule and should be overturned. I don't see any unfairness here, because every team is free to try to merge with others. However, the limit based on submission count and the previously unpublicized nature mean this rule is unfair to those who have made many submissions. It essentially punishes them for their past efforts and hard work. For example, in the last 5 or 6 weeks, my goal was to make a submission every day, which I was largely able to meet. If I had known my submissions would prevent me from joining forces with other teams later, I'd have done things very differently. This is a case where the contest organizers should stay out of the way and let us organize ourselves as we see fit. And I'd like to know if there are other rules on the organizers' minds waiting to strike.",1,bronze,5 ,Fri Aug 26 2011 20:00:33 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/815,/competitions/hhp,2nd /columbus,Need explanation of RMSLE,"I am trying to understand the evaluation methodology; I ran some numbers in Wolfram Alpha and get different results. For example, actual edits 0, predicted 1 (site number 0.48): Wolfram Alpha gives 0.69. abs(log(1 + 1) - log(0 + 1)) Actual edits 0, predicted 0.5 (site number 0.16): Wolfram Alpha gives 0.40. abs(log(0.5 + 1) - log(0 + 1)) Can you help me understand the methodology of evaluation?",0,None,7 ,Sat Aug 27 2011 00:07:56 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/816,/competitions/wikichallenge,27th /sirguessalot,"@Kaggle: request to preserve formatting of ""technique description"" field","When making submissions, I like to use specific formatting in the ""Description of your technique"" field. Very frequently, it's a list of numbered items. Unfortunately, when it's presented on the Submissions list page, the formatting is stripped and it's all mushed together. I'd like to request that the formatting be preserved (i.e. carriage returns are not stripped) so that when we have more than a few submissions, it's easier to read the individual entries.
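One plausible reconciliation of the numbers in the RMSLE question above (an assumption, not an official answer): the quoted site figures look like squared log errors, i.e. the quantity inside RMSLE's square root, since 0.69² ≈ 0.48 and 0.40² ≈ 0.16. A quick R check:

```r
# Squared log error for a single prediction; squaring the Wolfram Alpha
# values reproduces the quoted "site numbers" (0.48 and 0.16).
sq_log_error <- function(actual, predicted) {
  (log(predicted + 1) - log(actual + 1))^2
}
sq_log_error(0, 1)    # 0.4805 ~ 0.48; its square root is 0.6931 ~ 0.69
sq_log_error(0, 0.5)  # 0.1644 ~ 0.16; its square root is 0.4055 ~ 0.40

# Over a full set of predictions the metric itself would then be:
rmsle <- function(actual, predicted) {
  sqrt(mean((log(predicted + 1) - log(actual + 1))^2))
}
```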
Thanks for your consideration.",0,None,2 ,Sat Aug 27 2011 20:09:54 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/817,/competitions/hhp,37th /kristin,15% for training set but 5% when submitted...,"I have developed an algorithm that gives 15% correct predictions using around 40000 random customers in the training set. But when submitted I only get 5% correct predictions. Does anyone else have the same problem? What do you think might have gone wrong? Are the statistics of the customers in the test set completely different from the training set? Best, Kristin",0,None,3 ,Sun Aug 28 2011 08:58:10 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/818,/competitions/dunnhumbychallenge,52nd /cswdyer,What are your tools of choice?,"A mention of Kaggle in New Scientist has led me here. I'm new to this and have an engineering rather than a programming background, but it looks like an interesting hobby. I was hence wondering what sort of toolkit is needed for this (and other) challenges? I note elsewhere that one of the competitors in the Ford challenge was analysing the data in Excel to get an understanding of it, but what are you using (MATLAB / Mathematica etc.)? (Excel struggles to import this volume of data!) SQL / MySQL seems a good option for storage, though I note that during the Netflix prize many were suggesting that databases weren't of much use due to the size of the dataset, and hence they were programming in such a way as to store all the data in memory. Then what are the options for developing the prediction algorithms & methodology - are you using software like R & SPSS etc., or straight programming (Perl / C#)? I look forward to hearing what weapons you have in your armoury! Thanks, cswd P.S. Apologies if these are incredibly basic newbie questions, but I've got to start somewhere, and understanding what I should learn is the first step...",1,bronze,5 ,Sun Aug 28 2011 20:28:17 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/819,/competitions/wikichallenge,77th /jeffmoser,Round 1 Milestone Submission Selection Info,"This is just a reminder that the first round milestone is fast approaching. Per [Link]:http://www.heritagehealthprize.com/c/hhp/Details/Timeline, submissions must be received on or before August 31, 2011 06:59:59 UTC to be considered for Round 1 Milestone prizes. In case of any confusion, the current UTC time is always listed at the bottom of the leaderboard, so you should be able to see how much time is left. You will be able to select up to one (1) submission to be considered for this milestone. The milestone prizes will be ranked by the private leaderboard score (i.e. the score on the remaining 70% of the data), so it is possible that the private score ranking will be different from the public score ranking. You can make your submission selection via the ""Submissions"" page at the top. Previously the site allowed you to select up to 5 submissions per the final prize guidelines, but this has been changed to a maximum of 1 submission to avoid confusion with the milestone prizes. If you had previously selected more than 1 submission (27 teams did so), then your selected submission with the top public score was kept and all others were deselected automatically.
If you do not make a selection by the deadline, your submission with the best public leaderboard score that was submitted on or before the Round 1 milestone deadline will be chosen on your behalf. NOTE: If you are one of the 559 players who are on a team that has made a submission, then you should have already received a reminder email about this upcoming deadline. The email came out around Mon, 29 Aug 2011 20:55:00 UTC.",1,bronze,6 ,Mon Aug 29 2011 22:18:55 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/820,/competitions/hhp,None /antgoldbloom,Milestone prize announcement,"Just to keep you all in the loop, the plan is to announce the milestone prize winners at O'Reilly's Strataconf ( [Link]:http://strataconf.com/public/content/home). Will let you know the exact date as soon as we're told.",0,None,23 ,Wed Aug 31 2011 07:57:03 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/821,/competitions/hhp,None /liveflow,Real numbers_Submission Error,"Hi, recently I got a submission error which says the predicted numbers have to be real numbers from 0 to 15. But I already checked these numbers; I have a precision of 8 digits and the data are between 0 and 15. So please help me with this. The submission was at the deadline for milestone 1. Thanks.",0,None,5 ,Wed Aug 31 2011 19:24:43 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/822,/competitions/hhp,1142nd /jason15,Data Years,"I am looking to utilize external economic data in my analysis, but am unable to find the years of the data as posted (i.e. 1999 vs Y1). What year is Y1? Regards",0,None,3 ,Wed Aug 31 2011 20:06:55 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/823,/competitions/hhp,None /jxiesd,Is the grand prize threshold (0.4) reachable?,"I've started thinking about this. The best result so far on the leaderboard is 0.456; you need another 10+% improvement to reach the bar. This is already a higher requirement than the Netflix challenge. I bet it will need many team mergers in order to reach the bar. My best results on the training set have never even reached 0.4. We need some breakthroughs...",0,None,6 ,Wed Aug 31 2011 20:13:14 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/824,/competitions/hhp,5th /pwfrey42,Questions for Sponsor,Why are records with missing data included in the training set but not in the test set? Is there some reason why the contest does not include 2010 data?,0,None,3 ,Fri Sep 02 2011 17:49:28 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/826,/competitions/ClaimPredictionChallenge,4th /jeffmoser,Reliving the leaderboards of the past,"I like the leaderboard a lot. It's sort of like a competition's pulse. One of the greatest strengths of the leaderboard is that it shows recent trends, which is great for up-to-the-minute coverage of a competition. The optional ""delta"" parameter lets you set the compare window for the arrow movement indicators (which defaults to 1 week for an established competition).
For example, to see movement indicators for the past day, you can go to: [Link]:http://www.heritagehealthprize.com/c/hhp/Leaderboard?delta=1d Or you could set it to "" [Link]:http://www.heritagehealthprize.com/c/hhp/Leaderboard?delta=6h"" for the past 6 hours or "" [Link]:http://www.heritagehealthprize.com/c/hhp/Leaderboard?delta=2w"" for the past 2 weeks, or "" [Link]:http://www.heritagehealthprize.com/c/hhp/Leaderboard?delta=1m"" for the past month. However, in addition to the arrow indicators, I also find it interesting to look back and see what a competition was like in the past. For example, on the recently completed [Link]:http://www.kaggle.com/c/mdm/ competition, it was interesting at the start to see people explore the data with some relatively weak scores early on only to see dramatic improvements later on. Previously, all of this action was hidden as you only saw the final private leaderboard results as of the last second of the competition. This week I added support for viewing the public leaderboard for completed competitions. For example, [Link]:http://www.kaggle.com/c/mdm/Leaderboard/public Shows the public leaderboard as it was at the close of the competition. However, more interesting is that you can now add an ""asOf"" parameter and specify a date to see the leaderboard as it was on UTC midnight for that date. For example: [Link]:http://www.kaggle.com/c/mdm/Leaderboard/public?asOf=2011-5-24 Shows that the leaderboard was quite bare early on, but soon saw a dramatic improvement with the ""Fire On Wires"" team 3 days later: [Link]:http://www.kaggle.com/c/mdm/Leaderboard/public?asOf=2011-5-27 And then Martin came on the scene 2 days later at the top: [Link]:http://www.kaggle.com/c/mdm/Leaderboard/public?asOf=2011-5-29 where he stayed for over a month and a half only to be overtaken by Ali: [Link]:http://www.kaggle.com/c/mdm/Leaderboard/public?asOf=2011-7-16 which stood until the DeepZot team broke the 0.015 barrier a few weeks later: [Link]:http://www.kaggle.com/c/mdm/Leaderboard/public?asOf=2011-8-11 This works for both the public leaderboard and private leaderboard (for competitions that have finished anyways). For example, we can see that on the day that the DeepZot team broke the 0.015 barrier publicly, they didn't beat it on the private leaderboard: [Link]:http://www.kaggle.com/c/mdm/Leaderboard?asOf=2011-8-11 If a significant event happened during a day, you can get the leaderboard at a specific moment in time by adding the time to the parameter. For example, here is the Heritage Health Prize a few hours after we started accepting submissions when there were only 10 teams on the leaderboard: [Link]:http://www.heritagehealthprize.com/c/hhp/Leaderboard?asOf=2011-05-05%2014:00 This is especially important if you wanted to capture when you make it to the top of the leaderboard for only a brief time window during a day :) It's my hope that this feature might lead to some great memories of what it was like during the rush of these competitions. Please feel free to share any special moments you find using this feature.",9,None,8 ,Fri Sep 02 2011 18:33:00 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/827,None,None /jeffmoser,New and Improved Submission Processor,"You spend a lot of time creating your submissions and we want to make sure they get the care and attention they need once they arrive at our servers.
I’m happy to announce that as of today your submissions are now being processed with a submission processor that I rewrote from scratch to specifically address a lot of concerns that have come up over the past few months. It’s important to realize that all of these changes affected the parsing and validation of your submission and not the mathematics of the evaluation algorithms themselves. Thus, unless the old submission processor made a mistake in how it interpreted your submission, there is no need to resubmit your old submissions. Our old submission processing code was a bit harsh. Since I wrote it, I’ll be first in line to point out some of its flaws:
Bad error messages – when there was an error with your submission, you often got a very obscure error message that didn’t help you fix your submission or at the very least identify exactly where the problem was located. Additionally, if there were multiple issues with your submission, you only found out about them one at a time with each successive submission.
Alternate row orders were not allowed – Our example competition submissions often included row identifiers (i.e. “MemberID”). Your submissions had to match the order of the example submission exactly, even if the example’s row identifier wasn’t in a particular order (i.e. sorted ascending or descending). This led to some justified frustration, especially among newcomers who would put their rows in a more convenient order (such as sorted ascending) only to find out that their score was particularly bad. You didn’t receive any error message telling you your rows were in the “wrong” order; your only clue that something was wrong was a bad score. Often you only found out about this through forum posts.
Alternate column orders were not allowed – You work with a wide variety of tools. When you went to write your submission file, the columns were sometimes in a different order than what the submission processor expected. Again, you often only found out that there was a problem by receiving a bad score. You didn’t receive any indication that your columns were in the “wrong” order.
Lossy Storage – As indicated above, any data that was outside of the prediction column(s) was ignored. When we saved your submission on our server, we deleted all other columns and headers. If you ever tried to download one of your previous submissions from our site, you’d notice that all of your other columns and all of your headers were gone. This made it particularly difficult to detect if you put your rows in the “wrong” order.
Poor handling of different data types – Internally, all of the data had to be stored as floating point numbers. Competitions that included a date, such as the Dunnhumby Shopping Challenge, had to have that converted to a number before it could be used. If you ever downloaded one of your previous submissions, you’d see huge numbers where your dates used to be. In addition, competitions could only have a single data validator for the entire competition; this made data validation a challenge in competitions that had two different data types such as a date and a dollar spend amount.
Silent Suffering – Perhaps worst of all, you often suffered in silence if your submission wasn’t in the exact format that we expected. Unless you contacted us, we often didn’t know that you were experiencing any trouble with the process. I felt awful when I learned that people tried many times to submit to a competition only to continually run into errors.
I felt even worse for the people who ran into problems and gave up without letting us know. As mentioned above, we had no real way of knowing how many people were experiencing problems with their submissions. Due to the above problems (and several others), I knew I had to fix it. After some analysis, I realized that I would have to rewrite the vast majority of it to really fix the underlying issues. In addition, while rewriting a lot of the code I added some features that I have wanted to add for some time. Here are some highlights of the new submission processor:
Vastly improved error messages – If there is a problem with your submission, you will now get a detailed error message that describes the exact problem (often including a file line number and column). In addition, if there are multiple errors with your submission, you’ll get several at once so that you don’t have to re-submit only to find out you have more. I want error messages to be as helpful as possible. If a particular error message is not helpful, I will update it based on feedback.
Documented assumptions and warnings – If the processor makes a non-trivial assumption about your submission, it will make a note about it. These notes will be visible in the competition’s “Submissions” page for that particular submission. I will review feedback to see if I need to add more warnings and assumption messages.
All submissions are logged – Previously, if there was an error with your submission, it was often deleted after an error was reported. Now, all submissions are kept, even if they generated an error. This allows Kaggle administrators like me to proactively investigate issues, even if you didn’t report them. This also provides you a way to back up your submissions on our server, even if they had issues.
Lossless Storage – Your submissions are now stored exactly as you gave them to us: bit for bit. The only thing we do with it is compress it into a “.zip” file to make it slightly more compact. This should be helpful for you when you want to review one of your previous submissions. In addition, this gives Kaggle administrators much more ability to diagnose any issues with your submissions. Previously, you would have to email your actual submission to us for further investigation; this is no longer needed.
Enhanced sniffing – The new submission parser goes out of its way to try to understand the structure of your submission. It will try to figure out what file format it’s in, whether or not you included a row header, what order you put the columns in, whether you compressed your submission, and several other things.
Compressed Submissions – You can optionally compress your entry with ZIP or GZip compression. The processor detects the compression based off the “.zip” and “.gz” file extensions respectively. In addition, your submission only needs to have, at a minimum, the required prediction columns. Thus, if a competition example submission has a row ID column and then a prediction column, you will only have to submit the values in the prediction column (you don’t even need to include a header). This was partially implemented before (LINK), but it’s been improved in the new processor.
Flexible row orders – If the parser can determine which column in your submission is the row id (i.e. you include a matching column header that indicates this), then the processor will sort your submission rows to match that of the solution. This means you can put your rows in any order you want.
Flexible column orders – The parser now tries to understand each of your columns and map them to the corresponding column in the solution. Due to compressed submission support, not all of the columns in the example submission might be present in your submission; only the prediction columns will be required. In the event that you put your prediction columns in a different order than the submission, be sure that your column header indicates this and the processor will correctly read from the appropriate column when calculating your score.
Multiple validators – Each column can now optionally have its own data validator. This is particularly helpful in competitions like the Dunnhumby Shopping Challenge that have a visit_date and a visit_spend amount. Now each column can have separate meaningful validators to let you know if your values are out of the expected ranges.
Rebuilt legacy submissions – Because the old submission storage used to delete data from non-essential columns, downloading your previous submissions was often a confusing experience. Now that submissions are stored losslessly, I went back and rebuilt every one of your submissions to make it look exactly how the submission processor understood it. You’ll now see the column headers and row identifiers of how it was processed. This should be helpful in investigating why an older submission might have scored poorly.
These changes took quite a bit of time to implement. Rebuilding legacy submissions alone took several hours of batch processing that scanned many gigabytes worth of submissions. In addition, a lot of this code is brand new and might have some bugs. Please contact me by emailing “support at kaggle.com” in case one of your legacy Kaggle 2.0 (post March 30th) submissions was missed or you experience trouble and I’ll work to get it fixed quickly. I will also be reviewing submissions for errors now that we have that ability. It’s my hope that the new submission processor is far more robust ( [Link]:http://en.wikipedia.org/wiki/Robustness_principle ) and gives each one of your submissions the care and love it deserves.",2,bronze,2 ,Sun Sep 04 2011 00:20:45 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/829,None,None /seeker66,regarding execution time,"Just checking: if any of you are using R for this competition, approximately how long does model building take for you? I bumped up the memory but it's still taking quite long...",0,None,1 Comment,Sun Sep 04 2011 07:47:17 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/832,/competitions/dunnhumbychallenge,229th /seeker66,Leaderboard evaluation metric,"Hi, I see the leaderboard evaluation metric says: % correct entries. Does it correspond to the % of correctly predicted visit dates? I am a bit confused. I was under the impression that since we are required to predict both the next visit date and the visit spend amount, the leaderboard evaluation metric would incorporate both the % of correctly predicted visit dates and the accuracy of the spend amount? Could someone comment on this? Thanks!",0,None,2 ,Sun Sep 04 2011 18:34:07 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/833,/competitions/dunnhumbychallenge,229th /zbicyclist,What is the date 2011-06-19?,"If the data end on March 31, why are there records in the training set which seem to be newer?
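Given the behaviors described in the submission-processor post above (header and row-id sniffing, flexible column order, optional compression), a well-formed submission is easy to produce from R. A hedged sketch: the column names here are illustrative, not an actual competition schema, and zip() shells out to an external zip program, so its availability is an assumption.

# hypothetical member ids and predictions
sub <- data.frame(MemberID = c(101, 102, 103),
                  DaysInHospital = c(0.4, 1.2, 0.0))
write.csv(sub, "submission.csv", row.names = FALSE)  # header + row id column
zip("submission.zip", "submission.csv")              # optional compression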
A simple example is records 112 through 126 in the data set, which show for respondent 2 trips that seem to occur in April, May and June. Record 126 is 2,2011-06-19,41.00. Obviously I'm missing something obvious here, but what is it?",0,None,3 ,Mon Sep 05 2011 04:56:49 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/834,/competitions/dunnhumbychallenge,None /texane,question about the one column compressed submission format,"Hi, Could you make explicit which column is used in the single-column submission format? Is the single column the AMOUNT or the ROW_ID? I re-read the rules, but what you mean by ""prediction"" is ambiguous; it could be either ROW_ID or AMOUNT, depending on how your script interprets the submission. In the example you give (compressed_entry.csv.zip), it seems to be the ROW_ID... is that right? Thanks very much for clarifying, f.",0,None,5 ,Mon Sep 05 2011 18:45:07 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/835,/competitions/ClaimPredictionChallenge,89th /stephencollins,Can you post a condensed train file?,"Kaggle, I'd like to work on the Insurance challenge. The large file size is hard for me to handle. It has been said on the forum that the characteristics are shared for each submodel. Could you wrap up the train file by submodel as described above and make that file available in the data download area?",0,None,3 ,Tue Sep 06 2011 22:12:56 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/836,/competitions/ClaimPredictionChallenge,44th /texane,pending submission,"hello, I submitted this morning, maybe 2 hours ago, and my submission is still pending. Is it possible to have any information about that? I cannot resubmit, because of the 2-submissions-per-day limitation. Thank you for cancelling it if there was any problem. Best regards, f.",0,None,5 ,Wed Sep 07 2011 08:49:13 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/837,/competitions/ClaimPredictionChallenge,89th /afaron1,Kaggle Teams,"I'm new to Kaggle, but it seems that to enter a competition you need to be part of a team, i.e. you can't submit on your own. How does this work? I couldn't see anything explained under 'How it works' - but I'm probably missing something. I'd be grateful for any guidance. Ron",1,None,17 ,Thu Sep 08 2011 20:31:00 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/839,None,None /junji14430,DaysInHospital is not the sum of LengthOfStay?,"Hi everyone, I'm a newbie here and I want to ask a quick question: does DaysInHospital equal the sum of LengthOfStay? Thanks a lot in advance. The problem is as follows: the left part of each equation is DaysInHospital_Y2 and the right part is the corresponding sum of LengthOfStay in Y2's claims. I just printed the first 100 records; however, they are somehow not equal.
[per-member printout of DaysInHospital_Y2 against summed LengthOfStay for the first 100 records, garbled in the archive; the paired values did not match]",0,None,3 ,Fri Sep 09 2011 20:36:18 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/840,/competitions/hhp,349th /chadcambell1,submission clarification,"My solution includes some additional data not included in the original training set. Do I need to submit all of this raw data as part of my final solution, for example in .csv/.sql format? Or is it sufficient to submit the scripts used to obtain the data and/or describe the data set in words? Also, my method for selecting parameters for my model uses some library functions from the Matlab optimization toolbox, and when bringing data into the Matlab workspace for model selection I use the database toolbox to pull from MySQL. The model itself, though, doesn't require any Matlab toolboxes; i.e. once the final model is selected, it can run predictions independent of Matlab toolboxes. Is this going to be OK? Or do I need to figure out a way to port the model selection code to a non-toolbox implementation? Appreciate any clarification.",0,None,3 ,Tue Sep 13 2011 22:45:47 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/845,/competitions/wikichallenge,None /sja5779,Different feature distributions problem for each year,"Is anybody seeing a different feature distribution problem for each year? One of my feature sets, which is explained below in detail, gave me errors of 0.476 for Yr1, 0.463 for Yr2, and 0.501 for Yr3 (real submission). It is quite strange to me how the results can fluctuate this much for different years. Is there anyone having the same problem? The feature set counts the frequency of each DSFC for a patient. For example, if a patient has five claims at Y1 with DSFC 1, 1, 4, 5, 12, respectively, the feature for that patient is represented as 2 0 0 1 1 0 0 0 0 0 0 1.",0,None,3 ,Thu Sep 15 2011 00:30:46 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/846,/competitions/hhp,138th /signipinnis,Thanks for lifting the boycott,"... apparently I missed the email announcing it, so I never knew what it was all about, but I for one am glad it's over.
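Regarding the DaysInHospital question above, the comparison itself is a one-liner once the claims are aggregated per member. A hypothetical R sketch on toy data; the real files, the column names, and the conversion of the coded LengthOfStay field into numeric days are all assumptions:

# toy stand-ins for the real claims and DaysInHospital tables
claims <- data.frame(MemberID = c(1, 1, 2), LengthOfStayDays = c(1, 2, 4))
dih    <- data.frame(MemberID = c(1, 2), DaysInHospital_Y2 = c(3, 0))
los <- aggregate(LengthOfStayDays ~ MemberID, data = claims, FUN = sum)
cmp <- merge(dih, los, by = "MemberID", all.x = TRUE)
cmp[which(cmp$DaysInHospital_Y2 != cmp$LengthOfStayDays), ]  # the mismatches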
This isn't the most talkative of forums, but I missed the old give-n-take anyway.",0,None,3 ,Thu Sep 15 2011 04:30:41 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/847,/competitions/hhp,864th /woshialex,How to use R to prepare data for individuals?,"Hi everybody, I used to use C++ to do everything and have just started to learn R. I find it really inconvenient to manipulate the data freely. I want to load the data, then generate an array of customers, where each customer is a list. Then I could feed each customer's data into some functions to do predictions. But how could I get an array of customers like customer[N]?",0,None,5 ,Fri Sep 16 2011 06:18:12 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/849,/competitions/dunnhumbychallenge,55th /mlearn,Direct messaging,Could a direct messaging feature be added to Kaggle (along the lines of many bulletin board systems) so we can directly contact other competition participants (i.e. privately, as opposed to via the public forums)? Or is this feature hiding away somewhere? Thanks.,0,None,3 ,Sat Sep 17 2011 16:33:42 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/850,None,None /diederikvanliere,Competition is almost over!,"Dear Contestants, The WikiChallenge is almost over; there are only 2 days left! We at the Wikimedia Foundation are very curious to see your submissions, so please submit your full solution, including source code, a description of the algorithm, and a pseudo-code version of your algorithm. We will verify and replicate the results in the coming weeks, and we aim to have a final winner announced around October 15th. We will send out a short survey as well to get to know you and the general direction that you took better; we will also ask for your feedback on how to improve this competition if we decide to run another next year. I never expected so many participants and entries, and so in that regard I already consider this competition a huge success. Now I am just hoping for really cool solutions to our data! Thank you so much for participating, Diederik van Liere & Howie Fung (Wikimedia Foundation)",0,None,4 ,Sun Sep 18 2011 11:27:01 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/851,/competitions/wikichallenge,None /timgluz,"I got submission error: ""matching destination row for.X""","What's wrong with my submission if I get the response: ""Error: Couldn't find matching destination row for '2' on sortable column 'Row_ID' (Line 2)""? My submission looks like this (no headers, first column is an integer, second one is a real number, file is built on Linux):
1,158.054680
2,13.720399
3,25.008185
...
4314863,6.388189
4314864,52.075900
4314865,0.050000",0,None,7 ,Sun Sep 18 2011 21:09:17 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/852,/competitions/ClaimPredictionChallenge,92nd /wbhumanoid,Timeline and guidelines for submitting code,"Can someone point me to information about when and if we need to submit our code, and any guidelines about doing that? My code is incredibly messy, so it's going to take me a little while to rename variables, put in comments, etc. to make it understandable for anyone else. So I want to know if I need to start now.
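In response to the R question above about getting per-customer data: split() returns a named list with one data frame per customer, and each element can then be fed to a prediction function. A minimal sketch on toy data; the real transaction columns would differ:

visits <- data.frame(customer_id = c(1, 1, 2, 2, 2),
                     spend = c(10.5, 20.0, 5.0, 7.5, 12.0))
customer <- split(visits, visits$customer_id)   # list indexed by customer id
customer[["2"]]                                 # all rows for customer 2
sapply(customer, function(d) mean(d$spend))     # a per-customer feature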
Also, do you only need to submit code if you place in the top 3 positions in the final evaluation?",0,None,1 Comment,Mon Sep 19 2011 11:57:19 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/853,/competitions/dunnhumbychallenge,17th /madavidj,When does the 2 submissions per day renew?,"As you know, you are allowed 2 submissions per 24 hours. Kaggle's reset time is every day at 7:00 pm (our time). Cheers",1,None,1 Comment,Tue Mar 29 2011 04:49:43 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/858,/competitions/stat331w11,None /muzhu1,Troll Accounts,"There are 124 students enrolled in the class, and 200 teams (less this account) and growing.",0,None,6 ,Thu Apr 07 2011 03:40:14 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/861,/competitions/stat331w11,84th /madavidj,"A little note, please read","Dear STAT331, Clearly people have become obsessed with making their way up the public leaderboard. The point of this public leaderboard was to remedy last semester's ""one-shot-only"" limitation. If you think about it, this public leaderboard is only displaying the score on 20% of the test set. If you tune your model to that set only, you are overfitting towards that set; you're missing a major concept of the class. I guarantee you that ranks/scores will shift wildly if you don't do your own cross-validations. Making fake accounts to make more predictions probably won't help them much anyway. Think of it this way: you are trying to shoot at a target. In one case, after each shot, a drunk man tells you how close you were to the target. In the other case, you go up to the target and see for yourself. Sure, it's more effort for you to go all the way to the target, but can you truly rely on the drunk man? The public leaderboard is the drunk man. You can make as many submissions as you want (create new accounts). But which of the predictions will you submit on UWACE? Can you really trust the drunk man? To improve, you need reliable feedback. I was hoping that a leaderboard would push people to search further than a simple stepwise AIC/BIC. Last semester, a BIC would put you above average. Indeed, these publicly displayed scores did push people to give a better effort, albeit not without some extra drama, as we can see. To conclude, personally, I'm not really worried about the extra submissions. I just feel bad for Kaggle having to put up with this kind of thing. And I'm sorry for the ones who had to resort to these kinds of ""solutions"". I've learned quite a few things from watching this contest unfold.
I hope you've learned a few things from the contest as well. Regards, David",1,None,11 ,Thu Apr 07 2011 07:51:12 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/862,/competitions/stat331w11,None /antgoldbloom,My simple R script,"library(""randomForest"")
setwd(""C:\\Users\\antgoldbloom\\Dropbox\\Kaggle\\Competitions\\Credit Scoring"")
# fit a random forest, dropping columns 1, 2, 7 and 12 from the predictors
training <- read.csv(""cs-training.csv"")
RF <- randomForest(training[,-c(1,2,7,12)], training$SeriousDlqin2yrs, sampsize=c(10000), do.trace=TRUE, importance=TRUE, ntree=500, keep.forest=TRUE)
# score the test set and write the predictions in submission format
test <- read.csv(""cs-test.csv"")
pred <- data.frame(predict(RF, test[,-c(1,2,7,12)]))
names(pred) <- ""SeriousDlqin2yrs""
write.csv(pred, file=""sampleEntry.csv"")",17,silver,15 ,Mon Sep 19 2011 20:54:16 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/863,/competitions/GiveMeSomeCredit,357th /domcastro,Evaluation function,"Hi, Sorry if I've missed the information, but how are the predictions evaluated please? EDIT: OK - seen AUC on leaderboard",1,bronze,20 ,Tue Sep 20 2011 00:51:26 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/865,/competitions/GiveMeSomeCredit,58th /dellzhang,Test data,"Could Wikimedia and Kaggle please release the full test data, i.e., the actual number of visits for each user, after the contest, so that we can keep working on this problem? It would enable us to perform more analysis, try more algorithms, and do more experiments. Thank you!",0,None,5 ,Wed Sep 21 2011 02:07:04 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/866,/competitions/wikichallenge,3rd /sirguessalot,NumberOfTime30-59DaysPastDueNotWorse: 96 and 98?,"I see the following distribution of values for this field (similar for the other ""PastDue"" fields):
value count
----------- -----------
0 126018
1 16033
2 4598
3 1754
4 747
5 342
98 264
6 140
7 54
8 25
9 12
96 5
10 4
12 2
11 1
13 1
Are 96 and 98 actual values or some type of special code?",0,None,13 ,Wed Sep 21 2011 04:21:05 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/867,/competitions/GiveMeSomeCredit,28th /martschink,Counting and NumberOfTimes90DaysLate,"Let's say Sally Shopmonger swings by a local outlet mall and purchases an entire new wardrobe of cocktail dresses and designer costumery for her shibaboodle puppy. She looks stunning. Her puppy is the belle of the doggie park. Sally decides to hide the evidence of her spending by throwing her credit card into the ocean. She does the same with her unopened credit card bills for months on end until she is 90 days late, the notice for which she flings into the ocean and shouts, ""you mean 90 days fashionably late!"" Another 30 days pass. And another. And another. And another. This continues for an entire year. Would the NumberOfTimes90DaysLate for Sally Shopmonger be only 1? Or would each additional 30 days after the 90-day threshold count as another time being 90 days or more late?",0,None,3 ,Wed Sep 21 2011 07:58:23 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/868,/competitions/GiveMeSomeCredit,None /zimdot,Prize fund too low?,"Not to be disrespectful, but isn't the prize fund for this competition a little low? A good algorithm could save the company in question thousands or millions of dollars. But the prize fund is only a *very* small fraction of that.
If I produced a good predictive algorithm, I'd rather license my code out to whichever companies want it, rather than receive $5000 and sign my code away for someone else to make money from.",2,bronze,27 ,Wed Sep 21 2011 15:30:50 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/870,/competitions/GiveMeSomeCredit,None /zimdot,Coding language?,"Hi, What coding languages are acceptable for this challenge? Am I only allowed to submit results produced using open-source packages, e.g. R? Or if I produced something in, say, Matlab, would this be acceptable? Thanks.",0,None,3 ,Wed Sep 21 2011 15:34:03 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/871,/competitions/dunnhumbychallenge,None /wcukierski,"Which is better, your date or spend error?","I realize this close to the end of the competition nobody wants to share details about what they are doing. But I am curious if people are willing to share which of their errors is better. I am doing better on the dates than the spends. How about you?",0,None,21 ,Wed Sep 21 2011 17:13:38 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/872,/competitions/dunnhumbychallenge,4th /sophie8984,Rules to take part in this competition,"Hi, It's the first time I'm visiting this website. Could you please tell me how many members a team must have? Is it allowed to use any statistical software? Is there somewhere an article which explains the rules and how the efficiency of the algorithm is calculated? Thank you in advance for your answers. Kind regards. Sophie",0,None,6 ,Wed Sep 21 2011 18:08:16 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/873,/competitions/GiveMeSomeCredit,None /twanvl,Errors in data,"In cs-training.csv I found some lines that are wrong. For example: 85490,0,50708,55,0,0.221757322,38000,7,0,2,0,0 Notice that RevolvingUtilizationOfUnsecuredLines = 50708. But this should be a percentage (or rather, a fraction), so it should be at most 1. I assume that the actual value is 0.50708. There are also many cases where DebtRatio > 1. In fact, this seems to correspond to rows where MonthlyIncome=NA. Perhaps these columns are swapped in that case, and it is DebtRatio that is NA. MonthlyIncome",2,bronze,5 ,Thu Sep 22 2011 18:34:24 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/874,/competitions/GiveMeSomeCredit,439th /sjgardner3,Data Cleaning Questions,"NumberOfTimes90DaysLate, NumberOfTime60-89DaysPastDueNotWorse, NumberOfTime30-59DaysPastDueNotWorse -- there are 5 rows where each of these has a value of 95, and 264 rows where each has a value of 98. For the rest of the data table, these variables have values that range from 0 to 20 or so. Are the unusually large values missing-data codes, real, or errors? Also, all of these rows have the value 0.9999999 for RevolvingUtilizationOfUnsecuredLines. RevolvingUtilizationOfUnsecuredLines has a lot of rows that seem unusual, also. This is defined in the data dictionary as ""Total balance on credit cards and personal lines of credit except real estate and no installment debt like car loans divided by the sum of credit limits"", which to me means (total non-secured debt)/(total non-secured credit limit), so it should always be between 0 and 1, but ~2.5% of the training data has values that are >1, and the maximum value is over 50000. Any information on this unusual data?
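The anomalies flagged in the two posts above are easy to tabulate. A hedged R sketch against cs-training.csv; the column names are from the competition file, and the DebtRatio/MonthlyIncome pattern is the poster's observation, not an established fact:

cs <- read.csv("cs-training.csv")
# share of utilization values above 1, which a true fraction should not exceed
mean(cs$RevolvingUtilizationOfUnsecuredLines > 1, na.rm = TRUE)
# does DebtRatio > 1 co-occur with missing MonthlyIncome?
table(DebtRatioAbove1 = cs$DebtRatio > 1,
      IncomeMissing = is.na(cs$MonthlyIncome))
# the suspicious large codes in the past-due counts
# (read.csv turns the '-' in the original header into '.')
table(cs$NumberOfTime30.59DaysPastDueNotWorse)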
The income distribution seems off. Is this in USD or some other currency? The 99.5th percentile of income is 35000. NumberOfTimes90DaysLate -- shouldn't this be perfectly predictive of the response variable SeriousDlqin2yrs, which is defined as ""Person experienced 90 days past due delinquency or worse""? It doesn't appear that it is.",9,bronze,12 ,Thu Sep 22 2011 18:35:43 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/875,/competitions/GiveMeSomeCredit,None /noch7485,Downloading the data,Internet Explorer won't let me download the data. Any ideas?,0,None,1 Comment,Thu Sep 22 2011 22:26:23 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/876,/competitions/GiveMeSomeCredit,None /dlehman,How many records are there?,"The description says that there is data on 250,000 people, but when I download it I only have 150,000 records. Did I miss something somewhere?",0,None,1 Comment,Fri Sep 23 2011 06:58:55 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/877,/competitions/GiveMeSomeCredit,823rd /kazooie,Δ1w,"Hi, I just made my first submission today, so I'm new at it. Can someone help me? What does the Δ1w column mean on the leaderboard?",0,None,1 Comment,Sat Sep 24 2011 12:19:58 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/878,/competitions/dunnhumbychallenge,227th /matthew3,Jobs / Resumes from competing in HHP/ Kaggle,"HHP, Has anyone else had success including competing in the HHP in their resume? I recently updated my resume and included it, and it seems to have improved the receptiveness of recruiters to talk to me. I was wondering if anyone else has had the same experiences. It certainly provided a good talking point about using a massive data set to draw business insights! Is there a job board that Kaggle maintains for data scientists? Anyway, I was successful in getting a job at a social network gaming company. If anyone is interested in learning more about what a data analyst does at a social games company, let me know, and I can write more about it in this forum.",0,None,3 ,Sat Sep 24 2011 20:44:22 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/879,/competitions/hhp,274th /quansun,Would someone please suggest some introduction papers for the credit scoring problem in general?,"Hi guys, it would be great if you could introduce some classic papers in this field, e.g. which algorithms do credit scoring people usually use? How do they evaluate the models? Thanks",0,None,4 ,Sun Sep 25 2011 02:41:13 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/880,/competitions/GiveMeSomeCredit,18th /argv,Welcome to the Semi-Supervised Feature Learning Competition!,"We've just kicked off this short competition around semi-supervised feature learning, and hope that it will provide an interesting task and a good range of comparisons. We're particularly hopeful that there will be a wide diversity of methods applied. Please feel free to share thoughts and ideas on this forum, and also to post questions if you need help with any of the data. Good luck, and thanks for taking part!",0,None,8 ,Sun Sep 25 2011 22:14:52 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/881,/competitions/SemiSupervisedFeatureLearning,None /kenanalytics,Data Details,Is this real data from a financial institution?
It just seems that some of the most important data elements haven't been included.,0,None,2 ,Mon Sep 26 2011 08:15:17 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/882,/competitions/GiveMeSomeCredit,None /larryholder,Opportunity to submit paper to ICDM conference,"Wikipedia Data Mining Challenge Participants: Thank you for your participation in the Wikipedia Data Mining Challenge. We would like to invite you to submit a paper describing your solution to the IEEE International Conference on Data Mining. Selected papers will be included in the conference proceedings. Instructions for submitting your paper can be found on the ICDM Contest website at [Link]:http://www.eecs.wsu.edu/%7Eholder/icdm2011contest/. The deadline for paper submission is October 3, 2011. Sincerely, Contest Co-Chairs",1,bronze,1 Comment,Tue Sep 27 2011 01:07:30 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/883,/competitions/wikichallenge,None /pwfrey42,Question for Sponsor,"Is there some reason why the outcome for this competition is a simple binary variable? Would it not make more commercial sense to have a numerical outcome, such as the dollars lost when an account goes bad? An even more relevant outcome would be the profitability of each account, with numerical values that are both positive and negative. The use of a 1, 0 outcome seems to be a holdover from the old days when linear prediction models had a problem with outcomes in which the distribution of the variable was not Gaussian.",0,None,6 ,Tue Sep 27 2011 16:44:36 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/884,/competitions/GiveMeSomeCredit,None /vishal17,Book/Website recommendation,"Fact: I will not win this competition. Goal: I love data and my current job involves analyzing it, but nothing close to this. I would like to learn data mining/prediction & move towards it eventually. I have already made a start - went back to brush up on my statistics, have an 'R Book' - getting there slowly, and have pretty much read all the posts here. But is there anything else that you would recommend in terms of books/wikis/websites/classes which would help?",0,None,2 ,Tue Sep 27 2011 17:35:33 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/885,/competitions/hhp,250th /dirknbr,AUC 0.98,"Assuming that AUC will be similar between the leaderboard data and 'offline' data, how much better is the winner supposed to get, given we have 0.98 as the sample entry on the leaderboard? Dirk",0,None,3 ,Tue Sep 27 2011 18:19:40 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/886,/competitions/SemiSupervisedFeatureLearning,None /dirknbr,Effort,"Sorry to be critical, but this task requires quite a lot of effort. You need to manipulate the massive data file, find the best reduction, execute Perl, then train an SVM using another language you might never have used, and then document everything, in less than a month.
It would be really nice if the whole process could be run in one system.",0,None,5 ,Tue Sep 27 2011 18:54:18 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/887,/competitions/SemiSupervisedFeatureLearning,None /chrisraimondi,Thanks for changing the link color!,"It was irking me when it was black, but I didn't want to complain - as I am sure you had much more pressing matters - just wanted to let you know it was noticed and appreciated! I just noticed, so I don't know how long it has been - it may have been a while...",0,None,3 ,Tue Sep 27 2011 20:06:23 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/888,None,None /del=92525096498f3bbd,Possible Algorithms,"Could a possible algorithm be Principal Component Analysis (PCA)? Though that is unsupervised, and it probably doesn't work too well to reduce from 1,000,000-plus features to fewer than 100 while storing the same information...",0,None,6 ,Tue Sep 27 2011 20:53:03 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/889,/competitions/SemiSupervisedFeatureLearning,None /robrenaud,How does this differ from semi supervised learning?,"Let's say I am a jerk who just wants to win the contest, but has no interest specifically in treating this problem as a feature learning problem. What prevents me from learning the best classifier I can and just sending 1 bit worth of features (my classification)?",0,None,7 ,Wed Sep 28 2011 02:46:44 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/890,/competitions/SemiSupervisedFeatureLearning,None /austro,About acknowledgement of contestants,"I find the competition interesting and I was considering taking part in it. However, I would like to have clarification regarding this paragraph: ""Contestants will be acknowledged by name in this paper for noteworthy performance, including results that do especially well or which are especially interesting."" Would the contestants, having to develop and test an approach for this problem and write it up for inclusion in the paper, be included as co-authors, or just mentioned in the acknowledgements section? Under the second option, I would understand this as making sense if the approach used was not a sophisticated one (a standard method without any innovative ideas); otherwise, it would seem to make more sense for the contestants to publish a paper on their own.",0,None,4 ,Wed Sep 28 2011 04:14:02 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/891,/competitions/SemiSupervisedFeatureLearning,None /mattfrancis,Predicting the probability... of what?,"I'm confused by the lack of information for this challenge. It's clear we need to predict the probability of something occurring, based on the supplied information, but what exactly are we predicting? The probability of a serious delinquency in the next year? The probability of bankruptcy in the next 10 years? The list could go on. Or am I missing the point, and only the relative ranking of probabilities, which represents some un-normalised propensity to not pay debts, matters? Generally speaking, the lack of information and the nature of the dataset seem to indicate the sponsor is assuming general knowledge of how credit scoring is performed.
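On the PCA suggestion above: PCA itself is unsupervised, and a tiny R sketch of projecting data onto its first components with prcomp() is below. At the contest's scale (a million-dimensional sparse input) you would need a sparse or randomized variant instead; this only illustrates the idea on dense toy data.

set.seed(7)
X <- matrix(rnorm(100 * 20), 100, 20)
p <- prcomp(X, center = TRUE, scale. = TRUE)
Z <- p$x[, 1:5]   # 100 rows reduced to 5 principal-component features
dim(Z)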
That is fine, but I would have thought they'd be interested in getting some experts from other areas to see if they can take a fresh look at this type of problem, instead of people who deal with this every day and are more likely to fall back on existing solutions instead of pushing the boundaries.",0,None,14 ,Wed Sep 28 2011 06:20:31 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/892,/competitions/GiveMeSomeCredit,294th /del=92525096498f3bbd,Stanford Lectures on UnSupervised Feature Learning and Deep Learning,For those of you who are as confused as me about what this topic is about... here's a lecture series from Stanford which might be useful: [Link]:http://openclassroom.stanford.edu/MainFolder/CoursePage.php?course=ufldl,0,None,5 ,Wed Sep 28 2011 07:21:16 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/893,/competitions/SemiSupervisedFeatureLearning,None /tomseward,Reason for limiting only 2 daily submissions?,Is there a reason for only allowing 2 submissions a day? Is this only for this competition or all Kaggle competitions?,0,None,6 ,Wed Sep 28 2011 16:15:48 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/894,/competitions/GiveMeSomeCredit,367th /dirknbr,Can't open tmp.out: No such file or directory at ./parseTestResults.pl line 3.,I get this error running the runLeaderboardEval script: Can't open tmp.out: No such file or directory at ./parseTestResults.pl line 3.,0,None,1 Comment,Wed Sep 28 2011 16:34:09 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/895,/competitions/SemiSupervisedFeatureLearning,None /samuevan,result of the prediction,"I have some doubts about the file sampleEntry: what are we supposed to predict? Just the answer yes/no, or the probability of a client paying the debts? If we look at the training file, we can see the class attribute SeriousDlqin2yrs with just 0's and 1's, so why does the sampleEntry file have those numbers in the second column?",0,None,3 ,Wed Sep 28 2011 17:43:53 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/896,/competitions/GiveMeSomeCredit,None /byang1,Is this one too easy ?,"So the benchmark score is .980, and someone has already got to .993 on the leaderboard. I'm thinking any serious effort will get you above .995 (or even .999), and any difference after that won't be statistically significant any more. Anyway, what's the benchmark algorithm?",0,None,2 ,Thu Sep 29 2011 00:14:41 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/897,/competitions/SemiSupervisedFeatureLearning,None /argv,open source code that supports svm-light format data,"For those who are interested, there's a great list of open-source projects that support svm-light format data, available here: http://mloss.org/software/dataformat/svmlight/",0,None,1 Comment,Thu Sep 29 2011 02:30:53 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/898,/competitions/SemiSupervisedFeatureLearning,None /beingzy,import .svm.dat data into R,"Hello everyone, is anyone willing to give instructions on importing the train or test dataset (.svm.dat) into R? I am a newbie with the SVM data format, though I have worked in R for years. I will appreciate any kind of help here.
Thanks in advance.",0,None,5 ,Thu Sep 29 2011 05:17:54 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/899,/competitions/SemiSupervisedFeatureLearning,None /pprett,Dataset creation,"Can you describe how the train, test, and unlabeled datasets have been created? In particular, can we expect that the class distribution is similar in all three sets? Have the documents been sampled independently? Thanks, Peter",0,None,2 ,Thu Sep 29 2011 16:19:45 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/900,/competitions/SemiSupervisedFeatureLearning,9th /kkevin,Submission (when does the clock reset),"Late in joining the game, but with the clock ticking and not many more opportunities to submit, I just wanted to clarify what the submission limit is: I think it's two submissions per day, but when does the day begin and the submission counter reset?",0,None,2 ,Thu Sep 29 2011 22:15:20 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/901,/competitions/dunnhumbychallenge,5th /argv,"Benchmark minibatch k-means, step by step","I've just posted another benchmark, using the minibatch k-means from sofia-ml. The cluster centers were learned on the combination of the unlabeled dataset and the training data set. This benchmark used larger minibatch sizes and more iterations than the ""example entry benchmark"", and took about 25 minutes to train the cluster centers on a normal laptop rather than about 60 seconds. Since there had been some questions previously, I thought it might be helpful to give the full set of commands used to produce this benchmark:
# Step 1. Combine the unlabeled data set with the training data.
cat ../competition_data/unlabeled_data.svmlight.dat ../competition_data/public_train_data.svmlight.dat > concatenated_data.dat
# Step 2. Learn the cluster centers.
$HOME/sofia-ml/sofia-kmeans \
  --k 100 \
  --opt_type mini_batch_kmeans \
  --dimensionality 1000001 \
  --training_file concatenated_data.dat \
  --model_out full_kmeans_model.txt \
  --iterations 10000 \
  --mini_batch_size 1000 \
  --objective_after_init \
  --objective_after_training
# Step 3. Apply the learned cluster centers to the training data.
$HOME/sofia-ml/sofia-kmeans \
  --model_in full_kmeans_model.txt \
  --test_file public_train_data.svmlight.dat \
  --objective_on_test \
  --cluster_mapping_out full_kmeans.train.dat \
  --cluster_mapping_type rbf_kernel \
  --cluster_mapping_param 0.01
# Step 4. Apply the learned cluster centers to the test data.
$HOME/sofia-ml/sofia-kmeans \
  --model_in full_kmeans_model.txt \
  --test_file public_test_data.svmlight.dat \
  --objective_on_test \
  --cluster_mapping_out full_kmeans.test.dat \
  --cluster_mapping_type rbf_kernel \
  --cluster_mapping_param 0.01
# Step 5. Create dense CSV format versions of the new data sets.
./svmlightToDenseFormat.pl full_kmeans.train.dat > full_kmeans.train.dense.dat
./svmlightToDenseFormat.pl full_kmeans.test.dat > full_kmeans.test.dense.dat
# Step 6. Execute the ./runLeaderboardEval.pl script.
./runLeaderboardEval.pl full_kmeans.train.dense.dat ../competition_data/public_train.labels.dat full_kmeans.test.dense.dat /Users/dsculley/libsvm-3.1/ test.full_kmeans.out",0,None,2 ,Fri Sep 30 2011 04:18:53 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/902,/competitions/SemiSupervisedFeatureLearning,None /stillsut,Related News on Healthcare ,"I'm not too familiar with the practices in healthcare, but a couple of articles have caught my eye now that I've started this competition: [Link]:http://www.businessweek.com/ap/financialnews/D9NAR60G1.htm: California has successfully sued Quest Diagnostics for $200M+ over a scheme where doctors received kickbacks for recommending patients to Quest clinics. Especially important seemed to be the defrauding of patients using Medi-Cal. [Link]:http://www.boston.com/Boston/whitecoatnotes/2011/09/state-penalize-hospitals-that-readmit-too-many-patients/orkMrIXGmideu0PkL3CgHL/index.html Massachusetts Medicare plans to dock ""the pay of hospitals that readmit high numbers of patients within 30 days of discharge"". A follow-up letter to the editor argued this will encourage hospitals to keep patients longer the first time around. From my very limited knowledge, though, it seems these are the types of issues that HPN was built to avoid, and also the types of issues we're being offered a reward to try to solve. I'm curious if anyone understands: -> Does being a member of HPN mean you are usually referred to an in-network provider of, say, lab testing (unless obviously it is some specialty that is unavailable)? -> Can you be a member of HPN and have government-sponsored insurance, e.g. Medicare or Medi-Cal? Here's the data mining part: if so, can we somehow identify this trait (I'm thinking some standard PayDelay from such a giant admin system), and is this trait predictive of DIH, LOS, etc.?",0,None,4 ,Fri Sep 30 2011 04:52:51 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/903,/competitions/hhp,None /jhoward,Milestone winners' papers available,"The papers written by the milestone winners are now available [Link]:http://www.heritagehealthprize.com/c/hhp/Leaderboard/milestone1. As described in section 13 of the [Link]:http://www.heritagehealthprize.com/c/hhp/Details/Rules, if you have any concerns about these papers, you have 30 days from their posting to provide your feedback.",3,bronze,51 ,Fri Sep 30 2011 11:12:07 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/904,/competitions/hhp,None /domcastro,Thanks,I really enjoyed this competition even though I wasn't very successful. Just saying thanks,0,None,15 ,Fri Sep 30 2011 12:57:47 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/905,/competitions/dunnhumbychallenge,58th /pwfrey42,Prize Fund,"I was amused to note that a forum submission by Zimdot in the Give Me Some Credit competition has elicited 1381 views and 20 replies. He (she) asks ""Is the prize fund too low?"" His (her) reasoning is that the value of the improved predictions for the sponsor exceeds the prize amount by several orders of magnitude. It would seem that the same considerations apply to the Claim Prediction Challenge.
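On the earlier question about importing svmlight-format .dat files into R: one possibility is read.matrix.csr() from the e1071 package (a libsvm companion; it needs the SparseM package installed). A hedged sketch using the competition's file name; the exact shape of the returned object depends on whether the file carries labels:

library(e1071)   # read.matrix.csr() lives here; requires SparseM
dat <- read.matrix.csr("public_train_data.svmlight.dat")
dim(dat$x)       # sparse design matrix in matrix.csr form
table(dat$y)     # labels, if the file includes them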
Is this reasoning valid, or am I missing something?",0,None,6 ,Fri Sep 30 2011 17:49:31 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/906,/competitions/ClaimPredictionChallenge,4th /wcukierski,Leaderboard Saturation,"Is the leaderboard already starting to saturate? The first 20 places are separated by an AUC difference of 0.00166! The first 100 are separated by 0.01458. I haven't looked at the data. Is nothing working for you guys/gals? Seems a little early in the contest for everyone to be so bunched up.",0,None,14 ,Fri Sep 30 2011 21:36:11 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/907,/competitions/GiveMeSomeCredit,None /sirguessalot,Data Dictionary?,The file Carvana_Data_Dictionary.txt seems to be missing in the Data downloads.,1,bronze,9 ,Fri Sep 30 2011 22:34:37 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/908,/competitions/DontGetKicked,4th /junji14430,"For the claims data, are the claims ordered by time? Do we know which is the first, which is the second, and so on?","Hello everyone, I'm a newbie here and I have two questions, as follows: For the claims data, are the claims ordered by time? Do we know which is the first, which is the second, and so on? Thanks a lot in advance.",0,None,2 ,Fri Sep 30 2011 23:33:17 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/909,/competitions/hhp,349th /jeffmoser,Leaderboards for visit_spend and visit_date ,"This competition was especially interesting because you had to predict two types of variables (visit_spend and visit_date). However, as was asked in [Link]:https://www.kaggle.com/c/dunnhumbychallenge/forums/t/872/which-is-better-your-date-or-spend-error, it's interesting to know how well you did on predicting each individual variable. When you submitted your results to Kaggle, we also calculated your percent correct for each variable. I've now gone ahead and made this information public in 4 additional leaderboards: [Link]:https://www.kaggle.com/c/dunnhumbychallenge/Leaderboard/PublicVisitSpend [Link]:https://www.kaggle.com/c/dunnhumbychallenge/Leaderboard/PublicVisitDate [Link]:https://www.kaggle.com/c/dunnhumbychallenge/Leaderboard/PrivateVisitSpend [Link]:https://www.kaggle.com/c/dunnhumbychallenge/Leaderboard/PrivateVisitDate I hope these additional leaderboards provide an interesting additional dimension to what happened in this fun competition.",5,bronze,11 ,Sat Oct 01 2011 03:14:30 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/910,/competitions/dunnhumbychallenge,None /teamsmrt,Company info?,"A Google search for ""Carvana"", ""Carvana autos"", ""Carvana auctions"", and ""Carvana sell no evil"" returns no useful results. I think it's kind of odd that a company willing to put up $10k for a competition doesn't have a website that comes up in a Google search. It is, however, the name of a Brazilian actor and an Iranian steel company. Is ""Carvana"" a made-up company name to keep us from knowing the real company behind this competition? I don't mind if it is a cover name; I just want to know the company is reputable (i.e. the prize money is real) before I put any effort into the competition. All I can infer from the data is that they are located in the U.S. and they have bought cars in most of the 50 states but not all of them (Florida and Texas seem to be very popular).
Can anyone offer some insight/info about this company? I don't want to sound like a conspiracy nut, but I am really curious.",0,None,3 ,Sat Oct 01 2011 06:56:50 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/911,/competitions/DontGetKicked,153rd /leiflauridsen,"Great competition, but..","I agree that this was a very enjoyable competition. Thank you. But I am afraid it did not necessarily produce the best forecasting models for the shopper problem. The problem as I see it is that the 30% test sample our submissions were evaluated against was apparently not very representative of the full dataset. In my case I effectively dropped out of the competition in the beginning of September when my submissions seemed to become significantly worse. I made a submission on August 23 which produced an accuracy of 16.57% on the test sample (which as it turns out was 17.44% on the full set) while I pretty much gave up after my improved methods submitted on September 5 only produced an accuracy of 16.13% on the test sample. In fact I now realise that the accuracy on the full dataset was much better, namely 17.89%, which at the time must have been one of the best entries. Had I known this I would have doubled my efforts on that approach instead of dedicating my spare time to my family. So my family was happy, but the bottom line is you may well have missed out on even better models because of the test sample not being representative of the full data set. I am not sure if the issue could have been avoided but I thought I should point it out.",1,None,8 ,Sat Oct 01 2011 10:24:41 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/912,/competitions/dunnhumbychallenge,15th /benhamner,Who is behind this competition?,"I'm very interested in unsupervised and semi-supervised learning, and could be interested in this competition. However, you are asking a lot for the chance to be included in the acknowledgements section of a workshop paper, which is virtually meaningless. Will you please clarify who is running this competition along with your affiliation and your connection (if any) to the organizers of the NIPS workshop?",0,None,3 ,Sat Oct 01 2011 16:41:21 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/913,/competitions/SemiSupervisedFeatureLearning,1st /wcukierski,Standardizing the SVM,"Will the final judgement on this competition be made by using the features rather than our own SVM labels? Doesn't this mean that the leaderboard is rather useless? Couldn't somebody use a fancier kernel or other method to game the leaderboard? Also, due to minor implementation/language differences, how are we to know that all linear C=1 SVMs will give the same labels? Perhaps the organizers could release a ""sanity check"" data set to validate that our own SVM classifier is in line with the one that will be used for the final judgement?",1,None,2 ,Sat Oct 01 2011 21:50:11 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/914,/competitions/SemiSupervisedFeatureLearning,1st /ccccat,Prize Fund is Too Low? Pt2 (or 3),"This is the continuation of the discussion started at: [Link]:http://www.kaggle.com/c/GiveMeSomeCredit/forums/t/870/prize-fund-too-low and [Link]:http://www.kaggle.com/c/ClaimPredictionChallenge/forums/t/906/prize-fund Initially I was skeptical about discussions of prize amounts.
Probably because I was participating in competitions mainly for fun, I did not see a worrisome trend forming. Unfortunately it is now becoming clear - businesses are trying to use Kaggle to solve problems with potential multimillion returns while paying peanuts for it. Let's consider the Don't Get Kicked! competition. Here are quotations: ""Kick cars can be very costly to dealers after transportation cost, throw-away repair work, and market losses in reselling the vehicle."" ""Carvana is a start-up business that is being launched by a well-established American company. Its goal is to completely change the way people buy, finance, and trade their used vehicles by replacing physical infrastructure with technology and top of the line scientific models."" If we assume that there are approximately 1M used vehicles sold in one year in the USA and 10% of them are ""kicks"", then reducing ""kicks"" to just 9% will generate a profit of at least $50M per year. They are admitting that scientific models are the foundation of their business model, and at the same time they want to pay only $10K for the 4 best models. The question is: do we want to do it? Can you imagine selling your algorithm for $5K and in several years watching on TV the ""largest IPO of the century of ""the next Google"""" based on that algorithm? I do not have any problems with spending my time for the common good (like a scientific problem). However, if somebody wants to make money using my solutions, then they should pay handsomely. I will continue to participate in competitions because I like it. But I will not claim the prize and will not release the algorithm if I think that we are being taken advantage of.",2,bronze,27 ,Sat Oct 01 2011 23:33:24 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/915,/competitions/DontGetKicked,None /mlearn,Image attachments,"How should we be attaching images to forum posts? In this [Link]:http://www.kaggle.com/c/dunnhumbychallenge/forums/t/912/great-competition-but I attached three png files. When I click on them I get told they're binary files from a ""windows.net"" domain rather than getting the image displayed directly in my web browser (Firefox). Should I have been doing something different? Ideally I'd like the picture to appear inline in the post, but the only way of doing that seemed to be if I had somewhere else to host the image first. Thanks.",0,None,6 ,Sun Oct 02 2011 20:07:49 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/916,None,None /chenguang0,SVM training time,"Hi! I wonder what the typical SVM training time is on your side for the 50000 samples? I ran on a very good machine. 20 minutes passed and it seems it will still go on forever... (Should I add the ""-h 0"" suggested by libsvm to speed up?) Thanks!",0,None,5 ,Tue Oct 04 2011 05:25:18 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/917,/competitions/SemiSupervisedFeatureLearning,6th /xiaonanji,Why RevolvingUtilizationOfUnsecuredLines is much greater than 1?,Is there something that I missed? I would assume this variable is between 0 and 1 but apparently there are many values that are greater than 1. Any suggestion? Thanks.,1,None,6 ,Tue Oct 04 2011 08:41:33 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/918,/competitions/GiveMeSomeCredit,534th /pprett,AUC implementation,"Hi! What kind of tool is used to compute the leaderboard scores?
Is it `perf` [1], libsvmtools [2], or some custom code? I wonder how score ties are handled... [1] [Link]:http://osmot.cs.cornell.edu/kddcup/software.html [2] [Link]:http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/ thanks!",0,None,12 ,Tue Oct 04 2011 10:07:43 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/919,/competitions/SemiSupervisedFeatureLearning,9th /venki16197,Request for more info on fields,"How is the value for the Acquisition price fields derived? What is the difference between Acquisition price and Acquisition price in the retail market? AUCGUART - how does this field map to the average condition of the car? WarrantyCost - what does this field mean; is it an additional expense on top of the Acquisition cost? KickDate - is this the date the vehicle is put into auction again after the auto dealer has purchased it, or the date on which the vehicle enters the auction before the auto dealer has purchased it? BYRNO - is the auto dealer the buyer here?",2,None,10 ,Wed Oct 05 2011 07:26:39 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/921,/competitions/DontGetKicked,88th /nschneider,2nd Place Methodology,"I am gathering code and writing up my process for Kaggle. As I am doing that I thought I would share findings and methodology as I go along. I handled my data manipulations and spend predictions in SAS. I used R to run Generalized Boosted Regression Modeling (GBM package) to predict the visit date. I also used JMP to visualize the data along the way. First, I focused on predicting spend amounts. Testing was done on the actual next spend amounts in the training data regardless of the next visit date. I tried a suite of median statistics: entire year, most recent 3 months, and recent 17 spends (based on roobs forum discussion). Then I tried some time series projections. I used Croston's exponential smoothing for sparse data to develop projections. This is really projecting usage of groceries and produced strange results due to small purchases after a long time period and large purchases after a short time period. I modified my formulas to predict needed inventory levels, i.e. how much does a customer need to refill their pantry. None of these time series methods outperformed the median estimates, so I abandoned this line of reasoning. Finally, after looking at the distribution of claims, I realized that the range covered by the median did not cover as much as other $20 ranges could. The final methodology used in the spend prediction is written below. This is from the documentation I am preparing for Kaggle; I will discuss the date methodology in a later post. Visit_Spend Methodology All presented methods use the same spend amounts. The amounts will differ based on the projected day of the week for the shopper's return, but the methodology is the same. A member's next spend amount was developed on historical data only. There was no training a model on data past March 31, 2011. Training data are used later to optimize method selection. The chosen method optimizes the results based on the testing statistic for this competition. The metric for determining if the projected visit spend amount is correct was being within $10 of the actual spend amount. Maximizing the number of spends within the $20 window was accomplished by empirically calculating the $20 range in which a customer most often spends. I termed this window the Modal Range. Typically, it is less than both the mean and the median of a customer's spending habits. Predictions were further enhanced by determining a modal range for each day of the week. In the final submissions, these values were also front weighted by triangle weighting the dates from April 1, 2010. (A spend on April 1, 2010 has a weight of one and a spend on March 31, 2011 has a weight of 365.) The projected visit spend was based off the day of the week of the projected visit date. In cases where the customer does not have enough experience on the return day of the week, their overall modal range is assumed. The training data were used to develop credibility thresholds for selecting a customer's daily modal range versus their overall modal range. The thresholds were hard cutoffs. If the customer did not have enough experience on a particular day of the week, the overall modal range was projected. The overall modal range was not front weighted like the daily ranges. Future considerations would have included replacing the threshold cutoffs with a blending of the daily modal range and the overall modal range based on experience. EDIT: Added language that the fallback overall modal range was not weighted.",2,None,4 ,Wed Oct 05 2011 20:44:12 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/922,/competitions/dunnhumbychallenge,2nd
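The Modal Range selection in the 2nd-place post above is compact enough to sketch. A minimal illustration in R, as a reconstruction rather than the author's SAS code; anchoring candidate $20 windows at observed spend values is an assumption of the sketch, and the triangle weights follow the post's description.

# Find the $20 window that captures the most (weighted) visits for one customer.
modal_range <- function(spends, width = 20, w = rep(1, length(spends))) {
  lows <- sort(unique(spends))                # candidate window anchors (assumed)
  coverage <- sapply(lows, function(lo) sum(w[spends >= lo & spends <= lo + width]))
  lo <- lows[which.max(coverage)]
  c(lower = lo, upper = lo + width, midpoint = lo + width / 2)
}

# Toy example with the triangle weighting: day 1 of the history weighs 1,
# the final day weighs 365.
spends <- c(12, 18, 22, 25, 27, 31, 58, 90)
dates  <- as.Date("2010-04-01") + c(0, 30, 60, 120, 180, 240, 300, 360)
w      <- as.numeric(dates - as.Date("2010-03-31"))
modal_range(spends, w = w)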
/pavelsergeev,What problem are we solving?,"Last year it was known what the classification task actually represented. For example, ""identifying the iris species"" or ""detecting a predisposition to liver disease"". This time we are given just ""bare"" numbers, without any information. That makes the solving process less interesting and engaging =)",0,None,1 Comment,Thu Oct 06 2011 03:04:45 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/923,/competitions/AlgorithmicCompositions,3rd /larryc0,Evaluation Criterion,"I thought I knew what I was predicting, but now I am not so sure. The leaderboard lists a gini for each entry. I am familiar with that in looking at income distributions, but the exact implementation here is not clear to me. I had assumed that I was going to supply a zero-one variable as my prediction, but as I was checking here on how the gini has been used in the past, I found that in at least one case the sort order was important. That would imply that I am predicting an ordering of the probabilities. The example entry does not give any clue, as all the predictions have a value of zero. There is no tab on this challenge indicating the precise evaluation criterion. Google searches give results both for zero-one classifications and for continuous classifications. So I am a bit confused about what the actual evaluation function is for this challenge. I would suggest that an evaluation button be added with the details spelled out to eliminate any confusion. Those who have been around this site for a while are likely making assumptions based on past challenges, but I would think that it is best to provide the specifics with each challenge.",4,bronze,31 ,Thu Oct 06 2011 23:11:43 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/925,/competitions/DontGetKicked,None /salimali,Another Interesting Observation? ,"If you look at debtratio > 1, you will see that if you have an integer value then you are a lot less likely to be a baddy than if you have a non-integer.
If you track these integer people down, you will see they have missing incomes. So the conclusion from this is that income is estimated for these people and rounded. Either they are a completely different cohort with a reason for not having a personal income (businesses?) or the income estimation used in calculating debt ratio is very wrong! Any other hypotheses?",1,None,4 ,Fri Oct 07 2011 12:11:42 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/926,/competitions/GiveMeSomeCredit,89th /matthewroos,Claim_Amount range,"Having read in the data, I'm getting a total number of rows of 13184291. Of these, 95605 have a Claim_Amount > 0. The range of non-zero Claim_Amount is 0.0001531019 to 11440.75. This range seems quite strange if it is in units of dollars or another major currency. Are other people getting similar numbers?",0,None,1 Comment,Fri Oct 07 2011 15:58:46 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/928,/competitions/ClaimPredictionChallenge,30th /glebkolobaev,The actual end date.,"The ShAD wiki says the second homework is due October 11 at 9:00. Here the competition end date is shown as October 12, 11:59. When is the real deadline?",0,None,1 Comment,Sat Oct 08 2011 00:55:37 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/929,/competitions/AlgorithmicCompositions,75th /dpantele,Number of classes,"Is it true that there are only 6 classes, from 1 to 7 with no 6? Is class 6 soil also absent from the test set?",0,None,1 Comment,Sun Oct 09 2011 16:59:15 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/932,/competitions/AlgorithmicCompositions,33rd /mattfrancis,Theory behind AUC?,"I realise there are a number of threads around about AUC, but I'd like to ask a general question about the motivation for using AUC, and where it is derived from. When a series of binary events is being predicted probabilistically, it is easy to show from probability theory (i.e. pretty much straight from Bayes' theorem) that the optimal solution is given by maximising the following fitness function (in pseudo code): sum over i from 1 to num_events: if result_i == 1 then sum += 1 + ln(P_i), else sum += 1 + ln(1 - P_i), where P_i is the predicted probability (between 0 and 1) of each event occurring. Now, this gives a different result to AUC, and hence as far as I can see AUC does not (necessarily) reward the model that most accurately reproduces the 'real' probability of events occurring. The downside is that the predictions must be probabilities, rather than arbitrary real numbers as in the case of AUC, but that's just a question of model construction. Is there a sound theoretical basis to AUC that makes it preferable to that suggested by probability theory? It seems a little ad hoc to me, although I am happy to be corrected if someone can give me some more details. One thing that bugs me is that if two models give the same ordered ranking then they are identical in AUC, but one might reproduce the probabilities better than the other. AUC seems inherently limited in this respect?",0,None,2 ,Mon Oct 10 2011 14:07:57 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/933,None,None
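The fitness function in the AUC post above is easy to play with empirically. A minimal R sketch, assuming 0/1 outcomes and probabilities strictly inside (0, 1); the monotone transform at the end illustrates the post's closing point, since it changes this likelihood score while leaving any ranking-based measure such as AUC untouched.

# Likelihood-based fitness from the post: higher is better.
loglik_fitness <- function(p, y) {
  sum(ifelse(y == 1, 1 + log(p), 1 + log(1 - p)))
}

set.seed(1)
y  <- rbinom(200, 1, 0.3)      # simulated binary outcomes
p1 <- runif(200, 0.05, 0.95)   # some probabilistic predictions
p2 <- p1^3                     # same ordering, so identical AUC...
loglik_fitness(p1, y)          # ...but the likelihood score differs
loglik_fitness(p2, y)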
/daniilkononenko,final testing,During the competition I made several submissions. Which of my answer vectors will be used for the final testing on the full test set?,0,None,3 ,Mon Oct 10 2011 18:55:38 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/934,/competitions/AlgorithmicCompositions,22nd /wilsta,Model Validity,"Are there any criteria around model validity? I.e. does the model need to make intuitive sense? I'm guessing that if it doesn't make intuitive sense (perhaps through overfitting), then it's economically meaningless. My question is, does that matter for this competition? We all have a very similar AUC; however, I'm certain we don't all have an economically meaningful model (i.e. a model that can be approved by a bank's internal credit committee or external regulator, for instance).",0,None,7 ,Tue Oct 11 2011 02:06:31 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/936,/competitions/GiveMeSomeCredit,329th /wilsta,What gives?!,"I get a strong gini on my development set, then a rubbish gini on the holdout set after submission. There must be differences in the distributions or something. Haven't looked at it hard enough yet ... but are you guys getting the same?",0,None,13 ,Tue Oct 11 2011 09:48:19 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/937,/competitions/DontGetKicked,202nd /lapakshi,Submission...,"Hi folks, just wondering about the submission file format. The example_entry.csv has two vars - RefID, IsBadBuy. My question is, does the second column have to be in 1/0 format or a measure of predicted probability - a real number in [0,1]? Thanks!",0,None,4 ,Tue Oct 11 2011 22:39:41 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/938,/competitions/DontGetKicked,374th /argv,deep learning methods tried yet?,We're at around the halfway point of the competition; thanks to everyone who has submitted results so far! I wanted to check in and see if anyone has applied deep learning methods or deep auto-encoders so far. Have they helped? What other methods have folks tried that have worked well -- or not as well as expected?,0,None,3 ,Fri Oct 07 2011 14:53:29 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/927,/competitions/SemiSupervisedFeatureLearning,None /egorfilonov,grade for the competition,What is the correlation with place/result? If everyone scores 100% does everyone get 20? And if nothing works out at all - 0?,0,None,1 Comment,Sat Oct 08 2011 11:51:11 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/931,/competitions/AlgorithmicCompositions,25th /tuzes17475,submission limit,"Why is it that I can only make 2 attempts per day? I would like to submit as many as 10 - is that not possible? For example, I would like to find out how small changes to my code help get closer to the solution.",0,None,3 ,Mon Oct 10 2011 22:25:45 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/935,/competitions/PhotometricRedshiftEstimation,3rd /statgeek,Data Quality Issues,"In case anyone hasn't noticed, under transmission there are 3 entries: 'AUTO', 'MANUAL' and 'Manual'.
Depending on what you're doing you might miss the case change.",2,bronze,1 Comment,Tue Oct 11 2011 23:04:03 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/939,/competitions/DontGetKicked,358th /del=510c80756bf93118,test labels,"Dear Organisers, could you please publish the test labels for both dates and spends, so that we shall be able to use them in our educational and research programs here at the Vyatka State University. Many thanks,",0,None,3 ,Wed Oct 12 2011 06:20:08 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/940,/competitions/dunnhumbychallenge,None /batman3,Batman is not happy...,Robin: I'm coming after you. Give me 24 hours buddy.,0,None,2 ,Wed Oct 12 2011 06:34:53 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/941,/competitions/BeerSalesPrediction,13th /barrenwuffet,R script for AUC calculation,"Out of curiosity, does anyone have the calculation for AUC written in R? If you'd post it, it would be great; otherwise I know what I'll be doing this weekend. Thanks.",0,None,3 ,Wed Oct 12 2011 07:38:22 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/942,/competitions/GiveMeSomeCredit,232nd /yaaang,Open-sourcing the winning entries,"In [Link]:http://www.kaggle.com/c/wikichallenge/forums/t/851/competition-is-almost-over you mention that you're collecting the entry sources. Given the background of this competition and the open nature of the Wikipedia foundation, would you consider releasing the sources you receive (or at least those of the winning entries)? It seems at the same time both appropriate and an opportunity not common to all Kaggle competitions. We all stand to learn a great deal from this work. The blog posts are important, but being able to study the sources would be a game-changer. Thanks!",0,None,8 ,Thu Oct 13 2011 10:10:03 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/943,/competitions/wikichallenge,8th /subratac,Refid missing in the dataset,"I noticed that some of the RefIds are missing in the training dataset. For example, there is no RefId 797 in the training dataset. There are many more RefIds that are missing ... this is just one example. Is this true or am I reading the dataset incorrectly? Thanks!",0,None,7 ,Fri Oct 14 2011 04:59:17 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/944,/competitions/DontGetKicked,112th /madhavkumar2005,Clarification regarding the submission.,"This might sound a little stupid, but we have to predict the probability that a customer will experience financial ""distress"", right? The reason why I am asking is that on the submission page, when I try to make a new submission, it says on the right side in the blue box: Each predicted value must be: A real number. That is, a real-valued number in the interval (-∞, ∞). Is that a typo or am I missing out on something?",0,None,2 ,Sat Oct 15 2011 15:16:02 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/946,/competitions/GiveMeSomeCredit,349th
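On the R request a few posts up: a compact way to compute AUC in base R is the rank (Mann-Whitney) form below. A sketch, assuming 0/1 actuals; whether its average-rank tie handling matches the leaderboard's implementation exactly is an open question in this thread.

# AUC via the Mann-Whitney U statistic; rank() averages tied predictions.
auc <- function(pred, actual) {
  r  <- rank(pred)
  n1 <- sum(actual == 1)
  n0 <- sum(actual == 0)
  (sum(r[actual == 1]) - n1 * (n1 + 1) / 2) / (n1 * n0)
}

auc(c(0.1, 0.4, 0.35, 0.8), c(0, 0, 1, 1))   # 0.75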
/tuzes17475,methods and results,"It would be good to write down here who worked with which method(s) and what results they achieved. I'll start with myself: I took the simple average of the redshifts in the training set and assigned it as the redshift of every galaxy to be measured. RMSE: around 0.115; runtime: seconds. k-nearest-neighbour approximation with k=1. RMSE: 0.043; runtime: 2-2.5 hours @ 1.2GHz. k-nearest-neighbour approximation with k=2, taking into account that the measurements also have errors. (I assumed the errors were Gaussian, taking the quoted error as the sigma of the Gaussian - in 5 dimensions. I computed the redshift from the weighted redshifts of the k nearest neighbours, where the weights were based on the overlap integrals of the 5-dimensional Gaussians, i.e. the weight of the nearest galaxy relates to that of the 2nd nearest as integral(5-dim Gaussian of the query galaxy * 5-dim Gaussian of the 1st nearest galaxy) relates to integral(5-dim Gaussian of the query galaxy * 5-dim Gaussian of the 2nd nearest galaxy).) RMSE: 0.03807; runtime: 5-6 hours. k-nearest-neighbour approximation with k=200, taking the errors into account. RMSE: 0.03599. k-nearest-neighbour approximation with k=200, ignoring the errors (using 0.1 in their place). RMSE: 0.03519",0,None,5 ,Sat Oct 15 2011 23:42:16 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/947,/competitions/PhotometricRedshiftEstimation,3rd /however,submission,"Hey, where can I find an example of a submission? I read that we can use many programs to create the algorithm, but I can't understand how and what I have to send. Can someone post a valid submission form? Thank you",0,None,1 Comment,Sun Oct 16 2011 13:54:41 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/950,/competitions/hhp,None /smcinerney,Writeups of winning methodologies?,"We are doing a DM/ML course project and I would like to use the winning approaches to this competition as material, but in order to do that I would need to know the details this week. [Link]:http://www.kaggle.com/c/dunnhumbychallenge/forums/t/922/2nd-place-methodology/6041#post6041 already posted a good writeup. D'yakonov Alexander, Ben Hamner and all the others - are you doing writeups of your winning methodologies? Can you at least tell us in one paragraph what you tried and what you used? Thanks in advance",1,None,2 ,Sun Oct 16 2011 15:55:51 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/951,/competitions/dunnhumbychallenge,None /clancybirrell,ROC AUC as an incoherent classifier measurement,"Hi, wondering if the admins at Kaggle have read or heard about this paper, and if so what the thoughts are on continuing to use ROC AUC for competition relative measures. (I admit I have not pored over the conditions to confirm if the ROC AUC is still in fact being used.) http://www.google.com.au/url?sa=t&source=web&cd=4&sqi=2&ved=0CDkQFjAD&url=http%3A%2F%2Fwww.springerlink.com%2Findex%2Fy35743hp7010g354.pdf&ei=gO-aTriKGeaQiAfJ2ZisAg&usg=AFQjCNFTGCEEx3S8Qe9__OUZYbI3vepGQQ Regards, Clancy.",1,None,1 Comment,Sun Oct 16 2011 16:55:19 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/952,None,None
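The Gaussian-overlap weighting in the methods-and-results post above has a convenient closed form: the overlap integral of two Gaussians N(mu1, S1) and N(mu2, S2) equals the density N(mu1; mu2, S1 + S2). A rough R sketch under that reading, with diagonal covariances; the function and argument names are illustrative, not the poster's code.

# Overlap integral of two diagonal 5-D Gaussians = product of 1-D normal
# densities with the per-dimension variances added.
overlap_weight <- function(x, y, sx, sy) {
  prod(dnorm(x - y, mean = 0, sd = sqrt(sx^2 + sy^2)))
}

# Error-weighted k-NN redshift estimate for a single query galaxy.
knn_redshift <- function(query, qerr, train, terr, z, k = 200) {
  d2 <- rowSums(sweep(train, 2, query)^2)   # squared Euclidean distances
  nn <- order(d2)[seq_len(k)]               # k nearest neighbours
  w  <- sapply(nn, function(i) overlap_weight(query, train[i, ], qerr, terr[i, ]))
  sum(w * z[nn]) / sum(w)                   # overlap-weighted average
}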
/vishalsurana,SeriousDlqnin2yrs vs 90DaysLate,"Hello, I couldn't quite grasp the difference between SeriousDlqnin2yrs and NumberofTimes90DaysLate. Based on the definition of the former, I would guess that if a person is 90 days or more past due, then he/she is delinquent as per the definition of SeriousDlqnin2yrs. Can you also illustrate the difference with an example? Additionally, for a person to be SeriousDlqnin2yrs, is it not necessary for him/her to be 90DaysLate at least once? If that is the case, then how do we have entries where a person has been delinquent, but hasn't been 90 days past due? Thanks!!",1,None,2 ,Mon Oct 17 2011 05:43:06 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/954,/competitions/GiveMeSomeCredit,None /cbusch,RMSLE,What is the accuracy percentage of a model with an RMSLE of .4?,0,None,8 ,Mon Oct 17 2011 21:53:47 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/955,/competitions/hhp,266th /argv,Final reminder: send descriptions to semisupervisedfeatures@gmail.com,"Thanks again to all of the competitors -- the range of approaches and quality of submissions has been fantastic! As a final reminder, remember to send the short descriptions of your four selected submissions to semisupervisedfeatures@gmail.com by the end of the competition deadline so that the results can be eligible for the prize and your work properly credited in the competition writeup. An example description appears in the thread ""Final Submission Details"". Thanks again for participating!",0,None,1 Comment,Tue Oct 18 2011 00:56:27 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/956,/competitions/SemiSupervisedFeatureLearning,None /benhamner,Contest methods,"Surprisingly, one supervised method we tried on a whim to establish a supervised baseline ended up as the top performing model. Here's a brief description of our method: though we explored various unsupervised and semi-supervised options, our best submission consisted of purely supervised features: the posterior probabilities output from Breiman's Random Forest algorithm. Input features to this algorithm came from the union of two sets: the top k features with the most non-zeros, and the top k features with the largest difference between class means. The sum of all features for each data point was included as well. Only the labeled dataset was used to select these features and train the model. A total of 5 submissions were included as final features, each comprising a different number of the top features from the two feature selection methods (ranging from ~600 to ~1400 total features) and slightly different random forest parameters. This method got first on the public and private sets, with a private AUC of 0.9771. Here are several of the alternatives we explored: weighted k-means / mini-batch k-means; wrapper methods around supervised methods to incorporate unsupervised data; wrapper methods + multiple views around supervised methods; SVD. Here are some of the other methods that we did not have the time and/or computational resources to explore, but wanted to, and we are curious to see if other contestants gave them a shot: sparse autoencoders, deep belief networks, restricted Boltzmann machines, latent Dirichlet allocation, self-organizing maps, graphical models",5,bronze,17 ,Tue Oct 18 2011 04:37:00 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/959,/competitions/SemiSupervisedFeatureLearning,1st
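A rough R sketch of the feature-selection-plus-random-forest recipe in the post above; this is a reconstruction from the description, not the team's code, and k and the forest parameters are placeholders.

library(randomForest)

# Union of the top-k features by non-zero count and the top-k by absolute
# class-mean difference, plus a row-sum feature; labeled data only.
select_and_train <- function(X, y, k = 700) {
  nz  <- colSums(X != 0)
  gap <- abs(colMeans(X[y == 1, , drop = FALSE]) -
             colMeans(X[y == 0, , drop = FALSE]))
  keep  <- union(order(nz,  decreasing = TRUE)[1:k],
                 order(gap, decreasing = TRUE)[1:k])
  feats <- cbind(X[, keep, drop = FALSE], rowsum = rowSums(X))
  randomForest(feats, as.factor(y))
}

# The submitted "features" would then be posterior probabilities:
# predict(fit, newX, type = "prob")[, 2]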
/szira18868,Chat,"I'm just trying out the forum function while I'm here. At least there will be somewhere to write if anyone has any trouble with the homework :) Bye, Norbi",0,None,3 ,Tue Oct 18 2011 10:38:34 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/961,/competitions/DMT2011RapidMiner,4th /robrenaud,Did anyone try to bootstrap the unlabeled data?,"Train a classifier on the labelled data. Label all the unlabelled data. For the most confident freshly labelled data, throw it back into the labelled data set and retrain.",0,None,1 Comment,Tue Oct 18 2011 19:09:07 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/962,/competitions/SemiSupervisedFeatureLearning,None /yogahariman,DebtRatio(Percentage) >1 for MonthlyIncome,"If I show the values with DebtRatio > 1, I get a MonthlyIncome pattern like this: DebtRatio(Percentage) MonthlyIncome 2 NaN 2 NaN 2 NaN 2.00039994700000 7500 2.00059988000000 5000 2.00159936000000 2500 ....... 2.97956577300000 782 2.99375780300000 800 2.99467376800000 750 3 NaN 3 NaN 3 NaN .... MonthlyIncome is NaN (null value) whenever DebtRatio > 1 and the value of DebtRatio is an integer. How to explain this pattern??",0,None,1 Comment,Thu Oct 20 2011 08:40:37 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/964,/competitions/GiveMeSomeCredit,416th /jeffmoser,External Data,"The competition host is allowing the use of external data in this competition with the following restrictions: The dataset must be freely available and usable for commercial purposes. The dataset must be helpful in making future looking predictions (i.e. you shouldn't use a dataset that is specific to one year simply because this competition's test set is in the past. The dataset should be helpful in making future predictions). To ensure compliance with the above guidelines, you must provide a link to the external dataset(s) you use to generate a submission that you upload for scoring. This link should be provided as part of this forum topic.",5,bronze,24 ,Thu Oct 20 2011 17:38:28 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/965,/competitions/DontGetKicked,None /atos18420,Getting started with Kaggle,"I want to get started with Kaggle and I don't know the difference between test and training data. Also, what should I produce as an outcome? Does it have a percentage type or binary (0,1) like the training data???",0,None,2 ,Fri Oct 21 2011 11:56:13 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/966,/competitions/GiveMeSomeCredit,None /jfister,Natural Language Processing,"As a Kaggle member, I thought this would be of interest to others. We are looking for someone with a background in NLP and Machine Learning for 20-40 hours per week. Here are the details EDIT: no longer available",1,None,2 ,Fri Oct 21 2011 15:53:36 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/967,None,None /solorzano,Leaderboard Layout,It appears to be broken in Firefox 7. (Looks fine in IE 8.),1,bronze,3 ,Fri Oct 21 2011 20:10:56 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/968,None,None
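The bootstrap idea in robrenaud's post above is usually called self-training, and it fits in a few lines. A minimal R sketch, assuming X and X_unlab are data frames with identical columns and no column already named y; logistic regression stands in for whatever base classifier is actually used, and the confidence threshold is arbitrary.

# Self-training: repeatedly promote the most confident pseudo-labels.
self_train <- function(X, y, X_unlab, thresh = 0.95, rounds = 5) {
  for (r in seq_len(rounds)) {
    fit  <- glm(y ~ ., data = data.frame(X, y = y), family = binomial)
    p    <- predict(fit, newdata = X_unlab, type = "response")
    sure <- p > thresh | p < 1 - thresh          # most confident predictions
    if (!any(sure)) break
    X       <- rbind(X, X_unlab[sure, , drop = FALSE])
    y       <- c(y, as.numeric(p[sure] > 0.5))   # freshly assigned labels
    X_unlab <- X_unlab[!sure, , drop = FALSE]
  }
  fit
}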
/laszlomocsar,Leaderboard screwed,"For 2-3 days now, the leaderboard has seemed screwed in my Firefox 7.0.1 (see attachment). What's changed? Can you undo it in the webpage code? (In Opera 11.5, it looks OK.) Any ideas? [Link]:https://storage.googleapis.com/kaggle-forum-message-attachments/1359/leaderboard.jpg",0,None,2 ,Fri Oct 21 2011 23:15:42 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/969,/competitions/DontGetKicked,283rd /signipinnis,Improving the models,"Now I feel bad, my last comment in the Milestone Papers thread has apparently killed off all the conversation in the entire forum. Not my intention. Okay, I'll ante up a chip ... I'm imputing sex for all the sex=unknown members. Which is to say I've just started; I don't have any results yet to know if it improves the models. (I'm a slow worker, and literally all I knew about R when I started was that it was the 19th letter in the alphabet. (See, even that micro bit was wrong.)) Anybody else find imputing sex helps? Or not?",0,None,10 ,Sat Oct 22 2011 05:54:41 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/970,/competitions/hhp,864th /silicon2006,questions on D'yakonov Alexander's methodology,"We have the following questions on D'yakonov's methodology and wonder if there are any explanations. (1) In the formula to calculate Pt on the 1st page, why do you weight the data from each week by (53-r)^2? This factor varies significantly! (2) In the formula above, why do you consider the first visit of each week twice, using 0.125*delta(t-7r)? The importance of ""first visits"" is already taken into consideration in the later discussions, e.g. p2' = (1-p1)p2. (3) Is there any theoretical basis for using pt'*(mt + epsilon) to find the most probable date of first visit? Why not use pt alone, or pt*sqrt(mt + epsilon)? (4) On the 2nd page, (mu(1), ..., mu(m)) = (mu(1),...,mu(n1), mu(1)',...,mu(6)', mu(7)',...mu(n2)'), what does the formula mean? (5) On the 2nd page, what does it mean by ""add the last purchases, no more than 6 + 0.4*n1""? (6) On the 2nd page, what does it mean by ""using weights sqrt(m), sqrt(m-1),....""? Best regards, Bo.",0,None,1 Comment,Sat Oct 22 2011 10:44:52 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/971,/competitions/dunnhumbychallenge,None /matthewroos,What approach did you use? Any boosting?,"Hi everyone. Things have been quiet here and I'm eager to hear about the methods the winners used. Is anyone else willing to share his or her approach (regardless of how successful it was)? I joined the competition late and only had time to use some of the categorized variables, but below is a summary of what I tried. It gave a score of 0.1 on the training data (I did not segment the training data into subsets for cross validation, so there was probably some overfitting). My final test score was only 0.079, but maybe that's not so bad given that I didn't use any continuous variables or the model and year information. I utilized the vehicle make, the 12 alphabetic vehicle categorical variables, the 1 ordered vehicle categorical variable, and the 1 alphabetic non-vehicle categorical variable. I’m working with a 5 year-old MacBook so the large data set was cumbersome. I only worked with one variable at a time, reading the data in and labeling all categories with integers, rather than alphabetic characters, for further processing in Matlab. For each category I computed the mean amount paid for each label within the category (e.g., ‘A’, ‘B’, etc.) and stored the mean values (and number of entries (Row_IDs) associated with each label).
Then, for each entry/Row_ID I computed the sum of the means affiliated with its labels. For example, imagine an entry had a Cat1 value of ‘E’ and a Cat2 value of ‘B’. If all entries with an ‘E’ for Cat1 had a mean amount paid of 0.0059 and all entries with a ‘B’ for Cat2 had a mean amount paid of 0.0079, then the total score for the vehicle would be S = 0.0059 + 0.0079 + … (add values for all other categories). The final ordering prediction/estimate was derived from a ranking of these scores. It was slightly interesting to note that one can estimate the normalized gini based on a *single* categorical variable, given the mean amount paid and number of entries associated with each label (i.e., one doesn't need to compute the gini directly): NormGiniEstimate = 1 - (A1 + A2)/A0, where A0 = SUM_i(N_i*M_i) * SUM_i(N_i) / 2, A1 = SUM_i(N_i^2 * M_i) / 2, and A2 = SUM_i(N_i * SUM_j(N_j*M_j)) with the inner sum taken over j < i. Here M_i is the mean amount paid for the i-th label and N_i is the number of entries with that label; M_i and N_i are sorted by M_i such that M_i <= M_(i+1). A small number of categorical variables had notably higher gini scores and most of the prediction power came from these variables alone (adding all others to the total score made little or no difference in the final ranking). I tried weighting scores from each variable differently based on things such as the predicted single-variable gini values, variance of label means within a category, etc., but found nothing better than equal summation. I also tried boosting using AdaBoost. For this approach I used each variable as the sole input to a classifier. If an entry had a label with a mean amount paid below the total average, I classified it as non-paying, otherwise as paying. All classifier outputs were weighted based on AdaBoost. The results were very disappointing, approaching complete randomness. I’m new to boosting and this was my first attempt to use it, so I’m very curious to know if others tried it and what the results were.",1,bronze,2 ,Mon Oct 24 2011 17:28:07 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/973,/competitions/ClaimPredictionChallenge,30th
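The single-variable gini estimate above is quick to verify in code. A sketch in R, taking per-label counts N and means M as inputs and using a shifted cumulative sum for the inner sum over j < i (the ascending sort follows the post's convention).

# Normalized-gini estimate from per-label counts (N) and mean payouts (M).
norm_gini_estimate <- function(N, M) {
  o <- order(M); N <- N[o]; M <- M[o]            # sort so M is non-decreasing
  A0 <- sum(N * M) * sum(N) / 2
  A1 <- sum(N^2 * M) / 2
  A2 <- sum(N * c(0, head(cumsum(N * M), -1)))   # inner sum over j < i
  1 - (A1 + A2) / A0
}

norm_gini_estimate(N = c(10, 5, 20), M = c(0.002, 0.010, 0.006))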
/blindape,someone is trying to interpret data from a clinical point of view,Or do you direct efforts preferentially to the predictive model? I am having serious problems with the clinical interpretation and am about to give it up and focus on more theoretical/abstract data mining,0,None,1 Comment,Mon Oct 24 2011 19:57:18 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/974,/competitions/hhp,3rd /dslate,Splitting of data into training and test,"The Data page says: ""The data set is split to 60% training and 40% testing."" Can you confirm that records are assigned randomly between training and testing, and also randomly between the Public (Leaderboard) and Private datasets?",1,None,19 ,Tue Oct 25 2011 06:40:49 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/975,/competitions/DontGetKicked,2nd /wepredict,Full time job in the UK solving this problem.,"I’m not sure how long this post will stay up, or if this is even within the rules, however here goes. We are a predictive analytics company and we have teamed up with an NHS team to use NHS data to work on the unplanned hospitalization problem. We have fantastic academic support as well as a dedicated team to help you work on this problem. We want to hire someone full time to develop predictive health solutions with this competition as a backdrop. If you are interested please contact me. James Davies Managing Director jdavies@wepredict.co.uk www.WePredict.co.uk",0,None,2 ,Tue Oct 25 2011 13:36:14 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/976,/competitions/hhp,291st /sashikanthdareddy,Leaderboard ranking,"Hi, any plans on making the leaderboard report a performance statistic averaged over N bootstrapped samples rather than a point estimate? While you are at it, maybe add some confidence intervals to the average obtained from the bootstrap.",0,None,1 Comment,Tue Oct 25 2011 19:49:14 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/977,/competitions/GiveMeSomeCredit,426th /blindape,I repeat this question I think is important,There may be a patient in the target file who died in Y4 after one or more claims. It's unclear whether patients who died after some claims in Y4 have been deleted.,0,None,1 Comment,Tue Oct 25 2011 21:11:44 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/978,/competitions/hhp,3rd /sirguessalot,"interpretation of ""probability of default""","This is a question for fellow Kagglers. Suppose a person scores x = 0.25 as their probability to default. Is there some industry-standard benchmark or formula that is used to determine whether that person should be given credit or not? Other than the obvious ""probability cutoff"" (e.g. nobody above 0.75 is given credit), what approach would you use to determine eligibility?",0,None,7 ,Wed Oct 26 2011 18:12:22 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/979,/competitions/GiveMeSomeCredit,28th /keiththerring,Wikipedia Participation Challenge : An unfortunate ending,"I'd like to discuss what has turned out to be an unfortunate conclusion to the Wikipedia Challenge. First let me say though that I appreciate all the effort put in by Diederik van Liere from Wikipedia, who sponsored the competition. Also I'd like to say that I think Kaggle is a great concept and I'm really rooting for its continued success. As a data enthusiast, the two competitions that I've had a chance to participate in have been a fun way to spend a bit of my spare time. So I hope the lessons learned from the Wikipedia Challenge will help to improve the experience for participants in future competitions. At the end of the competition I looked over the solution submitted by Benjamin Roth and Fridolin Roth, who ended the competition at the top of the leaderboard. Before getting into the issue, I'd like to say that I mean no disrespect to them and appreciate their participation in the challenge. That said, I was surprised to see that their winning solution was simply a standard/vanilla linear regression model on approximately a dozen features. Upon further inspection I noticed that they made an unusual/arbitrary choice of training two linear regressions, one on the ""odd"" editors and another on the ""even"" editors. Here ""odd"" and ""even"" are defined w.r.t. their index of appearance/ordering in the original training set, training.tsv, not by user id. From a learning theory perspective, I found this to be puzzling, since this odd/even distinction is an arbitrary feature of the data set construction and should have no influence on the quantity being predicted, future edits.
Further investigation revealed that there was a significant flaw in how the training set, training.tsv, was constructed. The ""odd"" editors all had zero future edits and the ""even"" users all had greater than zero future edits. In other words, the order of the training set was not properly randomized. Through no fault of their own, Benjamin and Fridolin stumbled upon this mistake in the data set construction, perhaps unaware that it was an artifact/mistake. According to Diederik: ""they were curious about it, but it just performed very well"", which suggests they are still new to data analytics and learning, a position we all start from. It is through perfect knowledge of which editors quit (zero future edits) that their model was able to achieve such high performance using only a standard linear regression. Unfortunately, it would be impossible to know this information in general, since it is precisely a significant fraction of the quantity/output which the model is designed to predict. As such it is not a valid model and can't be used by Wikipedia to gain insight on editor participation. Had their model been applied to randomized data, i.e. without the knowledge of which users quit, it would have performed outside the top 50 w.r.t. the final leaderboard. In short, I hope that Kaggle can apply the lessons learned from this unfortunate randomization error and take a more active role in helping the sponsor construct their data sets. I think it would be a shame for these competitions to be decided based on mistakes/artifacts in the data set, particularly mistakes like not randomizing the order of the training samples, rather than on predictive capability. Happy mining to all......",10,silver,9 ,Wed Oct 26 2011 18:44:08 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/980,None,None
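An ordering artifact like the one described above is cheap to screen for before launch. A small R illustration of one such check (an illustration only, not anything the organizers ran): cross-tabulate the target against row parity, which should carry no signal in a properly shuffled training file.

# In a shuffled file, "quit" (zero future edits) should be spread evenly
# across odd and even row indices.
parity_check <- function(y) {
  parity <- ifelse(seq_along(y) %% 2 == 1, "odd", "even")
  table(parity, quit = y == 0)
}

y <- rep(c(0, 3), 50)   # toy file reproducing the odd/even artifact
parity_check(y)         # perfect separation -> red flag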
/keiththerring,An unfortunate ending to this competition,See my post on the main forum for an explanation of how the first place team was able to achieve their result. It is an unfortunate ending to an otherwise great competition.,0,None,5 ,Wed Oct 26 2011 19:18:23 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/982,/competitions/wikichallenge,2nd /cloud9,Websites for health information management professionals,Slightly OT but are there any websites that people here frequent to stay on top of trends in healthcare information management?,0,None,3 ,Fri Oct 28 2011 00:25:25 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/983,/competitions/hhp,None /petewarden,Welcome!,"I just wanted to post a quick note thanking everyone who's taking a shot at this. I've been a fan of Kaggle for a long time, so I was excited when I hit a problem that seemed like a good fit. I'm going to be following the forum, so feel free to shoot me questions here and I'll do my best to answer. I'm also working on putting together an open example showing our current best solution (which is extremely primitive) just as a reference. As a tiny startup, I think we're in an unusual position. A lot of the competitions here seem to be intractable problems that have baffled internal teams at large organizations. I've been so impressed with the quality of the solutions that have come up here that I'm turning to you folks first, not as a last resort. I'm looking forward to seeing how this approach works out, and I wish you all luck! cheers, Pete Warden, @petewarden",0,None,16 ,Sat Oct 29 2011 23:50:50 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/985,/competitions/PhotoQualityPrediction,None /robertlachlan,Other data sets?,"Are we allowed to look for external data to train our algorithm? Something that comes to mind is in the data description, where you mention that you expect ""areas in Africa rich in wildlife [have] a high proportion of good photos"". So following that example, are we allowed to look for geographic data sets that might help interpretation of the latitude and longitude values?",0,None,13 ,Sun Oct 30 2011 06:59:49 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/986,/competitions/PhotoQualityPrediction,None /alecstephenson,Website Small Technical Issue,"I posted my first submission about an hour ago giving me a public score of 0.22024, but there appears to be some technical issue and I am not shown on the leaderboard. I should be 4th currently. Thanks.",1,bronze,1 Comment,Sun Oct 30 2011 12:53:49 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/987,/competitions/PhotoQualityPrediction,91st /sirguessalot,Anonymization technique?,"I have a couple of questions about the technique used to convert words into integer tokens (name, description, and caption fields): 1. Were ""stop words"" removed prior to anonymizing? (Stop words in English would be the most common ones that don't really carry a meaning, e.g. ""the"", ""a"", ""on"", ""of"", etc.) 2. Was the word order preserved prior to/after anonymization? 3. Do any of the integers represent more than one word? E.g. 306 = ""Eiffel Tower""? Thanks.",1,None,3 ,Sun Oct 30 2011 18:41:54 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/988,/competitions/PhotoQualityPrediction,16th /blindape,"489 with age>90 and 13 with age>100, the record is 109",I think some data are from a few years ago and age has been calculated as (current date - birth date).,0,None,1 Comment,Mon Oct 31 2011 14:48:57 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/989,/competitions/GiveMeSomeCredit,62nd /chunk18552,Pending Score,I've had a submission not scored but listed as 'Pending'. Anyone know how long it'll take to be scored?,0,None,6 ,Mon Oct 31 2011 23:27:37 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/990,/competitions/AdvancedDataManagementMusicIdentification,1st /petewarden,Here's a benchmark and example code,"The Kaggle folks requested that I put together a benchmark demonstrating our current approach, so I pulled together a project and open-sourced the code: http://petewarden.typepad.com/searchbrowser/2011/11/how-to-enter-a-data-contest-machine-learning-for-newbies-like-me.html I don't think you'll learn much from it (except how much I need your expertise!)
but I wanted to give you all a heads-up that it was out there.",1,bronze,13 ,Tue Nov 01 2011 07:30:51 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/991,/competitions/PhotoQualityPrediction,None /dirknbr,Naive Bayes,"As this is a spam-type problem, has anyone tried Naive Bayes yet?",1,None,3 ,Tue Nov 01 2011 11:00:22 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/992,/competitions/PhotoQualityPrediction,86th /dirknbr,user id,"Wouldn't user IDs be good variables to use, for the photo producer as well as the rating user?",0,None,5 ,Tue Nov 01 2011 11:01:28 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/993,/competitions/PhotoQualityPrediction,86th /del=1b2c033727f817d4,Free Online Course about Machine Learning,"I just found this free course, got excited, and I thought I'd share the information; maybe you will find it useful :-) http://www.ml-class.org/course/auth/welcome Best, Csiga",1,bronze,1 Comment,Tue Nov 01 2011 14:55:14 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/994,/competitions/hhp,1250th /clueless0,Distance between two points,"Here's a little tidbit that may save someone a little search-engine time... Fun note: I don't sail or fly, so I didn't learn this from a navigation course. Instead, I once had a data analysis problem for which I needed to ""cover"" the surface of a variable-size sphere with N near-equidistant points, where N was variable (between 6 and ~50000 typically). Easily done for certain numbers of points, obviously, but much more difficult for others. So, without further ado: given two points defined by latitude and longitude, you can calculate the distance using the spherical law of cosines. Actually the haversine method is more accurate, but since the lat/lon points we're given are rounded there's no point in using that. You can also calculate a bearing, but I'll leave that as an exercise for the reader (LOL). And now for the actual math: Given: two points defined as [lat1, lon1] and [lat2, lon2] in radians, and R (the mean radius of the earth in km: ~6371). distance = acos(sin(lat1)*sin(lat2) + cos(lat1)*cos(lat2)*cos(lon2-lon1)) * R If anyone spots an error in the formula I apologize in advance, but please tell me so I can edit this post accordingly.",2,bronze,2 ,Tue Nov 01 2011 20:05:53 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/996,/competitions/PhotoQualityPrediction,47th /scifipix1,AUC code for MATLAB,"Hello everyone, does anyone know how the AUC is being calculated for the leaderboard positions? It would be very convenient if I could have the AUC code for MATLAB to test my code & then upload it. Thank you.",0,None,7 ,Wed Nov 02 2011 00:47:33 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/997,/competitions/GiveMeSomeCredit,174th /jeffmoser,Optimized Constant Benchmark,"I [Link]:http://www.heritagehealthprize.com/c/hhp/forums/t/661/the-optimized-constant-value-benchmark from the Heritage Health Prize and created the ""Optimized Constant Benchmark"" of submitting all predictions equal to ē = (−s0 − log(1 − p0)) / (log(p0) − log(1 − p0)), where s0 is the public score for all zeros, p0 is the ""capped"" prediction for 0, which is 0.01, and ē is the average expected score, which comes out to ≈0.262776080526733. I might have made a mistake in my derivation. Can anyone double check my math? Alternatively, prove me wrong with a better constant value submission :)",0,None,5 ,Wed Nov 02 2011 00:48:08 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/998,/competitions/PhotoQualityPrediction,None
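The algebra in the benchmark post above checks out if the metric is mean binomial deviance with predictions capped at p0 = 0.01 (an assumption here). A quick R verification; since s0 itself is not quoted in the post, it is recovered from the other values.

p0   <- 0.01
ebar <- 0.262776080526733               # value quoted in the post
dev  <- function(c, e) -(e * log(c) + (1 - e) * log(1 - c))

s0 <- dev(p0, ebar)                     # implied all-zeros public score
(-s0 - log(1 - p0)) / (log(p0) - log(1 - p0))   # reproduces ebar
optimize(dev, c(p0, 1 - p0), e = ebar)$minimum  # best constant is ebar itself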
/mgolden,Limited participation,"Hi, I'm new here and can't seem to find on the website what is meant by a ""Limited participation competition"". How does one request to join such a competition?",8,None,9 ,Thu Nov 03 2011 10:14:42 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/999,None,None /gurujeet,Where is external data posted,"I'm new here, and apologies if this has been asked and answered. The rules indicate that it is OK to link in external data so long as it is published in some way for others to also see and use. Where do we find any external data that others may be using? Thanks, Gurujeet",0,None,2 ,Thu Nov 03 2011 19:57:11 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1000,/competitions/hhp,None /i9051815,Delete a submission,"Anyone know if you can delete a submission? I accidentally submitted one based on training, not test, data.",0,None,1 Comment,Thu Nov 03 2011 21:31:24 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1001,/competitions/AdvancedDataManagementMusicIdentification,9th /daywednes,"programming languages, and tools","Hi all, I'm a newbie in this kind of competition. I only know a bit of Octave for machine learning problems. After trying to use Octave to load the training data, I feel like it's kinda slow compared to a program written in C++. Would anyone here recommend some other tools used in this kind of competition? It would be useful to other newbies like me. I found out python-sklearn is pretty fast compared to Octave. Thanks a lot",0,None,7 ,Fri Nov 04 2011 06:43:57 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1002,/competitions/PhotoQualityPrediction,171st /blindape,A question to the netflix competitors,"Based on your experience in that challenge, do you think the current pace of improvement in the HHP prize is sufficient to reach the 0.40 threshold?",0,None,9 ,Fri Nov 04 2011 13:48:41 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1003,/competitions/hhp,3rd /changeagents,Population Characteristics,"I appreciate the competitive opportunity, although I doubt I will spend much energy on this. The reason is that, while admirable, it appears you are using isolated populations with limited data save the characteristics and demographics of each episode. Labs help, meds help, etc. But I sense you will be repeating auto-correlation models that can reach great degrees of significance but only apply to a small subset. In the past many predictive variables have emerged that we never considered, dating back to work on the Pra and John Ware's SF models. My industry has now spent millions on DCGs, ACGs, ETGs, etc., and we find that insurance organizations continue to rely on actuaries. Many of the models work well with sample sizes of >20,000; they fall apart with the individual. Trust me, I have run the best models on 1.2 million members. Now, if you will provide other info such as living status (alone, etc.) we might get you to where you want to be: ready to assume the risk. I imagine saving 100 hospitalization episodes is worth it?
/mgolden,Limited participation,"Hi, I'm new here and can't seem to find on the website what is meant by a ""Limited participation competition"". How does one request to join such a competition?",8,None,9 ,Thu Nov 03 2011 10:14:42 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/999,None,None /gurujeet,Where is external data posted,"I'm new here, and apologies if this has been asked and answered. The rules indicate that it is OK to link in external data so long as it is published in some way for others to also see and use. Where do we find any external data that others may be using? Thanks, Gurujeet",0,None,2 ,Thu Nov 03 2011 19:57:11 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1000,/competitions/hhp,None /i9051815,Delete a submission,Anyone know if you can delete a submission? I accidentally submitted one based on training rather than test data.,0,None,1 Comment,Thu Nov 03 2011 21:31:24 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1001,/competitions/AdvancedDataManagementMusicIdentification,9th /daywednes,"programming languages, and tools","Hi all, I'm a newbie in this kind of competition. I only know a bit of Octave for machine learning problems. After trying to use Octave to load the training data, I feel it's quite slow compared to a program written in C++. Would anyone here recommend some other tools used in this kind of competition? It would be useful to other newbies like me. I found that python-sklearn is quite a bit faster than Octave. Thanks a lot",0,None,7 ,Fri Nov 04 2011 06:43:57 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1002,/competitions/PhotoQualityPrediction,171st /blindape,A question to the netflix competitors,"Based on your experience in that challenge, do you think the current pace of improvement in the HHP prize is sufficient to reach the 0.40 threshold?",0,None,9 ,Fri Nov 04 2011 13:48:41 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1003,/competitions/hhp,3rd /changeagents,Population Characteristics,"I appreciate the competitive opportunity, although I doubt I will spend much energy on this. The reason is that, while admirable, it appears you are using isolated populations with limited data save the characteristics and demographics of each episode. Labs help, meds help, etc. But I sense you will be repeating auto-correlation models that can reach great degrees of significance but only apply to a small subset. In the past many predictive variables have emerged that we never considered, dating back to work on the Pra and John Ware's SF models. My industry has now spent millions on DCGs, ACGs, ETGs, etc., and we find that insurance organizations continue to rely on actuaries. Many of the models work well with sample sizes of >20,000, but they fall apart with the individual. Trust me, I have run the best models on 1.2 million members. Now, if you will provide other info such as living status (alone, etc.) we might get you to where you want to be: ready to assume the risk. I imagine saving 100 hospitalization episodes is worth it? Blessings",0,None,7 ,Fri Nov 04 2011 20:28:38 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1006,/competitions/hhp,None /skeptopotamus,Training vs Testing data,"So in Kaggle, there is training data and testing data. But selecting the best model out of many models, judging each model by how well it fits the testing data, is equivalent, at the meta-level, to training on the testing data. Of course this is a problem in any situation where the testing data is used more than once, but it seems especially serious in this case since you test so many models. Why not have three data sets: training, testing (to find the best model), and meta-testing (to compare the best model with the previous best model, before the Kaggle competition)? Wouldn't that allow for fairer claims about the prediction quality of the models that Kaggle chooses?",0,None,3 ,Sat Nov 05 2011 03:40:24 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1007,None,None /ian221110,Will this competition create a usable model?,"I just found this web site (and this competition) today due to an article in my local paper :-). The first thing that strikes me about this competition is that it is quite restricted in terms of the data provided - specifically the number of members and claims provided to model from, i.e., only 113,000 records in members.csv and only 2,668,990 records in claims.csv (release 3) - and even worse, it only covers a small range of years. I've got decades of professional experience building predictive models, mainly in the investment and insurance industries, and I can tell you right now that if HPN want to end up with a usable algorithm they need to (at least) significantly expand the supplied data. It may be that HPN doesn't have data for more members (I realise they are pretty small) but they really, really need to have data for a longer time period if they want a chance to get a better model than the infinite monkey theorem would provide – which is the main risk of modelling in a competitive environment… i.e., the more models that are submitted, the more likely it is that a model with no actual predictive capacity will be the best fit for the out-of-sample period – and possibly the worst in the period after that. I’ve seen this again and again in the investment industry, which is why it is normal practice to run your model live for a year ‘on paper’ out of sample as realistically as possible, then for another year with a small amount of money, and then slowly scale up from there. If I was being professionally paid for this job I’d want - 10+ years of data for as many clients as possible - The actual years the data applied to, so I could cross-link it with other factors - obviously using only factors known in real time (yet another newbie mistake!). And I don't know if they have kept the data set sizes really small so that people not normally handling data can have a go at it - but if that is the case, they should provide these small ones as well as some decent-sized ones. Ian",1,None,7 ,Sat Nov 05 2011 13:22:43 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1008,/competitions/hhp,None /uhejmadi,Optimized weights for ensemble models,"Could someone give me references for techniques to calculate optimum weights for ensemble models, please? Thanks.
M",0,None,1 Comment,Sat Nov 05 2011 22:02:32 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1009,/competitions/hhp,244th /byang1,"Questions about teams, team splits, and collaboration","Team split is not mentioned in contest rules or FAQ, but I assume: 1. A member can leave the team at any time, and he doesn't need approval from other team members or Kaggle to leave the team (although other team members may beat him up with machine-learning books for quitting). 2. When a team splits, the new teams carry on with the submission count of the old team. Correct ? ------------------------------------------------------- Now there're official meanings of 'member' and 'team' according to the contest rules, and there're common sense understanding of these words. For example, the 'one team per member' rule and the 'maximum 8 members per team' rule refer to the official meanings of member and team: a member is someone who registered, and a team consists of 2 or more registered members. So let's say there's a group of 20 people working as a team (common sense meaning), and only one of them is a registered member, then no contest rule is broken, correct ? What if 2 people out of the 20 registered as 2 separate teams ? Are they considered to be in violation of the '1 team per member' rule ? And how far can 2 official members/teams collaborate before they're considered to be really just one team ? Sharing ideas is encouraged, but if I give my source code or prediction files to another team, am I considered a member of that team ? Scenario: a 3-person team decides they need more submission slots to win, so one member split off into another team. His former team has full access to his source code and submission files, but he makes no claim to being a member of his old team. That is, he gives up his official position in a potential winning team for the benefit of the team. Is this kind of altruistic behavior allowed in this contest ? And what if 2 teams decide to share algorithms, source code, and submission data without merging, is this OK ?",0,None,1 Comment,Sun Nov 06 2011 00:39:47 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1010,/competitions/hhp,2nd /subratac,"Question on the numerical coding of name, description and caption","Do the numbers on the name, description and caption each denote a specific word? For example, in the training dataset, we have 454 and 1659 as the two numbers for the name field. If, for example, the number 454 denotes ""green' and ""1659"" denotes ""grass,"" can I then assume that wherever I see the number 454 in any of the three fields, the field contains the word ""green?"" Similarly for the number 1659... Thanks",0,None,1 Comment,Sun Nov 06 2011 15:30:18 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1012,/competitions/PhotoQualityPrediction,None /markwaddle,R function for binomial deviance,"I have not been able to find a binomial deviance function in R. Can someone post one or point me to one? Thanks, Mark",0,None,1 Comment,Sun Nov 06 2011 22:56:07 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1013,/competitions/PhotoQualityPrediction,116th /markwaddle,Submission not showing in leaderboard,"My first submission is not showing on the leaderboard, although it shows a valid score on the submissions tab. I have never seen this before. 
/markwaddle,Submission not showing in leaderboard,"My first submission is not showing on the leaderboard, although it shows a valid score on the submissions tab. I have never seen this before. Is this due to maintenance or some other sort of delay today (Sunday, 11/6 2:20PM PST)? Or is it a bug?",0,None,2 ,Sun Nov 06 2011 23:19:28 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1014,/competitions/PhotoQualityPrediction,116th /itsnadeem85,"predicted values between [0,1]","Hello there, This is Nadeem, a newbie in the field of data mining, so pardon me if my question sounds stupid. After going through the training and test sets and the example entries document for this problem, I am not able to digest the concept of calculating prediction values between [0,1]. This is a classification problem, where I have to predict either 0 or 1 (yes or no); how is this classification problem converted into regression (predicting real values)? Can someone please explain this to me? The problem is quite exciting, but only this question mark is lingering in my mind.",0,None,6 ,Mon Nov 07 2011 01:35:38 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1015,/competitions/DontGetKicked,None /jhoward,Judges' review of Market Makers paper,"Here is the judges' review ([Link]:https://docs.google.com/document/pub?id=1NXSp1hhsxgp6rt5b46p42F5ZfE5--HjhIFkdycPmkxw) of the Market Makers paper. The judges have reviewed both the paper and the comments on the forum, and have identified the issues in the paper that require attention. The judges felt that some of the requests made by competitors on the forum went beyond what would normally be expected from an academic paper. Therefore some issues raised on the forum are not included in the judges' review. If you made a request on the forum which does not appear in the judges' review, please do not ask for them to change their mind and add it - the review is finalised and additional issues will not be added. Market Makers now have 7 days to update their paper to answer these issues.",0,None,2 ,Tue Nov 08 2011 00:26:51 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1016,/competitions/hhp,None /class21475,Training Data Issue,"Is anyone having trouble handling the training file? It seems that in both the .zip and .7z versions the extracted .CSV file is fine until line 116,730 (row_id 116,729) -- then that line breaks early, there is a gap of 212,781,182 or so empty lines -- and then presumably it gets back on track. So there are 477,123 rows of data in 213,258,306 lines? I'm a bit new to this, so perhaps I'm missing something in how the file is structured or the lines are terminated -- but before I go crazy I just wanted to see if anyone else is having the same phenomenon.",1,bronze,2 ,Tue Nov 08 2011 19:43:57 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1019,/competitions/AlgorithmicTradingChallenge,26th /jce0121773,Will winning algorithm be used as rationale to boot patients?,"I'm brand new here and haven't seen all the posts, so forgive me if I am probing into an area already covered. It seems like a truly good algorithm could be used to pre-emptively identify patients that may have future problems and try to help them before a problem becomes acute. Alternatively, a nasty scrooge type could use the information to identify potentially costly patients to boot from the health care program.
I'm not saying this would be a problem for me, since I don't have much in the way of a conscience and there is very little chance that I will come up with anything worthwhile anyway. But for those that do have a conscience, has there been any pronouncement, legally binding or otherwise, to the effect that the resulting winning algorithm will not be used as an input in the decision to terminate a policy?",0,None,2 ,Wed Nov 09 2011 02:58:43 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1020,/competitions/hhp,690th /chefele,Questions about the Data,"This seems like an interesting challenge! I have some initial questions, mostly about the definitions of trade events & quote events & how they're interrelated: A trade at time t might result in a new best bid/ask price. In that case, are the bid/ask prices quoted at time t the prices that exist immediately before or immediately after the trade executes? If the best bid/ask did change as a result of a trade at event time t, would it always be reported as a quote event at event time t+1? This seems to be the case for the 'liquidity event' trade -- is this also true for all trades? Are there any other circumstances that would lead to a quote event? (e.g. best bid/ask canceled) In figure 1 on the ""Background"" page, it shows both a trade and a quote at t=0. Shouldn't these be separate events at separate event-times? Are all the bid/asks we're asked to predict guaranteed to be trade events only, or are there quote events, too? Timestamps are given in the data, but dates are not. Was all the data collected on the same day, or on different days?",0,None,1 Comment,Wed Nov 09 2011 04:45:18 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1021,/competitions/AlgorithmicTradingChallenge,4th /ppcguru,Congratulations on $11million SeriesA funding. So... now that Hal Varian is on board - will there be a GoogleAdwords algo competition? [Please],"@kaggle Congratulations on the $11million Series A funding :) http://blog.kaggle.com/2011/11/03/venture-capital-jobs-and-a-new-competition/ So... now that Hal Varian & Gil Elbaz are on the board - will there be a Google Adwords PPC algo competition? For both: * a historic auction revenue optimisation model & dataset, and * the forward-looking prediction that is used when a new Adcopy or DisplayURL is added, based on a historic dataset. Thanks Phil (I'm new to kaggle, btw) [Link]:http://www.linkedin.com/in/philpearce P.S. here is an Excel version of the Google Adwords algo I created: [Link]:https://bitly.com/qlvbeH [attached] P.P.S. The RichTextEditor you are using for this forum should not allow JavaScript events or PopUps to be inserted, as this could create a vulnerability by loading a malicious external script or iframes (you might want to disable those tabs within the backoffice CMS); also run a scanner such as websitedefender.com through the forum to make sure the RTE php code is secure. Also consider hosting the forum on a different IP to the main website, to improve security even further. http://www.bing.com/search?q=ip%3A65.52.203.186 [Link]:https://storage.googleapis.com/kaggle-forum-message-attachments/1475/google_adwords_ppc_algo.xls",1,None,2 ,Wed Nov 09 2011 10:27:26 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1022,None,None /del=45a7769885f2d2c0,Not able to download data files,"I just signed up on this website and am looking into this competition now.
I am not able to find any link to download the data files right now. FYI: I have accepted the rules. I only find the Data Schema and Data Fields sections - no link to the data files that other people are talking about in the ""Training Data Issue"" and ""Questions about the Data"" topics. It looks like the organizer has taken the files down for a while, as discussed in a previous thread. Would you please advise when the files are back up? I thought they would be back by morning US time. This is my first time here; maybe that's why I posted, impatiently looking for the files. Anyway, this looks like a nice challenge. Thank you very much.",0,None,5 ,Wed Nov 09 2011 15:42:23 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1023,/competitions/AlgorithmicTradingChallenge,85th /blindape,In albums with more than one photo,"Are width and height an average over the album's photos, or the values of one random photo from the album?",0,None,2 ,Wed Nov 09 2011 19:44:31 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1024,/competitions/PhotoQualityPrediction,22nd /leazar,Identifying when the liquidity shock occurs,"Since we are still waiting for the data :) Reading over the description of the data, does it contain information on when the liquidity shock occurs at time 'T'? The data does seem to contain the block trade that caused the shock, but not information on when it occurred in the datastream for the security.",0,None,2 ,Thu Nov 10 2011 03:39:02 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1026,/competitions/AlgorithmicTradingChallenge,61st /byang1,Let's guess leader's secret,"Hi everyone, PlanetThanet/Jason Tigg has had a huge lead over everyone else since the early days of this contest. Let's brainstorm what his secret might be. He probably discovered something simple that everyone else missed, or some really good external data. I have tried SVM, random forest, KNN, & GBM. I can do more tuning, but it will only get me relatively small improvements. Jason, feel free to chime in. :)",0,None,9 ,Thu Nov 10 2011 03:52:44 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1028,/competitions/PhotoQualityPrediction,1st /tinkerer,Group vs. Individuals ,"Hi, I'm new here and I have a question. I signed up as an individual. What if a group of n scientists signs up (sharing one account) and competes with other individuals (like myself)? Of course, this creates an advantage for them to win (yes, I'm talking about the money prize). Does Kaggle allow group sign-ups? Just wondering so that I know how hard I should try to win. (Yes, of course, I will learn by solving many different problems) -Bon",0,None,3 ,Thu Nov 10 2011 07:24:36 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1029,None,None /dirknbr,Why does the test data have bid51-100 and ask51-100 populated?,Why does the test data have bid51-100 and ask51-100 populated? Aren't we supposed to predict those?,1,bronze,7 ,Thu Nov 10 2011 15:13:55 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1030,/competitions/AlgorithmicTradingChallenge,41st /shankark,Algorithmic Trading Challenge contest,"I don't see the Algorithmic Trading Challenge contest appearing any more on the list of running competitions.
Has it been withdrawn temporarily, or for good?",1,bronze,6 ,Thu Nov 10 2011 21:10:03 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1031,None,None /jeffmoser,Importing the data into a database,"Since this dataset is quite large, I wanted to offer the database schema I used in case it helps others. I tried to use the smallest data type possible per column to be more efficient. I used SQL Server 2008, but see other related threads in the [Link]:https://www.kaggle.com/c/wikichallenge/forums/t/668/importing-the-data-into-sql-server and [Link]:https://www.kaggle.com/c/ClaimPredictionChallenge/forums/t/711/importing-to-sql-server-and-aggregate-statistics competitions for tips on how to do similar things in other databases. Here are my tables (note the use of smallmoney and time(3)): CREATE TABLE [training]( [row_id] [int] NOT NULL, [security_id] [tinyint] NOT NULL, [p_tcount] [int] NOT NULL, [p_value] [bigint] NOT NULL, [trade_vwap] [smallmoney] NOT NULL, [trade_volume] [int] NOT NULL, [initiator] [char](1) NOT NULL, [transtype1] [char](1) NOT NULL, [time1] [time](3) NOT NULL, [bid1] [smallmoney] NOT NULL, [ask1] [smallmoney] NOT NULL, [transtype2] [char](1) NOT NULL, [time2] [time](3) NOT NULL, [bid2] [smallmoney] NOT NULL, [ask2] [smallmoney] NOT NULL, [transtype3] [char](1) NOT NULL, [time3] [time](3) NOT NULL, [bid3] [smallmoney] NOT NULL, [ask3] [smallmoney] NOT NULL, [transtype4] [char](1) NOT NULL, [time4] [time](3) NOT NULL, [bid4] [smallmoney] NOT NULL, [ask4] [smallmoney] NOT NULL, [transtype5] [char](1) NOT NULL, [time5] [time](3) NOT NULL, [bid5] [smallmoney] NOT NULL, [ask5] [smallmoney] NOT NULL, [transtype6] [char](1) NOT NULL, [time6] [time](3) NOT NULL, [bid6] [smallmoney] NOT NULL, [ask6] [smallmoney] NOT NULL, [transtype7] [char](1) NOT NULL, [time7] [time](3) NOT NULL, [bid7] [smallmoney] NOT NULL, [ask7] [smallmoney] NOT NULL, [transtype8] [char](1) NOT NULL, [time8] [time](3) NOT NULL, [bid8] [smallmoney] NOT NULL, [ask8] [smallmoney] NOT NULL, [transtype9] [char](1) NOT NULL, [time9] [time](3) NOT NULL, [bid9] [smallmoney] NOT NULL, [ask9] [smallmoney] NOT NULL, [transtype10] [char](1) NOT NULL, [time10] [time](3) NOT NULL, [bid10] [smallmoney] NOT NULL, [ask10] [smallmoney] NOT NULL, [transtype11] [char](1) NOT NULL, [time11] [time](3) NOT NULL, [bid11] [smallmoney] NOT NULL, [ask11] [smallmoney] NOT NULL, [transtype12] [char](1) NOT NULL, [time12] [time](3) NOT NULL, [bid12] [smallmoney] NOT NULL, [ask12] [smallmoney] NOT NULL, [transtype13] [char](1) NOT NULL, [time13] [time](3) NOT NULL, [bid13] [smallmoney] NOT NULL, [ask13] [smallmoney] NOT NULL, [transtype14] [char](1) NOT NULL, [time14] [time](3) NOT NULL, [bid14] [smallmoney] NOT NULL, [ask14] [smallmoney] NOT NULL, [transtype15] [char](1) NOT NULL, [time15] [time](3) NOT NULL, [bid15] [smallmoney] NOT NULL, [ask15] [smallmoney] NOT NULL, [transtype16] [char](1) NOT NULL, [time16] [time](3) NOT NULL, [bid16] [smallmoney] NOT NULL, [ask16] [smallmoney] NOT NULL, [transtype17] [char](1) NOT NULL, [time17] [time](3) NOT NULL, [bid17] [smallmoney] NOT NULL, [ask17] [smallmoney] NOT NULL, [transtype18] [char](1) NOT NULL, [time18] [time](3) NOT NULL, [bid18] [smallmoney] NOT NULL, [ask18] [smallmoney] NOT NULL, [transtype19] [char](1) NOT NULL, [time19] [time](3) NOT NULL, [bid19] [smallmoney] NOT NULL, [ask19] [smallmoney] NOT NULL, [transtype20] [char](1) NOT NULL, [time20] [time](3) NOT NULL, [bid20] [smallmoney] NOT NULL, [ask20] [smallmoney] NOT NULL,
[transtype21] [char](1) NOT NULL, [time21] [time](3) NOT NULL, [bid21] [smallmoney] NOT NULL, [ask21] [smallmoney] NOT NULL, [transtype22] [char](1) NOT NULL, [time22] [time](3) NOT NULL, [bid22] [smallmoney] NOT NULL, [ask22] [smallmoney] NOT NULL, [transtype23] [char](1) NOT NULL, [time23] [time](3) NOT NULL, [bid23] [smallmoney] NOT NULL, [ask23] [smallmoney] NOT NULL, [transtype24] [char](1) NOT NULL, [time24] [time](3) NOT NULL, [bid24] [smallmoney] NOT NULL, [ask24] [smallmoney] NOT NULL, [transtype25] [char](1) NOT NULL, [time25] [time](3) NOT NULL, [bid25] [smallmoney] NOT NULL, [ask25] [smallmoney] NOT NULL, [transtype26] [char](1) NOT NULL, [time26] [time](3) NOT NULL, [bid26] [smallmoney] NOT NULL, [ask26] [smallmoney] NOT NULL, [transtype27] [char](1) NOT NULL, [time27] [time](3) NOT NULL, [bid27] [smallmoney] NOT NULL, [ask27] [smallmoney] NOT NULL, [transtype28] [char](1) NOT NULL, [time28] [time](3) NOT NULL, [bid28] [smallmoney] NOT NULL, [ask28] [smallmoney] NOT NULL, [transtype29] [char](1) NOT NULL, [time29] [time](3) NOT NULL, [bid29] [smallmoney] NOT NULL, [ask29] [smallmoney] NOT NULL, [transtype30] [char](1) NOT NULL, [time30] [time](3) NOT NULL, [bid30] [smallmoney] NOT NULL, [ask30] [smallmoney] NOT NULL, [transtype31] [char](1) NOT NULL, [time31] [time](3) NOT NULL, [bid31] [smallmoney] NOT NULL, [ask31] [smallmoney] NOT NULL, [transtype32] [char](1) NOT NULL, [time32] [time](3) NOT NULL, [bid32] [smallmoney] NOT NULL, [ask32] [smallmoney] NOT NULL, [transtype33] [char](1) NOT NULL, [time33] [time](3) NOT NULL, [bid33] [smallmoney] NOT NULL, [ask33] [smallmoney] NOT NULL, [transtype34] [char](1) NOT NULL, [time34] [time](3) NOT NULL, [bid34] [smallmoney] NOT NULL, [ask34] [smallmoney] NOT NULL, [transtype35] [char](1) NOT NULL, [time35] [time](3) NOT NULL, [bid35] [smallmoney] NOT NULL, [ask35] [smallmoney] NOT NULL, [transtype36] [char](1) NOT NULL, [time36] [time](3) NOT NULL, [bid36] [smallmoney] NOT NULL, [ask36] [smallmoney] NOT NULL, [transtype37] [char](1) NOT NULL, [time37] [time](3) NOT NULL, [bid37] [smallmoney] NOT NULL, [ask37] [smallmoney] NOT NULL, [transtype38] [char](1) NOT NULL, [time38] [time](3) NOT NULL, [bid38] [smallmoney] NOT NULL, [ask38] [smallmoney] NOT NULL, [transtype39] [char](1) NOT NULL, [time39] [time](3) NOT NULL, [bid39] [smallmoney] NOT NULL, [ask39] [smallmoney] NOT NULL, [transtype40] [char](1) NOT NULL, [time40] [time](3) NOT NULL, [bid40] [smallmoney] NOT NULL, [ask40] [smallmoney] NOT NULL, [transtype41] [char](1) NOT NULL, [time41] [time](3) NOT NULL, [bid41] [smallmoney] NOT NULL, [ask41] [smallmoney] NOT NULL, [transtype42] [char](1) NOT NULL, [time42] [time](3) NOT NULL, [bid42] [smallmoney] NOT NULL, [ask42] [smallmoney] NOT NULL, [transtype43] [char](1) NOT NULL, [time43] [time](3) NOT NULL, [bid43] [smallmoney] NOT NULL, [ask43] [smallmoney] NOT NULL, [transtype44] [char](1) NOT NULL, [time44] [time](3) NOT NULL, [bid44] [smallmoney] NOT NULL, [ask44] [smallmoney] NOT NULL, [transtype45] [char](1) NOT NULL, [time45] [time](3) NOT NULL, [bid45] [smallmoney] NOT NULL, [ask45] [smallmoney] NOT NULL, [transtype46] [char](1) NOT NULL, [time46] [time](3) NOT NULL, [bid46] [smallmoney] NOT NULL, [ask46] [smallmoney] NOT NULL, [transtype47] [char](1) NOT NULL, [time47] [time](3) NOT NULL, [bid47] [smallmoney] NOT NULL, [ask47] [smallmoney] NOT NULL, [transtype48] [char](1) NOT NULL, [time48] [time](3) NOT NULL, [bid48] [smallmoney] NOT NULL, [ask48] [smallmoney] NOT NULL, [transtype49] [char](1) NOT NULL, [time49] [time](3) 
NOT NULL, [bid49] [smallmoney] NOT NULL, [ask49] [smallmoney] NOT NULL, [transtype50] [char](1) NOT NULL, [time50] [time](3) NOT NULL, [bid50] [smallmoney] NOT NULL, [ask50] [smallmoney] NOT NULL, [bid51] [smallmoney] NOT NULL, [ask51] [smallmoney] NOT NULL, [bid52] [smallmoney] NOT NULL, [ask52] [smallmoney] NOT NULL, [bid53] [smallmoney] NOT NULL, [ask53] [smallmoney] NOT NULL, [bid54] [smallmoney] NOT NULL, [ask54] [smallmoney] NOT NULL, [bid55] [smallmoney] NOT NULL, [ask55] [smallmoney] NOT NULL, [bid56] [smallmoney] NOT NULL, [ask56] [smallmoney] NOT NULL, [bid57] [smallmoney] NOT NULL, [ask57] [smallmoney] NOT NULL, [bid58] [smallmoney] NOT NULL, [ask58] [smallmoney] NOT NULL, [bid59] [smallmoney] NOT NULL, [ask59] [smallmoney] NOT NULL, [bid60] [smallmoney] NOT NULL, [ask60] [smallmoney] NOT NULL, [bid61] [smallmoney] NOT NULL, [ask61] [smallmoney] NOT NULL, [bid62] [smallmoney] NOT NULL, [ask62] [smallmoney] NOT NULL, [bid63] [smallmoney] NOT NULL, [ask63] [smallmoney] NOT NULL, [bid64] [smallmoney] NOT NULL, [ask64] [smallmoney] NOT NULL, [bid65] [smallmoney] NOT NULL, [ask65] [smallmoney] NOT NULL, [bid66] [smallmoney] NOT NULL, [ask66] [smallmoney] NOT NULL, [bid67] [smallmoney] NOT NULL, [ask67] [smallmoney] NOT NULL, [bid68] [smallmoney] NOT NULL, [ask68] [smallmoney] NOT NULL, [bid69] [smallmoney] NOT NULL, [ask69] [smallmoney] NOT NULL, [bid70] [smallmoney] NOT NULL, [ask70] [smallmoney] NOT NULL, [bid71] [smallmoney] NOT NULL, [ask71] [smallmoney] NOT NULL, [bid72] [smallmoney] NOT NULL, [ask72] [smallmoney] NOT NULL, [bid73] [smallmoney] NOT NULL, [ask73] [smallmoney] NOT NULL, [bid74] [smallmoney] NOT NULL, [ask74] [smallmoney] NOT NULL, [bid75] [smallmoney] NOT NULL, [ask75] [smallmoney] NOT NULL, [bid76] [smallmoney] NOT NULL, [ask76] [smallmoney] NOT NULL, [bid77] [smallmoney] NOT NULL, [ask77] [smallmoney] NOT NULL, [bid78] [smallmoney] NOT NULL, [ask78] [smallmoney] NOT NULL, [bid79] [smallmoney] NOT NULL, [ask79] [smallmoney] NOT NULL, [bid80] [smallmoney] NOT NULL, [ask80] [smallmoney] NOT NULL, [bid81] [smallmoney] NOT NULL, [ask81] [smallmoney] NOT NULL, [bid82] [smallmoney] NOT NULL, [ask82] [smallmoney] NOT NULL, [bid83] [smallmoney] NOT NULL, [ask83] [smallmoney] NOT NULL, [bid84] [smallmoney] NOT NULL, [ask84] [smallmoney] NOT NULL, [bid85] [smallmoney] NOT NULL, [ask85] [smallmoney] NOT NULL, [bid86] [smallmoney] NOT NULL, [ask86] [smallmoney] NOT NULL, [bid87] [smallmoney] NOT NULL, [ask87] [smallmoney] NOT NULL, [bid88] [smallmoney] NOT NULL, [ask88] [smallmoney] NOT NULL, [bid89] [smallmoney] NOT NULL, [ask89] [smallmoney] NOT NULL, [bid90] [smallmoney] NOT NULL, [ask90] [smallmoney] NOT NULL, [bid91] [smallmoney] NOT NULL, [ask91] [smallmoney] NOT NULL, [bid92] [smallmoney] NOT NULL, [ask92] [smallmoney] NOT NULL, [bid93] [smallmoney] NOT NULL, [ask93] [smallmoney] NOT NULL, [bid94] [smallmoney] NOT NULL, [ask94] [smallmoney] NOT NULL, [bid95] [smallmoney] NOT NULL, [ask95] [smallmoney] NOT NULL, [bid96] [smallmoney] NOT NULL, [ask96] [smallmoney] NOT NULL, [bid97] [smallmoney] NOT NULL, [ask97] [smallmoney] NOT NULL, [bid98] [smallmoney] NOT NULL, [ask98] [smallmoney] NOT NULL, [bid99] [smallmoney] NOT NULL, [ask99] [smallmoney] NOT NULL, [bid100] [smallmoney] NOT NULL, [ask100] [smallmoney] NOT NULL ); CREATE TABLE [testing]( [row_id] [int] NOT NULL, [security_id] [tinyint] NOT NULL, [p_tcount] [int] NOT NULL, [p_value] [bigint] NOT NULL, [trade_vwap] [smallmoney] NOT NULL, [trade_volume] [int] NOT NULL, [initiator] [char](1) NOT NULL, [transtype1] 
[char](1) NOT NULL, [time1] [time](3) NOT NULL, [bid1] [smallmoney] NOT NULL, [ask1] [smallmoney] NOT NULL, [transtype2] [char](1) NOT NULL, [time2] [time](3) NOT NULL, [bid2] [smallmoney] NOT NULL, [ask2] [smallmoney] NOT NULL, [transtype3] [char](1) NOT NULL, [time3] [time](3) NOT NULL, [bid3] [smallmoney] NOT NULL, [ask3] [smallmoney] NOT NULL, [transtype4] [char](1) NOT NULL, [time4] [time](3) NOT NULL, [bid4] [smallmoney] NOT NULL, [ask4] [smallmoney] NOT NULL, [transtype5] [char](1) NOT NULL, [time5] [time](3) NOT NULL, [bid5] [smallmoney] NOT NULL, [ask5] [smallmoney] NOT NULL, [transtype6] [char](1) NOT NULL, [time6] [time](3) NOT NULL, [bid6] [smallmoney] NOT NULL, [ask6] [smallmoney] NOT NULL, [transtype7] [char](1) NOT NULL, [time7] [time](3) NOT NULL, [bid7] [smallmoney] NOT NULL, [ask7] [smallmoney] NOT NULL, [transtype8] [char](1) NOT NULL, [time8] [time](3) NOT NULL, [bid8] [smallmoney] NOT NULL, [ask8] [smallmoney] NOT NULL, [transtype9] [char](1) NOT NULL, [time9] [time](3) NOT NULL, [bid9] [smallmoney] NOT NULL, [ask9] [smallmoney] NOT NULL, [transtype10] [char](1) NOT NULL, [time10] [time](3) NOT NULL, [bid10] [smallmoney] NOT NULL, [ask10] [smallmoney] NOT NULL, [transtype11] [char](1) NOT NULL, [time11] [time](3) NOT NULL, [bid11] [smallmoney] NOT NULL, [ask11] [smallmoney] NOT NULL, [transtype12] [char](1) NOT NULL, [time12] [time](3) NOT NULL, [bid12] [smallmoney] NOT NULL, [ask12] [smallmoney] NOT NULL, [transtype13] [char](1) NOT NULL, [time13] [time](3) NOT NULL, [bid13] [smallmoney] NOT NULL, [ask13] [smallmoney] NOT NULL, [transtype14] [char](1) NOT NULL, [time14] [time](3) NOT NULL, [bid14] [smallmoney] NOT NULL, [ask14] [smallmoney] NOT NULL, [transtype15] [char](1) NOT NULL, [time15] [time](3) NOT NULL, [bid15] [smallmoney] NOT NULL, [ask15] [smallmoney] NOT NULL, [transtype16] [char](1) NOT NULL, [time16] [time](3) NOT NULL, [bid16] [smallmoney] NOT NULL, [ask16] [smallmoney] NOT NULL, [transtype17] [char](1) NOT NULL, [time17] [time](3) NOT NULL, [bid17] [smallmoney] NOT NULL, [ask17] [smallmoney] NOT NULL, [transtype18] [char](1) NOT NULL, [time18] [time](3) NOT NULL, [bid18] [smallmoney] NOT NULL, [ask18] [smallmoney] NOT NULL, [transtype19] [char](1) NOT NULL, [time19] [time](3) NOT NULL, [bid19] [smallmoney] NOT NULL, [ask19] [smallmoney] NOT NULL, [transtype20] [char](1) NOT NULL, [time20] [time](3) NOT NULL, [bid20] [smallmoney] NOT NULL, [ask20] [smallmoney] NOT NULL, [transtype21] [char](1) NOT NULL, [time21] [time](3) NOT NULL, [bid21] [smallmoney] NOT NULL, [ask21] [smallmoney] NOT NULL, [transtype22] [char](1) NOT NULL, [time22] [time](3) NOT NULL, [bid22] [smallmoney] NOT NULL, [ask22] [smallmoney] NOT NULL, [transtype23] [char](1) NOT NULL, [time23] [time](3) NOT NULL, [bid23] [smallmoney] NOT NULL, [ask23] [smallmoney] NOT NULL, [transtype24] [char](1) NOT NULL, [time24] [time](3) NOT NULL, [bid24] [smallmoney] NOT NULL, [ask24] [smallmoney] NOT NULL, [transtype25] [char](1) NOT NULL, [time25] [time](3) NOT NULL, [bid25] [smallmoney] NOT NULL, [ask25] [smallmoney] NOT NULL, [transtype26] [char](1) NOT NULL, [time26] [time](3) NOT NULL, [bid26] [smallmoney] NOT NULL, [ask26] [smallmoney] NOT NULL, [transtype27] [char](1) NOT NULL, [time27] [time](3) NOT NULL, [bid27] [smallmoney] NOT NULL, [ask27] [smallmoney] NOT NULL, [transtype28] [char](1) NOT NULL, [time28] [time](3) NOT NULL, [bid28] [smallmoney] NOT NULL, [ask28] [smallmoney] NOT NULL, [transtype29] [char](1) NOT NULL, [time29] [time](3) NOT NULL, [bid29] [smallmoney] NOT NULL, [ask29] 
[smallmoney] NOT NULL, [transtype30] [char](1) NOT NULL, [time30] [time](3) NOT NULL, [bid30] [smallmoney] NOT NULL, [ask30] [smallmoney] NOT NULL, [transtype31] [char](1) NOT NULL, [time31] [time](3) NOT NULL, [bid31] [smallmoney] NOT NULL, [ask31] [smallmoney] NOT NULL, [transtype32] [char](1) NOT NULL, [time32] [time](3) NOT NULL, [bid32] [smallmoney] NOT NULL, [ask32] [smallmoney] NOT NULL, [transtype33] [char](1) NOT NULL, [time33] [time](3) NOT NULL, [bid33] [smallmoney] NOT NULL, [ask33] [smallmoney] NOT NULL, [transtype34] [char](1) NOT NULL, [time34] [time](3) NOT NULL, [bid34] [smallmoney] NOT NULL, [ask34] [smallmoney] NOT NULL, [transtype35] [char](1) NOT NULL, [time35] [time](3) NOT NULL, [bid35] [smallmoney] NOT NULL, [ask35] [smallmoney] NOT NULL, [transtype36] [char](1) NOT NULL, [time36] [time](3) NOT NULL, [bid36] [smallmoney] NOT NULL, [ask36] [smallmoney] NOT NULL, [transtype37] [char](1) NOT NULL, [time37] [time](3) NOT NULL, [bid37] [smallmoney] NOT NULL, [ask37] [smallmoney] NOT NULL, [transtype38] [char](1) NOT NULL, [time38] [time](3) NOT NULL, [bid38] [smallmoney] NOT NULL, [ask38] [smallmoney] NOT NULL, [transtype39] [char](1) NOT NULL, [time39] [time](3) NOT NULL, [bid39] [smallmoney] NOT NULL, [ask39] [smallmoney] NOT NULL, [transtype40] [char](1) NOT NULL, [time40] [time](3) NOT NULL, [bid40] [smallmoney] NOT NULL, [ask40] [smallmoney] NOT NULL, [transtype41] [char](1) NOT NULL, [time41] [time](3) NOT NULL, [bid41] [smallmoney] NOT NULL, [ask41] [smallmoney] NOT NULL, [transtype42] [char](1) NOT NULL, [time42] [time](3) NOT NULL, [bid42] [smallmoney] NOT NULL, [ask42] [smallmoney] NOT NULL, [transtype43] [char](1) NOT NULL, [time43] [time](3) NOT NULL, [bid43] [smallmoney] NOT NULL, [ask43] [smallmoney] NOT NULL, [transtype44] [char](1) NOT NULL, [time44] [time](3) NOT NULL, [bid44] [smallmoney] NOT NULL, [ask44] [smallmoney] NOT NULL, [transtype45] [char](1) NOT NULL, [time45] [time](3) NOT NULL, [bid45] [smallmoney] NOT NULL, [ask45] [smallmoney] NOT NULL, [transtype46] [char](1) NOT NULL, [time46] [time](3) NOT NULL, [bid46] [smallmoney] NOT NULL, [ask46] [smallmoney] NOT NULL, [transtype47] [char](1) NOT NULL, [time47] [time](3) NOT NULL, [bid47] [smallmoney] NOT NULL, [ask47] [smallmoney] NOT NULL, [transtype48] [char](1) NOT NULL, [time48] [time](3) NOT NULL, [bid48] [smallmoney] NOT NULL, [ask48] [smallmoney] NOT NULL, [transtype49] [char](1) NOT NULL, [time49] [time](3) NOT NULL, [bid49] [smallmoney] NOT NULL, [ask49] [smallmoney] NOT NULL, [transtype50] [char](1) NOT NULL, [time50] [time](3) NOT NULL, [bid50] [smallmoney] NOT NULL, [ask50] [smallmoney] NOT NULL ); -- Optionally add primary keys ALTER TABLE training ADD CONSTRAINT PK_training PRIMARY KEY CLUSTERED (row_id); ALTER TABLE testing ADD CONSTRAINT PK_testing PRIMARY KEY CLUSTERED (row_id); Finally, here's a quick sanity check: SELECT COUNT(*) FROM training -- 754018 SELECT SUM(p_value) FROM training -- 3542062365303403 SELECT COUNT(*) FROM testing -- 50000 SELECT SUM(p_value) FROM testing -- 155032668754821 Feel free to post in this thread any other tips you have while working with the dataset using a database.",6,silver,1 Comment,Fri Nov 11 2011 19:34:09 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1032,/competitions/AlgorithmicTradingChallenge,None
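On the same theme of keeping column types small, here is a minimal sketch of loading the training file with pandas and explicit narrow dtypes (my own illustration, not part of the post; the "training.csv" filename and float32 as a stand-in for smallmoney are assumptions):

import pandas as pd

# Narrow dtypes for the fixed columns, mirroring the SQL schema above.
dtypes = {"row_id": "int32", "security_id": "uint8", "p_tcount": "int32",
          "p_value": "int64", "trade_vwap": "float32", "trade_volume": "int32",
          "initiator": "category"}
# ...and for the repeating columns; time1..time50 are left as strings.
for i in range(1, 101):
    if i <= 50:
        dtypes["transtype%d" % i] = "category"
    dtypes["bid%d" % i] = "float32"
    dtypes["ask%d" % i] = "float32"

train = pd.read_csv("training.csv", dtype=dtypes)
print(len(train), train["p_value"].sum())  # cross-check against the SQL sanity queries

(The test file stops at bid50/ask50, so for it you would build the dict only up to 50.)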
/ssgr22599,DaysInHospital and Claims.LengthOfStay,"The LengthOfStay in the Claims table does not agree with the DaysInHospital table. Example: MemberId=10800005. I have looked at the suppression flag, but that does not seem to be it... More broadly, if you look at members who 1) had between 1 and 6 days as per DaysInHospital_Y3 2) AND had NO suppressed records, then sum(Claims.LengthOfStay) agrees with the DaysInHospital_Y3 field in only 3481 out of 6052 records. This does not inspire a lot of confidence in the data",0,None,2 ,Fri Nov 11 2011 20:12:57 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1033,/competitions/hhp,None /atreides,How to send messages to other members,"Just wondering, how do you contact other members here?",0,None,3 ,Fri Nov 11 2011 22:20:39 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1035,None,None /chefele,404 Page Not Found ,"Is this ""Page Not Found"" page new? If so, I really like it! www.kaggle.com/K",2,bronze,1 Comment,Fri Nov 11 2011 23:16:32 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1036,None,None /zzgorme,car auction expert available,"Hi, I have a friend who has been working at, running, and buying from car auctions for decades in Australia. He is happy to give any advice people want for this competition. He can also provide auction lists and links for the ones he goes to, so people can suggest cars for him to buy, and he might work out a deal so they get a percentage of the profits. He also has contacts with large auctions that buy a lot of cars from car dealers and then resell in auctions to the public and in dealer auctions; they might well pay good money for software that improves their business. Usually they have several million dollars' worth of cars on hand, so it is worth them paying a good price for something that works. Often he might point out mistakes before the auction, like some car models being prone to problems. A good rule of thumb is that if you see a car model a lot on the road, then that is a good one to buy in an auction - it will sell well; also check the sites online that show typical car problems. Another factor is that family cars are better sellers than cars for single people, like sports cars. Some car colors sell much better than others; I think green is the poorest seller and white the best. Think of a car yard like a supermarket where the shelf space has a value compared to the rent they pay for the store: the yard needs to move a car in a particular space often enough to pay the rent. So a popular car might sell faster, but with competition they pay more for them. A sports car might sell more rarely, but they make more profit; a young guy might not worry about defects in a fast car that would turn off a family-car buyer, so this affects the auction price. Young girls might buy certain models, but their fathers might do the actual buying. The trade-in price also affects all this: if a family car sale usually gets a good trade-in, then they might pay more at an auction because they usually get another good car to sell from it. Some car yards try to buy just from the public, because auctions tend to have more problem cars; dealers get rid of the bad sellers and more problem-prone models that way. Expect more hidden problems in auctions. Some cars in auctions are from insurance companies, such as those damaged during a theft. Some cars are on consignment in an auction from the public or a dealer; others are bought by the auction house. People will buy cars at an online auction like eBay, but usually only cheap ones, without inspecting them.
Often the car yards selling the cheapest cars make the most money. There are also psychological factors in an auction: the first cars often go much cheaper because people arrive late or aren't confident in bidding; then, when they realize they missed out, they bid much better after 10 minutes or so. Towards the end more people have bought cars, so there may be cheaper cars then, but people waiting until near the end might be afraid of missing out and start paying more. If the weather is bad, there is something better to go to like a sports game, the time of day is inconvenient, or something good is on TV, then more or fewer people might go to the auction. Sometimes the better cars might be sold before the auction; people come for a look and make an offer to the salesmen there. Most cars can still have the odometer changed, even those with an electronic readout. A humid day might make the exhaust look more smokey when the cars come past, so people think they are burning oil and don't bid. Sometimes the shapes of cars might change from manufacturing advances; at one point in the cars made in the 90s I think there was a sudden change in the shape of the panelling. At auctions they would refer to cars from this point onwards as the new shape. Some cars have mechanical problems that occur more regularly, and dealers might pay less for that car - for example, gearbox problems. More modern cars have this much less because of robotic assembly and being designed with computer programs, so this is less of a factor now. If you have any questions you can message me or post them in the forum, and I can ask the dealer for you or put you directly in contact with him; he isn't that tech-savvy though. He also wants to set up an online car auction using software with live video feeds at auctions, so he might pay good money for some solutions involving that.",0,None,1 Comment,Sat Nov 12 2011 03:07:24 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1037,/competitions/DontGetKicked,None /bseven,What is the 2-digit number after AUC on leaderboard?,"Also, is the AUC the percentage of correct predictions?",0,None,4 ,Sat Nov 12 2011 14:32:03 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1038,/competitions/GiveMeSomeCredit,None /passtheroc,"ask50==ask51, bid50==bid51; and some timing questions","At least in the sample I'm working with now, ask50==ask51 and bid50==bid51. What happens, if anything, between 50 and 51? Also, some timing questions: frequently, the time value doesn't change from event to event pre-shock, e.g. time36==time37==time38. What is happening here? Given the uneven real-time deltas (time_i - time_i-1) pre-shock, how should we think about the timing post-shock? Are the response measurements equally spaced in time, even if the pre-shock measurements aren't? It looks like in some cases the market does not fully recover within the 50 response time ticks.
Is the phenomenon you are interested in the market's resiliency within a short time window?",1,bronze,3 ,Sun Nov 13 2011 04:49:33 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1039,/competitions/AlgorithmicTradingChallenge,None /dirknbr,RMSE,"Is the RMSE calculated as sqrt(mean((y-p)^2)), where y is the actual value and p is the prediction, or as sqrt(mean(mean((y-p)^2))), where we take the mean of each row first?",0,None,8 ,Sun Nov 13 2011 10:40:15 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1040,/competitions/AlgorithmicTradingChallenge,41st /salimali,Identical Events,"What does it mean when everything in t1, t2, t3, etc. is identical, including the time stamp (e.g. row_id 5666)? Are these 50 different events that just all happened at the same time?",0,None,2 ,Sun Nov 13 2011 13:32:15 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1041,/competitions/AlgorithmicTradingChallenge,None /blindape,Best CPU for performance,"Since I'm using heavy algorithms and cross-validation, I feel I need to upgrade my old processor. In the R FAQ I found this: 2.23 Why does R never use more than 50% of my CPU? This is a misreading of Windows' confusing Task Manager. R's computation is single-threaded, and so it cannot use more than one CPU. What the task manager shows is not the usage in CPUs but the usage as a percentage of the apparent total number of CPUs. We say `apparent' as it treats so-called `hyper-threaded' CPUs such as many Pentium 4s as two CPUs even though there is only one physical CPU. Hyper-threading has been re-introduced for Intel i3/i5/i7 CPUs and some Xeons: these will usually be reported as 4 or more CPUs and so R will be shown as using 25% or less. You can see how many `CPU's are assumed by looking at the number of graphs of `CPU Usage History' on the `Performance' tab of the Windows Task manager. So probably a CPU with fewer cores but a higher frequency will be better for this processing than a 4-8 core one. What is, in your opinion, the best strategy for upgrading a CPU?",0,None,3 ,Sun Nov 13 2011 13:33:42 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1042,/competitions/PhotoQualityPrediction,22nd /alecstephenson,Benchmark Scores,"Can you please recheck the benchmark scores on the leaderboard - the linear benchmark gives 0.95498, for example (which is why competitors have been on that exact value). Thanks.",0,None,1 Comment,Mon Nov 14 2011 08:42:42 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1043,/competitions/AlgorithmicTradingChallenge,11th /caius22441,Matlab or C code for Gini,"Hi, Does anyone have Matlab or C code they can share to calculate the evaluation criterion (the Gini coefficient)? Thanks, Caius",0,None,19 ,Tue Nov 15 2011 02:09:36 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1044,/competitions/DontGetKicked,282nd
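Not Matlab or C, but for reference here is a short Python sketch of the normalized Gini computation commonly used for competitions like this one (my own illustration; the official scorer may differ in details such as tie-breaking):

import numpy as np

def gini(actual, pred):
    # Sort by prediction (descending, stable), then compare the cumulative
    # share of positives against the diagonal of a random ordering.
    actual = np.asarray(actual, dtype=float)
    order = np.argsort(-np.asarray(pred, dtype=float), kind="mergesort")
    cum_share = np.cumsum(actual[order]) / actual.sum()
    n = len(actual)
    return (cum_share - np.arange(1, n + 1) / n).sum() / n

def gini_normalized(actual, pred):
    # Scale by the Gini of a perfect ranking so that 1.0 is a perfect score.
    return gini(actual, pred) / gini(actual, actual)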
/alecstephenson,Global Maximum Members Per Team?,"I'd like to start a discussion among kagglers on whether in future there should be a maximum-members-per-team rule across all comps. The HHP has 8 as a maximum. I was thinking of this because a team has just made a first submission in the Trading Challenge with 10 members. As kaggle is expanding, it won't be too long before we could be seeing teams with 50-100 people in them. Is this a good thing or not, for competitors and competition hosts? On the one hand it may mean more competitors and more collaboration, but on the other it may put off individuals from attempting to compete with large groups. What do you think?",1,None,2 ,Tue Nov 15 2011 06:25:40 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1046,None,None /temp19337,Anyone interested in trying to combine results?,We rank 19th on the leaderboard. We are looking for a group that ranks below 50th-60th (or has a result > 0.8605) and is willing to try to combine results. Anyone? (please leave your email if interested),1,None,1 Comment,Wed Nov 16 2011 20:33:24 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1047,/competitions/GiveMeSomeCredit,6th /maxpowers,Capped Variances for Training Set? ,What sorts of capped variances are you guys getting for the training data compared to the test data?,0,None,14 ,Fri Nov 18 2011 04:01:32 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1048,/competitions/PhotoQualityPrediction,189th /caius22441,question about missing entries in data,"Apologies in advance if this question has already been asked before. I am a little confused about the missing entries in the data. What is the difference between an entry which shows the word ""NULL"" and one which is just left blank (an empty cell)? Are both of these to be treated as a ""missing entry""? Some of the features (columns) have both ""NULL"" and blank cells. What is the definition of a ""missing entry""? Thanks, Caius",0,None,4 ,Fri Nov 18 2011 20:48:02 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1053,/competitions/DontGetKicked,282nd /bigcat0,Data Science Words of Wisdom?,"Howdy. Noob here, trying to get in on the data science/hacking scene. I'm fairly well versed in the theory behind a lot of machine learning techniques, but one thing I've learned by doing my 1st competition (and playing with other problems) is that the cutest math/models hardly ever get you good results in the real world. Anywho, I was looking for random bits and pieces of advice from the veterans in here. How do you strike a balance between 'thinking' (of sophisticated models that might get you nowhere) and 'doing' (very simple things that get you a good result quickly)? To give you an example, for the Photo Prediction Challenge I came up with what I thought was a very slick location-based spam-filtering type model. Guess what? It was an epic fail on the actual data, haha: the model did worse than the constant benchmark for that competition... I'd like to believe that, as a sport, Data Science is more about just 'trying things'. Perhaps I'm being naive? At any rate, I'm eager to hear your thoughts on this, admittedly philosophical, post. Thanks in advance.",0,None,3 ,Sat Nov 19 2011 00:09:35 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1055,None,None /thomaslotze,"Some more background on IRT, LMER, and the starting benchmark","The benchmark we provided was generated using R; the source code is available as benchmark_lmer.r in the data section. It uses a pretty standard application of [Link]:http://en.wikipedia.org/wiki/Item_response_theory, creating a separate model for each track, where each student has an ability and each question has a difficulty. Then the probability of the user getting a question correct is simply the inverse logit of (ability - difficulty).
These abilities and difficulties are estimated using the lmer function from the lme4 package, but you could also use other ways to try to find the parameters. IRT is the basis of most student assessment today, especially as many tests move to being computer-based (allowing for adaptively selecting questions with appropriate difficulty in response to the students' previous answers). Figuring out a better set of features to use can definitely result in a competitive method, and using something more than a single parameter per question (either having multiple ability estimates, or adding a guessing or discrimination parameter) can also give you a better fit. But I definitely don't think that IRT is the only (or necessarily the best) way to approach the problem! There are a host of other methods I think are worth exploring. To name a few: * Clustering the questions into more meaningful and useful groups based on students' responses (rather than just using the manually-entered tags) would be useful just on its own, and could also be a part of improving other methods (such as IRT itself). * Specifically, looking at students' recent question history and/or using recommender systems (as [Link]:http://cacm.acm.org/blogs/blog-cacm/101489-massive-scale-data-mining-for-education/fulltext last year) to find similar questions and similar users might work very well. * Finding questions or users who don't seem to be acting in the same way as others in the cluster (like users who aren't taking the questions seriously, or questions which aren't strongly related to the subject) and removing these outliers from the training data. * Coming up with a model for proficiency, either following in [Link]:http://david-hu.com/2011/11/02/how-khan-academy-is-using-machine-learning-to-assess-student-mastery.html or considering something like [Link]:http://www.springerlink.com/content/m50h664760426738/. Again, I'm extremely excited about the possibilities here. Good luck to all!",2,bronze,5 ,Sat Nov 19 2011 01:02:57 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1057,/competitions/WhatDoYouKnow,None
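For anyone who wants the one-parameter model spelled out in code, here is a minimal sketch in Python (mine, not the benchmark_lmer.r source; the stochastic-gradient fit, learning rate, and epoch count are illustrative assumptions rather than what lmer actually does):

import numpy as np

def p_correct(ability, difficulty):
    # Rasch / 1PL IRT: probability is the inverse logit of (ability - difficulty).
    return 1.0 / (1.0 + np.exp(-(ability - difficulty)))

def fit_1pl(student, question, correct, n_students, n_questions, lr=0.05, epochs=50):
    # Crude gradient ascent on the Bernoulli log-likelihood over
    # (student index, question index, 0/1 outcome) triples.
    ability, difficulty = np.zeros(n_students), np.zeros(n_questions)
    for _ in range(epochs):
        residual = correct - p_correct(ability[student], difficulty[question])
        np.add.at(ability, student, lr * residual)       # d(loglik)/d(ability) = y - p
        np.add.at(difficulty, question, -lr * residual)  # d(loglik)/d(difficulty) = -(y - p)
    return ability, difficulty

Here student and question are integer index arrays and correct is a 0/1 array, one entry per answered question.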
/jhoward,Updated progress prize winners' papers available,"The progress prize winners have now updated their papers to respond to the judges' comments ([Link]:http://www.heritagehealthprize.com/pages/team). The papers are available at [Link]:http://www.heritagehealthprize.com/c/hhp/Leaderboard/milestone1. Many thanks to the prize winners for their hard work, to the judges for their thoughtful reviews, and to all competitors who assisted in reviewing the papers. We have already seen the top 50 placeholders in the competition improve dramatically since the release of the original papers - it's great to see how the progress prize winners' ideas are being utilised by other Kagglers.",0,None,10 ,Sat Nov 19 2011 01:57:56 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1058,/competitions/hhp,None /miranlevar,New test set,"I think something is wrong with the new test set: it returns substantially worse results (a difference of 0.4+ RMSE) than cross-validation does. The best methods consequently perform the same as, or even worse than, a random submission or a submission of all ones.",0,None,2 ,Sat Nov 19 2011 23:01:22 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1060,/competitions/CitizensProposals,30th /stevenmarkford,Submissions Explained,"I want to generate predictions for submission, but I don't know where the set of data I am required to predict is. Where can I find it? Many thanks, Steven Mark Ford",0,None,11 ,Sat Nov 19 2011 23:55:55 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1061,/competitions/WhatDoYouKnow,153rd /jacobj,A better benchmark,"If you can't get an RMSE equal to 0.85067 or lower, here's a tip: simply repeat bid50 and ask50 for each row, for your whole entry. No change at all. You will achieve this RMSE. This should suggest quite a bit as to how to go about the rest of the competition too.",0,None,3 ,Sun Nov 20 2011 06:53:34 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1063,/competitions/AlgorithmicTradingChallenge,39th /ahassaine,Users ranking method?,"Hi all, I don't know if this has been mentioned somewhere else, but I have seen in the recent talk by Jeremy that you now have an algorithm for ranking kagglers. [Link]:http://www.kaggle.com/users I was wondering what method is used for that purpose. Thanks, Ali",4,bronze,138 ,Sun Nov 20 2011 07:08:04 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1064,None,None /jzach10891,Explanation of Game Types,"I was wondering if it was possible to get more information on what the different game types are, and how they are played. For instance, how does a competition game work? Does everybody get a chance to submit their answer, or do other people get locked out once the correct answer is submitted? And then everybody who hasn't answered gets a ""timeout"" or ""skipped""? Thanks, Zach",0,None,2 ,Sun Nov 20 2011 20:44:56 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1065,/competitions/WhatDoYouKnow,None /thrasibule,RMSE clarification,"You say the RMSEs are computed for bid and ask separately, but you don't explain how you combine them afterwards. And then you say: ""The winning model will be the one with the lowest cumulative RMSE across the entire prediction set."" Cumulative means there is a sum going on, but that's clearly not what you're computing, so I assume you mean ""the lowest average RMSE across the prediction set"". So can we just get a formula for how you compute it? To make things precise, let B be the matrix of actual bids and Bpred the matrix of predicted bids, and define A and Apred similarly. We have N observations, so all matrices have dimensions N by 50. The evaluation mentions the RMSE will be computed separately for the bid and ask, so I assume that for observation i (in LaTeX notation): $RMSE_i = \frac{1}{2}\sqrt{\frac{1}{50}\sum_{j=1}^{50} (B_{i,j}-Bpred_{i,j})^2} + \frac{1}{2}\sqrt{\frac{1}{50}\sum_{j=1}^{50} (A_{i,j}-Apred_{i,j})^2}$. Then do we take the average over all observations, with $RMSE = \frac{1}{N}\sum_{i=1}^{N} RMSE_i$?
Or is it that the RMSE is computed at each time slice for bids and asks separately, with something like $RMSE_j = \frac{1}{2}\sqrt{\frac{1}{N}\sum_{i=1}^{N} (B_{i,j}-Bpred_{i,j})^2} + \frac{1}{2}\sqrt{\frac{1}{N}\sum_{i=1}^{N} (A_{i,j}-Apred_{i,j})^2}$ and $RMSE = \frac{1}{50}\sum_{j=1}^{50} RMSE_j$? They won't be the same, due to the concavity of the square root.",0,None,18 ,Sun Nov 20 2011 20:56:58 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1066,/competitions/AlgorithmicTradingChallenge,10th
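The two candidate definitions above are easy to compare numerically. A minimal sketch (mine, not the official scorer), with B, Bpred, A, Apred as N-by-50 NumPy arrays:

import numpy as np

def rmse_per_observation(B, Bpred, A, Apred):
    # First definition: RMSE over each row, split half/half between bid and ask,
    # then averaged over the N rows.
    per_row = 0.5 * np.sqrt(((B - Bpred) ** 2).mean(axis=1)) + 0.5 * np.sqrt(((A - Apred) ** 2).mean(axis=1))
    return per_row.mean()

def rmse_per_time_slice(B, Bpred, A, Apred):
    # Second definition: RMSE over each column (time slice), then averaged over the 50 slices.
    per_col = 0.5 * np.sqrt(((B - Bpred) ** 2).mean(axis=0)) + 0.5 * np.sqrt(((A - Apred) ** 2).mean(axis=0))
    return per_col.mean()

As the post notes, the two generally differ: the square root is concave, so the order in which you average and take roots matters (Jensen's inequality).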
/clueless0,Congratulations!,"Just wanted to say congratulations to all the winners... And especially Bo for the surprise down-to-the-wire ending, 0.00003! I look forward to seeing you all in future competitions, and one of these days I'll have enough time to give you folks a run for your money :)",6,bronze,16 ,Mon Nov 21 2011 04:17:20 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1068,/competitions/PhotoQualityPrediction,47th /tr00don,Interesting...,"There is a clearly defined data category, representing approximately 8.6% of the training set, where the probability of a car being a ""bad buy"" is 4%-5% rather than the average 12.3%, i.e., it is approximately 3x lower. Based on the information provided, it should be possible in theory to determine whether a new car falls into this special category prior to making the transaction. Attached are the test data entries that fall into the special category mentioned above. I wonder if the kaggle team would care to confirm that only 4%-5% of these test entries are ""bad buys"". The method I used can help determine (a) whether or not a new car falls into the special category; and (b) that the probability of a ""bad buy"" in this category is 3x lower than that for the complete training data set. Should the sponsor also be interested in this approach, please let me know. Cheers! [Link]:https://storage.googleapis.com/kaggle-forum-message-attachments/1528/test-special-category-RefID.txt",0,None,3 ,Mon Nov 21 2011 06:47:21 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1069,/competitions/DontGetKicked,271st /stellar,defn of liquidity shock,"Hi Admin, in an earlier post you defined a liquidity shock as: ""we define a liquidity shock to be a trade that results in a new inside bid/ask spread where the trade and quote message timestamps are identical."" Is this a necessary and sufficient condition for you to classify a state as a liquidity shock? Do you also classify this condition as a shock: if the bid/ask spread remains constant, but the bid price and ask price change, and the T and Q timestamps are identical? thx",0,None,3 ,Mon Nov 21 2011 07:12:39 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1070,/competitions/AlgorithmicTradingChallenge,None /arthurb,deactivated_at > round_started_at,There are a small number of entries (357 in the training set) where the round_started_at datetime comes after the deactivated_at timestamp. Is this a glitch in the data or does it genuinely indicate something particular?,0,None,1 Comment,Mon Nov 21 2011 14:22:49 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1072,/competitions/WhatDoYouKnow,37th /matthewroos,What's the winning method?,"Maybe I'm just too impatient, but shouldn't we have heard from Matt C about the winning method by now?",0,None,2 ,Mon Nov 21 2011 15:27:12 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1073,/competitions/ClaimPredictionChallenge,30th /gabiteodoru,Question about quote events and prediction timestamps,"The Challenge Background states that ""A quote event occurs whenever the best bid or the ask price is updated"", and the figure indicates that a quote event appears at a change in bid or ask price. However, looking at the training data provided, first row, the first few events I see are (event type, timestamp, bid, ask):
'Q' '08:00:20.799' '2225' '2314.5'
'Q' '08:00:20.799' '2225' '2314.5'
'Q' '08:00:20.799' '2225' '2314.5'
'Q' '08:00:20.799' '2225' '2314.5'
'Q' '08:00:20.799' '2225' '2314.5'
'Q' '08:00:20.799' '2225' '2373.5'
'Q' '08:00:20.801' '2393' '2394'
So there are a good number of quote events with identical time, bid and ask price. What does that mean? Why is it an event? Furthermore, from what I understand, for the prediction task the timestamps aren't given, and from what I see in the training data, I can very well expect that all bid/ask prices at t=51-100 might be identical, as they are sampled at identical times. Or they could be sampled hours apart. Am I missing something? The lack of a prediction timestamp appears to me to render the problem useless. Thanks!",1,bronze,2 ,Mon Nov 21 2011 19:37:53 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1075,/competitions/AlgorithmicTradingChallenge,84th /nlubchenco,Choosing an Evaluation Metric? (resource suggestions?),"I've noticed that Kaggle competitions have used a number of different evaluation metrics (AUC, Binomial Deviance, Gini etc.) and was wondering if anyone could direct me to a good source for choosing an evaluation metric. Many of the contests have been attempting to predict whether an observation was in or out of a certain group, but I'm currently working on a project that is attempting to predict the efficacy of antimalarial drugs (so rather than being binary, the key variable is a percentage). Any suggestions about evaluation metrics in general or my project specifically would be appreciated. Thanks.",0,None,4 ,Mon Nov 21 2011 22:51:27 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1077,None,None /huuh24285,philosophy behind kaggle,"isn't the prize money too low?? a company gets to indirectly employ hundreds of top minds for months, for a mere $10K??? Instead, if they had to hire even 1 data scientist, it would cost them $50k/year. It took me 6 years (BS + MS) and 1000s of $ to become qualified for a statistics / machine learning data analyst position, and then what happens? Some super-brilliant folks who already have nice salaries decide to moonlight, band together @ Kaggle, and in the words of South Park, 'They took our jobsss'. Professors / Students should stay that way - either teach or learn.
But instead, once they start applying their immense knowledge for commercial purposes, a single tenured professor can put the livelihood of 10 regular BS/MS employees in trouble / on Unemployment Insurance. To the gifted folks here: you are extremely intelligent - the top 1% of the world in brains. If you abuse it for money, you may become the other top 1% whom no one likes these days (i.e. #Occupy Kaggle). Protect your academic integrity and independence by not selling the final model. At least, not for such extremely low prices - that is just #@!$. Cos next thing you know, we might have bidding wars - the guy who came in 4th / 5th willing to sell his model for a fraction of the original prize money. And boom, enter 'cut-throat society'! You may have the brains and savings to survive, but think of the lesser souls. And don't forget, you weren't always this brilliant. And don't forget Newton's 3rd law (aka karma). P.S.: I like that Kaggle is bringing together geniuses for solving medical problems, dark matter etc. Although who really cares about dark matter? I can't even see it :) .. just kidding",0,None,9 ,Tue Nov 22 2011 17:32:46 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1079,None,None /bhm20038,Subtrack,What is the significance of the subtrack? How does it relate to the track or tracks? Thanks.,0,None,2 ,Tue Nov 22 2011 19:00:21 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1080,/competitions/WhatDoYouKnow,47th /robertlachlan,Purpose of valid_test and valid_training files?,I couldn't find the answer to this question on the data page or in the forums: what is the purpose of the valid_training.csv and valid_test.csv files?,0,None,16 ,Tue Nov 22 2011 23:33:23 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1083,/competitions/WhatDoYouKnow,None /kilotrader,Understanding the datasets,"The training data has some column headers that are making me scratch my head: p_tcount, p_value, trade_vwap (maybe the volume-weighted average price), initiator (S - Sell?, B - Buy?)",0,None,1 Comment,Wed Nov 23 2011 01:58:48 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1084,/competitions/AlgorithmicTradingChallenge,None /stevejackson,30% - 70% split,Is the test data split 30% - 70% between public testing and private testing in a totally random way? I.e. does each of the 50000 records have a 30% probability of being in the public testing set and a 70% probability of being in the private testing set?,0,None,3 ,Wed Nov 23 2011 04:33:37 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1085,/competitions/AlgorithmicTradingChallenge,54th /gfm23355,MMRCurrent* Variables,"I am a new Kaggle member, so please forgive me if this has already been discussed and I just missed it. I noticed that all of the MMRCurrent* variables are defined as the acquisition price for the vehicle as of the current day. Is Carvana sure they want these variables available to us for modeling purposes? None of that information would be available in a production environment, since that is data gathered after the target event (purchase). If a model were to be put into production, these variables would not be available, and any model proposed to Carvana that uses one of these variables would not be much help, especially if the variable is a significant contributor to the overall model.
Hopefully that makes sense, I can explain further if needed.",0,None,1 Comment,Wed Nov 23 2011 06:12:33 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1087,/competitions/DontGetKicked,402nd /dirknbr,Individuals with many questions,"Here are some users with more than 5000 questions answered; how likely is that?
group_ Obs user_id name taken _TYPE_ _FREQ_ correct
35346 23819 2 1 7 5357 0.66474
46422 31228 2 1 7 6796 0.72822
65210 43889 2 1 7 6271 0.77452
127928 86139 2 1 7 5490 0.85847
140900 94877 1 1 7 5862 0.73490
142611 96046 1 1 7 5547 0.76654
177187 119351 1 1 7 5647 0.81264
183699 123742 2 1 7 5360 0.82500
198189 133472 2 1 7 8184 0.95308
207268 139564 2 1 7 7051 0.26422
252453 169858 2 1 7 5311 0.53191",1,bronze,2 ,Thu Nov 24 2011 12:19:19 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1090,/competitions/WhatDoYouKnow,49th
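A rough pandas sketch of how a per-user summary like the one above could be produced (the file name and the user_id/correct column names are assumptions about the training data):

import pandas as pd

# Hypothetical file/column names; adjust to the actual training data.
df = pd.read_csv("training.csv")
stats = df.groupby("user_id")["correct"].agg(taken="size", accuracy="mean")
print(stats[stats["taken"] > 5000].sort_values("accuracy"))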
/andrewprayle,What methods have you been using to optimise a model with just the training set?,"Dear all, I've got stumped at about the 0.86 mark. I've been optimising models using the ROC area under the curve on just the training set, and I think I have found that I've hit an ""overfitting"" type issue, where my models improve to an AUC of about 0.94, but when I submit these models they do considerably worse than my current best one (which is a randomForest). So - what techniques have you been using to assess models prior to picking a ""best"" one for submission? Andrew",0,None,4 ,Thu Nov 24 2011 19:55:26 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1091,/competitions/GiveMeSomeCredit,222nd /riadhdridi,"Question about languages, methods and algorithms","Hi, I'm new to this site. Can someone please tell me whether we must design our own algorithms and implement them in any language, or whether we can use those already implemented in existing software and packages (R, SAS, Matlab...)? Thank you for your help, I don't speak English very well.",0,None,2 ,Thu Nov 24 2011 23:16:10 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1092,None,None /vsumpter,Chief Algorithms Officer needed in Chicago,"What venues would be the best place to meet candidates for a Chief Algorithms Officer role in Chicago? Accretive Health is looking for creatively brilliant professionals to leverage computational science, statistics and operational research disciplines in the pursuit of vastly better health outcomes. We have an extremely large multi-dimensional dataset and are looking for excellent researchers who have a burning desire to apply their capabilities in the pursuit of tangible results. Thanks in advance for your time and perspective!",0,None,2 ,Fri Nov 25 2011 16:25:56 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1093,/competitions/hhp,None /hshshs0,Building the model,"I have a question regarding building the model. I'm thinking of using the OneR algorithm and I'm wondering if that would work fine on this data set. Also, what kinds of algorithms have people been using to build a model for this data set? Thanks,",0,None,1 Comment,Fri Nov 25 2011 23:00:52 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1094,/competitions/GiveMeSomeCredit,None /qqlara,Can I use this competition for a term paper?,I'm taking a data analytics class this semester for which there is a term paper. I would like to use the problem presented in this competition as the subject of the paper. Is that okay? I would definitely submit my final model to kaggle.,0,None,1 Comment,Sat Nov 26 2011 02:29:26 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1095,/competitions/WhatDoYouKnow,None /benhamner,Data Analysis Tools and Methods,"In light of this [Link]:http://blog.kaggle.com/2011/11/27/kagglers-favorite-tools, I wanted to kick off a discussion on the tools and methods people use to tackle predictive analytics problems. My toolset has primarily consisted of Python and Matlab. I use Python mainly to preprocess the data and convert it to a format that is straightforward to use with Matlab. In Matlab, I explore and visualize the data, and then develop, run, and test the predictive analytics algorithms. My first approach is to develop a quick benchmark on the dataset (for example, if it's a standard multiclass classification problem, throwing all the features into a Random Forest), and score that benchmark using the training set. To score the benchmark, I use out-of-bag predictions, k-fold cross-validation, or internal training / validation splits as appropriate. At that point, I iterate rapidly on the benchmark by engineering features that may be useful for the problem domain, and evaluating/optimizing various supervised machine learning algorithms on the dataset. For some problems, I've also touched a variety of other tools, including Excel, R, PostgreSQL, C, Weka, sofia-ml, scipy, and theano. Additionally I use the command-line / Matlab interfaces to packages such as SVM-Light and LIBSVM heavily. My main grievance is that I've not found a good tool for interactive data visualization, which would make it easier to develop insights on the data that would help increase predictive performance. What are your favorite tools and how do you use them? What is difficult or missing in them that would make generating predictive models easier?",0,None,11 ,Sun Nov 27 2011 18:36:53 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1099,None,None
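As a rough illustration of the quick-benchmark step described above, a minimal scikit-learn sketch (the file name, the "target" column, and the assumption of numeric features are all placeholders, not part of the original post):

import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Placeholder file/column names -- substitute the real dataset.
df = pd.read_csv("train.csv")
X, y = df.drop(columns=["target"]), df["target"]

# Out-of-bag predictions give a quick score without a separate split,
# assuming the features are already numeric.
rf = RandomForestClassifier(n_estimators=200, oob_score=True, n_jobs=-1)
rf.fit(X, y)
print("OOB accuracy:", rf.oob_score_)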
/makis23760,Multiple liquidity shocks per line of training / test sets?,If there are multiple liquidity shocks per line of the sets given (defined as a Quote following a Trade event with the same timestamp and increased spread) should we ignore any of those shocks preceding the t49-t50 event in the prediction of the bid ask values from t51 onwards? Thanks,0,None,1 Comment,Sun Nov 27 2011 20:18:09 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1100,/competitions/AlgorithmicTradingChallenge,98th /smcinerney,Who's using R? Python? SAS? Stata? SQL? and what else?,"I'm trying to decide which choice of language is most suited to use on this: at first glance R strikes me as best. But it would be neat to apply Python timeseries. One of my partners is learning timeseries in Python, and the other is experienced in R. What are people using? PS: R's easy-to-use graphing to visualize arbitrary slices of data doesn't really exist in other languages. How are you handling that? Thanks in advance, Stephen",0,None,17 ,Mon Nov 28 2011 05:03:36 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1101,/competitions/AlgorithmicTradingChallenge,None /colingreen,Random Forests Newbie Question,Given the success of Random Forests I thought I should look into them some more. The first question that came to mind is how the number of nodes in each decision tree is decided. Is it typical to just try a range of values and choose what works? And is each forest made up of trees with the same number of nodes? Thanks.,0,None,10 ,Tue Nov 29 2011 01:12:17 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1106,None,None /tomtech,What are our predictions measured against.,"Both of the public contests due in the next few weeks (""Don't get Kicked"") are asking for probabilities even though the training sets use binary true or false as the test data field. Are our submissions being compared to the prediction algorithm they currently use, or are they being compared to a known set of delinquencies (bad deals)? If our submissions are being compared to an existing algorithm, it seems that the purpose of the assignment is to get closest to an existing algorithm. If our submissions are being compared to known delinquencies (or bad deals), then the winning methods would have a meaningful use. I submitted my predictions for the ""Don't get Kicked"" contest and got a terrible placement, while my raw probabilities for the same set made it to #40 on the leaderboard. I placed this here since this contest seems to have the same issue based on the example entry, and its due date is sooner, which makes this info more immediately relevant.",0,None,2 ,Tue Nov 29 2011 17:05:41 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1107,/competitions/GiveMeSomeCredit,427th /cbusch,Model Accuracy,Could anyone quantify the difference in the predictive accuracy between a model with RMSLE of .454 and .485?,1,None,3 ,Tue Nov 29 2011 18:37:47 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1108,/competitions/hhp,266th /xavierconort,NAs Imputation and test set,"I am not sure I have seen any rules concerning NA (missing value) imputation in Kaggle's competitions. Can we train our algorithm on the combination of the training and test sets? Or can only the training set be used? Thanks, Xavier",2,bronze,1 Comment,Wed Nov 30 2011 12:08:23 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1109,/competitions/GiveMeSomeCredit,2nd /stillsut,"Clustering VendorID's into ""Facilities""","The purpose of this thread is to be a central location for anyone who wants to ""crowd-source"" a specific problem I've noticed with the claims table we are given: VendorID appears to refer to a particular dept. within a hospital instead of the hospital as a whole. To understand the individual facilities (what we will call the total of depts within a physical location, aka ""hospital""), we will need to construct a clustering of VendorIDs. So say VendorIDs #2, #8, #12, and #17 are all found to have made claims for the same patient at the same time. Then we will cluster them, and our model can now have a column ""FacilityID"", say = #2 for all the VendorIDs above. I think the number of distinct ""inpatient hospital"" facilities should be pretty low for HPN (compared to the total of distinct VendorIDs). Probably around 5-30? This is a pretty tangential question for one team to pursue, as it does not come close to answering the target variable. However, from what I've seen in discussion, having usable inpatient hospital identifiers would be useful for all, and maybe get us all a little closer to .40000.
This is why I am trying to open up the investigation.",0,None,10 ,Wed Nov 30 2011 19:38:37 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1110,/competitions/hhp,None
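One possible sketch of such a clustering, assuming a claims table with MemberID, DSFS, and VendorID columns (a simple union-find over vendors that co-occur for the same patient-month; illustrative only, not a validated method):

import pandas as pd

claims = pd.read_csv("Claims.csv")  # column names are assumptions

parent = {}

def find(v):
    # Union-find with path halving.
    parent.setdefault(v, v)
    while parent[v] != v:
        parent[v] = parent[parent[v]]
        v = parent[v]
    return v

def union(a, b):
    parent[find(a)] = find(b)

# Union all vendors billed for the same member in the same month.
for _, grp in claims.groupby(["MemberID", "DSFS"]):
    vendors = grp["VendorID"].unique()
    for v in vendors[1:]:
        union(vendors[0], v)

# Each cluster root becomes a candidate "FacilityID".
claims["FacilityID"] = claims["VendorID"].map(find)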
/pidtis,Milestone Entries and Reviews,Are milestone entry winners required to provide an academic paper like the HHP?,0,None,42 ,Wed Nov 30 2011 20:49:38 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1111,/competitions/AlgorithmicTradingChallenge,21st /peter20,Multiple choice with multiple correct answers?,"The training set contains the following lines:
1,1,121052,2526,0,0,3,12,34 50 152 162 212 243,2011-11-10 18:37:50,2011-11-10 18:38:15,2011-11-10 18:38:17,14094,7,2,NULL,391
1,1,7120,2526,0,0,3,12,34 50 152 162 212 243,2011-02-17 18:18:39,2011-02-17 18:19:09,2011-02-17 18:19:20,14094,7,1,NULL,391
1,1,155663,2526,0,0,3,12,34 50 152 162 212 243,2009-11-22 23:25:18,2009-11-22 23:25:26,2009-11-22 23:25:26,14095,7,1,NULL,391
That is, question 2526 is marked as correct for answerid 14094 and 14095. The question type is 0, 'MultipleChoiceOneCorrect'. Why are two different answers both being marked as correct? Thank you.",1,bronze,2 ,Wed Nov 30 2011 23:30:42 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1112,/competitions/WhatDoYouKnow,None /brian4,Why are user times to answer questions included in the test set?,"It seems that since the goal is to predict what type of questions a user will have difficulty with (before the user actually encounters the given question), it doesn't make sense to include features related to how long a user took on a question in the test set. This requires the user to actually encounter and answer the question, in which case there is no need for prediction since you already have the user's answer and the correct answer (as the one giving the test). This information could be used to improve predictions since an obvious strategy, e.g. for a single-user test, is to decrease the probability of success for questions the user takes longer to answer, or at least incorporate this in the predictive model in some way. Since such features seem to be potentially useful, but realistically would not be available for predicting a user's success on unseen questions, I would guess they could lead to misleading results for the competition, as the best models may rely on these features. In particular I am referring to ""round_started_at, answered_at, deactivated_at"", which show up in the test set. Am I missing something here?",1,bronze,16 ,Thu Dec 01 2011 01:24:12 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1113,/competitions/WhatDoYouKnow,None /mikel1,What does Outcome 0 (zero) mean?,"Friends: In line 244068 of valid_training.csv, the Outcome code is 0. Outcomes are defined as: outcome: a numeric code representing 'correct' (1), 'incorrect' (2), 'skipped' (3), or 'timeout' (4); a more detailed indicator of the outcome. What does Outcome 0 mean?",0,None,5 ,Thu Dec 01 2011 12:26:10 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1114,/competitions/WhatDoYouKnow,32nd /mattfrancis,Anyone know of a suitable mobile Kaggleable netbook?,"Hi All. I've been working on Kaggle competitions during my commute (about an hour each way), using my little Asus eeePC. It's very light and has plenty of battery life, but the 1G of RAM just doesn't cut it for the size of the data sets used in most Kaggle competitions. I've been trying to set up jobs by playing around with subsampled data sets and then running them on the full set using my gruntier box at home overnight, copying the results to the netbook in the morning to look at on the train. Needless to say this is a real PITA since it breaks the flow of the analysis. So, can anyone recommend a netbook with more grunt that might be able to handle Kaggle work, but still be nice and small and light (and cheap!)? When I was shopping around for the one I've got, the more expensive models had useless things like Bluetooth or 3D graphics cards. I really need a lean, mean processing machine without extra bells and whistles inflating the price. I guess the market for such a thing is pretty small, but I'm open to suggestions if anyone knows of a good product.",1,None,5 ,Thu Dec 01 2011 23:21:55 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1117,None,None /mikel1,Server error in application,Kaggle folks: your server failed when I submitted a valid-looking file. Here is the screen-shot.,0,None,2 ,Fri Dec 02 2011 02:47:14 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1118,/competitions/WhatDoYouKnow,32nd /alegro,Submission parser bug?,"I tried to submit all zero values as the prediction results and got a bunch of the following errors: ""ERROR: The value '0.0' in the required column 'bid51' must be a real-valued number in the interval (0, 8000). (Line 2, Column 8) ..."" Is it a bug, or just an out-of-bounds value? By the way, the note ""Your entry must: have exactly 50,000 rows"" on the submission page is probably not correct.",0,None,1 Comment,Fri Dec 02 2011 10:00:03 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1119,/competitions/AlgorithmicTradingChallenge,2nd /statpassion,Price at Trade Event,"This is probably a dumb question, but shouldn't the bid and ask prices be equal for a trade event? I do not see that happening. Also, does each row correspond to one security? Thanks, Nandini",0,None,1 Comment,Sat Dec 03 2011 01:23:50 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1120,/competitions/AlgorithmicTradingChallenge,None /barrenwuffet,Blending models,Anyone have a link to information about how to blend models?,0,None,1 Comment,Sat Dec 03 2011 15:41:10 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1121,/competitions/GiveMeSomeCredit,232nd /del=4573194f4601b602,How to load data?,"I am a high school student. I just learned machine learning online, so I want to practice what I learned. Now I have run into a problem: I don't know how to load a CSV into Octave (a free alternative to Matlab). Can anyone help?",0,None,33 ,Sun Dec 04 2011 01:24:46 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1122,/competitions/WhatDoYouKnow,None /outsider,Why 3 years of claims and only 2 years of outcomes?,"The title says it all really.
Thanks in advance...",0,None,9 ,Sun Dec 04 2011 14:51:28 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1123,/competitions/hhp,569th /lumi25649,user_id issue,"The training and test files do not seem to be created according to the description in the Data section (""The test/training split is derived by finding users who answered at least 6 questions, taking one of their answers (uniformly random, from their 6th question to their last), and inserting it into the test set. Any later answers by this user are removed, and any earlier answers are included in the training set""), as looking into the test file, there are quite a number of user_ids that do not exist in the training.csv file (for example user_ids from 0 to 6). I did not yet check whether, for every user_id in the test file that is represented in the training one, there are at least 5 answers in the training file. Is there any purpose for these extra user_ids? Should they just be discarded? Thanks",0,None,1 Comment,Sun Dec 04 2011 15:13:04 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1124,/competitions/WhatDoYouKnow,None /nalla21302,The LengthOfStay in Claims and DaysInHospitalY2/Y3 don't match - Some questions/ideas for Organizers,"After going through various posts and replies on this mismatch of LengthOfStay and DaysInHospital issue, I am still left puzzled. For example, the MemberID '40050285' doesn't have any rows in DaysInHospitalY2, and the member's data is NOT suppressed, as indicated by the member's Claim data. The Claim data shows that this member has:
Year LengthOfStay PlaceOfService DSFS
2 3 Inpatient Hospital 1
2 3 Inpatient Hospital 2
2 1 Urgent Care 5
Notice in the above Claim data that even the DSFS is distinct, so I expect to see 7 days in DaysInHospitalY2.csv for this member ""40050285"", yet there are no matching rows for this member in DaysInHospitalY2.csv. Request to the Organizers: Please explain why there is no record in DaysInHospitalY2.csv. Also, it seems like there was much discussion about cleaning data or not cleaning data in trying to interpret LengthOfStay. In my view, this is a simple problem for the Organizers to solve, by simply providing the DateOfService data for every claim row INSTEAD of clubbing all claims in a month. I.e., to be more specific, there is no way to find out whether two claims (or two claim rows) that have the same DSFS value, say 1, indicating they happened in the 1st month, should be treated as two procedures that happened on the same day, resulting in a DaysInHospital value of 1, or as two procedures that happened on different days in the same first month, resulting in a DaysInHospital value of 2. Once DateOfService is provided, it should be fairly straightforward to associate LengthOfStay in the Claims table with DaysInHospital in the DaysInHospitalY2/Y3 tables while applying the simple rules the organizers mentioned, such as a PlaceOfService value of ""Inpatient Hospital"" or ""Urgent Care"" to interpret a claim as resulting in some hospital stay for LengthOfStay > 0.
I think this has caused so much confusion and wasted huge cumulative hours of research across participants which could have been better spent on coming up with a good prediction algorithm.",0,None,2 ,Mon Dec 05 2011 01:12:15 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1125,/competitions/hhp,None /stellar,price increment,Hi Admin what is the minimum price increment for this data? Is it a function of price only? thx stellar,0,None,1 Comment,Wed Dec 07 2011 06:45:50 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1129,/competitions/AlgorithmicTradingChallenge,None /ranzhang0,PLS model,"Hello there, I am new to data mining and I chose this topic as my course project. I built a PLS model, but the R2 is very low (around 3%) even after deleting some outliers. Can anyone help me? I would really appreciate it.",0,None,1 Comment,Wed Dec 07 2011 20:22:57 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1131,/competitions/DontGetKicked,194th /ccccat,Why so many teams?,"Why are there so many teams in this competition? I, personally, would really prefer the Algorithmic Trading Challenge, but it was not available when I made my first submission here. Now I am stuck competing with 817 teams for a meager prize. ;(",0,None,8 ,Wed Dec 07 2011 20:46:55 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1132,/competitions/GiveMeSomeCredit,8th /mikel1,"Can we publish our findings on our websites, blogs, etc. ?","The Grockit data reveals insight into student behavior that educators and test-constructors should know about. Am I allowed to publish these insights on my website, etc.?",0,None,4 ,Thu Dec 08 2011 06:38:44 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1134,/competitions/WhatDoYouKnow,32nd /robert2012,Training set data missing.,"Hi Everyone, I've just made a start on this and I have noticed some issues in the data... I have read the forums on data issues, so this should be a previously undiscussed problem. Sorry if I missed a relevant post. Viewing the data as a set of claims in years 1, 2 and 3, you find that there are people who claimed in year 3 who did not claim in years 1 or 2. There are also people who claimed in years 1 and 3 but not year 2. Similarly, there are people who claim in years 1, 2 and 3, and those who claim in year 1 but not years 2 or 3. However, there are no records for people who claimed in years 2 and 3 but did not claim in year 1. Nor are there records for people who claimed in year 2 but not years 1 or 3. I would have expected that there would be some people who claimed in year 2 who had not claimed in year 1, because maybe they joined that year, or maybe they didn't have to claim in year 1. To me it suggests a sampling error when the data was produced. Perhaps it is explained somewhere I haven't seen? (I have read the data dictionary.) The reason this is important is that we can make y3 predictions based on y1 claims, because we have the data to do that. We don't have the data to make y4 predictions based on y2 claims. That would be OK if we knew for sure that there was no case in the y4 test set of individuals who claimed only in y2... but we don't know that, do we? This part may not be an issue, since we are to submit predictions for 70942 people, which is the number of people who have claims in y3. However the next part still applies.
Moreover, we don't have the data to check our 1-year-horizon predictions by comparing our y1 claim->y2 bed versus y2 claim->y3 bed. This means we're not as secure in our y3 claim->y4 predictions. Hoping this all makes sense.",0,None,9 ,Thu Dec 08 2011 08:15:07 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1135,/competitions/hhp,None /vitalyg,Members Background,"I was thinking that it would be interesting to find out the mix of profiles we have here at Kaggle. Here is mine: Age - 28 Location - Israel Academic background - B.Sc in Computer Science and an MBA Professional background - mostly data mining; currently I'm a data scientist at a medium-size CRM company What's yours?",0,None,3 ,Thu Dec 08 2011 09:08:14 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1136,None,None /smcinerney,"For R users, what type of plot do you use? and normalization?","This is an R graphing question; I'm spinning my wheels trying to get a reasonable plot of one row event (bid & ask prices for timeslots 1..100 in the training set). I'm trying to get the bid series as solid green bars (downwards) and the ask as solid red bars (going upward, which is tricky). Alternatively, I could create a stacked barplot with a pseudoseries spread=ask-bid (of filled black bars stacked on top of the bid series). Maybe slightly less desirable, because it doesn't plot the series directly. Do you use stairstep: plot(..., type='s')? Or a stacked barplot()? (If so, I don't see how to get the filled ask bars to go upwards.) Or do you use [Link]:http://had.co.nz/ggplot2/ - if so, which type of plot? [Link]:http://had.co.nz/ggplot2/geom_bar.html, [Link]:http://had.co.nz/ggplot2/geom_ribbon.html, custom qplot...? Second question: do you normalize the plot, and if so, to what? The average width of the bid-ask spread during the predictor window (slots 1..50), rounded to the nearest unit? The max width of the bid-ask spread? Thanks in advance, Stephen",1,bronze,2 ,Thu Dec 08 2011 11:09:37 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1137,/competitions/AlgorithmicTradingChallenge,None /sbagley,Here is a faster version of the lmer benchmark code in R,"The lmer benchmark code was a good start, but I decided to tweak it. The attached version is more general, modular, and about 7 times faster by my informal timing. It keeps the same general structure of iterating over one variable (track_name in the original example), which you can change, and lets you specify the other training variables in a more general way. It uses hash tables to speed access to the model results. The results are within a low-order bit of the original but not identical, because I changed the logit function slightly to avoid exponentiation. Feel free to adapt and improve. It's semi, sort of, self-documenting. --Steve [12/13/2011: This version has a bug. See my note below.] [Link]:https://storage.googleapis.com/kaggle-forum-message-attachments/1574/lmer-kaggle.R",1,bronze,3 ,Fri Dec 09 2011 00:38:47 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1138,/competitions/WhatDoYouKnow,33rd /olddog1,How good can we get and how do we know when we are there?,"I tried to estimate how good a score a ""perfect"" answer would give.
Using Monte Carlo to provide answers in accordance with the LMER benchmark probabilities, and then calculating the score based on those answers and probabilities, gives a CBD score of 0.2504, eerily close to the best score at present. Of course you can beat this by moving the probabilities of answers that happen to be correct towards 1, and conversely (if you know them, or by trial and error). But this doesn't help predict real students' scores or help them understand what to study. So, have the current leaders already achieved an effectively perfect result, or is my analysis wrong? Would someone like to do a similar exercise and post the result? I've purposely omitted details of my calculation so as not to encourage repetition of any mistake I've made, but I'm happy to post details if required. Any thoughts?",0,None,4 ,Fri Dec 09 2011 09:41:13 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1139,/competitions/WhatDoYouKnow,50th
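A minimal sketch of this kind of Monte Carlo floor estimate (the uniform range for the "true" probabilities and the capped log-loss are assumptions standing in for the competition's actual benchmark probabilities and metric):

import numpy as np

rng = np.random.default_rng(0)
p = rng.uniform(0.3, 0.95, size=100_000)  # hypothetical true probabilities
outcomes = rng.random(p.size) < p         # simulate correct/incorrect answers

# Score the "perfect" predictor (predicting p itself) with capped log-loss.
eps = 1e-15
pc = np.clip(p, eps, 1 - eps)
score = -np.mean(outcomes * np.log(pc) + (~outcomes) * np.log(1 - pc))
print(score)  # the floor no calibrated model can beat in expectation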
/ccccat,Code for verification,"1. For verification purposes participants will have to submit standalone executable code. Will it be possible to submit Matlab code compiled to p-code? (Matlab will be needed to execute it.) 2. ""executable code for a standard platform"" - what is the ""standard platform""?",0,None,2 ,Fri Dec 09 2011 15:32:38 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1140,/competitions/GestureChallenge,None /bdol18739,Segfault with sample code?,"I tried running the sample code with Matlab 2010b on 64-bit Linux, but I'm getting a Matlab segfault with no stack trace as soon as I start the training procedure from main.m. When I initially tried running the code, I was asked if I'd like to use the 64-bit version of libavbin.so, and I said yes. Is anyone else getting this problem? Thanks.",0,None,5 ,Fri Dec 09 2011 19:44:13 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1141,/competitions/GestureChallenge,None /nlubchenco,Data Formatting Help,"I was wondering if anyone had any suggestions for how to format a handful of the variables for analysis. Converting many of the variables into dummies was pretty easy, but some of them have many values and duplicates. I just did it manually by entering an ALLOY and a COVERS column and then copying appropriate if statements. This is obviously a terrible idea for a variable with an unknown and high number of options. I'm pretty bad at coding, but trying to learn: I tried making an array, but that won't deal with the duplicates (at least with the code I've tried). I also tried using the collection object in VBA, but I'm having trouble making it output what I want. The variables in particular are Model, Trim, BYRNO, VNZIP1 and VNST. Any suggestions, tips or resources to recommend? Thanks, Nathan",0,None,2 ,Sat Dec 10 2011 00:15:51 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1143,/competitions/DontGetKicked,141st /del=4573194f4601b602,I gave up. Maybe this will help.,"It took me so long to separate the user_strength that I gave up. As a student, I don't have that much time. I have already tried to tune my algorithm to do the calculation faster, but it is still a very time-consuming process. I am posting it, and if anyone is interested, just copy it. By the way, it is in Matlab. Userid is for valid_train and is arranged according to the sequence in which user ids appear in the dataset. Anyone who prefers csv can just change the file name to csv. The code is in para.txt and the userid is in userid.txt. Hope you guys do great. Bill Wang",1,bronze,1 Comment,Sat Dec 10 2011 18:52:55 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1144,/competitions/WhatDoYouKnow,None /dchudz,Teams,"The fact that a person can only ever make submissions as part of one team makes it seem that if I make any submission on my own, I can never be part of a team. Is that true?",0,None,2 ,Sat Dec 10 2011 21:28:04 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1145,/competitions/GestureChallenge,None /stevenmarkford,Taking Advantage of Special Hardware for Computations,"Am I allowed to offload computations onto some CUDA cores? The particular hardware I will be using costs around $80. I ask this in light of what was said on the prize page: ""...so that individuals trained in computer science can replicate the winning results."" As CUDA programming is not very common, someone trained in computer science might not be able to replicate the result without doing substantial additional training. I don't want to go through the extra effort of porting my code to CUDA if I could potentially be disqualified.",0,None,3 ,Sun Dec 11 2011 15:55:00 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1147,/competitions/WhatDoYouKnow,153rd /dchudz,"importing data to Octave/R, visualizing with PCA","Hi all, Anyone who doesn't have MATLAB may find this post helpful for importing the data: [Link]:http://blog.learnfromdata.com/2011/12/visualizing-gestures-as-paths.html. I'd love to hear of other approaches. I also made a visualization of the devel01 training data, and all of my code is available on GitHub. -David",8,bronze,4 ,Sun Dec 11 2011 23:09:35 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1148,/competitions/GestureChallenge,None /iguyon,Getting started,"I added an entry to the Help to help you get started. There are essentially 2 approaches that can be taken for data representation: Extracting a ""bag"" of low-level spatio-temporal features. This approach is often taken by researchers working on activity recognition. An example is the [Link]:http://www.irisa.fr/vista/actions/. Tracking the position of body parts. This approach is used in most games. One popular method was introduced by Microsoft with their [Link]:http://research.microsoft.com/apps/pubs/default.aspx?id=145347, which is part of their [Link]:http://kinectforwindows.org/download/. There is [Link]:http://www.amazon.com/Visual-Analysis-Humans-Looking-People/dp/0857299964/ref=sr_1_1?ie=UTF8&qid=1321321690&sr=8-1 on the subject. Some approaches require separating the gesture sequences into isolated gestures first, which is relatively easy in this dataset because the users return their hands to a resting position between gestures. Once you have a vector representation of isolated gestures, to do the ""one-shot-learning"", the simplest method is the [Link]:http://en.wikipedia.org/wiki/K-nearest_neighbor_algorithm method. But you may also look for the best match between temporal sequences directly, without isolating gestures, using [Link]:http://en.wikipedia.org/wiki/Dynamic_time_warping.
Isabelle",0,None,30 ,Mon Dec 12 2011 20:21:45 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1150,/competitions/GestureChallenge,None /anlthms,Where are the quants?,"Too busy making money, I suppose... The low participation level on this contest is surprising. There are 72 teams as of now, while there is a big crowd of 879 teams milling about in the credit rating contest. I think most folks that know this domain well are a bit too gainfully employed in high frequency trading etc. to bother about a challenge like this. Still, one would think this makes a good venue to see how their techniques stack up...",0,None,5 ,Tue Dec 13 2011 07:34:51 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1151,/competitions/AlgorithmicTradingChallenge,4th /morenoh149,lmer and caret package in R,"I'm very fond of using the caret package in R for building models. But despite the large number of supported algorithms, mixed effect models from the 'lme4' package aren't supported. Does anyone know if it is programmatically difficult to add support for this to caret? I'm tempted to contact the author and offer help. In the meantime, I suppose all of you are depending on kaggle for evaluating the goodness of your models. I really like evaluating a model using a test set on my own. Just curious if anyone else is in a similar situation.",0,None,2 ,Tue Dec 13 2011 20:06:13 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1152,/competitions/WhatDoYouKnow,None /ccccat,Magic team migration,Is it me or three top teams really were moved down the table? Did they disappear completely?,0,None,52 ,Wed Dec 14 2011 00:23:49 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1153,/competitions/GiveMeSomeCredit,8th /antgoldbloom,Rules and Terms & Conditions,"We have made a slight change to the [Link]:http://www.kaggle.com/pages/terms adding 3.6: No individual or entity may share solutions or code for any competition, or collaborate in any way, with any other individual or entity that is participating as a separate individual or entity for the same competition. The foregoing shall not apply to any public communications, such as forum participation or blog posts. We are also aware that the rules haven't been as clear as we might have liked. From now on, before you download the data for any new competition, you will be reminded that: you cannot sign up to Kaggle from multiple accounts and therefore you cannot submit from multiple accounts; and privately sharing code or data is not permitted outside of teams (sharing data or code is permissible if made available to all players, such as on the forums). We've reached out to several teams about this issue. Please let us know ASAP if you have multiple accounts and we've not reached out to you.",0,None,1 Comment,Wed Dec 14 2011 03:56:36 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1154,/competitions/GiveMeSomeCredit,357th /antgoldbloom,Rules and Terms & Conditions,"We are aware that the rules haven't been as clear as we might have liked. Please be reminded that: you cannot sign up to Kaggle from multiple accounts and therefore you cannot submit from multiple accounts; and privately sharing code or data is not permitted outside of teams (sharing data or code is permissible if made available to all players, such as on the forums). 
/anlthms,Where are the quants?,"Too busy making money, I suppose... The low participation level in this contest is surprising. There are 72 teams as of now, while there is a big crowd of 879 teams milling about in the credit rating contest. I think most folks who know this domain well are a bit too gainfully employed in high-frequency trading etc. to bother with a challenge like this. Still, one would think this makes a good venue to see how their techniques stack up...",0,None,5 ,Tue Dec 13 2011 07:34:51 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1151,/competitions/AlgorithmicTradingChallenge,4th /morenoh149,lmer and caret package in R,"I'm very fond of using the caret package in R for building models. But despite the large number of supported algorithms, mixed effect models from the 'lme4' package aren't supported. Does anyone know if it is programmatically difficult to add support for this to caret? I'm tempted to contact the author and offer help. In the meantime, I suppose all of you are depending on Kaggle for evaluating the goodness of your models. I really like evaluating a model using a test set on my own. Just curious if anyone else is in a similar situation.",0,None,2 ,Tue Dec 13 2011 20:06:13 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1152,/competitions/WhatDoYouKnow,None /ccccat,Magic team migration,"Is it just me, or were three top teams really moved down the table? Did they disappear completely?",0,None,52 ,Wed Dec 14 2011 00:23:49 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1153,/competitions/GiveMeSomeCredit,8th /antgoldbloom,Rules and Terms & Conditions,"We have made a slight change to the [Link]:http://www.kaggle.com/pages/terms adding 3.6: No individual or entity may share solutions or code for any competition, or collaborate in any way, with any other individual or entity that is participating as a separate individual or entity for the same competition. The foregoing shall not apply to any public communications, such as forum participation or blog posts. We are also aware that the rules haven't been as clear as we might have liked. From now on, before you download the data for any new competition, you will be reminded that: you cannot sign up to Kaggle from multiple accounts and therefore you cannot submit from multiple accounts; and privately sharing code or data is not permitted outside of teams (sharing data or code is permissible if made available to all players, such as on the forums). We've reached out to several teams about this issue. Please let us know ASAP if you have multiple accounts and we've not reached out to you.",0,None,1 Comment,Wed Dec 14 2011 03:56:36 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1154,/competitions/GiveMeSomeCredit,357th /antgoldbloom,Rules and Terms & Conditions,"We are aware that the rules haven't been as clear as we might have liked. Please be reminded that: you cannot sign up to Kaggle from multiple accounts and therefore you cannot submit from multiple accounts; and privately sharing code or data is not permitted outside of teams (sharing data or code is permissible if made available to all players, such as on the forums). We've reached out to several teams about this issue. Please let us know ASAP if you have multiple accounts and we've not reached out to you.",0,None,16 ,Wed Dec 14 2011 03:58:11 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1155,/competitions/hhp,None /arnaudsj,What kind of accuracies are top competitors getting against the valid_test data set?,"Just wondering, since the [Link]:http://www.kaggle.com/c/ChessRatings2/Details/Evaluation metric does not provide a clear understanding of how well the top competitors are predicting the test data set. Cheers!",0,None,3 ,Wed Dec 14 2011 04:36:51 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1156,/competitions/WhatDoYouKnow,None /ayushmad,Questions about the data?,"From the information given, I could gather that each row gives distinct trade and quote values for that stock, and at transactions 49-50 a liquidity shock happens for that stock. I have a few questions regarding this: If the liquidity shock happens at transactions 49-50, shouldn't the spread always increase at that time? In the example set I found a few cases where the spread remained constant or even decreased. Can this still be a liquidity shock? After we train the algorithm, is our purpose to predict the next 50 trade and quote prices? P.S.: I have attached a screenshot where the spread is reduced. [Link]:https://storage.googleapis.com/kaggle-forum-message-attachments/1580/liqred.jpg",0,None,3 ,Wed Dec 14 2011 08:04:41 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1157,/competitions/AlgorithmicTradingChallenge,None /cr21579,Why are there no submissions after August,"Why are there no submissions after August 31 on the public leaderboard?",0,None,2 ,Wed Dec 14 2011 08:44:57 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1158,/competitions/hhp,None /soil1682,More submissions on the last day?,"Is it possible to allow people to make more submissions on the last day? Just in case they have some new ideas not yet tried, because this is the last chance!",0,None,4 ,Wed Dec 14 2011 09:17:15 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1159,/competitions/GiveMeSomeCredit,93rd /karmicmenace,Teaming up?,Anyone interested in teaming up for this and other upcoming competitions? I am interested in collaborating mostly for experience and learning. I must be missing something basic with this algo competition and new perspectives always help. Thanks.,0,None,5 ,Wed Dec 14 2011 09:48:44 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1160,/competitions/AlgorithmicTradingChallenge,74th /leazar,Limit order book,"I'd be curious if anyone has explored assembling the limit order book from the data provided. It seems that the magnitude of the shock (price change) and how quickly it reverts to normal is going to be related to the limit orders in the book that can be filled to meet the request. With that said, I'm not certain there is enough information. As an example, with the exception of the bid/ask we are asked to predict, there is no information on the volume of incoming trades, only that a trade occurred.
One may be able to infer the type of previous trades based on the subsequent price movement, but without volume information, it's difficult to judge the magnitude or how depleted the limit order book is for meeting the order requests we are asked to predict. Thoughts?",0,None,1 Comment,Wed Dec 14 2011 23:00:21 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1161,/competitions/AlgorithmicTradingChallenge,61st /byang1,Video Decoding Hurdle,"I suspect no one has made a submission yet because, unlike other contests, the data for this one is not readily accessible. I downloaded one of the ZIP files and double-clicked on a video, and Windows Media Player popped up with this message: Windows Media Player encountered a problem while playing the file. For additional assistance, click Web Help. I've worked with digital video before, so I opened the AVI file in a hex editor and saw that the video codec is FMP4, something I had never heard of before. I guess we can all download libavcodec and write our little video decoders, but is there a Visual C++ project out there already, with all the external libraries included, that decodes FMP4 video to RGB or YUV buffers?",0,None,6 ,Thu Dec 15 2011 05:53:23 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1162,/competitions/GestureChallenge,None /imagedoctor,Lottery Result,"I wonder who might win today's lottery - are the top entries overfitted on the public test data? Will the surprisingly high number of teams in this competition mean that the probability of a more generalised solution emerging as the most successful on the full test set is higher? Or is there too high a degree of correlation amongst entries...? How would the competition have looked if there were no test set scores available at all and we only had access to the training data - would that have led to better generalisation and less tuning of entries to the distribution of the public test data? Anyone care to make some predictions? After all, it's what we are here for... :)",0,None,1 Comment,Thu Dec 15 2011 10:04:30 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1163,/competitions/GiveMeSomeCredit,None /chinni0,Output format,Could somebody please tell me what output format I have to submit on the test data? Is it like RefId followed by a prediction score? Where can I get the evaluation output (actual output) to compare my predicted values against?,0,None,4 ,Thu Dec 15 2011 14:19:18 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1164,/competitions/DontGetKicked,410th /oneoff,Random forests based on subsets of dataset,"Has anybody tried to build random forests based on 4 different subsets of the dataset? Each subset is created by taking a subset of rows and columns. For the first subset we take only those observations that have no NAs; the second subset consists of observations that have NAs in only one given column, and we then delete that column; etc. Then we classify the four subsets of the test dataset. I wonder if that works better than approximating the NAs and training classifiers on the corrected dataset. It's too late for me to try this approach. P.S.
The nice thing with random forests is that we don't need to take the whole dataset at once (which could be a problem on weaker computers). Instead we can train several random forests: train one random forest on a random sample of the training dataset, predict the classes of the observations in the test dataset, delete the random forest, and train another one on a new random sample. Once we decide that enough classifiers have been built, we can simply aggregate their predictions. One good side of this approach is that it saves memory; the other is that we can play with simple (naive :D) boosting - use weighted sampling instead of random sampling. After each random forest is trained, we give bigger sampling weights to the observations that are misclassified, then resample and build another random forest; after several iterations, observations on the border of two classes should have bigger weights. Of course that method is sensitive to outliers, and which observations do we classify as outliers? The observations with the largest weights (one percent of them) after several iterations of the algorithm. Then we can delete them from the dataset and repeat the learning. Before each deletion we predict the classes of the test dataset and submit our result, then delete and learn again; that should allow for quite cautious/gradual cutting off of outliers. The main problems are processing power, time, and setting constants like the size of the subsets. I wonder if that could work",0,None,1 Comment,Thu Dec 15 2011 14:36:42 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1165,/competitions/GiveMeSomeCredit,None
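A small sketch of the memory-saving scheme described above: train one modest forest per round on a random sample, accumulate only its test predictions, and discard it (scikit-learn stands in for whatever implementation one actually uses; binary classification and NumPy arrays are assumed):

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def chunked_forest_proba(X_train, y_train, X_test, n_rounds=10, frac=0.2, seed=0):
    # X_train, y_train, X_test are assumed to be NumPy arrays.
    rng = np.random.default_rng(seed)
    acc = np.zeros(len(X_test))
    for _ in range(n_rounds):
        idx = rng.choice(len(X_train), int(frac * len(X_train)), replace=False)
        rf = RandomForestClassifier(n_estimators=50, n_jobs=-1)
        rf.fit(X_train[idx], y_train[idx])
        acc += rf.predict_proba(X_test)[:, 1]  # keep predictions, drop the forest
    return acc / n_rounds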
/sirguessalot,Congratulations to the winners!,"Congrats to Alec, Eu Jin, and Nathaniel for the top spot! Also congrats to Gxav and Occupy for coming in second and third. We'd love to hear what methods you all used in this very popular contest.",8,None,59 ,Fri Dec 16 2011 01:14:51 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1166,/competitions/GiveMeSomeCredit,28th /arcnewuss,number of rows submission,"For submission it is required that ""Your entry must: have exactly 48,707 rows"", but when I submitted a 2-column file of exactly 48,707 rows I got the error: ERROR: Couldn't find matching destination row for '1' on sortable column 'RefId' (Line 1) I don't understand what I did wrong. My first row (line) is 1,1 Thank you for considering my request,",0,None,5 ,Fri Dec 16 2011 02:22:27 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1167,/competitions/DontGetKicked,404th /del=92525096498f3bbd,General Questions,"General Questions: So since there are 2 types, either ""devel"" or ""valid"", and 20 choices for each of those, there would be a grand total of 40 batches, with each batch having 100 gestures? And by saying that ""There are instances of N unique gestures from a vocabulary of 8 to 15 gestures"", you are saying that each batch contains a unique subset combination of the 8-15 gestures? Actually I think it would be a permutation, since order does matter (what order the gestures are done in)? Just wanted to confirm that I do not have to publish any papers on this topic, correct? Seeing as the deadlines for submission are before the final evaluations. Training Data/Test Data: Am I allowed to use cross-validation, or do I have to keep the training data and testing data separate? Along the same lines, am I allowed to use methods such as boosting/bagging? I don't really understand how to read the training data/test data. Ex: In ""devel01_train.csv"", cell A1 specifies the following: devel01_1,10 Should I go ahead and split the data on the commas, because in Excel all the data is shown in one column? So devel01_1 would be the row id, which would be unique, and 10 would be a label? What is this label? Is that label the classification? Thanks!",0,None,1 Comment,Fri Dec 16 2011 07:54:13 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1169,/competitions/GestureChallenge,None /del=4e191bb6982745b7,Question on Handling Makes and Models,What has been working in terms of handling the categoricals? Lots of different feature values in this dataset,0,None,4 ,Fri Dec 16 2011 21:21:30 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1170,/competitions/DontGetKicked,376th /lino25308,Data and Ideas from Milestone winners' papers,Hi! Can I include in my algorithms some data or ideas published in previous Milestone winners' papers? In particular the table on page 18 of the Market Makers paper. Thanks.,1,None,2 ,Fri Dec 16 2011 21:25:50 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1171,/competitions/hhp,None /dpmcna,Survey for competition participants,"For interest, here is a link to a follow-up survey that the competition host encourages all participants to fill out: [Link]:https://www.surveymonkey.com/s/GHRM78D. Your contribution would be highly valued!",0,None,2 ,Fri Dec 16 2011 23:08:03 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1172,/competitions/GiveMeSomeCredit,None /tinkerer,can you continue to submit your result after the competitions are over?,"Hi, I wonder whether we can submit results after a competition is over. The reason I want to do that is because I'd like to see my evaluation result even though the competition is over. Does Kaggle allow that? Thanks, SK",0,None,1 Comment,Sat Dec 17 2011 20:19:59 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1174,None,None /salimali,spikes in timing of the shocks,Here is my brief examination of the data - hope it may prove useful to someone. Does anyone have any other hypothesis of why these spikes occur? [Link]:http://anotherdataminingblog.blogspot.com/2011/12/whats-going-on-here.html,3,bronze,1 Comment,Sat Dec 17 2011 23:04:39 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1175,/competitions/AlgorithmicTradingChallenge,None /mistergreen,Gini,"Hey all, This is my first post here, and this contest will actually be my first that I've ever entered. I'm sorry if this is a dumb question - it seems like it is - but how exactly is the Gini index used to get results here? I have a basic idea of what a Gini index is, but I'm not sure exactly how to use it to compare the results of my models. Is it based on the confusion matrix? Thanks in advance. Rob",0,None,1 Comment,Sun Dec 18 2011 05:53:01 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1176,/competitions/DontGetKicked,567th
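For what it's worth, a common convention (assumed here; the organizers' exact definition may differ) is Gini = 2*AUC - 1, with the AUC computed from prediction ranks. A minimal sketch, ignoring ties for brevity:

import numpy as np

def auc(y_true, y_score):
    # Rank-based AUC: probability a random positive outranks a random negative
    # (y_true and y_score are NumPy arrays; ties are not averaged here).
    order = np.argsort(y_score)
    ranks = np.empty(len(y_score))
    ranks[order] = np.arange(1, len(y_score) + 1)
    pos = y_true == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def gini(y_true, y_score):
    return 2 * auc(y_true, y_score) - 1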
With datasets that have a lot of rare columns, is it worth considering choosing the sets via stratified sampling to get more regularity between them? This could help to minimize variance between the sets, while keeping the public leaderboard small, and perhaps even increase the accuracy of the private leaderboard.",0,None,19 ,Sun Dec 18 2011 18:28:09 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1177,/competitions/GiveMeSomeCredit,3rd /anlthms,"Kaggle, please check your scoring system","Something doesn't smell right... I compared the testing dataset to the old testing dataset (the one you had appended to the training dataset as the last 50K lines). Using measures such as the variance of the prices and the mean and variance of the spread, they look very similar. And yet, they score very differently from each other. As I pointed out in another thread, the benchmark RMSE is ~0.85 for the testing dataset while it is 1.2695 for the old testing dataset. I have a few hypotheses: - the scoring is done on way less than 30% of the testing dataset; - the 30-70 split was not done randomly; - the score is not RMSE, but some other beast. Another red flag is that minor tweaks to the code that should result in a small variation in accuracy actually lead to wild swings in the score reported by the scoring system. Please check and get back to us.",1,bronze,15 ,Sun Dec 18 2011 22:08:05 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1178,/competitions/AlgorithmicTradingChallenge,4th /jothy17269,Reduce variable level,"Statistics student here. What is the best method to reduce a categorical independent variable, i.e. to reduce the number of levels of an independent variable, in the training data set?",0,None,7 ,Mon Dec 19 2011 13:52:22 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1179,/competitions/DontGetKicked,332nd /bergeisubka,Breaking up of Training Data,"Would it be possible for the admin (or someone else) to break up the training data randomly into smaller sets, say 10,000 rows each? My outdated PC has given up trying to process the complete file. I have been able to break the data up serially using software, but that doesn't solve my problem.",0,None,7 ,Mon Dec 19 2011 16:05:26 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1180,/competitions/AlgorithmicTradingChallenge,72nd /del=045cbb9e15a01d31,The leaderboard is wrong!,"Click on Larry_temp, liqo and Yarong and you get ""Server Error"", which means those accounts have already been deleted. It seems the Kaggle team doesn't care about this unfair situation. Server Error 404 - File or directory not found. The resource you are looking for might have been removed, had its name changed, or is temporarily unavailable.",0,None,9 ,Mon Dec 19 2011 17:40:39 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1181,/competitions/hhp,None /outsider,Non-Random 30 Percent?,"After reading some of the forum posts from recent competitions, I have some concerns that the 30 percent of the data against which our public score is measured is not a random sample of those members with claims in Year 3. If some other rule was applied, then there could be a material statistical bias between the 30 percent public and the 70 percent private data.
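A minimal R sketch of the stratified public/private split suggested above: sample a fixed fraction within each outcome class so that both subsets see the same class balance. The 30% fraction, the seed, and the target column name are illustrative assumptions:

# Stratified 30/70 split: sample 30% within each level of df[[target]]
# so the public and private subsets share the same class proportions.
stratified_split <- function(df, target, frac = 0.3, seed = 42) {
  set.seed(seed)
  idx_by_class <- split(seq_len(nrow(df)), df[[target]])
  public_idx <- unlist(lapply(idx_by_class,
                              function(idx) sample(idx, round(frac * length(idx)))))
  list(public = df[public_idx, ], private = df[-public_idx, ])
}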
One way I thought this problem could be mitigated would be to calculate the final results on the whole 100 percent - after all, isn't that what a good algorithm is supposed to work on? If anyone has fitted to the 30 percent, they probably will not gain a great deal (and may lose out) when measured against the 100 percent. Failing that, could Kaggle please disclose how the 30 percent was sampled? Otherwise the competition becomes something of a lottery. The 'Give Me Some Credit' competition final leaderboard was very different in both the leaders' scores and the ordering of many of the competitors (Soil going from 3rd to 117th according to the forum). This might not matter too much for those competitors doing it for fun or as an exercise, but for those putting in potentially hundreds of hours to try to actually win, I think it would be important to be judged accurately in the final event. Thanks.",0,None,11 ,Mon Dec 19 2011 22:41:14 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1182,/competitions/hhp,569th /cihanb,devel*_test.csv vs. devel*_train.csv Question,"Hello, I was just looking at the data and noticed these two .csv files. My question is: are we only allowed to train our classifier with the examples specified in devel*_train.csv? The reason why I'm asking is that the examples in devel*_test.csv seem to combine multiple gestures and also repeat the same gestures for different data points, which I thought defeated the purpose of the competition. Are the examples in devel*_train.csv for testing/cross-validation? I think that our classifier is supposed to use each gesture only once - that's why I was a little confused.",0,None,5 ,Tue Dec 20 2011 06:30:50 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1183,/competitions/GestureChallenge,None /jaro27491,How to send an algorithm to assessment?,"Hello, I am going to join the ""Don't get kicked"" competition. I will probably use SAS or Statistica. I didn't find information on how I should send my algorithm to be assessed and to calculate the GINI indicator. Shall I just send a ""Project"" made in SAS Enterprise Guide, or maybe a project made in SAS Enterprise Miner? Thanks for the answer. Best regards, Jaro",0,None,1 Comment,Tue Dec 20 2011 14:19:43 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1184,None,None /akshit,Unable to load data in Octave.,I am trying to load the training.csv file in Octave using the dlmread function but it seems to take a very long time. I tried dlmread on a file containing about 100 examples and it worked fine but the full training.csv seems to take forever. Can anyone suggest some other way of loading the dataset in Octave?,0,None,3 ,Tue Dec 20 2011 16:31:02 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1185,/competitions/DontGetKicked,390th /masifjaffer,Help on Algos,Dears I am using KNIME which is giving a Java heap space error. I tried to extend the memory through the ini file but it didn't work. Due to this I cannot run any classification technique in KNIME or Weka and I don't know how to use R. Can anybody help with this?
Regards M Asif,0,None,1 Comment,Tue Dec 20 2011 17:32:40 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1186,/competitions/WhatDoYouKnow,160th /dejavu,impact of organizers divulging information mid-contest,"Just curious. Are there any guidelines regarding competition organizers divulging, mid-contest, methods, algorithms or predictive variables that are likely useful? I understand that the organizers want to find the best solutions, and releasing such information may help, but it can be frustrating to have spent hours developing such approaches only to have them broadcast to all competitors.",0,None,7 ,Wed Dec 21 2011 15:06:07 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1188,None,None /edramsden,A Really Simple Model,"A really simple model I tried was P(correct) = max(min(StudentFactor*QuestionFactor, 1), 0), which yields 0.26277. It is pretty easy to train up to find the student and question factor vectors. This model does not consider anything other than a student's average performance and the average difficulty of a question, and could be interpreted as saying that some questions are more difficult than others and some students are brighter than others. EdR",3,bronze,18 ,Thu Dec 22 2011 03:54:58 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1189,/competitions/WhatDoYouKnow,30th /goldenhind,Two problems,"The Heritage competition can be tackled statistically as a regression problem, which is the prediction of a continuous numeric variable. I've been working for quite a long while on developing ""industrial strength"" solutions to the closely related problem of the prediction of nominal categorical variables, which forms the basis for many data mining and machine learning applications. A famous early ""solution"" is Fisher's 1936 linear discriminant analysis of the Iris data. So when the Heritage competition was announced earlier this year I was curious to see what was involved. After an extensive review of the background literature I came to the conclusion that it can be split into two problems, one potentially solvable, the other probably not. The first problem is developing an accurate statistical regression model, which is to estimate the conditional probability distribution of the number of days of hospitalization for any given individual, taking into account all the relevant information. The Heritage data is unusual in that it violates many of the assumptions which form the basis of the conventional statistical treatment of regression. (Many diverse data types including highly positively skewed count variables, extreme population heterogeneity, heteroskedasticity, skewness, kurtosis, non-normality, in addition to the nonlinearity, interaction, and collinearity which crop up in most real-world regression problems.) Nonparametric or machine learning approaches are only limited solutions to these problems. These topics are almost never covered in basic statistics education, and are given short shrift in statistics books. The closest treatments of this type of medical data which I could find in the literature are the work of the two econometricians Cameron & Trivedi in their book ""Regression Analysis of Count Data"" Cambridge 1998, the Manton et al epidemiological book ""Cancer Mortality and Morbidity Patterns in the U.S. Population"" Springer 2009, and the Dartmouth Medicare project under Wennberg.
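A minimal R sketch of the simple model in Ed's post above, P(correct) = max(min(StudentFactor*QuestionFactor, 1), 0), fitted here by plain stochastic gradient descent on squared error. The initialization, learning rate, and integer-id inputs are assumptions for illustration, not Ed's actual training procedure:

# Fit P(correct) = clamp(s[user] * q[question], 0, 1) by gradient descent.
fit_factors <- function(user, question, correct, epochs = 10, lr = 0.01) {
  s <- rep(0.8, max(user))       # student factors, arbitrary start value
  q <- rep(0.8, max(question))   # question factors, arbitrary start value
  for (e in seq_len(epochs)) {   # several passes over the data
    for (i in seq_along(correct)) {
      u <- user[i]; k <- question[i]
      p <- min(max(s[u] * q[k], 0), 1)
      err <- p - correct[i]
      s[u] <- s[u] - lr * err * q[k]   # squared-error gradient steps
      q[k] <- q[k] - lr * err * s[u]
    }
  }
  list(student = s, question = q)
}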
As far as I can tell, none of this work completely solves the Heritage data problem, though Cameron & Trivedi present an example of a problem which is suspiciously similar to the Heritage problem :-) Fully solving the regression problem would take a fair amount of work, but there is a second problem, which I mentioned up front. It is that even given the conditional probability distributions for individual patients, these would not allow for accurate prediction. We would see here large forecast errors for quite a few patients - ""residuals"" in regression terminology. The reason for this is that the conditional probability distributions would also be highly positively skewed counts, and in some cases even U-shaped. The effect of the large residuals when squared, even after taking logarithms, would be to indicate poor predictive accuracy. In fact, this problem is almost the same as the one which crops up in the life insurance industry, in that it is not possible to accurately predict the experience of specific individuals, only the average experience for various risk groups. In medicine, the prognoses of individual patients can also be notoriously inaccurate, as when a physician tells a patient he/she only has 6 months to live, and they are still alive 5 years later! So all this raises the basic question, at least for me, of how much work should be put into the approach implied by the Heritage competition. Maybe some other problem formulation should be considered down the road. My own personal interests lie more with scientific and engineering applications of these statistical methodologies, such as the ab initio protein folding problem or statistical pattern recognition. In fact, one of the reasons I got into this business was the use of artificial neural networks for protein structure prediction. ANNs had been critiqued on both biological grounds (Francis Crick) and statistical grounds (Warren Sarle of SAS.)",2,bronze,7 ,Fri Dec 23 2011 17:56:42 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1190,/competitions/hhp,538th /asher27323,On Learning Excel & Other Tools,"If you were a beginner, how would you learn how to analyze data in Excel? What books, websites, classes, or software would you use to ensure one can attain proficiency in Excel? Personally, I don't learn anything until I actually apply it, so a 'reference' type resource will not be the best solution. I learn best when forced to complete exercises, problems and practice, preferably where there is a predetermined solution against which I can check my work, alongside instructive examples that have already been worked out and demonstrate the given technique. Without completing an assignment and checking the answer, I'm never sure if I actually know the material. Something like a common math textbook, in effect, where examples are worked through and then problems are assigned, with answers to check your work, is what I have in mind. Of course, the slicker the interface and software, the better, though I find videos to often be needlessly long. A comprehensive course structure is preferred to a laundry list of lessons. Beyond Excel, what other educational resources would you recommend to learn other tools/platforms? My background: I'm new to Kaggle. While I don't anticipate partaking in any competitions just yet, I'd like to enhance my Excel skills. I currently work as an analyst of sorts, and it involves various modeling, forecasting and regressions within Excel.
I've taught myself how to do various things as the need arose, such as scenario tables, Solver, lookups, and regression analysis. Relative to the average user, my skills are advanced, but to a seasoned data analyst, I might qualify as a beginner. So I'd like to learn more.",0,None,1 Comment,Sat Dec 24 2011 00:04:48 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1191,None,None /byang1,"Gaps between validation, public, and private leaderboard scores","If you read [Link]:http://www.kaggle.com/c/AlgorithmicTradingChallenge/forums/t/1111/milestone-entries-and-reviews, you'll see that for the Nov 30 prize we have these public leaderboard scores: Xiaoshi Lu, 0.76133, 1st on public leaderboard; Alec Stephenson, 0.77847, 5th on public leaderboard; and for the Dec 22 prize we have: Xiaoshi Lu, 0.75567, 1st on public leaderboard; alegro, 0.78206, 20th on public leaderboard. Yet evidently both Alec Stephenson and alegro had better private leaderboard scores, despite a large lag in public scores. So it looks like in this competition the public score has little bearing on the private score; it's not even a rough indicator. I guess the rumors of large, seemingly random differences between local validation and public leaderboard scores in this competition are true. Anyone care to comment?",0,None,5 ,Sat Dec 24 2011 00:20:30 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1192,/competitions/AlgorithmicTradingChallenge,None /wildutah,Controller in hands,"Will all the videos have the human holding that black controller in one hand? Will it always be the right hand? Note: In the third set of videos, the human puts down the controller before each gesture, and operates it with the left hand. I'd still like to know if the controller will reliably be part of every video and if any other object will be held in any of the videos.",0,None,1 Comment,Sun Dec 25 2011 00:34:10 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1193,/competitions/GestureChallenge,None /bensalel,reading the files to R,Hi! I want to take part in this competition but I don't know how to read those big CSV files into R. I use read.csv but R tells me that I have a memory problem. What can I do??? Elad,0,None,13 ,Wed Dec 28 2011 09:36:12 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1195,/competitions/WhatDoYouKnow,None /tritonsd,Best CBD for a single method,"I was wondering how well a single method performs on this dataset. My best single method is matrix factorization inside a logistic function. Validation: 0.25211, Public Leaderboard: 0.25320",0,None,9 ,Wed Dec 28 2011 18:13:54 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1196,/competitions/WhatDoYouKnow,5th /newdogwitholdtricks,Team merge offer,"I obtain a leaderboard score of around 24.5 using a single classifier. No special attribute pre-processing, missing value handling or use of the test data. If anyone thinks that we can help each other, he/she is welcome to contact me. I assume such an offer is allowed according to the competition rules.
If not, I am sorry.",0,None,4 ,Wed Dec 28 2011 22:49:09 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1197,/competitions/DontGetKicked,41st /challengeadmin,Win a free Kinect,"Win a free Kinect if you are among the first 10 people who make an entry that outperforms, on the validation data, the sample code entry: sample_predict.csv Benchmark [Link]:http://www.kaggle.com/c/GestureChallenge/Leaderboard#0.59978 To claim your prize, send events@chalearn.org a screenshot of the leaderboard showing your entry and identify your entry.",0,None,12 ,Thu Dec 29 2011 18:46:06 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1200,/competitions/GestureChallenge,None /william27672,Best Software,"Hi Guys, I am relatively new to Kaggle and the data mining community, but I am really interested in this field! I would like to know what kind of software you use for database management, data manipulation and visualization, machine learning, etc. Thanks a lot!",0,None,4 ,Fri Dec 30 2011 03:59:37 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1201,None,None /erdman,What happened to the Claim Prediction Challenge?,"Why is the link to the Claim Prediction Challenge's forums dead? http://www.kaggle.com/c/ClaimPredictionChallenge/forums Coming up on three months since the end of that contest, why has there not been a single ""How I Did It"" published?",0,None,8 ,Sat Dec 31 2011 02:48:04 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1203,None,None /nvedia,Tutorial on data analysis for approaching similar problems on this website,Hi All. I have very good knowledge of Java/Python/C++ and algorithms. 1. Please let me know what the prerequisites are for starting to solve the kinds of problems in these contests. 2. Is there any book/tutorial or forum where I can first practice on simple problems of this kind to get better at solving them? Thanks,1,None,1 Comment,Sat Dec 31 2011 09:23:36 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1204,None,None /ildefons,I can't submit,"Is there any problem with the ""make submission"" button? Today I have tried several times but I didn't succeed...",0,None,1 Comment,Sat Dec 31 2011 18:10:28 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1205,/competitions/AlgorithmicTradingChallenge,1st /fuerve,Operationalizing a winning model,"I'll disclaim my post by stating that I am a total neophyte to statistics and machine learning (and I'm very much enjoying learning by participating in these competitions). I finally got a chance to carve out some time to start checking out this competition. I've never encountered Rasch analysis before (no surprise there), so I stepped through the benchmark code a bit and read some documentation on it to get an idea of how the problem might be approached. I've also appreciated the input of folks like Yetiman and others in other threads in this forum. I'm not a professional machine learning practitioner (yet), but I am a practicing software engineer. Looking at the way the model is fitted and the data that were provided, a few things come to mind about which I'm interested in soliciting opinions, both for my own edification and for the purpose of better understanding the requirements of this and other competitions (and real-world ML problems).
None of this is intended as nasty, harsh criticism so much as, well, flowery, non-harsh criticism with puppies and rainbows. The user ID as an input seems hokey in the long run for any model fit offline, unless you're planning to refit that model periodically with new data (expensive, highly latent, poor customer satisfaction!). This is my intuition, but I'd be interested to hear if it has been proven otherwise. The benchmark prediction code throws away the user strength number altogether if the user has never been seen before and relies entirely upon the question strength. Would it make more sense, say, to impute the median strength of all known users in place of an empty value (getting closer to a recommender system here)? Does the competition metric (CBD) essentially bias toward offline models that overfit the test set? I know that the topic of the nature of the data (timestamps, for instance) has already been broached, so I won't drag that discussion into this question. That notwithstanding, I'll be interested to see (and I'm probably going to get started on an implementation here soon) how well an online(ish) recommender system measures up against various offline models in terms of the competition metric. But in Grockit's case, where easing the path for integrating new users into the system is probably a primary operational goal (please tell me if it's not), the competition metric itself seems to yield no information about preference for systems that effectively balance cost and efficacy. I know I'm being a stickler with that last one. I certainly don't have any better ideas, as far as some mystical faery metric that sprinkles magic information dust all over your data about how lean a model can be to run and how happy its users will be. I wish I did. In the case of other competitions, it hardly matters, but in this case, it seems very pertinent. Or maybe I'm totally wrong. I'd really like to know. Anyway, good luck and happy hunting, all.",0,None,5 ,Sat Dec 31 2011 20:23:50 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1206,/competitions/WhatDoYouKnow,57th /alecstephenson,Parallelized Random Forests,"I've never looked into parallel programming, but inspired by code from Joe Malicki and the new R package 'parallel', I had a go at writing functions for parallelized random forests. The function parRandomForest1 uses forking, which is fast but is not available under Windows (apparently it will run, but not in parallel), whereas parRandomForest2 will work on any operating system. The function detectCores() returns the number of cores on your machine, but it is not foolproof, so replace it with the actual number if you know different. The seed argument is for reproducibility. The data I am using is the file cs-training distributed in the recently completed 'Give Me Some Credit' competition, which can be downloaded here: [Link]:http://www.kaggle.com/c/GiveMeSomeCredit/Data. I'm running Linux (Debian Squeeze) on a fairly old laptop with a 2GHz Intel Core 2 Duo T7300 processor, so I only have two cores. You can see that the speed-up with forking is moderate, going down from about 36 (one core) to 28 (both cores) seconds of elapsed time. Any suggestions for code improvements or speed-ups? What timings do you get on your system?
library(randomForest)
library(parallel)
options(mc.cores = detectCores())

train <- read.csv(""cs-training.csv"")[,-c(1,7,12)]
train[,1] <- factor(train[,1])

parRandomForest1 <- function(xx, ..., ntree = 500, mc = getOption(""mc.cores"", 2L), seed = NULL)
{
  if(!is.null(seed)) set.seed(seed, ""L'Ecuyer"")
  rfwrap <- function(ntree, xx, ...) randomForest(x=xx, ntree=ntree, ...)
  rfpar <- mclapply(rep(ceiling(ntree/mc), mc), rfwrap, xx=xx, ...)
  do.call(combine, rfpar)
}

parRandomForest2 <- function(xx, ..., ntree = 500, mc = getOption(""mc.cores"", 2L), seed = NULL)
{
  cl <- makeCluster(mc)
  if(!is.null(seed)) clusterSetRNGStream(cl, seed)
  clusterEvalQ(cl, library(randomForest))
  rfwrap <- function(ntree, xx, ...) randomForest(x=xx, ntree=ntree, ...)
  rfpar <- parLapply(cl, rep(ceiling(ntree/mc), mc), rfwrap, xx=xx, ...)
  stopCluster(cl)
  do.call(combine, rfpar)
}

system.time(RF1 <- randomForest(train[,-1], train[,1], ntree=100, sampsize = 50000, replace = FALSE, nodesize = 50, mtry = 4, classwt = c(0.5,0.5)))
# user system elapsed
# 36.062 0.120 36.203

system.time(RF2 <- parRandomForest1(train[,-1], train[,1], ntree=100, sampsize = 50000, replace = FALSE, nodesize = 50, mtry = 4, classwt = c(0.5,0.5)))
# user system elapsed
# 36.902 0.652 27.724

system.time(RF3 <- parRandomForest2(train[,-1], train[,1], ntree=100, sampsize = 50000, replace = FALSE, nodesize = 50, mtry = 4, classwt = c(0.5,0.5)))
# user system elapsed
# 1.301 0.292 30.480",1,None,3 ,Sun Jan 01 2012 03:51:44 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1207,None,None /rajstennajbarrabas,A technical question on Kinect color data,"If I take any frame of any video, such as the first frame of devel01/K_1.avi, and extract it as a bitmap, I get a 320x240 B&W image with 211 unique colors. I make a histogram of the colors. Essentially, I count the number of pixels where Red == 1, the number where Red == 2, and so on. I've excerpted the beginning part of this table below. The left-hand column is the color index, with ""0"" being ""black""; the next columns are Red, then Green, then Blue. There are a large number of counts assigned to RGB=[0,0,0], reflecting the proportionally large number of black pixels in this image. If you look at any individual column, you'll notice that there is a regular repeating ""0"" every seven positions. In the red column, these zeroes are at positions 5, 12, 19, 26, and so on. This pattern goes on for the entire histogram, and is fixed for all scenes in all videos I've examined so far (a very large number... it hasn't finished yet.) Normally I would consider this a quantization effect of the hardware. The hardware only has a resolution of 211 possible depths, and this is mapped into the 256 range of an RGB image. Zeroes are inserted, no problem, this is standard hardware behaviour. My question is this: If you compare the three columns, note that the Green and Blue columns also have zeroes... but these are offset from the zeroes in the red column. The green zeroes are two ahead of the red ones, and the blue zeroes are one behind. Taking a color as an example, there are 22 pixels which have Red == 6. All of these have Green == 8 and Blue == 5 in the image, indicating that the colors match the offsets in the table as well as the zeroes. I'm led to the hypothesis that the Kinect data has a bug in the transcription mechanism.
The underlying hardware generates depth information quantized to 211 levels, but when this information is translated into RGB the color channels get ""out of sync"" with each other. The normalization algorithm is somehow skewed - averaging the R+G+B components is incorrect - one of the channels is correct, and the skewness of the other channels will introduce noise into the renormalization calculation. It would appear that the correct normalization procedure is to use one channel (probably Green) to calculate the original depth data. Taking Red == 1 as an example, the corresponding Green should be 3 and Blue should be -1. All pixels which have Red == 1 have Green == 3 and Blue == 0. This indicates that the offset in color intensities is clamped to the endpoints of the range. If this is indeed a transcription problem, correcting it would remove the clamping effect from the range endpoints, resulting in better resolution at the extreme ends of the measurement spectrum.
0: 1270 1132 1297
1: 27 98 17
2: 17 40 15
3: 15 27 15
4: 15 17 0
5: 0 15 22
6: 22 15 14
7: 14 0 13
8: 13 22 15
9: 15 14 13
10: 13 13 9
11: 9 15 0
12: 0 13 2
13: 2 9 5
14: 5 0 4
15: 4 2 2
16: 2 5 4
17: 4 4 3
18: 3 2 0
19: 0 4 2
20: 2 3 2
21: 2 0 2
22: 2 2 3
23: 3 2 4
24: 4 2 4
25: 4 3 0
26: 0 4 4
27: 4 4 2
28: 2 0 7
29: 7 4 2
30: 2 2 7
31: 7 7 5
32: 5 2 0",0,None,1 Comment,Sun Jan 01 2012 05:07:59 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1208,/competitions/GestureChallenge,46th /zygmunt,Skewed data,"You surely noticed that the data set in this challenge is seriously skewed: the negative class outweighs the positive 5:1. (That's a good thing, actually, because many more buys turn out good than bad). So, how do you deal with this? What are your precision and recall?",0,None,1 Comment,Sun Jan 01 2012 23:33:43 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1209,/competitions/DontGetKicked,None /rajstennajbarrabas,A question on color correlation,"Consider the devel03 training files. M_1 shows the actor wearing a black bib which covers most of his body. Test videos of this gesture have him wearing a white T-shirt. In M_2 he has the bib on his lap, and is wearing a patterned shirt. Test videos of this gesture have him wearing a white T-shirt. In M_3 .. M_6 there is no bib but there is a shirt. Test videos of these gestures have him wearing a white T-shirt. In M_7 .. M_8 he is wearing a white T-shirt. Just to be clear: you would like a system which trains on color images where the actor is wearing one set of clothing, and then interprets gestures when the clothing is significantly different. This is correct, yes?",0,None,1 Comment,Mon Jan 02 2012 02:11:04 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1210,/competitions/GestureChallenge,46th /anup13958,Submission Problem,"I am not able to submit my csv file. Following is the message I get: ERROR: The value ' 11 6 ' in the required column 'Column2' must be a string matching the regular expression ""^([1-9][0-9]?\s){0,49}([1-9][0-9]?)?$"". (Line 1, Column 13)",0,None,3 ,Mon Jan 02 2012 03:19:59 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1211,/competitions/GestureChallenge,19th /dejavu,Will all submissions be scored?,"What feedback can we expect following the deadline, e.g. will all submissions be scored? Will the test data be released? I would hope for both.
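On the skewed-data question above: two standard options are down-sampling the majority class or re-weighting it, and the randomForest arguments already used elsewhere in this thread (sampsize, strata, classwt) support both. A minimal R sketch, assuming a data frame df with a two-level factor column IsBadBuy (the column name is illustrative):

library(randomForest)

# Option 1: balanced down-sampling - draw equal numbers from each class.
n_min <- min(table(df$IsBadBuy))
rf_down <- randomForest(IsBadBuy ~ ., data = df,
                        strata = df$IsBadBuy, sampsize = c(n_min, n_min))

# Option 2: keep all rows but give the classes equal prior weight.
rf_wt <- randomForest(IsBadBuy ~ ., data = df, classwt = c(0.5, 0.5))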
Thanks,",0,None,2 ,Mon Jan 02 2012 06:40:28 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1212,/competitions/AlgorithmicTradingChallenge,9th /del=92525096498f3bbd,Depth Image Vs RGB Image,In case anyone is curious: it seems like the depth videos serve as better images than the RGB images based on using the sample code.,0,None,5 ,Mon Jan 02 2012 19:19:33 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1213,/competitions/GestureChallenge,None /domcastro,Item Response Theory,"Hi, I know there's an introductory post about this, but I think the forum is broken and is not allowing me to go to the 2nd page of topics. Anyway, I'm having trouble understanding IRT and reverse-engineering the code. Could someone, in simple terms, explain to me how the user ability and question difficulty are worked out? ta EDIT: Are these just the ""random effects"" output from LME?",0,None,4 ,Mon Jan 02 2012 20:47:33 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1214,/competitions/WhatDoYouKnow,None /nicyeg,Can you provide the skeleton data?,"Kinect generates skeleton data, from which the positions of the hands, head and elbows can conveniently be found. Why is the skeleton data not included in the samples? We would need to spend a LOT of time implementing and testing well-known existing image-recognition techniques to find the hands and remove the body, which would eventually be re-inventing the wheel (probably a flat one). Wouldn't it be better to let us focus on the gesture recognition?",0,None,3 ,Tue Jan 03 2012 05:32:42 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1215,/competitions/GestureChallenge,None /pwfrey42,Binary Outcomes,"I keep wondering why the sponsors of many contests choose to provide a binary outcome. In the Don't Get Kicked! competition, each auction purchase in the training file is scored as a 1 or 0, where a 1 signifies a bad buy. Would it not make more commercial sense to score each case in terms of the profit or loss on each purchase? This should make the prediction models more valuable, since they would discriminate between small losses and big losses and also between small profits and big profits. One would think that the auto dealers who are involved in the auctions would be able to generate this information and would benefit from prediction models that can make finer distinctions.",0,None,3 ,Tue Jan 03 2012 18:52:00 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1216,/competitions/DontGetKicked,2nd /anup13958,Blank / Total Black videos,I am wondering if videos like valid01 K_24.avi should be considered an error in the given data. What do you expect to be the label of such videos?,0,None,4 ,Tue Jan 03 2012 20:11:08 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1217,/competitions/GestureChallenge,19th /gaborfodor,final leaderboard score,"Hi guys! I read on the submission page: ""Note: You can select up to 5 submissions that will be used to calculate your final leaderboard score. If you do not select them, up to 5 entries will be chosen for you based on your most recent submissions.
Your final score will not be based on the same exact subset data as the public leaderboard, but rather a different private data subset of your full submission."" How will the final score be calculated from the five submissions? (max or avg or something else?)",0,None,3 ,Wed Jan 04 2012 11:31:47 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1220,/competitions/DontGetKicked,56th /del=92525096498f3bbd,Questions about Sample Code,"Main.m Line 39: truth_dir = []; % Where the missing truth labels are... Not sure what this is referring to. Is this the actual labels for the data? Line 72: recog_options={'test_on_training_data=1', 'movie_type=''K'''}; Are these parameters/assignments to the variables in the recog_template? Lines 126-129: % Load training and test data dt=sprintf('%s/%s', data_dir, set_name); if ~exist(dt),fprintf('No data for %s\n', set_name); continue; end D=databatch(dt, truth_dir); Is both training and testing data provided for the valid data set? I thought that for the valid data the testing is done when you submit your csv file online? Lines 134-138: % Split the data into training and test set Dtr=subset(D, 1:D.vocabulary_size); Dte=subset(D, D.vocabulary_size+1:length(D)); TrLabelNum(i)=labelnum(Dtr); TeLabelNum(i)=labelnum(Dte); Why would you split the data, since you are already providing separate training data and testing data? For the valid data isn't the testing done when you submit the csv file online? So why would we need to split the data into training and testing sets? Lines 144-146: % Train a trivial model tic [tr_resu, mymodel]=train(recog_template(recog_options), Dtr); What does ""tic"" mean? So we basically want to change the recog_template code to change the model? MAY HAVE MORE QUESTIONS ABOUT THE REMAINING FILES.... Thanks!",0,None,9 ,Wed Jan 04 2012 20:01:18 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1221,/competitions/GestureChallenge,None /mncoder,Language used,I am a C++ programmer by profession and am new to this type of data mining. I am just curious about what sort of programming languages/environments people would recommend for analyzing this type of problem? (Keeping in mind I am a newbie when it comes to data mining.) Thanks!,0,None,12 ,Wed Jan 04 2012 21:00:01 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1222,/competitions/DontGetKicked,None /teaserebotier,"Submission format/feedback help, please?","I made my first two subs today. The first one choked because my newlines were '0A' terminations instead of the text file '0D0A', and I'd put spaces behind the commas for readability. I had to open both the sample linear sub and my own file in a hex editor to see the newline problem, so I changed the output program, and as far as I can tell the format of my file and the example are similar EXCEPT that I use 4 decimals on all prices, and the prices are NOT rounded to the quotation quantum. Both submission files open well under Excel, by the way. It would really help to have an exact description of what works and what does not. Am I allowed to have a high-precision answer? Can it be in a continuous range, or must I round off my prices to the closest penny or half-penny? Time is running out and I can't afford trial and error.
The second file returned the following cryptic error message: ""We've run a quick competition, and found no meaningful pages in your URL dataset."" It would really be nice to get credited the 2 submissions, or at least not have submissions counted until AFTER the parsing is successful. Thank you!",1,None,5 ,Wed Jan 04 2012 21:25:59 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1223,/competitions/AlgorithmicTradingChallenge,91st /daveg1,Calculus formulae in output?,Do we need to use your calculus formulae in our output? If so how do we program them?,0,None,1 Comment,Wed Jan 04 2012 23:20:07 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1224,/competitions/hhp,None /prmlgreat,running FFgrab problem!,"Dear all: When I run FFgrab, it fails with the message ""not a valid mex,............, not a valid win32 application"". My system is Windows XP and my Matlab version is 7.7.0 (R2008b). Can anyone help with this? Thank you. Best, Frank",0,None,2 ,Thu Jan 05 2012 04:57:47 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1225,/competitions/GestureChallenge,None /timveitch,Congratulations to the Winners!,"Well done to all the prize winners! There was a lot of meat to that dataset, so the prize money will be thoroughly deserved! I'm interested to find out what features people found important... and what techniques worked for people... Again, congrats!",0,None,34 ,Fri Jan 06 2012 01:21:21 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1227,/competitions/DontGetKicked,5th /venki16197,Results for test data,Will the results of the test data be made available to the public so that a good dataset will be available for researchers?,0,None,8 ,Fri Jan 06 2012 18:25:18 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1228,/competitions/DontGetKicked,88th /davidg1,Question on depth data,"I'm starting to look at the depth data from the Kinect. It seems that it's given in a 320 x 240 x 3 RGB-style format. I'm not sure if I get this right, but the R-G-B values for the same depth pixel seem to differ from one another by small values. I'm not sure where the difference comes from; I'm afraid I'm missing something more fundamental here :) I did not find information besides the depth scaling at the end of the readme for each batch. From the Readme:
set num mini maxi resol acc
devel 1 801 1964 76 2
Are there units attached to the min/max depth? (cm and mm do not seem logical) Thanks! David",0,None,2 ,Fri Jan 06 2012 21:24:14 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1231,/competitions/GestureChallenge,None /zachmayer,"Convert the tags string to a 0/1 matrix, with column for each tag","Here's a useful R function to convert the tags field to a 0/1 matrix, with a column for each tag. Right now it's very slow, but I'd love to hear any suggestions you have for improving it (one vectorized alternative is sketched further down). Note that ""Data"" is an arbitrary object, such as ""valid_training"" or ""valid_test,"" etc.
allTags <- 1:281
tagList <- lapply(strsplit(Data[,'tag_string'], ' '), as.numeric)
tagList <- lapply(tagList, function(x) as.numeric(allTags %in% x))
tagMatrix <- matrix(NA, nrow(Data), length(allTags))
system.time(lapply(1:nrow(Data), function(i) {
  tagMatrix[i,] <<- tagList[[i]]
  return(NULL)
}))
colnames(tagMatrix) <- paste('T', allTags, sep='')",5,bronze,13 ,Fri Jan 06 2012 23:18:47 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1232,/competitions/WhatDoYouKnow,None /zachmayer,R function for capped binomial deviance,"I dunno if this has been posted here yet, but it's very useful: [Link]:http://www.kaggle.com/c/PhotoQualityPrediction/forums/t/1013/r-function-for-binomial-deviance/6392#post6392",0,None,2 ,Fri Jan 06 2012 23:21:14 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1233,/competitions/WhatDoYouKnow,None /podopie,Zeroes found in outcome?,"Hey guys, Curious if anyone's found an explanation for why zeroes exist in the outcome table? I'd say they represent missing data, but since we know the test and question information, as well as whether the question was answered or not (skipped or whatever the case may be), I'm not sure where these NA values came from (there are about 250 of them). Would love to hear theories/ideas. Thanks!",0,None,3 ,Sun Jan 08 2012 02:12:46 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1234,/competitions/WhatDoYouKnow,42nd /del=92525096498f3bbd,Using open source libraries?,Are we allowed to use open-source libraries such as OpenCV? Thanks!,0,None,4 ,Sun Jan 08 2012 18:50:17 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1235,/competitions/GestureChallenge,None /karmicmenace,Winning Algo/Code,"Congratulations to the top winners! It has been a fun competition. Question for Kaggle (this is my first competition, so bear with me if it has been answered before): would Kaggle publish the top winning algo/code? And it would be nice if there were a way for competitors willing to share their algo/code to do so with other interested participants. The final swing in the results was interesting! I didn't expect that. Maybe you should hold another betting competition for each of these to predict the winner :)",21,None,52 ,Mon Jan 09 2012 01:24:28 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1236,/competitions/AlgorithmicTradingChallenge,74th /iguyon,"Full dataset released: 50,000 gestures!","We posted additional data on the data page. Altogether we now have 50,000 gestures (480 development batches and 20 validation batches). Note: all the data posted on Kaggle is in a quasi-lossless AVI format. For those of you who have less patience to download, there is a lossy-compressed version available at: http://gesture.chalearn.org/data (and also a pretty color movie I am proud of). We will send USB memory sticks with all the quasi-lossless compressed data, at the participant's cost, upon request. Send email to events@chalearn.org.",0,None,1 Comment,Mon Jan 09 2012 08:42:07 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1238,/competitions/GestureChallenge,None /selina,databatch function in the sample code crashes Matlab,"Hi, I have a problem with the databatch file. I tried two versions of Matlab, R2009a and R2011a.
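A possible speed-up for the tag-matrix function posted above: since the indicator vectors are already computed, the row-by-row assignment loop can be replaced by stacking them in one call. This is a sketch recomputing from Data with vapply (the allTags vector and Data object are as defined in that post):

# Vectorized alternative: build the 0/1 matrix in one step instead of looping.
tagList <- strsplit(Data[,'tag_string'], ' ')
tagMatrix <- t(vapply(tagList,
                      function(x) as.numeric(allTags %in% as.numeric(x)),
                      numeric(length(allTags))))
colnames(tagMatrix) <- paste('T', allTags, sep='')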
Only once while reading did it prompt: libavbin.so is installed but seems to be the 32-bit version, would you like to correct it? After choosing Y, the function runs. But all the other times, Matlab just crashes immediately while running that function. Could you give me some hint about it? Thank you. Selina",0,None,1 Comment,Mon Jan 09 2012 09:33:30 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1239,/competitions/GestureChallenge,None /johnjohnason,How many males and females are there in the data? (release 3),"Hello there, Can anyone tell me how many males and females there are in the data (release 3)? I want to do sampling on the smallest data, and the program I use can't count them. Thanks!",0,None,1 Comment,Wed Jan 11 2012 00:20:35 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1266,/competitions/hhp,None /blindape,I found this useful.,"[Link]:http://www.tricare.mil/tma/tai/downloads/cpt_codes.xls [Link]:http://en.wikipedia.org/wiki/List_of_ICD-9_codes I don't know if this is considered ""external data"", but I would like to share it in accordance with the prize rules.",0,None,2 ,Thu Jan 12 2012 22:40:36 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1268,/competitions/hhp,3rd /yarreg,A question about algorithm licensing,"Let's assume that I use some ML system (like SVM or TreeNet) to train a prediction formula. Once the formula is trained, it may be used alone, without any ML algorithms. Then what should the ""Prediction Algorithm"" described in the rules, for which I should grant full rights to the Sponsor, contain? Should it contain only the formula, or the ML algorithm as well? Since the formula (and the code which prepares data for it) is enough to use the model for predicting, may I consider it alone (without the ML algorithm used to train it) to be the Prediction Algorithm? Thanks, Yaroslav",1,bronze,8 ,Fri Jan 13 2012 14:32:59 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1270,/competitions/hhp,None /cbusch,First Milestone Solution,"This question pertains to the solution provided by Market Makers. Their modelling_set table appears to present the data in a manner that only includes variables for a single year in the model. So, Y1 claims are used to predict Y1 LOS and Y2 claims are used to predict Y2 LOS. Would it not be beneficial to use both Y1 and Y2 claims to predict Y2 LOS?",0,None,1 Comment,Sat Jan 14 2012 17:11:49 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1272,/competitions/hhp,266th /boooeee,Leaderboard Sidebar Issue?,"I logged in today and noticed that the sidebar leaderboard listed me as 7th. But when I click through to the actual leaderboard, I find my expected rank of 14. Anybody else seeing this kind of discrepancy?",0,None,2 ,Sat Jan 14 2012 23:30:04 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1273,/competitions/hhp,16th /vielauge,Hard first day with R -,"Dear Listeners, I had a hard first day with 'R'! (Half a day importing the data and another half day with the following beginner problem.) Background: For the last ten years I have worked with SAS (at work), and the change to R is not so easy. Being used to data steps, I was not able to do some really easy computing and merging.
What I was trying to do: compute the mean question difficulty for each question_id with 'questiondiff<-tapply(training$correct,training$question_id,mean)'. So far so good - the result is a vector with the right numbers - but I am not able to get the second dimension for merging it (1 to many) with the original training data by question_id for further steps. Do I have to tell tapply that I still need the question_id? Or maybe I am just too stupid for merging in R and there is no dimension problem at all... and so all the ideas in my head for the real job have to wait... I hope somebody can give me a hint. Thanks for your time. For me it's bedtime. Good night. Greetings from Germany - Vielauge",0,None,4 ,Sun Jan 15 2012 02:22:03 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1274,/competitions/WhatDoYouKnow,155th /mattfrancis,A (possibly stupid) question about RF,"Hi All. I haven't used random forests much before (I only heard of them via the Give Me Some Credit competition), but I have a question about using them for regression. This seems as good a place to ask as anywhere else on the internet! It appears that when in regression mode, each potential split is examined using MSE to find the optimal predictor to use for that split and where to place the threshold. This assumes a specific (Gaussian) error distribution for the data. However, when using this tool for binary predictions, MSE is not the best measure. Instead the Bernoulli distribution ought to be used, at least that's what I would like to be able to do. Other circumstances may call for other distributions. If you compare this to using a linear model instead of an RF, one would want to use a GLM with an appropriate error distribution if the predicted variable was binary, rather than using simple least squares. So my question is firstly, does my question make sense? Have I misunderstood the workings of RF in regression mode? If there is some semblance of sense in the question, does anyone know of an implementation of random subspace methods that is essentially the same as RF, but allows any arbitrary function to be provided for evaluating the 'best' split when constructing trees?",1,bronze,8 ,Mon Jan 16 2012 03:51:20 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1276,None,None /del=92525096498f3bbd,Having trouble connecting kinect to ubuntu,So I finally went out and bought a Kinect! Woot! I have Ubuntu 10.10 but I think there is a problem with the USB connection. I have tried all the USB ports on my machine. The light on the Kinect is blinking. The error message says 0 devices connected when I run glview from the bin directory. I also tried the sudo command. I wonder if I have the wrong version of libusb?
aniket@ubuntu:~$ lsusb -V
lsusb (usbutils) 0.87
Here is the output when I run lsusb -sv with the device name:
Bus 002 Device 007: ID 045e:02b0 Microsoft Corp.
Device Descriptor:
  bLength 18
  bDescriptorType 1
  bcdUSB 2.00
  bDeviceClass 0 (Defined at Interface level)
  bDeviceSubClass 0
  bDeviceProtocol 0
  bMaxPacketSize0 64
  idVendor 0x045e Microsoft Corp.
  idProduct 0x02b0
  bcdDevice 1.05
  iManufacturer 1 Microsoft
  iProduct 2 Xbox NUI Motor
  iSerial 0
  bNumConfigurations 1
  Configuration Descriptor:
    bLength 9
    bDescriptorType 2
    wTotalLength 18
    bNumInterfaces 1
    bConfigurationValue 1
    iConfiguration 0
    bmAttributes 0xc0
    Self Powered
    MaxPower 100mA
    Interface Descriptor:
      bLength 9
      bDescriptorType 4
      bInterfaceNumber 0
      bAlternateSetting 0
      bNumEndpoints 0
      bInterfaceClass 255 Vendor Specific Class
      bInterfaceSubClass 0
      bInterfaceProtocol 0
      iInterface 0
Device Status: 0x0000 (Bus Powered)
I believe the device status should be 0x0001? Any help appreciated.,0,None,3 ,Mon Jan 16 2012 20:24:46 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1278,/competitions/GestureChallenge,None /robert2012,How is this possible?,There are a few thousand people with a claim in year 2 who have claims with fields set as follows:
Speciality: Surgery
PlaceSvc: Ambulance
Length of stay: 1 day
Nearly all have a three or four letter procedure code beginning with 'S' - i.e. confirmation they had surgery. How can it be that so many have DIH as zero? The only thing I can think of is that they died after the surgery but before they spent a full day in hospital.,0,None,4 ,Tue Jan 17 2012 11:30:05 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1281,/competitions/hhp,None /zachmayer,Calculating the probability that a patient associated with an entity will visit the hospital,"In their [Link]:https://kaggle2.blob.core.windows.net/competitions-data/hhp/2496/Market%20Makers%20-%20Milestone%201%20Description%20V2.pdf?se=2012-01-17T18%3A06%3A25Z&sr=b&sp=r&sig=tgKTgtP1%2FVQ%2BQOCasq%2B6ATuLfpwhNcyAXrj8YyxYcNQ%3D, the team ""market makers"" say the following on page 13: For each Primary Care Physician (PCP), Vendor and Provider, a value was calculated that was the probability that a patient associated with the entity would visit hospital. Each patient was then allocated the highest probability of all the PCPs (Vendors or Providers) that they were associated with, generating 3 fields in total. I'm trying to replicate their methodology. Is this probability only calculated using the claims data, or is the actual days in hospital for the next year merged in as well? Here's my first shot at this. I've already read in the claims data, converted LengthOfStay to numeric, and replaced missing values with zero:
library(plyr)
Claims$visit <- as.numeric(Claims$LengthOfStay > 0)
providerProbs <- ddply(Claims, c('ProviderID','Year'),
                       function(x) c('prob' = mean(x$visit)), .progress = 'text')
The result is the percent of visits to a given provider that resulted in hospitalization:
> head(providerProbs[providerProbs$prob > 0, ], 10)
    ProviderID Year      prob
29       12890   Y1 1.0000000
56       23379   Y1 0.5294118
57       23379   Y2 0.7222222
58       23379   Y3 0.4166667
97       40154   Y1 0.4285714
98       40154   Y2 0.5714286
99       40154   Y3 0.5000000
466     173881   Y1 1.0000000
467     173881   Y2 0.9375000
468     173881   Y3 0.5600000
Am I on the right track? Or should I be using the ""DaysInHospital"" table, rather than ""LengthOfStay"" in the claims table?",0,None,2 ,Tue Jan 17 2012 20:31:16 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1282,/competitions/hhp,9th /shaz24,Reducing the bias in the baseline model,"In my opinion the mixed effects model in the benchmark is an underfit for the item difficulty and the student ability parameters. This is because each student only answers a small subset of the questions in a track (or sub-track).
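Continuing the sketch in the hospital-probability post above, the second Market Makers step (allocate each patient the highest probability across the entities they are associated with) might look like the following, still using plyr. Treating MemberID as the patient key is an assumption about the schema:

# Join the per-provider probabilities back onto the claims, then take the
# maximum probability over all providers each patient visited in a year.
ClaimsProb <- merge(Claims, providerProbs, by = c('ProviderID', 'Year'))
patientMax <- ddply(ClaimsProb, c('MemberID', 'Year'),
                    function(x) c('maxProviderProb' = max(x$prob)),
                    .progress = 'text')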
So, the student vs question-answered matrix is highly sparse and might lead to a high-bias estimate. I feel that the estimates could be more accurate if you find dense sub-regions of the matrix and then apply the parameter estimation techniques within each sub-region. Has anyone tried something like this? Any thoughts?",0,None,1 Comment,Wed Jan 18 2012 15:29:50 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1283,/competitions/WhatDoYouKnow,147th /benhamner,External Data,"As the [Link]:http://www.kaggle.com/c/asap-aes/details/Rules state, [quote] You are free to use publicly available dictionaries and text corpora in this competition. If you would like to use any other external data source, verify that this is permissible by posting in the forums or sending a private message first. Please use this forum thread to check whether additional external data is permissible. Also, feel free to let other competitors know what text corpora or dictionaries you have found useful here!",0,None,38 ,Fri Jan 20 2012 03:09:34 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1287,/competitions/asap-aes,None /mohit29765,package ‘ime4’ is not available (for R version 2.14.1) ,"I am very new to R, and I am getting this error when I try to install the ime4 package in R. Any help will be heartily appreciated.",0,None,4 ,Fri Jan 20 2012 03:41:46 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1288,/competitions/WhatDoYouKnow,43rd /wcukierski,Scoring Metric Verification,The scoring metric for this contest is a little more involved than most! It would be helpful (and probably prevent many redundant forum posts) if Kaggle could post a dummy submission and its weighted kappa score for the training data we have. That way we can know the evaluation code is correct. Thanks!,0,None,18 ,Fri Jan 20 2012 05:02:46 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1289,/competitions/asap-aes,2nd /pixelm,Looking to join a team,"Hello, I am looking to join someone's team. Please contact me if you would like an additional member on your team. Regards, Mansi",0,None,18 ,Fri Jan 20 2012 17:40:05 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1290,/competitions/WhatDoYouKnow,43rd /martinoleary,Truncated essays?,"From a quick glance at the data, it looks like there are a number (~170) of essays which are truncated at 255 characters. A good example of this is essay 472, which receives a full 12 marks, despite consisting of just a sentence and a half. Is there any chance of an update to the data which fixes this issue, or should we just work around it and treat it as a normal data cleaning problem?",2,bronze,9 ,Fri Jan 20 2012 18:23:54 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1291,/competitions/asap-aes,6th /benhamner,Data Set Releases and Updates,"We'll be using this thread to announce any modifications to the data sets, along with additional releases. In order to keep this thread clean, please start a new thread for any issues you may find with the data, or any questions that you have (e.g.
[Link]:http://www.kaggle.com/c/asap-aes/forums/t/1291/truncated-essays)",0,None,3 ,Fri Jan 20 2012 22:14:35 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1292,/competitions/asap-aes,None /challengeadmin,10 more free Kinects!,"If you have not already won a free Kinect, here is your chance! We are offering 10 more free Kinects to the first 10 people who are first at some point on the leaderboard, starting January 20, 2012 (today). Each participant can only claim 1 free Kinect (so if you won one already, or if you are several times first on the leaderboard, you cannot get a second free Kinect). To claim your prize, send to events@chalearn.org a screenshot of the leaderboard showing your entry and identify your entry. To address the concern of ""cheating"" by manual labeling, we will then ask you to send us your code for checking. If you do not want to send your code, you cannot win a free Kinect this time...",0,None,4 ,Fri Jan 20 2012 22:23:43 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1293,/competitions/GestureChallenge,None /edramsden,Didn't you always suspect this is the case?,When I was in high school I never did very well at essays and reports. I always suspected the grades were given out on the basis of verbiage rather than content. After making a scatter plot of Score vs. Length (in characters) I no longer suspect this - I am certain!!!! [Link]:https://storage.googleapis.com/kaggle-forum-message-attachments/1790/Score_vs_Length.gif [Link]:https://storage.googleapis.com/kaggle-forum-message-attachments/1791/Score_vs_Length.png [Link]:https://storage.googleapis.com/kaggle-forum-message-attachments/1792/Score_vs_Length.png [Link]:https://storage.googleapis.com/kaggle-forum-message-attachments/1793/Score_vs_Length.gif,0,None,11 ,Sat Jan 21 2012 00:41:46 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1294,/competitions/asap-aes,25th
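The scatter plot above is easy to reproduce; a minimal R sketch, assuming the release-3 training file and its essay and domain1_score columns (note that the score scale differs across essay sets, so per-set plots are more informative than one pooled plot):
train <- read.delim('training_set_rel3.tsv', quote = '', stringsAsFactors = FALSE)
len <- nchar(train$essay)
plot(len, train$domain1_score, xlab = 'Length (characters)', ylab = 'Domain 1 score')
cor(len, train$domain1_score, method = 'spearman', use = 'complete.obs')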
/ohanegby,Good news for students?,I guess that an automatic essay checker can also enable a high-scoring automatic essay writer. Maybe we should expect a follow-up Kaggle competition :),0,None,6 ,Sat Jan 21 2012 08:14:26 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1295,/competitions/asap-aes,None /blindape,Public leaderboard vs private,Could anyone post the private leaderboard scores at milestone 1? I found the public [Link]:http://www.heritagehealthprize.com/c/hhp/Leaderboard?asOf=2011-08-31%2006:59:59 but I am interested in the correlation with the private one. What are your experiences with the correlation between your CV test results and the private leaderboard? Are they systematically worse by roughly 2-3%?,0,None,1 Comment,Sat Jan 21 2012 12:42:00 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1296,/competitions/hhp,3rd /wcukierski,"Adjudication, raters, and predictions","Hey Ben, three questions: 1) In some of the essays, there is a 3rd person who steps in if the ratings are not adjacent: ""If the two scores are non-adjacent, the final score is determined by an expert scorer."" ""If Reader‐1 Score and Reader‐2 Score are not adjacent or exact, then adjudication by a third reader is required."" etc. In such a case, are reader1 and reader2 scores completely ignored? 2) Am I correct in assuming reader1 and reader2 are different people both within essay sets and across essay sets? 3) Clarifying the prediction task: we are to generate one integer ""resolved"" score for each essay's domain 1, as well as domain 2 scores for essay set 2? Does this mean there will be 2 rows per essay for set #2? Thanks!",0,None,6 ,Sat Jan 21 2012 19:36:44 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1297,/competitions/asap-aes,2nd /mohit29765," could not find function ""%dopar%"" ","Hello guys, I am new to R. While using bigmemory + foreach I have encountered this error - could not find function ""%dopar%"" - can anyone help me with this? Here is the code I am using:
#importing packages
install.packages(""bigmemory"")
install.packages(""biganalytics"")
library(bigmemory)
library(biganalytics)
#filling featureUsersAbility Matrix
foreach(user=users) %dopar% {
  foreach(feature_name=feature_names) %dopar% {
    feature=strsplit(feature_name,"" "")
    print(feature)
    #here we are simply putting the correct/total in usersability
    totalRecords=mwhich(training,c(""user_id"",feature[[1]][1]),list(user,feature[[1]][2]),""eq"",""AND"")
    correctRecords=mwhich(training,c(""user_id"",feature[[1]][1],""outcome""),list(user,feature[[1]][2],1),""eq"",""AND"")
    print(totalRecords)
    print(correctRecords)
    if(length(correctRecords)>0){
      feature_UserAbility[user,feature_name]<-length(correctRecords)/length(totalRecords)
    }
  }
}",0,None,6 ,Sun Jan 22 2012 05:46:14 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1298,/competitions/WhatDoYouKnow,43rd
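The error itself arises because %dopar% is exported by the foreach package (bigmemory and biganalytics alone do not provide it), and a parallel backend must also be registered before %dopar% will run in parallel. A minimal sketch using doParallel (the backend choice is just one option):
install.packages(c('foreach', 'doParallel'))
library(foreach)       # provides %dopar%; without this R reports
                       # could not find function '%dopar%'
library(doParallel)
registerDoParallel(cores = 2)   # register a backend, or %dopar% falls back
                                # to sequential execution with a warning
res <- foreach(i = 1:4, .combine = c) %dopar% i^2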
/benhamner,Invalid Essays,"Six out of the eight essay sets were originally handwritten and subsequently transcribed for the purposes of this competition. Any essays containing a fair amount of illegible words should have been flagged and removed from the data. However, the transcription instructions were not followed with 100% fidelity, and some essays in the dataset may contain transcription errors. We have opted to leave essays containing a small amount of illegible words in the training data - you can choose to include these in developing your models or discard them. Many of these can be identified by searching for a series of three question marks (""???"") or the word ""legible."" This should only affect a small percentage of the training data. (Note: if you are searching Excel for ???, Excel treats ? as a wildcard in searches, and ~?~?~? should be used to search for the ? character). A very small percentage of the training data may contain essays that were neither transcribed nor properly flagged for removal. An example is essay 9780 in set 4, where the essay states ""Reserved need to check keenly."" This appears to be a comment inserted by a transcriber and bears no relation to the handwritten essay text. This essay, and any others along these lines, should be removed from the training set. Use this forum thread to identify any other suspicious essays that you come across, and they will be removed in the next release of the training data if necessary. The validation and test sets should contain only essays that were fully transcribed.",0,None,15 ,Sun Jan 22 2012 21:41:29 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1299,/competitions/asap-aes,None /mattfrancis,Troubling Premise,"I hope I don't come across as a complete Luddite here, but the entire premise of this competition is deeply disturbing. No matter how clever the winning solutions end up being, there is no way an automated marking system could reward a very good but very unique response to a question. By training a system on a set of responses, rather than having any idea what the question is actually asking, you can by definition only be rewarded by providing a response that is in some way similar to other high-ranking responses. Those 'outliers' that provide such a unique and innovative response might at worst be penalised for being too clever, or at best receive an arbitrary, essentially random, score. To me, this is entirely against the intellectual spirit of this site, which places innovation above all else. Imagine trying to write an automated system for ranking a Kaggle competition submission, based on the source code that produced the submission compared with the source code and leaderboard score of previous submissions. I'm sure a system could be built in this way that would reward pretty good solutions using standard techniques, but would be most likely to not recognise the very best, most innovative solutions. An education system that used an automated marking system for essays might be able to churn out students with very good SEO skills and promising careers ahead of them writing copy for about.com, but would be unlikely to produce future Kaggle competition winners!",1,bronze,14 ,Sun Jan 22 2012 23:01:35 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1300,/competitions/asap-aes,None /del=92525096498f3bbd,Anyone figure out a way to read the avi file in C#,"Since the Microsoft SDK is in C#, I was trying to code in C# but ran into issues finding a library to read the avi file in C#. OS: Windows 7; Arch: x64; Visual Studio 2010. I tried the following: Emgu and [Link]:http://code.google.com/p/aforge/ but they failed to read the avi file, so it looks like I am going to have to write my own wrapper for opencv from C++ to C#. Anyone have any other suggestions for libraries? Thanks! Aniket",0,None,12 ,Tue Jan 24 2012 06:52:53 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1302,/competitions/GestureChallenge,None /benhamner,Welcome!,We're very excited to launch the Benchmark Bond Trade Price Challenge! Please let us know if you have any questions about the data or competition setup.,0,None,2 ,Fri Jan 27 2012 03:12:04 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1303,/competitions/benchmark-bond-trade-price-challenge,None /dhwanit0,HPN Usecase,"Hello, Can anyone describe the purpose here, to help understand how the HPN claims predictions can benefit the community as well as business organizations at large? Can anyone suggest a solid business use case scenario where the results can be applied specifically to increase ROI? Thanks",0,None,3 ,Fri Jan 27 2012 11:28:42 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1304,/competitions/hhp,None /thomasgenevois,capped binomial problem,"Hello, We have a problem with the following: When we perform the learning and calculate the capped binomial, we get one value, which is pretty good. However, when we submit this result, its value on the website is different, much worse. This isn't a unique case; we tried testing our learning algorithm with different datasets and obtained a capped binomial around 0.25, and it varied very little for those different datasets. When we submitted these results, on the website it was around 0.31. We checked our capped binomial calculation function with the example given online (chess competition) and it was ok. We think it could be connected with the way of submitting the results. Does anyone have any idea what could be the problem? TIA",0,None,4 ,Fri Jan 27 2012 19:10:19 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1305,/competitions/WhatDoYouKnow,213th
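A frequent cause of this kind of local-vs-leaderboard gap is a mismatch in the metric itself (the caps, the log base, or the row/column order of the submission file). As a cross-check, here is a minimal sketch of a capped binomial deviance in the spirit of the chess-competition example the poster mentions; the 0.01 cap and the base-10 log are assumptions, so verify them against the competition's evaluation page:
capped_binomial_deviance <- function(pred, actual, cap = 0.01) {
  p <- pmin(pmax(pred, cap), 1 - cap)  # cap predictions away from 0 and 1
  -mean(actual * log10(p) + (1 - actual) * log10(1 - p))
}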
/salimali,Supplying Code,"1. Winning participants are required to provide code. Can you elaborate on this, please? Is this just to confirm that the winning solution works, after which it is erased, or is our code being 'bought' in return for the prize money? Does this mean all submissions have to be hand-coded from the ground up? Cheers",0,None,11 ,Fri Jan 27 2012 21:26:08 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1306,/competitions/benchmark-bond-trade-price-challenge,None /params,Unable to open .mat files,"Hello Folks, I am pretty new to Kaggle and this is my first competition. I am unable to open the datasets (.mat files) on my mac. Clearly they are not in text format; can you tell me how I can open and view them? Or perhaps export them in some way to .csv/.arff etc.? -- Thanks, Params",0,None,4 ,Fri Jan 27 2012 23:18:37 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1307,/competitions/uci77B,None
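For .mat files that contain plain arrays, the R.matlab package is one route that avoids Matlab entirely; a minimal sketch (the file and variable names are placeholders, and .mat files saved as Matlab dataset/MCOS objects may not be readable this way, as a later post in this thread about missing class definitions suggests):
install.packages('R.matlab')
library(R.matlab)
d <- readMat('train.mat')      # returns a named list of the file's variables
str(d, max.level = 1)          # inspect what was loaded
# if an element is a plain numeric matrix, it can be exported to CSV:
write.csv(as.data.frame(d[[1]]), 'train.csv', row.names = FALSE)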
/sankalp,Point 3 in the rules,Point 3 in the rules is not very clear. Can you please elaborate on what cannot be done? 3. Participants must not attempt to reconstruct full time series data out of the trade examples and previous 10 trades. Doing so in order to obtain the trade price of the test data will be considered cheating and such entries will be disqualified and not considered for a prize. Regards,0,None,8 ,Sat Jan 28 2012 05:39:21 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1308,/competitions/benchmark-bond-trade-price-challenge,None /kolo30502,What is the unit of delay,received_time_diff_last1 = 68185 - is that 68185 milliseconds or microseconds?,0,None,1 Comment,Sat Jan 28 2012 09:11:44 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1309,/competitions/benchmark-bond-trade-price-challenge,None /moysz28232,Usage of Data File,"Hi, All. Sorry if this seems stupid - it's my first time here, and I haven't seen any explanation of the data files. test // is this historical bond trading data? Do we base our model-building on this data, or is it historical trading data used to evaluate the accuracy of the models we build? train // is this the bond trading data used for machine learning / AI training? To me they are all historical data with no difference... is that right? Thank you in advance for explaining this.",0,None,2 ,Sat Jan 28 2012 17:57:02 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1310,/competitions/benchmark-bond-trade-price-challenge,None /moysz28232,"Column ""Weight""","Hi, All. weight: The weight of the row for evaluation purposes. This is calculated as the square root of the time since the last trade and then scaled so the mean is 1. Does that mean the weight data should NOT be in the algorithm that predicts the price, or be part of the parameters of the models we build? We shouldn't use Weight in any way, right?",0,None,13 ,Sat Jan 28 2012 18:03:56 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1311,/competitions/benchmark-bond-trade-price-challenge,None /roseyland,Trade Times,"I have a question about the received_time_diff_last{1-10} parameters. These are the times between the actual trade time of the trade we are trying to predict, and the actual trade times of the last 10 trades. These are not trade reporting times. Is that right? In other words, to calculate the time between the trade and any of the previous 10 trades, I only need the received_time_diff_last parameters, and I do not need to use the reporting_delay parameter. Correct? Out of curiosity, does anyone see much value in the reporting_delay parameter? If we are just trying to predict the trade price, what difference does it make how long it took before the trade was reported after it had already occurred? I suppose it's possible there is a correlation between certain types of trades and their reporting delay, but this seems unlikely to me.",0,None,3 ,Sat Jan 28 2012 19:16:44 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1312,/competitions/benchmark-bond-trade-price-challenge,52nd /moysz28232,Using External Data ?,"Hi, all. Can we use external data, such as historical yield curves of T-Bonds and T-Bills, interest rates / repo rates? Or can we only use the data provided, and calculate any needed quasi-reference data ourselves? Thank you for the explanation.",0,None,3 ,Sun Jan 29 2012 05:54:36 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1314,/competitions/benchmark-bond-trade-price-challenge,None /smcinerney,"Clarifications of received_time_diff, reporting_delay and curve_based_price","For those of us who know nothing about bond trading, could you please clarify the time-related quantities and the trading behavior: Q1) Please define the following two quantities and how they are interrelated to each other (show us an example with sample times in seconds, labeling each event): * reporting_delay: The number of seconds after the trade occurred that it was reported (reported to whom? all other traders? What sort of effect would this have on trading in the bond? Would other (human) traders have guessed or interpolated the trading behavior, or stopped trading this bond? reporting_delay is negative for 30759 records, and for a few outliers it's > +873500000 sec (27.7 years?!), are those meaningful or dirty data? How to treat them?) * received_time_diff_last{1-10}: The time difference between the trade and that of the previous {1-10}. (How is that related to reporting_delay, should we offset by reporting_delay, which reported prices are other traders seeing at which exact time? Or is that not relevant?) ( weight: This one is clear. The [Link]:http://www.kaggle.com/c/benchmark-bond-trade-price-challenge/details/Evaluation. I would have called that Weighted MAE, but hey. (Also the [Link]:http://www.kaggle.com/c/benchmark-bond-trade-price-challenge/forums/t/1311/column-weight/8326#post8326, that it's a proxy for the normalized value of: constant * sqrt(receivedtimediff1+1) ) Q2) For rows with NA for some or all previous trades, does that mean received_time_diff > some large threshold? or just that the data is missing? Q3) Also, how do these timings affect when curve_based_price is calculated, and at what time it is provided and to whom?
Is it retroactively calculated for trades with large reporting_delay? Is curve_based_price your suggested fair price of what trade would be accepted at that time? If we graph curve_based_price vs trade_price over the window (received_time_diff), what does that tell us? Thanks",0,None,18 ,Sun Jan 29 2012 12:06:46 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1315,/competitions/benchmark-bond-trade-price-challenge,None /holzner,questions answered multiple times by same user ?,"It looks like there are entries indicating that the same user has answered the same question more than once, e.g. user_id = 85818 and question_id = 3989, with different outcomes (the variable training is filled by the R benchmark script):
training[training$user_id == 85818 & training$question_id == 3989, ]
    correct outcome user_id question_id track_name subtrack_name
32        0       2   85818        3989          5            14
218       1       1   85818        3989          5            14
Is a question uniquely identified by the question_id or is the value of another column needed ?",0,None,3 ,Sun Jan 29 2012 18:34:45 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1316,/competitions/WhatDoYouKnow,99th
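To see how widespread this is, one can list every (user_id, question_id) pair that occurs more than once in the benchmark's training data frame; a minimal base-R sketch:
key <- paste(training$user_id, training$question_id)
dups <- training[key %in% key[duplicated(key)], ]   # all rows of repeated pairs
dups[order(dups$user_id, dups$question_id), ]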
/vishalsurana,Some questions regarding dataset,"Hello, I was trying to model the data, and have several questions: (1) curve_based_price: A fair price estimate based on implied hazard and funding curves of the issuer of the bond. Is this the estimated price of the bond at the time of maturity? Can you illustrate with an example? (2) We are given the coupon rate and time to maturity for the trade price of only one trade (in a given row). I would imagine that the bond prices during the last 10 trades will also have been affected by these parameters. Will not having this data result in a model which may be good for competition purposes, but not accurate in real life? (3) I would imagine that the data consists of bonds issued by several companies, and hence will exhibit different behavior for different values of coupons, maturity times, etc. There appears to be no direct way of grouping the data based on this particular fact. (4) Does ""customer buy"" mean that a non-dealer bought the bond from a dealer? What is the trade type if ""customer buys from customer (who sells)""? (5) If past trading data is NA, is it the case that the data was taken at the time markets opened for trading? Missing data (how?)? Just curious to know what could be the reasons behind this. (6) What are the reasons behind ""reporting delays""? For e.g. is it correlated to the type of trade, or maybe trades between dealers have smaller reporting delays? (7) Is this data taken from a week's worth of trade? Month? What is the sampling window?",0,None,2 ,Mon Jan 30 2012 09:30:46 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1317,/competitions/benchmark-bond-trade-price-challenge,253rd /jfister,More Truncated Essays,"Looks like the updated data set is still full of truncated essays. For example, check out essays 18, 210, 314, 349, .... (hundreds more) Looks like it often gets cut off at an apostrophe, but also just sporadically.",0,None,2 ,Mon Jan 30 2012 20:41:43 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1318,/competitions/asap-aes,3rd /jfister,Submission Process,"I apologize if this is a dumb question. I've not participated in any other Kaggle competitions, so I'm hoping to better understand the submission process. In the External Data thread, Ben stated, ""For the verification process, we will apply the submitted models from the top 3 preliminary winners to the test data, and check that we can reproduce the test submissions with these models"". This raises the following questions: 1) Are we confined to a particular programming language or environment? 2) Are we supposed to submit something that is as simple as running a shell script? For example, I'm using Ruby for this task, and when all is said and done, generating the final predicted data set will involve importing the test data into a database, extracting the features, and running the predictive models. These steps will involve an environment that includes Linux, MySQL, various Ruby libraries, interfaces with open source packages, etc. From the description, it almost sounds like Kaggle is expecting a self-contained, executable jar file that can be run. Could anyone shed some light on how this process works and what needs to be submitted? If there's already documentation on this somewhere, feel free to point me to it and I'll give it a read. Thanks in advance!",0,None,8 ,Mon Jan 30 2012 22:24:01 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1319,/competitions/asap-aes,3rd /rkirana,kinds of sampling techniques,"I was wondering what different sampling techniques are being used for the validation set? We have valid_training.csv - and to create a validation file from this, I am doing the following: Take the last question that was answered by each user and put it in the validation set. Keep the remaining in training. What are the other methods that users have found effective? (of course for testing, we can use valid_test.csv)",0,None,1 Comment,Tue Jan 31 2012 15:05:59 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1321,/competitions/WhatDoYouKnow,150th
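The last-question-per-user split described above is a few lines in R; a minimal sketch, assuming the rows are already in chronological order within each user (if not, sort by a timestamp column first):
last_row <- !duplicated(training$user_id, fromLast = TRUE)  # final row per user
valid_split <- training[last_row, ]
train_split <- training[!last_row, ]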
/yogurt,id on matlab files,Is the ID column on the matlab(.mat) files correct? The order of the ids seems scrambled.,0,None,1 Comment,Wed Feb 01 2012 05:25:42 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1324,/competitions/benchmark-bond-trade-price-challenge,18th /tritonsd,Helpful Literature,[Link]:http://grockit.com/blog/main/files/2010/02/grockit_2011_methodology_whitepaper.pdf,0,None,3 ,Wed Feb 01 2012 08:06:46 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1325,/competitions/WhatDoYouKnow,5th /bcragin,Conference on 3/30,"According to the Prizes link, ""... one or more of the leading teams may be invited to participate in the Third Stanford Conference on Quantitative Finance on 3/30/2012"", suggesting a possible interim prize. When will this decision be finalized, and how would the selections be made, e.g., based on public/private leaderboard scores?",0,None,1 Comment,Wed Feb 01 2012 19:39:06 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1328,/competitions/benchmark-bond-trade-price-challenge,2nd /benhamner,Modification of Competition Data and Format,"Hi all, We realized that there was a sampling issue with the test data: windows in the test set were not disjointly sampled from the original time series, so these may overlap with windows in both the testing and training sets. I've attached a quick python script that highlights this issue: it obtains a WMAE of 0.21946 simply by matching windows from the test set to overlapping windows from the training set. (This script is fast and basic, and the results could be easily improved). For comparison, the current leaderboard score is 0.95695. I was concerned that this would lead to solutions that overfit the test set, as well as damaging the fairness and integrity of this competition by providing the solutions to the majority of test points. In order to preserve the integrity of the competition and help ensure that constructed models aren't overfitting the final evaluation set, Benchmark Solutions is preparing a new set of data based on bond trades that occurred during a different time window. The training data will consist of the full time series for these trades up to a certain point, linked by the corresponding bond. The test data will consist of disjoint windows of 10 trades that occur after the cutoff for the training data, and you will be predicting the next trade. Any point in the time series for the test set will only appear once in the data. The current competition setup will remain active while the new data is prepared, as you can continue to use it to develop your models. If you have any suggestions or modifications on the new competition structure, please let us know. I apologize for any disruption this modification causes, and wish all of you the best luck in developing your models! [Link]:https://storage.googleapis.com/kaggle-forum-message-attachments/1855/make_cheating_submission.py",5,bronze,16 ,Thu Feb 02 2012 21:18:03 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1330,/competitions/benchmark-bond-trade-price-challenge,None /raindog308,What exactly does DSFS mean?,"The DSFS column in claims is described as ""days since first service that year"" and ""Days since first claim, computed from the first claim for that member for each year"". It's not clear to me exactly what this means. For example, let's say DSFS for a claim is ""1-2 months"" and the Year is 2. Is the purpose of this field to sequence claims by the same MemberID in a year? i.e., so that if you have one that's ""1-2 months"" and one that's ""3-4 months"", you know the order they came in. If so, that makes sense, but the language is a little confusing.",0,None,1 Comment,Thu Feb 02 2012 22:05:18 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1331,/competitions/hhp,None /hate5six,Bad faith essays?,"Hi, Is it known whether or not these essays were assessed with some pre-existing automated essay rater (and the scores were subsequently withheld from the dataset)? I'm most curious as to whether or not the authors were aware that there would be an automated grader. While it seems like this is not the case based on the information/description provided, I'd like to make sure there are no ""bad faith"" essays--written with the intent of fooling the autograder to receive a higher score than deserved. Thanks in advance for the clarification. - Sunny",0,None,1 Comment,Fri Feb 03 2012 07:41:44 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1332,/competitions/asap-aes,None
/davec6371,Stochastic Gradient Descent (a la Willem M),"Hi, I've read through Willem's (excellent) Milestone 1 winners paper a few times, but I'm a bit stumped by some aspects.... The model SigCatVec1 (and many others) uses a 'feature vector of dimension 12' for each category (i.e. for each of the 131 possibles). So there are 12 numbers for e.g. ""AgeAtFirstClaim=50-59"". These are initialised randomly from -0.01 to +0.01. That is fine, and each applicable category's feature vector is summed up, sigmoidified, and then dot-producted against a single set of 12 'score' numbers. But how are these 'score' numbers updated? What is the algorithm and learning rates for them? Can someone take a stab at describing the update rules for both the f_i and s_j members for SigCatVec1? Thanks a lot Dave",0,None,1 Comment,Sun Feb 05 2012 12:06:46 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1333,/competitions/hhp,313th
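These are not Willem's actual update rules (the milestone paper is the authority), but for a model of the form y_hat = s . sigmoid(sum of active feature vectors), plain SGD on squared error gives one plausible set of updates via the chain rule; a minimal sketch with an assumed learning rate:
sigmoid <- function(z) 1 / (1 + exp(-z))
# Fmat: 131 x 12 matrix of category feature vectors; s: length-12 score vector;
# active: indices of the categories present for this member; y: target
sgd_step <- function(Fmat, s, active, y, lr = 0.001) {
  v <- colSums(Fmat[active, , drop = FALSE])   # summed feature vectors
  h <- sigmoid(v)
  err <- sum(s * h) - y                        # prediction error
  grad_s <- err * h                            # gradient w.r.t. the score vector
  grad_v <- err * s * h * (1 - h)              # back through the sigmoid
  s <- s - lr * grad_s
  Fmat[active, ] <- sweep(Fmat[active, , drop = FALSE], 2, lr * grad_v, '-')
  list(Fmat = Fmat, s = s)
}
Every active category's vector receives the same gradient grad_v, since each contributes identically to the sum before the sigmoid.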
Thanks",0,None,3 ,Tue Feb 07 2012 18:16:25 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1340,/competitions/GestureChallenge,4th /karmicmenace,Resubmission issues,"I am having trouble whenever there is a submission issue with the challenge results (has happened couple of times). For example, today kaggle pointed out there are issues with the submission data (which was correctly flagged since my csv file didn't have two columns). There was an option along the lines of ""Go back, fix it and resubmit"". I tried it and fixed it, but the first erroneous attempt ended up being accounted for as well (which as far as I can tell was not a ""submission"" since I chose the option to go back ). Could you take a look to see if you are accounting correctly ?",0,None,3 ,Tue Feb 07 2012 23:13:22 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1341,/competitions/benchmark-bond-trade-price-challenge,65th /chris46,Can we get the skeleton's from the Kinect SDK as well?,"Sorry if this has already been asked. Since the data is coming from a Kinect sensor, it would be really helpful to get the skeleton data in the video, which the SDK can't get based solely on the video. Could this be included in the data, or no?",0,None,7 ,Wed Feb 08 2012 13:44:23 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1343,/competitions/GestureChallenge,None /atlas100,Meaningful AUC differences,"I participated for the first time in the Give Me Some Credit competition. I placed 598th with and AUC of 0.858781,while the first place team had an AUC of .869558, a difference of 0.010777. How much better is the winning model than mine? Is it a marginal improvement or so superior as to make using my model in a true credit assessment application laughable? I originally posted this comment in the general forum but received no responses. Thanks for helping me understand, Allyen",1,None,3 ,Wed Feb 08 2012 15:34:59 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1344,/competitions/GiveMeSomeCredit,559th /jjjjjj,Clarification on Rule D-12,"""Once an Entry is selected as eligible for a prize, the conditional winner must deliver the Prediction Algorithm’s code and documentation to Sponsor for verification within 21 days. Documentation must be written in English and must be written so that individuals trained in computer science can replicate the winning results. Source code must contain a description of resources required to build and run the method. Conditional winners must be available to provide assistance to the judges verifying their Entries"" Consider and entry that uses a search heuristic (e.g. generic algorithm, hill climbing, etc.) to do pre-processing. For example, one may use a search heuristic to generate 50 pairs of feature subsets and classifier choices based on some fitness measure. Then say those 50 constituent classifiers are trained on their assigned subset of features, and used to generate predictions. Finally, the 50 sets of predictions are combined using an ensemble method such as stacked regression. The ambiguity is there doesn't seem to be a restriction on computing resources required to recreate a winning entry. For example, suppose: 1) each heuristic search required 24 hours processing time (on a high-end workstation). 2) training each constituent classifiers required 2 hours processing time. 
/boogpipp,tag_string variable,"I may have missed this in the forum, but how are we supposed to interpret the tag_string variable? I know we were given the labels in the category_labels file, but how do I interpret a value like this 192193207 or this 219222227232240000?",0,None,2 ,Thu Feb 09 2012 15:29:48 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1347,/competitions/WhatDoYouKnow,None
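Those examples read like zero-padded three-digit category codes concatenated together (192|193|207, and 219|222|227|232|240|000 with 000 as padding). That interpretation is only a guess, but if it holds, the codes split apart easily in R:
parse_tags <- function(s) {
  starts <- seq(1, nchar(s), by = 3)            # assumes fixed 3-digit codes
  as.integer(substring(s, starts, starts + 2))
}
parse_tags('192193207')           # 192 193 207
parse_tags('219222227232240000')  # 219 222 227 232 240   0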
/onemillionmonkeys,Unsupervised Training on Final Evaluation Data,"When we are given the final evaluation data, will it be permissible to perform some sort of unsupervised training procedure that looks at all the videos in a given batch, prior to producing results for that batch? As far as I can tell, there is nothing that prohibits this, so it seems it should be OK, but I would like to confirm. Again, this would be unsupervised - there would be no manual labelling of this data.",0,None,2 ,Thu Feb 09 2012 20:22:22 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1349,/competitions/GestureChallenge,3rd /evangelos2,Linking Gravatar to Kaggle issue,"I have been trying to link my Gravatar account to my Kaggle account in order to get my profile picture updated. It doesn't seem like it's working though. Both accounts are registered with the same e-mail address, which is how I initially thought the two would have been linked together. Is there something more I need to do? I have tried clearing my cache, but still nothing.",0,None,14 ,Thu Feb 09 2012 21:44:44 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1351,None,None /vikpar,Selection criteria for private competitions,"First of all, I apologize in advance if this has been covered somewhere. I poked around this forum and the internet and could not find anything other than a few clues. It appears that private competitions are being held on an ongoing basis. What is not clear is how competitors are selected for these competitions. Is there an objective metric that Kaggle uses (perhaps similar to the recently defunct user ranking method)? Is it at the discretion of the competition organizers? Although I fully understand if this information cannot be divulged, given the general transparency found on Kaggle and the meritocratic approach to problem solving that is emphasized, it would help to have a rough idea of what the selection criteria are. Thanks for your time.",1,None,2 ,Sat Feb 11 2012 02:32:18 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1353,None,None /michaelh,Meaning of ProviderID and Vendor,I have not been able to find a good source of information about what the real meaning is of ProviderID and Vendor. Can anyone help me out? Thanks.,0,None,1 Comment,Sun Feb 12 2012 19:39:05 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1356,/competitions/hhp,176th /lancelot0,Can i make use of the benchmark_lmer file?,"My single model gets a result of 0.26162, and then I mix my model with the benchmark_lmer provided by the organizer: mixed_model = my_model * 0.5 + benchmark * 0.5, which gets a result of 0.25378. I wonder whether this result (0.25378) is valid? And can I make use of the benchmark_lmer file? Thank you~",0,None,1 Comment,Mon Feb 13 2012 03:21:17 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1357,/competitions/WhatDoYouKnow,17th /daveime,Zero Scored Essays ?,"http://www.kaggle.com/c/asap-aes/details/Evaluation ""A set of essay responses E has N possible ratings, 1,2,…,N, and two raters, Rater A and Rater B"" However, in the training_set_rel3.xlsx I see 419 essays with a domain 1 score of zero. Should we be ignoring these essays for training purposes, as they may possibly skew our models to return an estimate below 1? Apologies if this has already been covered in another topic.",0,None,6 ,Mon Feb 13 2012 16:13:20 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1358,/competitions/asap-aes,None /dmoccia,intellectual property,"So I noticed in the rules for this competition: 'By participating in this competition, each team maintains full, exclusive and absolute rights to their intellectual property.' (unlike in the Heritage Prize where all IP is transferred to the Sponsor). This is really good news for everyone who enters this competition, but it leaves me wondering how this works for the contest sponsor? What can the sponsor do with the results if the competitors retain IP? How does Kaggle handle this agreement (is there a contract)? What if I file a provisional patent prior to submission of the final code (assuming my code was any good!!)? I ask these questions with respect to this competition and in general.
Please note I am not arguing for or against the transfer of IP given the value of the competition (that was already done [Link]:http://www.kaggle.com/c/GiveMeSomeCredit/forums/t/870/prize-fund-too-low.). I did take notice that the Hewlett Foundation considers this competition the first of many, so it makes me wonder what their end game is. Are they hoping to contract with the winning teams? If any Kaggle admin could chime in it would be much appreciated! Thanks and good luck to all the teams...",0,None,1 Comment,Tue Feb 14 2012 17:45:59 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1360,/competitions/asap-aes,None /iguyon,"CVPR workshop, June 2012","Please consider submitting a paper to the workshop on gesture recognition held in conjunction with CVPR (June 2012, Rhode Island, USA), where the results of the challenge will be discussed. Deadline March 16, 2012 http://gesture.chalearn.org/dissemination/cvpr2012 Note: the deadline is early so papers can make it into the proceedings. It may not be possible to include your final challenge results before it goes to print. The papers will be reviewed like regular papers for their:
- relevance (to gesture recognition)
- usefulness
- sanity (good experiments, correct derivations of algorithms)
- originality/novelty
- presentation
The best challenge entrants will later be invited to submit a paper in a special topic of the Journal of Machine Learning Research (jmlr.org) which will be reprinted as a book in the Challenges in Machine Learning series of Microtome: http://www.mtome.com/Publications/CiML/ciml.html.",0,None,3 ,Tue Feb 14 2012 19:47:09 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1361,/competitions/GestureChallenge,None /iguyon,Kinect demonstration competition,"We will be holding a first demonstration competition at CVPR 2012: http://gesture.chalearn.org/dissemination/cvpr2012 This is what we call the ""qualitative evaluation"" in the [Link]:http://www.kaggle.com/c/GestureChallenge/details/Rules. First prize: USD 5000, Second prize: USD 3000, Third prize: USD 2000. This is a ""free style"" competition, not limited to one-shot-learning, but constrained to using Kinect and the Microsoft SDK. Deadline for proposals: May 1st 2012.",0,None,3 ,Tue Feb 14 2012 19:58:54 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1362,/competitions/GestureChallenge,None
/haemoglobin,AUC Calculation,"Lots of people have been asking about the calculation of AUC. Here's a simple example showing how it works, together with the actual PHP code that Kaggle uses. Hope this helps. John
The Kaggle algorithm basically works as follows.
First order the data:
predicted = [0.86, 0.52, 0.32, 0.26]
real = [1, 0, 1, 1]
Then calculate the totals for each class in the solution:
total_1s = 3
total_0s = 1
Initialise the cumulative percentages:
percent_1s_last = 0
percent_0s_last = 0
Iterate for each solution-submission pair:
count_1s = count_1s + {0,1}
count_0s = count_0s + {0,1}
percent_1s = count_1s/total_1s
percent_0s = count_0s/total_0s
rectangle = (percent_0s-percent_0s_last)*percent_1s_last
triangle = (percent_1s-percent_1s_last)*(percent_0s-percent_0s_last)/2
area = area + rectangle + triangle
percent_1s_last = percent_1s
percent_0s_last = percent_0s
Kaggle's PHP Code:
private function AUC($submission, $solution)
{
    array_multisort($submission, SORT_NUMERIC, SORT_DESC, $solution);
    $total = array('A'=>0, 'B'=>0);
    foreach ($solution as $s) {
        if ($s == 1) $total['A']++;
        elseif ($s == 0) $total['B']++;
    }
    $next_is_same = 0;
    $this_percent['A'] = 0.0;
    $this_percent['B'] = 0.0;
    $area1 = 0.0;
    $count['A'] = 0;
    $count['B'] = 0;
    $index = -1;
    foreach ($submission as $k) {
        $index += 1;
        if ($next_is_same == 0) {
            $last_percent['A'] = $this_percent['A'];
            $last_percent['B'] = $this_percent['B'];
        }
        if ($solution[$index] == 1) {
            $count['A'] += 1;
        } else {
            $count['B'] += 1;
        }
        $next_is_same = 0;
        if ($index < (count($solution) - 1)) {
            if ($submission[$index] == $submission[$index+1]) {
                $next_is_same = 1;
                $mycount += 1;
            }
        }
        if ($next_is_same == 0) {
            $this_percent['A'] = $count['A'] / $total['A'];
            $this_percent['B'] = $count['B'] / $total['B'];
            $triangle = ($this_percent['B'] - $last_percent['B']) * ($this_percent['A'] - $last_percent['A']) * 0.5;
            $rectangle = ($this_percent['B'] - $last_percent['B']) * $last_percent['A'];
            $A1 = $rectangle + $triangle;
            $area1 += $A1;
        }
    }
    $AUC = $area1;
    return $AUC;
}",0,None,2 ,Wed Feb 15 2012 11:28:43 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1364,/competitions/oxford-credit-scoring-competition,4th
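The same trapezoidal computation is easy to cross-check in R; a minimal sketch (this simple version skips the tied-prediction handling in the PHP above, so it can differ slightly when predictions repeat):
auc <- function(pred, actual) {
  o <- order(pred, decreasing = TRUE)
  actual <- actual[o]
  tpr <- cumsum(actual) / sum(actual)          # cumulative percent of 1s
  fpr <- cumsum(1 - actual) / sum(1 - actual)  # cumulative percent of 0s
  sum(diff(c(0, fpr)) * (c(0, head(tpr, -1)) + tpr) / 2)
}
auc(c(0.86, 0.52, 0.32, 0.26), c(1, 0, 1, 1))  # 0.3333333 for the example data above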
/jftttt,About 2 rounds?,"I wonder whether the two competition rounds are independent or dependent. If I don't participate in the first round (CVPR 2012 workshop), am I still qualified to participate in the second round (ICPR 2012 workshop)? Thanks!",0,None,4 ,Thu Feb 16 2012 06:50:49 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1366,/competitions/GestureChallenge,None /jeffreyburkert,Missing essays,"I noticed that in the scoring rubrics there are counts of the number of essays in the training set. These counts seem to mostly match the ones I get from parsing the file, but for sets 7 and 8 there seems to be a large discrepancy: 1730 claimed vs 1569 read for 7, and 918 claimed vs 723 read for 8. Were these files removed due to transcription errors or some other reason?",0,None,1 Comment,Thu Feb 16 2012 07:33:50 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1367,/competitions/asap-aes,45th /xs2ritesh,Looking to Join a team,"Hi, I am Ritesh. I have a fairly decent theoretical knowledge of Statistics. I am a beginner in Python. I know Matlab, Mathematica, and SAS quite a bit. Let me know if you want a team member. Regards, Ritesh",0,None,1 Comment,Thu Feb 16 2012 09:25:07 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1368,/competitions/asap-aes,None /rsenge,Some R code,"Hi all, it's basic, but anyway. Just to make sure that I got the error measure right, here is my R code for the WMAE:
wmae <- function(pred, actu, weig) {
  sum(abs(pred - actu) * weig) / length(pred)
}
So calling it for some rows on the training set, I receive the following result:
wmae(train$trade_price_last1, train$trade_price, train$weight)
[1] 1.099453
Can someone please verify this as being the correct way to calculate the error? Thanks in advance.",0,None,2 ,Thu Feb 16 2012 10:52:14 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1369,/competitions/benchmark-bond-trade-price-challenge,114th /laxmiv,training_set_rel3.tsv,"Hi Ben, Looks like there is an issue with some entries in the .tsv file. For example, entries 224 and 380 do not have a proper separator separating them from their respective next essays. For example, essay 225 is being read as part of essay 224. Thanks, P",0,None,8 ,Fri Feb 17 2012 11:51:37 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1371,/competitions/asap-aes,46th /stillsut,2nd Milestone - General Questions,"With the submission deadline past for the 2nd milestone, I was curious if anyone is aware of the timeline for:
- announcing winners on the scoring set
- release of milestone documentation by winners
Also, I know that the teams will be ranked on their performance for the scoring set (as in milestone #1). But is their actual score on that set ever revealed publicly/privately?",0,None,4 ,Fri Feb 17 2012 20:07:30 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1372,/competitions/hhp,None /jofaichow,Proper procedures for deleting/merging teams?,"Hi, I would like to remove my one man team (woobe) and join my friends' team (Me, Myself and AI). Could you tell me what is the best way to do it? Many thanks!! Regards, Jo",0,None,2 ,Sat Feb 18 2012 14:10:03 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1373,/competitions/benchmark-bond-trade-price-challenge,87th /salimali,Essay Set 2 order,"For essay set 2, I am assuming the submission order is domain1,domain2. Apart from guessing, I don't see how we can figure this out, unless I have missed something.",0,None,2 ,Sun Feb 19 2012 07:44:04 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1374,/competitions/asap-aes,2nd /sirguessalot,Inter-human Kappa,"Out of curiosity, I computed the inter-human (rater1 vs rater2) Kappa scores for each set and then the weighted score:
set,domain,kappa
1,1,0.72095
2,1,0.81413
2,2,0.80175
3,1,0.76923
4,1,0.85113
5,1,0.75270
6,1,0.77649
7,1,0.72148
8,1,0.62911
all = 0.76033
Given that the current leaderboard score for an automated algorithm is super close to the agreement between two human experts, how realistic is it that we can further improve upon it? EDIT 03/08/2012: As it turns out - it's possible to improve. There are now 7 teams above the 0.76033 benchmark.",3,bronze,4 ,Sun Feb 19 2012 21:18:42 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1375,/competitions/asap-aes,1st
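For anyone wanting to reproduce numbers like these, here is a minimal quadratic weighted kappa in R. This is a standard formulation, not the competition's official implementation; it assumes integer ratings and pools the min-max range of both raters:
qwk <- function(a, b) {
  r <- min(a, b):max(a, b)                      # shared rating scale
  O <- table(factor(a, levels = r), factor(b, levels = r))   # observed counts
  E <- outer(table(factor(a, levels = r)),
             table(factor(b, levels = r))) / length(a)       # expected counts
  W <- outer(r, r, function(i, j) (i - j)^2) / (length(r) - 1)^2
  1 - sum(W * O) / sum(W * E)
}
qwk(c(1, 2, 3, 3), c(1, 2, 2, 3))  # 0.8 on this toy example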
/pikachu32083,Submitting the model,"I am interested in participating, but need clarification on the Terms. I am agreeable to uploading the model (you mean code and data, I assume), but I won't have a working model until we're nearly done. And we don't even know if the model is suitable for the task at hand. In order to see that, we need to evaluate what rules humans are following (readability? quality of writing? accuracy of content? ...) which presumably would become clear once we downloaded the test and training data. So we're in a chicken-and-egg problem. Any suggestions for how to proceed? Thank you!",0,None,1 Comment,Mon Feb 20 2012 00:03:13 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1376,/competitions/asap-aes,None /leustagos,Merging Teams,"Hi, Anybody placed around 10th want to join forces and merge teams? I discovered an odd aspect of the data, but couldn't explain it yet. So I'm looking for people with strong skills in applied statistics. Thanks, Lucas (Bitutas team)",0,None,1 Comment,Mon Feb 20 2012 16:42:53 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1377,/competitions/WhatDoYouKnow,4th /benhamner,Welcome,"We're excited to launch a follow-up to the [Link]:http://www.kaggle.com/c/wic2011! Ali's provided the raw images plus a number of extracted features, which are available on the [Link]:http://www.kaggle.com/c/awic2012/data, as well as several Matlab benchmarks. I've also put up one additional Python benchmark, which is available [Link]:https://github.com/benhamner/awic2012. Good luck on the competition, and let us know if you have any questions!",1,bronze,1 Comment,Wed Feb 22 2012 00:09:42 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1379,/competitions/awic2012,26th /maternaj,Current coupon,"Hello, can someone please explain in more detail the meaning and/or purpose of this attribute from the data sets? Thank you very much!",0,None,1 Comment,Wed Feb 22 2012 14:55:15 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1384,/competitions/benchmark-bond-trade-price-challenge,116th /kelemam,HHP de-identification methods,"We are hosting a webinar on 6th March (noon EST) on how the HHP data set was de-identified. This will explain in detail the steps that were used to de-identify the data, the risk thresholds that were used, the rationale for the transformations applied, and the methods used. You can register here: http://ehil.ca/yopcCq There is an article that will appear in the Journal of Medical Internet Research shortly providing more information and I will post that on-line as soon as it is available.",0,None,4 ,Wed Feb 22 2012 20:18:11 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1386,/competitions/hhp,None /ejlok1,Scoring metric,"Hi there, I'd like to get some clarification and more understanding of the scoring metric - categorization accuracy. I can't seem to find anything about it on the web. Are you able to post the code (R preferably) as well? Thanks, Eu Jin",0,None,3 ,Wed Feb 22 2012 21:39:34 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1388,/competitions/awic2012,28th
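Not the official scoring code, but categorization accuracy is normally just the fraction of predictions that exactly match the true labels; in R that is a one-liner:
accuracy <- function(pred, actual) mean(pred == actual)
accuracy(c('w1', 'w2', 'w3'), c('w1', 'w2', 'w4'))  # 0.6666667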
/benhamner,Welcome,"We are very excited to be working with Tencent to host this year's KDD cup! The descriptions for each track have been posted so you can go ahead and start thinking about the contests. The data will be released on March 1, 2012, and then the competition will kick into full gear on March 15, 2012 with the activation of submissions and the leaderboard. Please let us know if you have any questions about the competition framework or the platform through these forums or by contacting us [Link]:http://www.kddcup2012.org/contact. Good luck on the contests!",0,None,2 ,Thu Feb 23 2012 01:14:43 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1389,/competitions/kddcup2012-track1,None /benhamner,Welcome,"We are very excited to be working with Tencent to host this year's KDD cup! The descriptions for each track have been posted so you can go ahead and start thinking about the contests. The data will be released on March 1, 2012, and then the competition will kick into full gear on March 15, 2012 with the activation of submissions and the leaderboard. Please let us know if you have any questions about the competition framework or the platform through these forums or by contacting us [Link]:http://www.kddcup2012.org/contact. Good luck on the contests!",0,None,1 Comment,Thu Feb 23 2012 01:15:47 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1390,/competitions/kddcup2012-track2,None /xmen28161,How does one leave a team?,Kindly let me know about the procedure to leave a team midway during a competition. Thanks,14,None,6 ,Thu Feb 23 2012 11:16:13 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1391,None,None /quansun,Question about the validation set ,"Hi, I have a question about the validation set (valid_set.xlsx) and the valid_sample_submission_x file. Say I had my model built: should I use the model to predict the essays in valid_set.xlsx one by one and then submit the result? But I don't understand why the valid_set.xlsx file has 4819 rows and the valid_sample_submission_x has 4219 rows. Why are the numbers not the same? Thanks",0,None,2 ,Thu Feb 23 2012 23:36:00 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1394,/competitions/asap-aes,64th /byang1,Extracted Features,"Hi, can the organizers provide more info (description, source code) about the features in the dataset? In particular I'd like to know if there are any special image-based features that only apply to Arabic text.",0,None,2 ,Fri Feb 24 2012 18:54:30 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1396,/competitions/awic2012,None /waynezhang,use of test data in unsupervised learning,"Is it legal to use an unsupervised learning method on both the training and testing sets (with no labels of the test data known)? Thanks!",0,None,6 ,Sat Feb 25 2012 04:07:59 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1397,/competitions/awic2012,1st /yumengkk,Task 1 (Chinese translation): KDD Cup 2012 data mining competition Track 1: predicting microblog recommendation results,[Introduction] KDD Cup 2012 data mining competition Track 1: predicting microblog recommendation results. Folks, I have translated the KDD introduction and posted it to my blog; take a quick look. Personally I feel that getting started is not as hard as we imagined: natural language processing is not needed, as the organizers have already taken care of it. You could say it is mainly about designing the recommendation algorithm; the key is discovering intelligent patterns. http://t.cn/zO48Eox,1,bronze,12 ,Sat Feb 25 2012 10:30:36 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1398,/competitions/kddcup2012-track1,None /teaserebotier,Error (missing class def) reading Matlab data files,"Just loading the data off the download I get:
>> load train
Warning: Variable 'train' originally saved as a dataset cannot be instantiated as an object of MCOS class and will be read in as a uint32.
Supposedly that's a classdef not in the path, but I see 3 dataset classes in different matlab paths ...
Did anyone else encounter or solve this?",0,None,3 ,Sun Feb 26 2012 00:17:19 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1399,/competitions/benchmark-bond-trade-price-challenge,37th /steffenrendle,Selected submissions and final leaderboard score,"The submission page says ""You can select up to 5 submissions that will be used to calculate your final leaderboard score"". Does this mean: from the 5 submissions that I pick, the final leaderboard score of each of these 5 submissions is calculated, and out of these 5 leaderboard scores the final score is the BEST one, while the four other scores/submissions will be completely discarded?",0,None,1 Comment,Sun Feb 26 2012 09:37:34 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1400,/competitions/WhatDoYouKnow,1st /dhammack,Issue making a submission,"We're trying to make our first submission, and every time we try (we've tried four times so far), the submission times out. We've tried from different computers and connections and keep getting the same issue. Is there a problem with Kaggle's servers right now that would be preventing us from submitting? Here's the error we keep getting: EXCEPTION: Exception of type 'System.OutOfMemoryException' was thrown. Thanks.",0,None,1 Comment,Mon Feb 27 2012 00:33:03 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1401,/competitions/asap-aes,119th /silverfish,Duplicate prices and odd trade sizes,"I've noticed some of the trades for a particular bond have the same price. They mostly seem to be trades at the exact same time, and usually with the same size of trade. I just wonder if there is a particular significance to them. I'm assuming they are related somehow, but I'm not clear how. Also, there are quite a few trade sizes of 5000001 and 1000001 when almost all the rest seem to be multiples of 1000. Again I assume there is some significance, but I have no idea what it could be. Any insight about these oddities would be useful. I don't know much at all about the mechanics of bond trading.",2,bronze,1 Comment,Mon Feb 27 2012 21:12:08 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1405,/competitions/benchmark-bond-trade-price-challenge,None /allankamau,User join date data in Track 1 problem,Seems there is no data detailing the date a user joined the microblogging site. Such data would complement the already provided total number of tweets of a given user. Allan.,0,None,1 Comment,Tue Feb 28 2012 08:36:49 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1406,/competitions/kddcup2012-track1,644th /zygmunt,binomial deviance / accuracy,I would like to know what the leading scores (0.24x) mean in terms of percent accuracy.,0,None,2 ,Tue Feb 28 2012 16:45:11 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1407,/competitions/WhatDoYouKnow,81st /iguyon,Data annotations,"We provide man-made annotations of two kinds: temporal segmentation and body part annotations. February 2012 release: We are providing - all temporal segmentations of the devel01 to devel20 batches into individual gestures; - the position of the head, shoulders, elbows and hands for over 400 frames sampled from the devel01 to devel20 batches.
Temporal segmentation: description [Link]:https://sites.google.com/a/chalearn.org/gesturechallenge/data/data-annotations/README_TEMPO_SEGMENT.txt?attredirects=0 Matlab format and reader [Link]:https://sites.google.com/a/chalearn.org/gesturechallenge/data/data-annotations/Tempo_segment.zip?attredirects=0 CSV format [Link]:https://sites.google.com/a/chalearn.org/gesturechallenge/data/data-annotations/tempo_segment.csv?attredirects=0 Body parts: description [Link]:https://sites.google.com/a/chalearn.org/gesturechallenge/data/data-annotations/README_BODY_PARTS.txt?attredirects=0 Matlab format and reader [Link]:https://sites.google.com/a/chalearn.org/gesturechallenge/data/data-annotations/Body_parts.zip?attredirects=0 CSV format [Link]:https://sites.google.com/a/chalearn.org/gesturechallenge/data/data-annotations/body_parts.csv?attredirects=0 For more details, see the [Link]:http://gesture.chalearn.org/data/data-annotations",0,None,2 ,Tue Feb 28 2012 19:59:26 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1408,/competitions/GestureChallenge,None /iguyon,Verification procedure,"The goal of the challenge is to devise a fully automatic system of gesture recognition. When we release the final evaluation set, the participants will have to provide predictions of labels without making changes to their system that involve a human looking at the data (either to label it, perform temporal segmentation, identify body parts, or to provide any other type of human knowledge). To enforce that rule, we will ask the top ranking participants to cooperate to reproduce their results (see the [Link]:http://www.kaggle.com/c/GestureChallenge/details/Evaluation). The preferred method is that the participants submit their code together with their predictions. Submission of code will be open shortly. It is OPTIONAL but will facilitate the verification and reduce the risk of disqualification. We are showing below a DRAFT of instructions for people who want to submit code. Please give us feedback BEFORE March 7, 2012, if you want your comments to be taken into account. ===================================================== CHALEARN Gesture Challenge: DRAFT instructions to prepare a software submission (for verification purposes only). For questions, contact: events@chalearn.org ========================================================================== By submitting their software, the authors grant to the organizers a license to evaluate it for the purpose of verifying the challenge results only. The authors retain all rights to their software. The organizers will keep the software confidential. ========================================================================== 1) Libraries: The authors are responsible for including all the necessary libraries with their software. The software should be completely self-contained, unless agreed in advance with the organizers. 2) Platform: The authors should provide an executable for either Windows 7 or Mac OS X 10.5 or later, or interpretable code in either Matlab (release R2011a or higher) or Java (latest release). The authors should contact the organizers in advance if they want to submit compilable source code under Unix platforms or other versions of the OS, Matlab or Java, or if they want to use other interpreted languages such as Python or R. Matlab and Java libraries will not be included by the organizers, unless agreed in advance. 3) Put your software, including all libraries, in a directory bearing the name of your team. Add a README file with installation and usage instructions.
Zip the directory and upload it to xxxxxx. 4) Recommended usage: > prepare_final_resu(input_dir, output_file). The code should comprise a main command-line function called prepare_final_resu taking two arguments: the input directory and the output file name. The input directory will contain the batches to be processed, organized in sub-directories (e.g. valid01, valid02, ... final01, final02). The software should list these files and automatically process them all. It is recommended that the code possess some kind of exception handling, such that if it fails on a few files it does not crash but runs on the remaining files. The organizers should be able to run the software on new data batches that were never released to the participants. 5) Output format: The output file should be in the challenge submission format (see http://www.kaggle.com/c/GestureChallenge/details/SubmissionInstructions), such that the organizers can submit the file to the challenge website and obtain the same result that was submitted by the authors of the software on valid and final data.",0,None,14 ,Tue Feb 28 2012 20:26:42 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1409,/competitions/GestureChallenge,None /goldensection,About Team Forming,"Hi Dear Admin, 1. May a participant play in these two tracks for different teams? 2. Is there a limit on the number of members for each team this year? Last year's policy said that a team could have up to 10 members, but the winning team far exceeded this limit.",0,None,8 ,Wed Feb 29 2012 04:22:10 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1410,/competitions/kddcup2012-track1,None /nikku33187,Meaning of MAE,"Hi, In the problem it is written ""The performance of the prediction will be scored in terms of MAE and AUC"". What is MAE here? Can someone please provide references for it? Thanks in advance.",0,None,2 ,Wed Feb 29 2012 17:10:03 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1411,/competitions/kddcup2012-track2,None /mrkaggler,Can we report results on the validation set in the paper for the CVPR 2012 workshop?,Can we report results on the validation set in the paper for the CVPR 2012 workshop?,0,None,5 ,Wed Feb 29 2012 21:09:43 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1412,/competitions/GestureChallenge,4th /thesuffocated,The meaning of trade size,"The data page says that trade size is ""the dollar amount of the trade"". In the first row of the training file, we have trade price = 128.596, trade size = 120000 and current coupon = 5.95. How many bonds has the customer bought? For what notional amount?
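On the trade-size question just above: a hedged worked example in R, under the assumption (not confirmed by the data page, which calls trade size "the dollar amount") that trade size is the par value in dollars and trade price is quoted per $100 of face value, as is conventional for bonds:

# Assumption, not a confirmed definition: trade_size = par value in dollars,
# trade_price = price per $100 of face value.
trade_price <- 128.596
trade_size  <- 120000
dollar_amount <- trade_size * trade_price / 100  # ~154315 dollars changed hands
n_bonds <- trade_size / 1000                     # 120 bonds of $1000 face value each
# Under the alternative reading (trade_size already the dollar amount), the
# implied face value would be trade_size / (trade_price / 100), about 93312
# dollars, which is not a round lot; that makes the par-value reading look
# more plausible, though only the organizers can confirm.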
Are coupons paid on a monthly basis, a quarterly basis, a half-yearly basis or a yearly basis?",0,None,1 Comment,Thu Mar 01 2012 01:15:47 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1413,/competitions/benchmark-bond-trade-price-challenge,226th /phoenix4,where's the data file?,"Today is March 1, but I cannot find the place to download the data",0,None,10 ,Thu Mar 01 2012 07:23:41 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1415,/competitions/kddcup2012-track1,None /holzner,use of collaborative filtering methods ?,"I wonder whether anybody has tried a collaborative filtering method (""students who liked this question also like these other questions"") based on singular value decomposition and got a better score than the Rasch model benchmark? Personally, I tried using pyrsvd with some modifications (e.g. an added logistic function transformation) with different numbers of latent variables, but I did not get an improvement over the benchmark. I have to admit, though, that I did not do a systematic determination of the learning rate, regularization parameters or the number of latent variables due to the limited time available. I did, however, split these by track name as the benchmark model does.",0,None,11 ,Thu Mar 01 2012 10:10:44 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1416,/competitions/WhatDoYouKnow,99th /tqchen,seems official site is sometimes slow..,I have been refreshing for 10 min to get to the forum page. It is getting better now. Does anyone experience a similar problem?,0,None,4 ,Thu Mar 01 2012 10:40:54 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1417,/competitions/kddcup2012-track1,1st /benhamner,2nd Milestone Winners,Congratulations to teams Market Makers and Edward & Willem for winning the 2nd milestone prize! [Link]:http://www.marketwatch.com/story/heritage-provider-network-awards-80000-to-the-progress-prize-winners-in-the-3million-heritage-health-prize-contest-at-the-strata-conference-in-santa-clara-california-2012-03-01 We look forward to seeing the papers on your methods and results soon. The full 2nd milestone leaderboard is available here: [Link]:https://www.heritagehealthprize.com/c/hhp/leaderboard/milestone2,0,None,18 ,Thu Mar 01 2012 20:21:26 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1419,/competitions/hhp,None /barnandas,Nature of test data,Is there any possibility that the test data on which the final model would be evaluated might consist of essays that do not belong to any of the eight essay sets?,0,None,2 ,Fri Mar 02 2012 00:48:06 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1420,/competitions/asap-aes,65th /shuye0410,Where Is the Data?,"Hi there: I wonder where the data is? Does anyone know? It was said the data would be available on March 1, 2012.",0,None,11 ,Fri Mar 02 2012 02:26:59 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1421,/competitions/kddcup2012-track2,None /shawnhuang0,units of time?,"Just making sure: I believe it was already stated that received time difference is in seconds, but how about the other two time-related fields, time to maturity and reporting delay?
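For readers following the collaborative-filtering post above: a minimal sketch in R of the general technique it describes (SGD matrix factorization with a logistic link), not the poster's pyrsvd setup. The data frame train (columns user, item, y in {0,1}) and the dimensions n_users and n_items are illustrative assumptions:

# Logistic matrix factorization by stochastic gradient descent (sketch).
sgd_mf <- function(train, n_users, n_items, k = 10,
                   lrate = 0.01, reg = 0.05, epochs = 20) {
  P <- matrix(rnorm(n_users * k, sd = 0.1), n_users, k)  # user factors
  Q <- matrix(rnorm(n_items * k, sd = 0.1), n_items, k)  # item factors
  for (e in seq_len(epochs)) {
    for (r in sample(nrow(train))) {                     # shuffle each epoch
      u <- train$user[r]; i <- train$item[r]; y <- train$y[r]
      pred <- 1 / (1 + exp(-sum(P[u, ] * Q[i, ])))       # logistic link
      err  <- y - pred
      pu <- P[u, ]
      P[u, ] <- P[u, ] + lrate * (err * Q[i, ] - reg * P[u, ])
      Q[i, ] <- Q[i, ] + lrate * (err * pu - reg * Q[i, ])
    }
  }
  list(P = P, Q = Q)
}

The logistic transformation keeps predictions in (0, 1); the learning rate, regularization and number of latent factors are exactly the knobs the poster says he did not have time to tune systematically.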
Are these also measured in seconds?",0,None,3 ,Fri Mar 02 2012 05:18:36 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1422,/competitions/benchmark-bond-trade-price-challenge,146th /shoo33449,Solutions to old competitions?,"Are the best solutions to old competitions released? If so, where can they be found? If not, doesn't this give Kaggle employees an unfair advantage in these competitions given that they can have access to all the submissions?",6,None,10 ,Fri Mar 02 2012 06:38:46 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1423,None,None /btak47,I cannot add my team members?,I searched and could not find the link to add my team members for registration. Can anyone tell me? Thanks!,0,None,5 ,Fri Mar 02 2012 08:01:43 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1425,/competitions/kddcup2012-track2,None /benhamner,Training Data Release,"We're excited to release the training data for Track 1. It can be downloaded from the [Link]:https://www.kddcup2012.org/c/kddcup2012-track1/data. This competition will kick into full swing with the release of the test data and public leaderboard activation on March 15, 2012. For those of you that wish to form teams, you will be able to do so then as well. Please let us know if you have any questions, and good luck!",6,bronze,21 ,Fri Mar 02 2012 09:52:27 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1426,/competitions/kddcup2012-track1,None /sreeaurovindh,Help! Data file for track 2 missing,Dear Admin. The data link that appeared an hour ago has disappeared. Kindly help. Thanks. Data learner [Link]:https://storage.googleapis.com/kaggle-forum-message-attachments/1952/data2.JPG,0,None,6 ,Fri Mar 02 2012 15:37:14 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1429,/competitions/kddcup2012-track2,None /benhamner,Training Data Release,"We're excited to release the training data for Track 2. It can be downloaded from the [Link]:https://www.kddcup2012.org/c/kddcup2012-track2/data. This competition will kick into full swing with the release of the test data and public leaderboard activation on March 15, 2012. For those of you that wish to form teams, you will be able to do so then as well. More detailed submission instructions will be provided then too. Please let us know if you have any questions, and good luck!",1,bronze,6 ,Fri Mar 02 2012 18:25:47 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1432,/competitions/kddcup2012-track2,None /matthewroos,Leaderboard score missing,I made a submission which scored ~0.712 but it's not showing up on the leaderboard. Matt,0,None,2 ,Fri Mar 02 2012 19:51:27 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1433,/competitions/asap-aes,90th /tritonsd,Availability of Test Set,"I was wondering if and when the test set (outcomes) would be released. It will help me write a report that I am working on currently. Thanks!
Rohan",0,None,7 ,Fri Mar 02 2012 21:07:42 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1434,/competitions/WhatDoYouKnow,5th /ompatri,Item categories a.b.c.d Loops in hierarchy?,"There are instances like 1.4.1.4 and 1.4.2.2 in the dataset in item.txt; maybe I am missing something here, but aren't they supposed to be hierarchical? I.e. Item-Category is a string “a.b.c.d”, where the categories in the hierarchy are delimited by the character “.”, ordered in top-down fashion (i.e., category ‘a’ is a parent category of ‘b’, and category ‘b’ is a parent category of ‘c’, and so on). So this would imply 1.4.1.4 means 1 is a parent of 4, which is a parent of 1, which again is a parent of 4? Also, 2.2 would mean 2 is a parent category of itself?",0,None,4 ,Sat Mar 03 2012 04:37:39 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1437,/competitions/kddcup2012-track1,None /jczheng,A problem about test data,"Can anybody tell me whether the target users, queries, and ads in the test data set for CTR prediction are also involved in the training data set? That is, are we required to predict CTR among users, queries, and ads that have already appeared in the training data set, or are we required to predict CTR between NEW users, NEW queries and NEW ads? This is important for the design of the model, I think. Thanks",0,None,8 ,Sat Mar 03 2012 05:56:45 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1438,/competitions/kddcup2012-track2,61st /krachos,Unknown semantics of gender attribute value 3,"Hi, what are the semantics of the value 3 in the gender attribute in user_profile.txt? The value 3 is not mentioned in the description: ""Gender has an integer value of 0, 1, or 2, which represents “unknown”, “male”, or “female”, respectively."" Our script reported an error for the following user profiles. 238037 1977 3 1 0 328969 1900 3 12 0 ...",0,None,4 ,Sat Mar 03 2012 08:47:16 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1439,/competitions/kddcup2012-track1,551st /jeromezhao,Exact meaning of result -1?,"In the instructions it says that -1 means the user rejected the item recommended, but does this ""reject"" mean the user ""explicitly rejected"" the recommendation (same as ""I don't like this""), or does simply ignoring the recommendation also count as a reject? Does anyone know about this? Thanks",0,None,4 ,Sat Mar 03 2012 09:17:31 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1440,/competitions/kddcup2012-track1,532nd /timmy1,Timestamps,"Hello, Shouldn't additional data parts besides the recommendations be timestamped? (The general description mentioned ""Timestamps for user’s follow actions are given for performing session analysis."" but I don't see it in the specific files' descriptions or in the data itself.) Thanks",0,None,3 ,Sun Mar 04 2012 00:13:57 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1442,/competitions/kddcup2012-track1,594th /ahassaine,Usage of internet allowed?,"Can the developed system use the internet? For example, to check plagiarism.
Thanks !",0,None,1 Comment,Sun Mar 04 2012 07:36:48 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1444,/competitions/asap-aes,13th /tqchen,Possible Official Validation Set?,"Is it possible for the organizers to provide an ""official"" validation split, as was usually available in previous competitions? The data specification does not clearly specify how the train and test data are split. Alternatively, refining the specification to describe the train vs. test split would also be great.",0,None,14 ,Sun Mar 04 2012 10:49:09 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1445,/competitions/kddcup2012-track1,1st /rajstennajbarrabas,Win a free Kinect,"It's been a month since the ""send us your address so we can ship"" E-mail and I haven't received the Kinect. Have these been sent? Is there a tracking number or something? Should I contact UPS?",0,None,1 Comment,Sun Mar 04 2012 18:49:19 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1447,/competitions/GestureChallenge,46th /simondexter,Simple Stats,"Hi, I'm analyzing the user_sns.txt file. I have 1,944,591 unique user ids; the cardinality of the set of followers (left column) is 1,892,060 and that of followees is 920,110. Can anybody confirm? Thanks, Simon",0,None,11 ,Mon Mar 05 2012 01:44:02 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1448,/competitions/kddcup2012-track1,None /luoleicn,User follow himself???,"In rec_log_train.txt, I got this: 1606902 1606902 -1 1318844011 1606902 1606902 -1 1318844018 1606902 1606902 1 1318844027 Does this mean the Tencent recommender system will recommend a user to himself? But how can someone follow himself on Tencent microblog?",0,None,7 ,Mon Mar 05 2012 08:57:22 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1451,/competitions/kddcup2012-track1,89th /phoenix4,About the target,Is our task to predict a decimal value between -1 and 1? Or just the two integers -1 and 1?,0,None,11 ,Mon Mar 05 2012 13:05:53 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1455,/competitions/kddcup2012-track1,None /luoleicn,user_sns.txt incomplete,"For user id 628369 I found that in rec_log_train.txt he follows 189 users who were recommended by the default system. That means user 628369 has followed at least 189 users, but in user_sns.txt there are only 20 records for user 628369. Is user_sns.txt an incomplete file, or did something else cause this?",0,None,5 ,Mon Mar 05 2012 13:35:46 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1457,/competitions/kddcup2012-track1,89th /del=f240cc52390a57e0,About Baseline,"Hi, I got the baseline (default system) = 0.099. Can anybody confirm? Thanks, Justin",1,bronze,11 ,Mon Mar 05 2012 14:34:34 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1460,/competitions/kddcup2012-track1,None /amosstorkey,Welcome,"Welcome to the MLPR Challenge. If there are questions, you are welcome to ask them here.",0,None,22 ,Mon Mar 05 2012 17:51:05 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1462,/competitions/mlpr-challenge,None /jiajuhe,the weight of key word,"What's the max weight for a keyword?
It seems that the max weight is 1.0, but for some words it's 2.0. Is that correct?",0,None,3 ,Tue Mar 06 2012 04:25:17 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1463,/competitions/kddcup2012-track1,None /czz333605,About the three keyword dimensions,"user_key_word.keywordsID, item.keyword, user_profile.tagID: are these three dimensions in the same space? Meaning, if we have the same id in these three dimensions, does that mean the keywords are the same?",0,None,5 ,Tue Mar 06 2012 11:30:22 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1475,/competitions/kddcup2012-track1,None /krey33078,monkey,"I used logistic regression with a log Gaussian prior as my classification algorithm. Logistic regression is based on the principle that the log of the odds should be linear, and given this assumption, logistic regression is simply the maximum likelihood estimate of the problem. The log Gaussian prior adds robustness to the method. As the general optimisation problem (MLE) is intractable, I used stochastic gradient descent to approximate a local maximum. This I later replaced with a more sophisticated algorithm called BFGS, and this improved my test results significantly. Applying the tanh function to all inputs (features) was a very reasonable thing to do as well; it increased my AUC score by 0.276122. The tanh function maps the real line to [-1,1] so it ""normalises"" features, i.e. reduces scale. My final performance was an AUC score of 0.849885.",0,None,1 Comment,Tue Mar 06 2012 11:43:07 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1478,/competitions/oxford-credit-scoring-competition,20th /sinzero,weight=2.0 in user_key_words.txt,"These users who have keywords of weight=2.0 seem to be items.
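A compact R sketch of the pipeline the logistic-regression post above describes: tanh-squashed features, an L2 penalty playing the role of the log Gaussian prior, and BFGS optimization. It is an illustration, not the author's code; X (a numeric feature matrix) and y (a 0/1 vector) are assumed:

# L2-regularised logistic regression fitted with BFGS via optim().
fit_lr <- function(X, y, lambda = 1) {
  X <- cbind(1, tanh(X))                  # intercept + tanh-squashed features
  negloglik <- function(w) {
    eta <- drop(X %*% w)
    sum(log1p(exp(eta))) - sum(y * eta) + lambda * sum(w[-1]^2) / 2
  }
  grad <- function(w) {
    p <- 1 / (1 + exp(-drop(X %*% w)))    # predicted probabilities
    drop(t(X) %*% (p - y)) + lambda * c(0, w[-1])
  }
  optim(rep(0, ncol(X)), negloglik, grad, method = "BFGS")$par
}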
D:\XXXX\track1>cat user_key_word.txt|grep 2335869 2335869 974:2.0;974:2.0;974:2.0;974:2.0;974:2.0;8895:2.0;8895:2.0;1670:2.0;1670:2.0;85658:2.0;85658:2.0;85658:2.0;85658:2.0;72246:2.0;72246:2.0;6183:2.0;6183:2.0;2245:2.0;2245:2.0;9525:2.0;9525:2.0;174033:2.0;174033:2.0;174033:2.0;174033:2.0;6977:2.0;6977:2.0;39928:2.0;39928:2.0;412042:2.0;30066:2.0;30066:2.0 D:\XXXX\track1>head -1 item.txt 2335869 8.1.4.2 412042;974;85658;174033;974;9525;72246;39928;8895;30066;2245;160;85658;174033;6977;6183;974;85658;174033;974;9525;72246;39928;8895;30066;2245;670;85658;174033;6977;6183;974 2335869 is the first item in item.txt, and these keywords of 2335869 are identical to the keywords in item.txt. Does 2.0 mean unknown?",0,None,14 ,Tue Mar 06 2012 12:08:45 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1480,/competitions/kddcup2012-track1,24th /cike33411,about the item.txt,"The tags of one record are often repeated. For example: 643400 8.1.3.4 314155;178434;9450;8661;22158;56388; 8661;22158; 8661;22158;56388; 8661;22158 753466 1.4.2.6 6359;18059;6183;3203;44625;479121;18059;6183;3203;10638;2321;117898;393066;213525; 18059;6183;3203;44625;479121;18059;6183;3203;10638;2321;117898;393066;213525 Is there any reason? Another question: is there any reason that the tag ids in ""item.txt"" are often large while the tag ids in ""user_profile"" are often small?",0,None,6 ,Tue Mar 06 2012 12:48:11 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1481,/competitions/kddcup2012-track1,392nd /zaythedatascientist,"Assumptions about user-keywords, item-keywords and tag-id. (Official Confirmation required) ","According to the official information provided: item-keywords are extracted from the user profile (maybe from the self-introduction). user-keywords are from the tweets, retweets and comments. Question 1: are item-keywords and user-keywords in different dimensions (the same number represents different words)? Tags are selected by users to represent their interests. But some tag-ids are very large numbers (e.g. user 2420818 has tag-id 138415). Question 2: does a tag-id represent an interest that the user chose manually? How can a user choose from so many interests (more than 100,000)?",0,None,1 Comment,Tue Mar 06 2012 14:49:50 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1483,/competitions/kddcup2012-track1,641st /howardxie,question regarding coupon,"Hello, I have two questions regarding the coupon. 1. For the column ""coupon"", does it indicate the coupon RATE (percentage of bond face value) or the coupon AMOUNT (amount of dollars)? 2. What is the coupon payment frequency? Is it every year, every quarter, every six months, or some other period? Thanks in advance.",0,None,1 Comment,Tue Mar 06 2012 21:50:07 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1484,/competitions/benchmark-bond-trade-price-challenge,123rd /farout,count regularities in rec_log_train.txt ?,"The number of entries per UserID in rec_log_train.txt has a strong regularity with period 3. Below is the number of UserIDs with k = 1, 2, 3, ... entries in the file: k=1: 6, k=2: 79, k=3: 224255, k=4: 678, k=5: 731, k=6: 167972, k=7: 745, k=8: 945, k=9: 121844, k=10: 723, k=11: 1091, k=12: 94434, k=13: 708, k=14: 1015, k=15: 74758. The regularity comes from ""refused recommendations"". A histogram of accepted recommendations per UserID is smooth. This seems odd. Am I mistaken? If not, what causes this regularity?
Just eyeing the beginning of the file shows a pattern of 3 lines per UserID (with distinct ItemIDs). Perhaps related: there are far fewer distinct timestamps in this file (2445766) than entries (~73*10^6).",0,None,3 ,Wed Mar 07 2012 00:33:03 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1485,/competitions/kddcup2012-track1,None /leustagos,R code for AUC,"Does anybody know an efficient implementation of the AUC metric used in this competition? I tried a couple of R packages, but they were very slow. The Matlab AUC code I have runs way faster!",0,None,1 Comment,Wed Mar 07 2012 03:36:19 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1487,/competitions/kddcup2012-track2,5th /cozilla,how many members in each team?,How many members can be in each team? Is there a limit on the number of members?,0,None,2 ,Wed Mar 07 2012 04:18:43 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1488,/competitions/kddcup2012-track1,None /kongfupig,about the weight of keyword,"Can anyone explain the strategy for keyword weighting for users and items? Setting aside any given user or item, are all keywords supposed to be uniformly important in the vocabulary or not?",0,None,1 Comment,Wed Mar 07 2012 07:55:53 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1489,/competitions/kddcup2012-track1,224th /bhaskher,Perplexity 137604551180683.00000 !!!,"Hi, Seems like I have broken all records for a rubbish predictor; I have out-performed all zeros! If I understand correctly, the lower the perplexity score, the better. My log probabilities look about reasonable. I have rescaled them so that 0 represents the most probable Y class. However, the rest of my classes have negative probability. It is not very clear from your earlier comments how we should rescale the results. Am I supposed to flip the polarity/sign? Any thoughts? Regards",0,None,5 ,Wed Mar 07 2012 12:47:59 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1498,/competitions/mlpr-challenge,34th /timmy1,Updating the dataset description,"Hello, Since there have been a couple of clarifications regarding the data since it was initially published, would it be possible to please update the dataset description page and highlight the changes? It would make things easier than following all the forum discussions. Thanks",0,None,1 Comment,Wed Mar 07 2012 13:49:54 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1499,/competitions/kddcup2012-track1,594th /iguyon,15 Free Beagle Boards!,"Were you thinking of participating in the CVPR 2012 demonstration competition? You may need extra compute power to run a real time demonstration. You could use a Beagle board, and you can easily get one for free: submit your model executable code together with your predictions on the website NOW and facilitate the verification process. See the instructions at the bottom of the page: http://www.kaggle.com/c/GestureChallenge/details/SubmissionInstructions. Texas Instruments is offering 15 Beagle Boards. Beagle Boards are based on the OMAP3 processor, which is used as the main processor in many smart phones today. The Beagle Board Open Source community has already connected Kinect with Beagle Board and demonstrated examples of gesture recognition. More information is available at www.beagleboard.org.
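On the R-for-AUC question above: a fast, fully vectorized implementation via the rank-sum (Mann-Whitney) identity, which avoids constructing an ROC curve entirely. Labels are assumed to be 0/1:

# AUC via the Mann-Whitney rank-sum identity: O(n log n), no loops.
auc <- function(labels, scores) {
  r <- rank(scores)                  # midranks handle ties correctly
  n_pos <- sum(labels == 1)
  n_neg <- sum(labels == 0)
  (sum(r[labels == 1]) - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
}
# Example: auc(c(0, 0, 1, 1), c(0.1, 0.4, 0.35, 0.8)) returns 0.75.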
Beginning today, participants who are classified in the top 10 entrants on the leaderboard AND submit their code for verification at any time until the end of Round 1 of the challenge are entitled to a free Beagle board, while supplies last. To claim your Beagle board, send email to events@chalearn.org with a snapshot of the leaderboard and highlight your submission.",0,None,1 Comment,Wed Mar 07 2012 22:03:16 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1500,/competitions/GestureChallenge,None /lambchops,Any ideas / suggestions how to approach this problem?,"I started off thinking creating a linear regression baseline would be nice... however, with almost 150 million training samples, that is pretty much intractable. What is the general idea you guys have in mind for approaching this problem?",0,None,3 ,Thu Mar 08 2012 06:49:46 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1503,/competitions/kddcup2012-track2,None /pspcxl,Is training data file ordered by time ,"Is the training data file ordered by time or other factors, or is it just random?",0,None,1 Comment,Thu Mar 08 2012 09:51:31 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1504,/competitions/kddcup2012-track2,None /danmaftei1,re-submissions?,"Hi, I had a small bug taking a needless square, and so my linear regressor perplexity is wrong. Is there a way to delete / resubmit?",0,None,1 Comment,Thu Mar 08 2012 15:22:18 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1505,/competitions/mlpr-challenge,25th /abhinavkulkarni,Space of tag-Ids and Item-Keyword,"Hello, Although the data specification makes it pretty clear that tag-Ids and Item-Keywords are not from the same space (i.e. the same tag-Id and Item-Keyword do not necessarily correspond to the same 'keyword'), I just wanted to double check this with the organizers. If there indeed is any association between them, it could change the treatment of the problem. Thanks.",0,None,2 ,Thu Mar 08 2012 19:31:42 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1506,/competitions/kddcup2012-track1,None /abhinavkulkarni,About Item-Id and User-Id,"Hello, I was wondering if the Item-Id space is a subset of the User-Id space. According to the data description, a set of specific (interesting, famous) users was selected and promoted as items so that they can be recommended to others for following, etc. Does this mean that an Item-Id is the same as the (previous) User-Id? Or were fresh Item-Ids assigned once users were promoted to the item set? Thanks.",0,None,1 Comment,Thu Mar 08 2012 19:57:45 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1507,/competitions/kddcup2012-track1,None /rkirana,Row Delimiter missing,What is the row delimiter of the training files? Seems like the row delimiter is missing!,0,None,2 ,Fri Mar 09 2012 08:12:40 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1508,/competitions/kddcup2012-track2,50th /c6h5no2,Same tokensid in different *_tokensid.txt files,"Do they have the same meaning? For example, you can find tokensid ""75"" in the 4th line of purchasedkeywordid_tokensid.txt, the 2nd line of queryid_tokensid.txt, and the 1st line of titleid_tokensid.txt.
Do these ""75""s mean the same word?",1,bronze,6 ,Fri Mar 09 2012 13:18:44 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1509,/competitions/kddcup2012-track2,None /iguyon,Final evaluation data,The final evaluation data is available for download from the data page. It is encrypted using WinZip (for which a free trial version is available). We will release the decryption key on April 7. Please download the data at your convenience to avoid overloading our servers with requests at the last minute.,0,None,3 ,Fri Mar 09 2012 17:36:36 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1510,/competitions/GestureChallenge,None /zstats,Out of memory in scoring?,"Is anybody else getting errors in the Kaggle scoring system today? My submission takes forever to score an then eventually gets: ""EXCEPTION: Exception of type 'System.OutOfMemoryException' was thrown."" as an error. My submission is the right size (4818 rows in a CSV, 96 KB). Haven't tried submitting to this contest before, so maybe I'm doing something wrong?",0,None,4 ,Sat Mar 10 2012 20:24:25 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1512,/competitions/asap-aes,103rd /timmy1,Keywords,"Hello, Are all the connections betwen items-to-keywords that are in item.txt also listed in user_key_word.txt (since items are users)? Best regards",0,None,1 Comment,Sun Mar 11 2012 00:21:28 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1513,/competitions/kddcup2012-track1,594th /kurak38,userid = 0 ??,"Hi, I've noticed that 78391188 row in training.txt has userid = 0, and few more after it. Actually there are exactly 37733352 record with userid=0. In userid_profile.txt file there isn't any user with id equal to 0, so how should we interpret this situation? Are they missing/unknown data? Can you provide us a md5 hash of each files. thanks",0,None,2 ,Sun Mar 11 2012 00:25:29 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1514,/competitions/kddcup2012-track2,77th /plus133582,Duplicated recommendation,"As mentioned in previous posts, (and could be seen in testdatas)there could be multiple recommendation record, for ex:A B -1 time1A B -1 time2A B +1 time3it is perfectly normal and acceptable, but I have one question:since the evaluation is ""average precision"" (something like MAE),we need to rank the recommendation records in the testing data,and those K (suppose a user accepted K records) highest ranked items will be taken.Just to make sure,if an item appear more than once in the K records,for ex in the example above, all three records is given some high evaluation and appear in the top K records,will they be counted just once? or do we have to exceptionally handle duplicated records or something?",0,None,1 Comment,Sun Mar 11 2012 12:40:48 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1517,/competitions/kddcup2012-track1,22nd /pjia18363,"impression? what is it?anybody who can tell me,thx","I do not clearly understand the description on "" impression "" in tha training dataset, but it seems to be a very important parameter for I saw this example given by authorities in the Evaluation part: if an instance has 40 impressions and 10 clicks, its empirical CTR is 10/40. Could anybody tell me what does ""impression"" mean? 
Many thanks~~",0,None,4 ,Sun Mar 11 2012 14:54:21 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1518,/competitions/kddcup2012-track2,None /wangyuantao,AUC is also weighted by ad impression?,It seems AUC can only handle two classes. Does a single instance in the KDD Cup data set with 2 clicks and 3 impressions mean 2 positive examples and 1 negative example?,1,bronze,1 Comment,Sun Mar 11 2012 16:13:14 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1519,/competitions/kddcup2012-track2,84th /allankamau,Difficulties understanding the problem.,"I have difficulties understanding the presented problem. A) I have difficulties understanding the distinction between user and item. An example, ""an item a vip user Dr. Kaifu LEE"", is given as an example of an item. And the text goes on to explain that the example vip user, now identified as ""kaifulee"", belongs to two categories, ""X.science-and-technology.internet.mobile"" and ""X.business.Investment.Angle Investment."". Questions: 1) Can a user also be an item? 2) I have found no items belonging to more than one category. The field ""item_id"" of the ""item"" dataset contains no duplicates, meaning that there are no items being represented in different categories. B) Then we are advised that user ""Peter"" follows ""kaifulee"", who was classified as an item in the case description. The exact text reads: ""For example, if a user Peter follows kaifulee, he may be interested in other items of the category that kaifulee belongs to, and might also be interested in the items of the parent category of the kaifulee's category"". Questions: 1) Can a user (for example ""Peter"") follow an item, or does he only follow other user(s)? Or can a user follow both items and other users? 2) Can a user belong to a category?",0,None,5 ,Mon Mar 12 2012 11:47:14 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1521,/competitions/kddcup2012-track1,644th /danglaser,Stanford Conference Update,"Hi All, We will be looking at the leaderboard on 3/15 to help us determine who to invite to the Third Stanford Conference on Quantitative Finance. The conference is on 3/30-3/31 at Stanford University. More details can be found on the Prizes tab. Please get your submissions in soon, and good luck. -Dan",0,None,8 ,Mon Mar 12 2012 14:05:26 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1522,/competitions/benchmark-bond-trade-price-challenge,None /xing34749,Dr. Kaifu LEE.UserID==Dr. Kaifu LEE.ItemID?,"Since Dr. Kaifu LEE is an item and also a user in Tencent Weibo, does it mean Kaifu's userID = itemID?",0,None,3 ,Tue Mar 13 2012 09:51:22 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1525,/competitions/kddcup2012-track1,None /zhangjun,question about session,"In the explanation of impression, it says ads will be impressed to the user in a search session. In fact the search results may span several pages, and maybe the ad will be on the 2nd or 3rd page. Does the search session include the situation where the user browses the second result page and sees the ad? I am confused about it.
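On the clicks-and-impressions question above: one reading, consistent with the post but needing the organizers' confirmation, is that an aggregated instance with 2 clicks out of 3 impressions contributes 2 positive and 1 negative example to the AUC computation, each carrying the same predicted pCTR. A sketch in R with a hypothetical helper:

# Expand an aggregated (clicks, impressions, predicted pCTR) instance into
# one binary example per impression; illustrative, not the official scorer.
expand_instance <- function(clicks, impressions, pred) {
  data.frame(label = c(rep(1, clicks), rep(0, impressions - clicks)),
             score = rep(pred, impressions))
}
# expand_instance(2, 3, 0.1) yields two positives and one negative, all
# scored 0.1; AUC is then computed over the expanded examples.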
Please help me, thank you very much!",0,None,1 Comment,Tue Mar 13 2012 10:02:31 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1526,/competitions/kddcup2012-track2,None /xjsxjtu,item category & tagID ??,"Do item category, from item.txt, and tagID, from user_profile.txt, come from the same vocabulary? Thanks~",0,None,2 ,Tue Mar 13 2012 14:05:37 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1527,/competitions/kddcup2012-track1,None /dchudz,Milestone 2 Papers,"The papers written by the milestone winners are now available [Link]:https://www.heritagehealthprize.com/c/hhp/leaderboard/milestone2. As described in section 13 of the [Link]:http://www.heritagehealthprize.com/c/hhp/Details/Rules, if you have any concerns about these papers, you have 30 days from their posting to provide your feedback.",0,None,19 ,Tue Mar 13 2012 20:06:20 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1528,/competitions/hhp,None /hangz19598,Question about the Unix timestamp,Is the timestamp provided in the Chinese Standard Time zone or some other zone?,0,None,1 Comment,Tue Mar 13 2012 22:23:42 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1529,/competitions/kddcup2012-track1,44th /qanlp32519,some question about tags and keywords,Hi~ I found that keywords of a specific user or item sometimes occur more than once in the keyword list of the user. For example: userid keyword 92700 3203:2.0;3203:2.0;12203:2.0;12203:2.0;16745:2.0;16745:2.0;16745:2.0;142344:2.0;..... I noticed that 3203 and 12203 occur twice in the keyword-list and 16745 occurs three times but all of these three keywords are weighted 2.0. Does it mean that 16745 is more important than the other two or are all of these 3 keywords equally important? The other question is about tags: can tags occur more than once in the tagid_list as keywords do?,0,None,1 Comment,Wed Mar 14 2012 07:56:20 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1532,/competitions/kddcup2012-track1,223rd /qanlp32519,recommendation in trainset,"Hi~ I wonder if there are recommendations for users who are also items in the trainset. In other words, does the recommender system also recommend users to the users who are in item.txt?",0,None,6 ,Wed Mar 14 2012 08:51:54 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1533,/competitions/kddcup2012-track1,223rd /windboy,The files are too large,The files are too large to work with. How can I make this problem easier?,0,None,2 ,Wed Mar 14 2012 13:30:19 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1534,/competitions/kddcup2012-track1,None /loyolite270,Evaluation Formula,Can anyone please explain the exact formula used for evaluation?,0,None,3 ,Wed Mar 14 2012 13:35:05 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1535,/competitions/benchmark-bond-trade-price-challenge,None /predictor,Possible Abuse of Public Score?,"Isn't it possible for a contestant to cheat by submitting a prediction file of all writer 0 (""writer not in training set"" class) and using the resulting public score to estimate the proportion of writer 0 cases in the test data?
In fact, a more sophisticated cheat would be to submit a prediction file with random assignments: observations predicted other than 0 would be unlikely to be correct, so that the leaderboard accuracy would (largely) represent the fraction of writer 0 predictions which accidentally were correct.",0,None,9 ,Wed Mar 14 2012 17:25:26 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1537,/competitions/awic2012,40th /subharya83,Skeleton Detection,"Kinect has a very good skeleton detector and tracker. Using that, the challenge becomes way easier. Therefore, only the teams which have access to the skeleton tracking code will benefit, which might be biased. It seems fairer to make the skeleton detection and tracking results available to everyone.",0,None,2 ,Wed Mar 14 2012 19:55:39 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1538,/competitions/GestureChallenge,44th /chriscarter,Query word structure,"Hi, do the numbers in the query field represent different words in the Chinese language? Are linking words such as AND and OR included or have these been removed?",0,None,1 Comment,Thu Mar 15 2012 01:55:26 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1539,/competitions/kddcup2012-track2,None /onemillionmonkeys,Data Discrepancy,"Hi, I noticed a puzzling discrepancy in the data and I wonder if the administrators can shed any light on it. There are some (maybe many) differences between the movies contained in the devel01-40 download and the devel02 download. To take just one example, the file K_16.avi (from batch 2) has size 1052554 in the devel01-40 download, while the corresponding file has size 1007874 in the devel02 download. Is the explanation that one is compressed in the ""quasi-lossless"" format and one in the lossy format? That could explain this discrepancy, but from reading the download page, it appears both downloads are supposed to be in the quasi-lossless format. The follow-on question is: how can I ensure that I am getting all the data in one format (devel, validation and final)? I think what I want is the lossless format, but mostly I just want consistency. I definitely don't want to tune my system on one format and then have the final evaluation be performed on a different format. Thanks.",0,None,1 Comment,Thu Mar 15 2012 02:02:07 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1540,/competitions/GestureChallenge,3rd /alkhwarizmi,I can't make a submission,I have followed the instructions for making a submission and can't get it to accept the file. What is the trick? I had no problems in the other competition.,0,None,1 Comment,Thu Mar 15 2012 04:23:05 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1541,/competitions/benchmark-bond-trade-price-challenge,74th /aimago,about the weight of keywords in user_key_word.txt,Is the weight of a keyword relative to the user itself or to all users (global)?,0,None,1 Comment,Thu Mar 15 2012 07:31:49 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1543,/competitions/kddcup2012-track1,None /helloworld34041,Calculate AUC,How do I calculate AUC in this track?
I don't understand the meaning of TP & FP in this track and I can't draw the ROC curve.,0,None,9 ,Thu Mar 15 2012 12:44:00 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1545,/competitions/kddcup2012-track2,133rd /charisse,HHP_release3 claims.csv CharlsonIndex,For this column in the data I only see dates. Is there another file I should be using?,0,None,3 ,Thu Mar 15 2012 15:26:51 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1546,/competitions/hhp,None /smile33551,where is the test dataset?,Where is the test dataset? How many items should be recommended to a user?,0,None,4 ,Thu Mar 15 2012 15:28:49 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1547,/competitions/kddcup2012-track1,None /grangwang,Would anyone/any team like to take on one teammate?,Please contact me soon :) orchestor {at] gmail ^dot} com Kaggle is a wonderland of machine learning!,0,None,2 ,Thu Mar 15 2012 16:30:31 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1548,/competitions/benchmark-bond-trade-price-challenge,None /smile33551,Is the 'items set' a subset of 'users set'?,Is the 'items set' a subset of the 'users set'? And should we recommend items to an item?,0,None,2 ,Thu Mar 15 2012 19:00:41 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1550,/competitions/kddcup2012-track1,None /benhamner,Test Data Release,"We're excited to release the test data for Track 1. It can be downloaded from the [Link]:https://www.kddcup2012.org/c/kddcup2012-track1/data. There will be an update posted within the next 24-48 hours that contains the new submission format for Track 1 (which will drastically reduce the size of the submission files). Good luck, and let us know if you have any questions!",0,None,8 ,Fri Mar 16 2012 07:08:10 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1551,/competitions/kddcup2012-track1,None /benhamner,Test Data Release Delayed,"Hi all, We are holding off on releasing the test data for Track 2 in order to check it more thoroughly for potential issues (such as data leakage). It should be posted within the next week. Thanks for your patience. In the meantime, the test data for [Link]:https://www.kddcup2012.org/c/kddcup2012-track1/data has just been released!",0,None,14 ,Fri Mar 16 2012 07:08:16 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1552,/competitions/kddcup2012-track2,None /andyxs,how to create a team,When can we create a team and how do we do it? Please provide more information about it!,0,None,1 Comment,Fri Mar 16 2012 07:19:24 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1553,/competitions/kddcup2012-track2,143rd /smile33551,Maybe I know what we should do ,"According to Yanzhi, who wrote: ""You can not pick items but rank items associated with users in the test set. Please wait for the test set."" The task is that we should rank the items in the test set for each user; it doesn't generate new items to recommend.",0,None,1 Comment,Fri Mar 16 2012 07:49:59 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1554,/competitions/kddcup2012-track1,None /dataminer10,Getting Data,I am teaching a Data Mining course and I need data for student projects.
How can I get this?,0,None,2 ,Fri Mar 16 2012 11:53:10 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1556,None,None /qiming0,not clear about 4(b) task,"4. (b) ""Now using the first 10 components of your PCA representation for the data, do linear regression, and report the 4-fold cross-validation perplexity."" I am very confused. Could anyone help explain the task? What is the data: x, or [x(:, end), x(:, end-34), x(:, end-35)]? What are the first 10 components of the PCA representation of the data? What are the inputs for the linear regression?",0,None,9 ,Fri Mar 16 2012 16:03:09 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1557,/competitions/mlpr-challenge,None /danmaftei1,Length?,"I know that's a horrible question to ask, but I have no sense of how verbose I should be. Minus graphs / images, is there a desired length, give or take a few pages?",0,None,1 Comment,Fri Mar 16 2012 18:45:26 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1558,/competitions/mlpr-challenge,25th /benhamner,Submissions Enabled and Leaderboard Activated,"We're excited to fully launch this competition! Submissions have now been enabled, and the leaderboard is active. See the [Link]:https://www.kddcup2012.org/c/kddcup2012-track1/data for examples of the submission file. Our system accepts compressed submissions (.gz, .7z, and .zip), and we recommend that you compress them to minimize upload time. Instructions on the submission format can be found [Link]:https://www.kddcup2012.org/c/kddcup2012-track1/details/SubmissionInstructions. Good luck, and please let us know if you have any questions about the submission process!",0,None,18 ,Fri Mar 16 2012 19:37:32 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1559,/competitions/kddcup2012-track1,None /benhamner,Welcome,"We're excited to launch this competition! How well can you predict the biological response to a molecule given only features derived from its structure and composition? The code to create the benchmark submissions is available from this [Link]:https://github.com/benhamner/BioResponse. Good luck on the competition, and let us know if you have any questions!",0,None,15 ,Fri Mar 16 2012 20:11:31 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1560,/competitions/bioresponse,302nd /user2ndarysignals,definition of log loss metric,"[Link]:https://github.com/benhamner/BioResponse The data is in the comma separated values (CSV) format. Each row in this data set represents a molecule. The first column contains experimental data describing a real biological response; the molecule was seen to elicit this response (1), or not (0). The remaining columns represent molecular descriptors (d1 through d1776); these are calculated properties that can capture some of the characteristics of the molecule - for example size, shape, or elemental constitution. The descriptor matrix has been normalized.",0,None,2 ,Sat Mar 17 2012 14:50:30 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1561,/competitions/bioresponse,None /kxia19023,Do we get ranking information from training set?,"Hi guys, Do we get the ranking of recommendations from the training set? Or are all accepted recommendations with value ""1"" treated equally for each user?
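On the 4(b) question above, one possible reading, sketched in R rather than the course's Matlab (so an illustration under stated assumptions, not the official answer): project the inputs onto the first 10 principal components, fit linear regression, and report perplexity as exp of the mean negative log predictive density under a Gaussian noise model, cross-validated over 4 folds. Here x is assumed to be a numeric feature matrix and y the regression target:

cv_perplexity <- function(x, y, n_pc = 10, k = 4) {
  z <- prcomp(x, center = TRUE, scale. = TRUE)$x[, 1:n_pc]  # first 10 PCs
  fold <- sample(rep(1:k, length.out = nrow(z)))
  nll <- 0
  for (f in 1:k) {
    tr   <- fold != f
    d_tr <- data.frame(y = y[tr], z[tr, , drop = FALSE])
    fit  <- lm(y ~ ., data = d_tr)
    mu   <- predict(fit, newdata = data.frame(z[!tr, , drop = FALSE]))
    s    <- summary(fit)$sigma          # Gaussian noise estimate
    nll  <- nll - sum(dnorm(y[!tr], mu, s, log = TRUE))
  }
  exp(nll / length(y))                  # perplexity = exp(mean NLL)
}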
Thus we don't get much score information from a binary-labeled class.",0,None,3 ,Fri Mar 16 2012 23:18:39 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1562,/competitions/kddcup2012-track1,None /hangz19598,# of clicks of each user in test set >=3?,"Since the submission requires 3 items for each user in the test set, does it mean that in the test set, each user has clicked at least 3 items?",0,None,2 ,Fri Mar 16 2012 23:23:28 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1563,/competitions/kddcup2012-track1,44th /harriken,Michigan,"Reading the essays I realised that most essays might come from a given US state. Words, style and idiom used by people vary slightly from state to state. Is it okay to use this info for the grading?",0,None,7 ,Sat Mar 17 2012 06:41:02 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1564,/competitions/asap-aes,15th /katzgilad,duplicate keywords with different weights for the same user,"Hi, The user_key_word.txt file has many cases of duplicate keywords, but usually they are all of the same weight. However, there are cases in which the weights are not the same. For example, for user 1003218 there are 5 instances of keyword 154145 - 4 with a weight of 2, but one with a weight of 0.1537. I have two questions: 1) Am I right to ignore the duplicates and treat them as a single instance of the keyword? 2) What is the right course of action when there is more than one possible value for a keyword? Thanks in advance!",0,None,2 ,Sat Mar 17 2012 10:08:00 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1565,/competitions/kddcup2012-track1,189th /ladderrunner,What is the minimum number of members in a team?,Is it possible that a team consists of only one member (the team leader)?,0,None,2 ,Sat Mar 17 2012 13:39:36 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1566,/competitions/kddcup2012-track2,None /jmp0xf,Absolute value or?,"In assignment 2.(d), should we use the absolute value of y(i)-x(i,end) since we use 64 bins?",0,None,7 ,Sat Mar 17 2012 16:30:56 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1568,/competitions/mlpr-challenge,2nd /jmp0xf,"does ""these methods"" include NB?","In assignment 4.(c), it says compare ""these methods""; should we also include the methods from Q3?",0,None,1 Comment,Sat Mar 17 2012 16:51:34 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1569,/competitions/mlpr-challenge,2nd /timmy1,Are the keywords stemmed?,"Hello, Since the keywords are built out of the user's tweets, are they stemmed, or do they also include connection words? Thanks",0,None,2 ,Sat Mar 17 2012 17:23:29 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1570,/competitions/kddcup2012-track1,594th /hangz19598,Why some users in test set are not in sample submission sub_small_header.csv?,"How many unique users are in the test set? Do we need to recommend for each user in the test set? Some users in the test set are not in the sample submission provided by the organizer, such as 100019, 100021, 100022, 100023, 100024.
Thanks.",0,None,5 ,Sat Mar 17 2012 21:29:59 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1571,/competitions/kddcup2012-track1,44th /kurak38,Impression estimate,"Hi, I wonder why we must estimate impression? It is actually result of your search engine advertising algorithm, so now the purpose of the contest is to guess your algorithm. Best Regards",0,None,1 Comment,Sat Mar 17 2012 23:49:24 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1573,/competitions/kddcup2012-track2,77th /cike33411,about the evaluation ,"ap@n = Σ k=1,...,n P(k) (3) If among the 3 items recommended to the user, the user clicked #1, #3, then ap@3 = (1/1 + 2/3)/2 ≈ 0.83 so,if 1item recommended to the user, the user clicked #1, then ap@3 = (1/1 )/1 =1 the result of one recommended will be better than it of three or two recommended if you convince thar one recommendation will be successful??? then you can ignore other recommendation?",0,None,14 ,Sun Mar 18 2012 09:32:23 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1574,/competitions/kddcup2012-track1,392nd /danmaftei1,preventing underflow with class-cond gaussian,"I am using a class-conditional Gaussian model with the surrounding pixels x(:, [end, end-34, end-35])). Evaluating the second datum x(2, :) at the class-1 and class-2 Gaussians causes underflow. Thus, calculating p(y|x) then moving to negative log-space results in Infinity where y=1 and y=2. In negative log-space, -log p(y|x) = -log p(y) - log p(x|y) + log SUM_y p(y)*p(x|y). The problem is that sum. My current solution is to do the sum in normal space, then take the log. The results I get are consistent (i.e. p(y|x) sums to 1 over all y), but obviously the sum isn't entirely accurate since p(x|y) will underflow for some values of y. I found an identity for the log of a sum: log (a+b) = log a + log (1 + exp(log b - log a)). This doesn't work, since I have to go back to normal space after doing that difference in log space, and this results in Infinity. What to do?",0,None,5 ,Sun Mar 18 2012 16:42:09 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1575,/competitions/mlpr-challenge,25th /sashikanthdareddy,R code for LogLoss,"actual<-c(0, 1, 1, 1, 1, 0, 0, 1, 0, 1) predicted<-c(0.24160452, 0.41107934, 0.37063768, 0.48732519, 0.88929869, 0.60626423, 0.09678324, 0.38135864, 0.20463064, 0.21945892) LogLoss<-function(actual, predicted) { result<- -1/length(actual)*(sum((actual*log(predicted)+(1-actual)*log(1-predicted)))) return(result) } LogLoss(actual=actual, predicted=predicted) [1] 0.6737617",2,bronze,15 ,Sun Mar 18 2012 17:58:40 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1576,/competitions/bioresponse,265th /flyingkebab,Perplexity VS Accuracy,"I am just wondering why we use perplexity as the measurement for different classifiers. Perplexity can be interpreted as how certain we are of the right predictions. The more certain we are, the less information we gain from the predictions, thus less perplexity. However small perplexity can not guarantee we have a really good prediction accuracy. Say, a classifer A prefers some certain classes by assigning very high probabilities can result in a bigger perplexity than a classifer B that assigns probabilities more evenly. But you can not say that A is definitely better than B. 
When comparing the classification accuracy, something interesting was found: Naive Bayes, which has a higher perplexity than Linear Regression, actually has a lower classification accuracy.",0,None,1 Comment,Sun Mar 18 2012 18:01:13 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1577,/competitions/mlpr-challenge,23rd /mrwhkczz,Linear regression - one for each image?,"In task 4, should I have a separate model (in this case, coefficients) for each image? (Or am I being silly?)",0,None,1 Comment,Sun Mar 18 2012 18:33:26 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1578,/competitions/mlpr-challenge,33rd /danmaftei1,difference b/wn min & max values in principal components,Does the difference between the minimum & maximum values in a principal component mean anything significant?,0,None,2 ,Sun Mar 18 2012 19:15:47 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1579,/competitions/mlpr-challenge,25th /smcgruer2,What to visualize for 3bii,"Apologies for another question, but I am struggling to see what we are meant to be visualizing for 3bii and what it might show us. At the moment I am examining the values of P(Y = y | x_i) for a number of different x_is - is this the correct thing to graph? (If it is then I think I know what the question is prompting us to state, otherwise I don't.)",0,None,7 ,Sun Mar 18 2012 19:17:00 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1580,/competitions/mlpr-challenge,29th /danmank,Visualising Predictions Question 3(b)(ii),"In this question, we are asked to visualise the predictions using histograms. Are we expected to visualise the negative log likelihood predictions (for every value of y) for various image patches? This will produce a histogram with 64 bins, each containing the prediction of y assuming that intensity value given x. If not, would you mind elaborating on what predictions we are required to visualise? Thanks",0,None,2 ,Mon Mar 19 2012 02:18:16 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1581,/competitions/mlpr-challenge,12th /orzvsorz,Different submission instruction,"In the ""Description"" tab, it says ""Teams are to submit a prediction file with respect to the testing dataset line-by-line in text format, in which each line contains 3 fields, (UserId)\t(ItemId)\t(Prediction). "", while in the ""Submission Instructions"" tab, it says the first column is userId, and ""The second column contains between 0 and 3 space-separated ids for recommended users or items to follow"". So, what does the first description mean?",0,None,1 Comment,Mon Mar 19 2012 03:32:02 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1582,/competitions/kddcup2012-track1,208th /orzvsorz,About the sub_small_header.csv file,"I noticed that there are 1340127 lines in this file, but the number of distinct user ids is 1196411, which means that some lines share the same user id. Why?",0,None,3 ,Mon Mar 19 2012 04:03:00 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1583,/competitions/kddcup2012-track1,208th /zhaogang,final VS. public test set?,"The rec_log_test.txt file contains data for both the public leaderboard set and the final evaluation set. The file is sorted temporally.
Any data with a timestamp < 1321891200 is used for the public leaderboard set, and >= 1321891200 is used for the final evaluation set. The goal is to predict which items that were recommended to the user were followed by the user, within each set. I am a little confused by the above paragraph. Basically, we should rank the items in the test data for the target user. Does the above mean that we only need to consider the target user and the candidate items for timestamps >= 1321891200?",1,bronze,5 ,Mon Mar 19 2012 06:06:26 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1584,/competitions/kddcup2012-track1,373rd /medial,What was the mechanism for pair removal on test?,"Hi, You said that: ""The repeated recommended (user, item) pair were removed from test set in this release."" What was the mechanism for removal? A few options: 1. Keeping the first appearance (by timestamp) of a repeated pair, removing all the others. 2. Keeping the last appearance (by timestamp) of a repeated pair, removing all the others. 3. Keeping a random appearance. Thank you",0,None,7 ,Mon Mar 19 2012 06:20:59 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1585,/competitions/kddcup2012-track1,6th /bjyang,Item-category,"Hello 1) Can the 'Item-category' have multiple values? Ex.) 1.2.3.4;5.6.7.8;0.0.0.0 2) Does the 'root' exist? There is an explanation about the root category like the one below, and it confuses me, because in the dataset there exist values like 1.4.2.6 and 8.2.4.1. Should we ignore the 1 and 8 from the dataset above? X.science-and-technology.internet.mobile (X indicates the root category and can be ignored); and X.business.Invest.Angel Investment.",0,None,3 ,Mon Mar 19 2012 06:50:25 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1586,/competitions/kddcup2012-track1,112th /marzena,Question 3b,"I am totally lost about what steps I should take to complete question 3b. We are given 64 alphas, but we work with 3 x attributes that can take 1 of 64 values. Does it mean we have 192 parameters? Also, when I want to store x|y, few values are non-zero; the rest are all zeros. Working with -logs just turns them to Inf. When I want to use them later, all my results turn to Inf. I am sure I am doing something wrong, but I am unable to find out what. Any suggestions? Thanks",0,None,10 ,Mon Mar 19 2012 15:21:46 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1590,/competitions/mlpr-challenge,None /rajstennajbarrabas,Observations of the data,"I have to withdraw from the competition, so I'm posting some observations and conclusions so that others can make use of them. Observation 1: The distance measurement is per channel, but offset. I took separate R, G, B histograms in the distance frames and compared the pixel counts. The data seem to indicate that the R, G, and B channels all measure the same distribution, but offset from each other. For example, the count of pixels which have a red = 10 is the same as the count of green = 12 and the same as blue = 8. I concluded from this that the three channels are measuring the same distribution but offset from each other, and that calculating the distance at a pixel by averaging the channels (R+B+G)/3 would effectively fuzz the data. To calculate the distance of a particular pixel, simply use the green channel value. This appears to have the best spread of values within the distribution.
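For anyone who wants to check this themselves, here is roughly how the per-channel histograms can be built (a Python/OpenCV sketch of my procedure; the file path is illustrative):
import cv2
import numpy as np

cap = cv2.VideoCapture('devel01/K_1.avi')  # illustrative depth video
hist = np.zeros((3, 256), dtype=np.int64)  # one histogram per B, G, R channel
while True:
    ok, frame = cap.read()
    if not ok:
        break
    for c in range(3):
        hist[c] += np.bincount(frame[:, :, c].ravel(), minlength=256)
cap.release()
# Comparing hist[0], hist[1], hist[2] shows the same counts under a small
# offset; the green channel is index 1 in OpenCV's BGR ordering.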
Observation 2: The distance values are not continuous. The histograms also show gaps in the data counts - channel values which have a count of zero at periodic intervals. For example, a Red channel histogram is shown below. The values at positions 5, 12, 19, 26 and so on are zero. This indicates, for example, that there are no pixels in the distance image which have a Red component of 26. This is true for all distance frames in all videos, and since the green histogram is at an offset from the red data, there are corresponding zeroes in the green channel, and also in the blue channel. Calculating the *change* in value from one frame to the next without compensating for these zeroes will skew the results. For example, consider the MSE of the same pixel taken from two succeeding frames. One frame might show a distance of 11 at that location, and the next might show a 13. The MSE will appear to be (13-11)^2 = 4 when in reality the distances differ by 1, so the MSE should be (12-11)^2 = 1. The extra distance is an anomaly, caused by the hole in the digitization technique. Since no pixel can have a distance of 12, the new pixel has to be the next higher value. When distances are remapped to avoid the zeroes, 212 separate distance values remain in a more-or-less continuous distribution. Observation 3: Black (R=G=B=0) means ""no information"". In the distance images, the value of ""black"" does not mean ""really close"", but instead means ""no information"". The Kinect apparently uses this value to report that the infrared signal has disappeared - hence, ""no information"". For an example of this, look at devel05/K_26.avi and notice how the patterns change frame to frame. Any detection algorithm needs to account for this special value; for example, by relying on pixel area averages instead of absolute pixel counts. Observation 4: The ""no information"" value (Black) is low-pass filtered. A histogram of the distance images does not show a sharp vertical line at zero, but a smoothly sloping curve starting at around value 5 and rising up to meet the value at zero. This indicates that the black value has been put through a low-pass filter, with the result that some of the black values appear as small-numbered values close to black. Practically speaking, this means that values close to black (around 5 and less) should also be considered the ""no information"" value. How close depends on the specific video - I think this has to do with the specifics of the AVI encoding. Histogram of red channel in one of the distance frames: 1270 27 17 15 15 0 22 14 13 15 13 9 0 2 5 4 2 4 3 0 2 2 2 3 4 4 0 4 2 7 2 7 5 0 5 3 4 6 3",2,bronze,1 Comment,Mon Mar 19 2012 20:40:54 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1591,/competitions/GestureChallenge,46th /chris9,Computational Power,"I was wondering if any of the leaders could comment on the computational effort they used to get their results. From reading Market Makers' papers, it would seem they are using a tremendous amount of resources to perform all those fits. Anyone want to share run times, # of nodes and node configurations used to perform these kinds of runs?
Thanks",0,None,3 ,Mon Mar 19 2012 21:31:28 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1593,/competitions/hhp,314th /dfb35928,very serious ethical implications,Isn't anyone concerned that building a tool to provide health insurance companies with information about patient re-admissions before treatment decisions have been made and without regard to patients' long term survivability is essentially giving the companies a tool with which they can justify declining treatment to the patients who need it most?,0,None,6 ,Mon Mar 19 2012 23:10:31 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1594,/competitions/hhp,None /xenzios,molecular descriptors question,"Do we have any informaton as to what each molecular descriptor actualy is, perhaps categorically? Thanks.",0,None,3 ,Tue Mar 20 2012 02:06:15 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1595,/competitions/bioresponse,None /benhamner,Competition Ideas,"On occaision an organization will approach us, wanting to sponsor a competition for the public good. However, they don't always come with a specific problem or idea. We have many ideas of our own, but we want to make sure we're presenting potential competition sponsors with some of the best and most relevant open problems, where their money could have the greatest impact. Is there any competition that you would love to compete in, or data set that you want to see on our platform? If so, please let us know!",0,None,10 ,Tue Mar 20 2012 03:45:43 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1597,None,None /benhamner,Welcome,"Can people be reliably identified based on their eye movement characteristics? We're excited to launch this research competition to help find out! The code to create the benchmark submissions is available from this [Link]:https://github.com/benhamner/emvic. Good luck on the competition, and let us know if you have any questions!",0,None,1 Comment,Tue Mar 20 2012 05:32:44 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1598,/competitions/emvic,None /iotcasc,Confused about Submission Instructions and sub_small_header.csv,"The Submission Instructions says ""Any data with a timestamp < 1321891200 is used for the public leaderboard set, and >= 1321891200 is used for the final evaluation set."" But i find in the sub_small_header.csv,all of the unixtimestamp of user 2421056 is bigger than 1321891200 .Why ?",0,None,1 Comment,Tue Mar 20 2012 08:48:42 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1600,/competitions/kddcup2012-track1,94th /nosik35408,Competition deadline,"Hi, Just out of curiosity: Kaggle says there are 9 days left for the competition, but the assignment deadline is today. Does it mean we can make submissions to Kaggle after submitting the assignment?",0,None,1 Comment,Tue Mar 20 2012 11:21:25 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1601,/competitions/mlpr-challenge,None /djokov88,Log Gaussian,"In one of the messages Amos said ""For example do not compute a Gaussian, and then take logs. Compute instead the log Gaussian."". 
Is that roughly it? In Matlab I presume this means computing the log density directly (lognpdf appears to be the log-normal density, not the log of the normal pdf), but how do we proceed from there to get to Kaggle-friendly results?",0,None,2 ,Tue Mar 20 2012 13:34:43 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1602,/competitions/mlpr-challenge,3rd /alexeigor,"AdId connected to several Urls, titles...","As I understand it, a particular AdId must be connected with only one UrlId and TitleId, but in training.txt an AdId can be connected to multiple ones. Can the organizers comment on this issue?
cut -f4 training.txt | sort -u | wc
641707
awk -F""\t"" -v OFS=""\t"" '{ print $3,$4 }' training.txt | sort -u | wc
674702
awk -F""\t"" -v OFS=""\t"" '{ print $4,$10 }' training.txt | sort -u | wc
4257577",0,None,1 Comment,Tue Mar 20 2012 13:57:28 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1603,/competitions/kddcup2012-track2,140th /bjyang,Make clear the task,The task is to recommend top-3 items from 'item.txt' to each user. Right?,0,None,6 ,Tue Mar 20 2012 16:14:17 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1604,/competitions/kddcup2012-track1,112th /kalimet,Paragraphs?,"Hi, Is there any way to tell where the paragraph breaks are? I don't see any markup in the TSV files that would indicate paragraphs / essay organization. Thanks!",0,None,1 Comment,Tue Mar 20 2012 17:19:28 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1606,/competitions/asap-aes,None /datacooking,Issues with class labels,"Hi, can anyone explain why the first column (class label?) of ""train.csv"" contains values other than just 0 or 1? Thank you.",0,None,7 ,Wed Mar 21 2012 04:06:01 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1609,/competitions/emvic,6th /tingtingxiong,What is the Maximum Achievable Score?,"It has been said that some users may accept none of the items in the test set. Because the ap@3 is always zero for these users, they bring down the total AP@3 score. Thus the maximum achievable score is less than 1.0. Would the organizers be willing to inform contestants of this optimum score, for both the leaderboard and final evaluation test sets? Alternatively, why not exclude such users from the test set, as they contribute nothing to the score? Thank you!",0,None,4 ,Wed Mar 21 2012 10:16:02 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1610,/competitions/kddcup2012-track1,None /hangz19598,what is the AP of a user if he does not follow any recommended item?,"Is he counted but set AP=0 for him, or just skipped? Thanks.",0,None,2 ,Wed Mar 21 2012 15:53:14 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1611,/competitions/kddcup2012-track1,44th /ahassaine,8192 ms?,The sum of all times is: 1600+20+550+550+550+550+550+550+550+550+550+550+1100 = 8220 ms. Slightly above 8192 ms. Am I missing something?
Thanks!,0,None,7 ,Thu Mar 22 2012 06:39:52 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1614,/competitions/emvic,17th /elpiloto,Feature Descriptions?,The data description simply states the 1776 features as molecular descriptors - is there any way we can get more detailed information about what each feature corresponds to?,0,None,1 Comment,Thu Mar 22 2012 10:08:24 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1615,/competitions/bioresponse,271st /luoleicn,Could the organizer supply an evaluation script?,"I am still confused about the evaluation metric. Could the organizer supply an evaluation script, so I can use it for local testing? Thanks in advance",0,None,7 ,Thu Mar 22 2012 10:26:20 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1616,/competitions/kddcup2012-track1,89th /kdesai,Track 1 - Clarification Questions,"1. The user tags are mapped to ids, and the keywords are also mapped to ids. If 'computer' is a tag word, and it is also a keyword, would it have the same number as its tagid as well as keyword-id? 2. In rec_log_train.txt, what does the timestamp stand for: is that the time the user accepted or rejected the recommended item, or is that the time the item was recommended to the user? Many accepted/rejected actions by a user share the same timestamp. 3. How are the recommendations presented? Does the user have a button to ""Accept All"" or ""Reject All""? If the user moves on without clicking on any recommendations, is it counted as an implicit rejection of all recommended items? 4. Once a user U accepts the recommendation for item T, will the system ever recommend the same item T to the same user U in the future? 5. Are the timestamps in rec_log_test.txt to be recognized while creating the submission? To elaborate, suppose we see in the test set that a user U was recommended item A at time t1, item B at time t2, and item C at time t3. Are we supposed to rank the probability that the user will follow item A at time t1, B at time t2, and C at time t3? Or are we supposed to rank the probability that the user will follow item A, B, or C at a time t4 (where t4 is not disclosed)? Thanks a lot in advance for the clarifications!",0,None,3 ,Thu Mar 22 2012 14:33:13 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1617,/competitions/kddcup2012-track1,381st /carter,download seems to hang,"I'm not sure why, but every time I try to download that dataset, at some point it hangs. Is this possibly somehow a failing of how Chrome is interacting with the download server, or are other folks having these difficulties?",0,None,6 ,Thu Mar 22 2012 20:06:48 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1618,/competitions/kddcup2012-track2,None /sylvanas,confused about submission file,"""It is strongly recommended to compress the submission files before uploading them"". I don't quite understand this instruction. We should submit a CSV-format file; how can it be compressed?",0,None,3 ,Fri Mar 23 2012 03:35:47 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1620,/competitions/kddcup2012-track1,228th /sreeaurovindh,Data Missing for User profile other than userid=0!!
Please confirm,"Hi, I found a few statistics and wanted to confirm within forum that these infact exits.The mapping between training document and user_profile.txt is missing with the following numbers: Total # of training samples for which userid does not have a mapping (Excludes userid=0) = 98,95,32 Total # of distinct userids that have no mapping =2,31,598 The following are some of the userids 176 697 743 793 875 888 906 937 Can you please verify and confim it Thanks data learner",0,None,1 Comment,Fri Mar 23 2012 03:57:42 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1621,/competitions/kddcup2012-track2,None /salimali,Rule Clarification,"The rules state... In order to be eligible for prizes, you are required to submit the complete model you will use to score the test set prior to the release of the test set. and the deifinition of model in the Terms and Conditions is any code, text, algorithm or series of algorithms, equation or series of equations, material, software, designs, documents, descriptions or specifications which is used, in whole or in part, directly or indirectly, in calculating, drafting, building, devising, calibrating, testing, evaluating, analyzing or generating an Entry, or which itself constitutes the whole or part of an Entry. Can you confirm that the model we need to submit has to be a generic algorithm that is applied to all essay sets (and can be applied to future new essay sets), rather than an equation that can be pre-tuned to each individual essay set with specific pre configured parameters at an essay set level (and hence useless on any new essay sets). Hope this makes sense!",0,None,7 ,Fri Mar 23 2012 08:16:20 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1622,/competitions/asap-aes,2nd /fooltencent,Come on let's get 0.6+!,We duplicate the first column of the sub_min.csv three times to form the result and get [Link]:http://www.kddcup2012.org/c/kddcup2012-track1/leaderboard#0.63185 on the leaderboard...,0,None,6 ,Fri Mar 23 2012 15:49:12 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1625,/competitions/kddcup2012-track1,159th /vikpar,Looking to work with someone,"I would ideally like to work with someone (or a team) whose score is within +/- .01 points of mine. That said, I understand that the scores are a bit fluid, so I'm not going to discriminate too much on that basis if your score falls below that threshold. I certainly don't mind discussing my approach in a general manner and deciding whether or not it will fit well with yours before we make a decision. Please email me at vikp.kaggle at gmail.com if interested. Thanks!",0,None,5 ,Fri Mar 23 2012 19:57:04 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1626,/competitions/asap-aes,3rd /timmy1,Are all keywords listed in either item.txt or user_key_words.txt,"Hello, Is it possible for the user_key_word.txt file to contain keywords that were not listed in the item.txt file?",0,None,2 ,Sat Mar 24 2012 10:52:28 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1628,/competitions/kddcup2012-track1,594th /rfeather,POI locations?,"Sorry if I've missed this, but are the actual locations of the ""focus dots"" available? 
Thanks!",0,None,8 ,Sat Mar 24 2012 16:54:37 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1629,/competitions/emvic,46th /npbendre,additional info on the 1st 4 additional files?,"Hey, Does anyone know what the tokens stand for in the addtional 4 files given in the dataset? is there a way to associate them with the training file ? Thanks!",0,None,1 Comment,Sat Mar 24 2012 21:45:01 GMT+0100 (heure normale d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1630,/competitions/kddcup2012-track2,None /chenchen,I wonder if I comprehended the evaluation correctly?,"Hi, I'm a little confused about the evaluation process in Task1. Is it means that: We are supposed to recommend 3 items to each user, noted as (Ri1, Ri2, Ri3) and each user has an actual accepted list, in which some are in the leaderboard set, some are in final set, noted as (Li1, Li2, Li3...Lim, Fi1, Fi2,...Fin) Current leaderboards are calculated MAP@3 based on users' Li part, and the final scores are calculated on users' Fi part? or the final scores are calculated based on both Li and Fi parts? I hope I' ve made my question clear.",0,None,4 ,Sun Mar 25 2012 10:18:55 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1632,/competitions/kddcup2012-track1,237th /synopsis360,Segmentation of a video,What is the simplest way of segmenting a video I mean how can we find start and end a gesture in a test video ? looking forward for your cooperation thanks,0,None,1 Comment,Sun Mar 25 2012 18:01:09 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1633,/competitions/GestureChallenge,35th /rajstennajbarrabas,More observations of data,"Apologies for starting a new thread, I can't seem to attach images to responses. Observation: The depth image is apparently offset and bigger than the color image Posted below is an image which overlays the Kinect and color data (from devel03). Notice that the Kinect data is both offset to the left and bigger than the color data. That is to say, the Kinect data does not appear to be offset *above* the color image, it's simply bigger than the color image. (The red is an artifact of some processing I was doing - identifying pixels of a particular depth.) Any algorithm that attempts to correlate motion between the two frames will need to take this into account. Observation: The color stream is of marginal utility I've also uploaded 3 selected images from the videos of the lexicon of devel03. Notice anything unusual? Some of the lexicon videos are recorded with the actor wearing a black ""bib"", some are recorded with the bib on his lap, some without the bib, and some with a different color shirt. An algorithm that attempts to use the color videos to identify the gestures will need to allow for this - the actor may be wearing different clothing from the lexicon videos. (I asked about this in the forums, and was told that dealing with actors who change clothing is part of the competition.) As you can see, attempting to locate (for example) the actor's chest using color correlation will not work well. 
Observation: One of the videos is broken: valid01/K_24.avi [Link]:https://storage.googleapis.com/kaggle-forum-message-attachments/2044/OverlayDevel03.jpg [Link]:https://storage.googleapis.com/kaggle-forum-message-attachments/2045/M_9.0001.bmp [Link]:https://storage.googleapis.com/kaggle-forum-message-attachments/2046/M_1.0001.bmp [Link]:https://storage.googleapis.com/kaggle-forum-message-attachments/2047/M_5.0001.bmp [Link]:https://storage.googleapis.com/kaggle-forum-message-attachments/2048/OverlayDevel03.jpg",1,bronze,2 ,Mon Mar 26 2012 05:16:04 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1635,/competitions/GestureChallenge,46th /drshabbir,"Newcomer, can anyone clarify the following doubts?","1. Is this data selected from in-patients of the hospitals, or randomly selected from the population? 2. If it is from hospital data, then why do the majority of patients have ""0"" LOS? 3. When we give a value of '0' to all the 70k patients that need to be predicted, the error rate is 0.22. 4. I know I am making a mistake somewhere; please guide me. Thanks in advance. Regards, Shabbir",0,None,6 ,Mon Mar 26 2012 06:39:31 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1636,/competitions/hhp,None /farseer,Task Validation,"The task is recommending 3 (or more) items for each user in ""rec_log_test.txt"", which has 1340127 rows?? Is it possible that train.txt and test.txt have the same user? Do we just predict the result of test.txt, recording 1 and abandoning -1 to make the final record, or can we add some new results? Thanks!",0,None,1 Comment,Mon Mar 26 2012 09:46:58 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1637,/competitions/kddcup2012-track1,181st /reza736782,Location?,"Could you please clarify the x, y positions? Are they the position of gaze on the monitor? Why is the variation huge even when the person is looking at the same point? For example, for the first row in the training file, within the first 1600 ms the lx samples change from 150 to 361!",0,None,4 ,Tue Mar 27 2012 02:45:02 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1640,/competitions/emvic,None /zenog15462,Public leaderboard results: 0.00000,"Hello, I just tried to submit two submissions, and got a result of zero for both of them. This is a bit confusing to me, as even random guessing should lead to a MAP@3 > 0. In fact, the average overlap (number of items that are on both lists) between result lists in my submissions and the example file is 2.06 and 1.80, so I cannot imagine that the true result is 0.0000. I use Unix newlines, and the two-column file format. One submission was compressed with gzip, the other one not. The output for one of the submissions: INFO: Could not infer file format from the file extension. Assuming '.csv' format based off file contents. INFO: Assuming submission does not have a header since the first row looks like data. (Line 1) The first 5 lines of one of the submissions:
100001,647356 458026 1870054
100004,647356 1606574 1675399
100005,1606574 1774505 218438
100009,1606599 647356 1760327
100010,458026 2105511 1774963
I am not sure whether I have missed something very obvious, or whether there is a bug in the evaluation code. Any ideas?
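For reference, the overlap figures above were computed roughly like this (a Python sketch; 'my_submission.csv' is a placeholder name):
bench = {}
with open('sub_small_header.csv') as f:
    next(f)  # skip the header row
    for line in f:
        uid, items = line.rstrip('\n').split(',')
        bench[uid] = set(items.split())

total = n = 0
with open('my_submission.csv') as f:
    for line in f:
        uid, items = line.rstrip('\n').split(',')
        total += len(set(items.split()) & bench.get(uid, set()))
        n += 1
print(total / n)  # average number of items shared with the example file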
Best regards, Zeno",0,None,7 ,Tue Mar 27 2012 11:56:30 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1642,/competitions/kddcup2012-track1,158th /user96well,test vs train?,What is the difference between the two datasets test.csv and train.csv? Which one am I supposed to use?,1,None,2 ,Tue Mar 27 2012 15:30:19 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1643,/competitions/bioresponse,None /kdesai,Evaluation: number of items recommended vs accepted,"I have a clarifying question about the evaluation metric for track 1. Here's what the description says: ""there are m items in an ordered list is recommended to one user, who may click 1 or more or none of them to follow..."" and then in the formula for AP@n, the denominator is (number of items clicked in m items). If I remember correctly, somewhere it is mentioned that in the recommendation system under consideration, m is always equal to 3 (true?). Also, if I understand correctly, the range of values for the denominator is between 0 and m? Or are we talking about the number of all items that the user has accepted up to a given point in time (given by the timestamp), which could be much greater than m? Thanks for the clarifications.",0,None,6 ,Tue Mar 27 2012 15:58:21 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1644,/competitions/kddcup2012-track1,381st /hangz19598,Please clarify what should be the denominator when calculating AP@3 for a single user?,"Should it be the total number of clicks a user has in the validation/test set? Or should it be the total number of clicks a user has in the top 3 predicted items? I saw some posts in the forum and people were giving conflicting solutions. Thanks.",0,None,1 Comment,Wed Mar 28 2012 18:56:47 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1647,/competitions/kddcup2012-track1,44th /rajstennajbarrabas,Notes on segmentation,"(This is in response to the segmentation question. Apologies for starting a new thread; I'm having difficulty with responses.) Calculate the image-by-image MSE difference. That is to say, for each image, for each pixel, calculate the squared difference in pixel values between this image and the next image. Ignore the case where either pixel is black, as that indicates ""no information"" in the Kinect data. Sum the results (of all pixels), and normalize over the number of non-black pixels. (And note my previous comment about gaps in the color distribution.) When you're done, you will have an array of values, one for each image (except the last), which indicates the amount of change from one frame to the next. I've included plots of this value for selected videos from devel01 below. In order to segment a video, find images which match the ""null position"": the position that is normally before and after each of the lexicon videos. The first problem is identifying the null position. Some of the lexicon videos don't start with a null position - they jump immediately into an action sequence. The plot of K_3 has several frames of low action at the beginning, so it's a good bet that these represent the null position. The plot of K_4 has high action right from the start - this indicates that there is no null position at the beginning. You can check the action values of each of the lexicon videos and decide which of these indicate the null position.
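In code, the action value I plot is roughly the following (a Python/OpenCV sketch of my procedure; using the green channel as distance and a near-black 'no information' cutoff, per my earlier posts):
import cv2
import numpy as np

def action_curve(path, black_thresh=5):
    cap = cv2.VideoCapture(path)
    ok, prev = cap.read()
    values = []
    while ok:
        ok, cur = cap.read()
        if not ok:
            break
        a = prev[:, :, 1].astype(np.float64)  # green channel as distance
        b = cur[:, :, 1].astype(np.float64)
        valid = (a > black_thresh) & (b > black_thresh)  # drop 'no information'
        if valid.any():
            values.append(((a[valid] - b[valid]) ** 2).mean())
        prev = cur
    cap.release()
    return values  # one action value per frame transition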
Note that the null position changes over the course of a video. If an actor moves slightly to the left, a direct match won't work very well unless you first identify the actor and tune out the invariant features. Also, the null position changes slightly from video to video, complicating the match algorithm. For example, this actor puts her hands down (at her sides) for the null position, but never at quite the same angle. This can be difficult to match, since the field of matching (the hands) is rather small - a small offset in the angle or position can result in a large mismatch. Looking at K_3, and not counting the null position at the beginning and the end, we see that the gesture is composed of three segments: an action, a pose, and another action. The first action is when the actor sweeps the hands into position, the pose is where the hands are held in position for a few moments, and the second action brings the hands back to the null position. This suggests a way of describing the gestures in terms of actions and poses. K_3 has action-pose-action, while K_4 is mostly action. Alternately, you could also describe K_4 as having 5 actions with little or no pose in between. (Looking at the video will confirm this.) If you segment the videos by action and pose, you only need to match a single image from within a pose segment. If the actor is holding still, all the images within the pose will be largely the same. You match one frame against the one frame selected from each pose in the lexicon, and choose the one with the best match. Similarly, you can digest the motion within an action in various ways to compare it to an action segment from the lexicon. If the motion is largely ""up"" and the action happens to the right of the actor, that can be matched to similar descriptions from the various lexicons. This will greatly narrow the search space for your matching algorithm. If you can immediately discount certain lexicon entries because the action/pose contour is wildly different, it makes it possible to put more effort into distinguishing between more similar gestures. As a final example, I've plotted the action for K_19 below. The video contains gestures 10-2-3-3, and a human can easily see the transitions between the ""character"" of the segments. Also, you can see where the poses are, and where poses are likely to be the null pose. The gesture ""2"" corresponds to video K_4 - does this look similar to the action plot of the K_4 video? If you match the poses and the actions to specific lexicon entries, you will know when a lexicon entry ends - and you can segment the video at that point. Also, if you reach the end and have actions/poses left over, you know you've made a mistake, etc. Hope that helps! [Link]:https://storage.googleapis.com/kaggle-forum-message-attachments/2068/K_19.jpg [Link]:https://storage.googleapis.com/kaggle-forum-message-attachments/2069/K_2.jpg [Link]:https://storage.googleapis.com/kaggle-forum-message-attachments/2070/K_4.jpg",1,bronze,2 ,Thu Mar 29 2012 07:24:31 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1648,/competitions/kddcup2012-track1,251st /smallabc,The relationship between user_action.txt and user_sns.txt,"I construct a graph G(V,E) from user_sns.txt. Given a tuple (start, end, tweet, retweet, comment) in user_action.txt, I find that the edge (start, end) may not be in E. This is so strange: if A is not a follower of B, how can A tweet B? Can somebody be kind enough to explain that?
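What I did, roughly (a Python sketch; I assume both files are whitespace-separated with the two user ids in the first columns):
edges = set()
with open('user_sns.txt') as f:
    for line in f:
        follower, followee = line.split()[:2]
        edges.add((follower, followee))

missing = 0
with open('user_action.txt') as f:
    for line in f:
        start, end = line.split()[:2]
        if (start, end) not in edges:
            missing += 1  # an action between users with no follow edge
print(missing)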
Or: user_sns.txt defines A->B and B->C; then in user_action.txt B tweets at C, and then A tweets at B, and you record this action of A as (A, C, XXX, XXX, XXX) in user_action.txt. Am I right?",0,None,2 ,Thu Mar 29 2012 09:50:54 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1649,/competitions/kddcup2012-track1,251st /mikhail1,ItemId vs UserId,"Hi All! This might sound like a stupid question, but I wonder whether ItemId and UserId are defined on the same domain. Is it correct that ItemId = UserId identifies the same actual user? For example, if we take ItemId = 112125 from item.txt, does it mean that the record with UserId=112125 from User_Profile.txt provides additional info about this item? Thanks.",0,None,15 ,Thu Mar 29 2012 13:03:39 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1650,/competitions/kddcup2012-track1,None /pb236227,Appears to be a bug in the test data/sample submissions.,"The submission rules state that ""Your submission must: ...have exactly 1,340,127 rows."" And indeed, the sample submission sub_small_header.csv contains 1340127 lines (not counting the first header line).
$ cat sub_small_header.csv | grep -v ""^id"" | wc -l
1340127
But that file only contains 1196411 unique user IDs. (Same as the test data, rec_log_test.txt.)
$ cat sub_small_header.csv | grep -v ""^id"" | cut -d',' -f1 | sort | uniq | wc -l
1196411
$ cat rec_log_test.txt | cut -f1 | sort | uniq | wc -l
1196411
It appears that many user IDs are duplicated in sub_small_header.csv, starting at line 713613. Therefore, some users (e.g. user ID 100014) have multiple ranked recommendations.
$ grep --line-number ""^100014,"" sub_small_header.csv
10:100014,859545 2167615 715470
713617:100014,1774959 1774418 721665
Unless I am overlooking something, this looks like a bug in the submission evaluation criteria. Can someone from KDD please comment? I briefly searched the forums, but didn't find anything about this.",0,None,3 ,Thu Mar 29 2012 14:37:08 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1651,/competitions/kddcup2012-track1,501st /mikhail1,One more question about test dataset generation,"I would like to ask whether the test dataset contains only UserId-ItemId pairs that were recommended (and either accepted or rejected), or whether it may contain ANY UserId-ItemId pair (for instance, a pair where the ItemId has not been recommended to the UserId at all). Sorry if I missed it in earlier discussions, but I cannot find the answer.",0,None,2 ,Thu Mar 29 2012 16:33:24 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1652,/competitions/kddcup2012-track1,None /dminesl,How does the evaluation metric incorporate ranking information?,"In some discussions in the forum I came across the point that for each user we have to submit the top 3 recommendations (in order). But I'm not clear how the evaluation metric takes account of the ranks. For example, out of 12 recommendations given to the user, if the user has accepted the items #3 #6 #4 #2 #12 (#3, #6 and #4 are the top 3 in order), but my algorithm produced #3 #4 #12 as the top 3 recommendations, what is the score I will get? Could you please explain how to calculate ap@3 for this example.
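My current reading of the metric, written as code (a Python sketch based on the formula on the Evaluation page; the denominator is exactly the part I am unsure about):
def apk(actual, predicted, k=3):
    # actual: set of items the user accepted; predicted: ranked recommendations
    score = hits = 0.0
    for i, p in enumerate(predicted[:k]):
        if p in actual:
            hits += 1
            score += hits / (i + 1.0)
    if not actual:
        return 0.0
    return score / min(len(actual), k)

# For actual = {3, 6, 4, 2, 12} and predicted = [3, 4, 12], all three
# predictions hit, giving (1/1 + 2/2 + 3/3) / min(5, 3) = 1.0 under this reading.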
Another thing: there is no mention of submitting the top 3 results in the submission instructions; it says ""The second column contains between 0 and 3 space-separated ids for recommended users or items to follow (for example ""647356 458026 1606609"") "", which implies any 3 recommendations will be fine. I would appreciate it if the guidelines were consistent with the forum discussions.",0,None,8 ,Thu Mar 29 2012 19:43:30 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1653,/competitions/kddcup2012-track1,None /rajstennajbarrabas,Identifying the actor,"(I withdrew from the competition, so I'm posting observations which might be of use to people.) To identify gestures you need a way to identify the actor. You need to be able to distinguish ""actor"" from ""non-actor"" in the image. One way to do this is by using differential error. I've posted the MSE action plot for devel02/K_3 below. As per my previous post, the plot shows a clear action-pose-action gesture, with a null pose at the beginning (but not at the end, in this instance). From the null-pose section, take any two consecutive frames, and create a new frame which is the MSE of the difference, pixel by pixel. I've posted the results below. This handily identifies the actor in the image. If you can deal with small gaps in the outline, a flood fill algorithm will identify every pixel associated with the actor in this image. Given a section of the outline (a zoomed-in window, for example), the actor will be the part that's closest to the camera. Humans are always moving a little - breathing, adjusting position, &c. Pixels which catch the edge of the human will ""fall off"" to hit the background when the human moves slightly in one direction, and pixels on the other side will be ""caught short"" when the edge of the human moves to intercept. The MSE of these changes is very large compared to the noise value of invariant features, or even variations within the actor's profile. As a follow-on to my previous post about pose images, construct a similar frame from the pose section in the middle of K_3. For comparison, consider a similar image from the corresponding pose section in K_16. Since this is a pose section, there is little relative motion and we only have to match a single frame. Instead of matching frames, we could match differential images instead. A floodfill can set all pixels to either ""actor"" or ""non-actor"", which greatly simplifies the matching algorithm. In the case shown there is a great deal of difference between the two images. Are the ""thumbs up"" significant? This is where the matching algorithm comes into play. We don't need to find a match between these two images, we only need to state that these images match *better* than other lexicon cases. Thus, if the lexicon had two gestures, one with thumbs up and one not, then the thumb position is significant. This lexicon does not have such a set, so matching this pose is greatly simplified. The action potential (the MSE graph values) and the differential images are very useful in identifying features - the human visual system does essentially this as one of the steps in cognition. I don't know if the Microsoft API has these types of data channels, but they're really useful for identifying features.
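The differential frames shown below come from something like this (a Python/OpenCV sketch; the threshold and kernel size are ad hoc choices):
import cv2
import numpy as np

def actor_mask(frame_a, frame_b, thresh=20):
    a = frame_a[:, :, 1].astype(np.float64)  # green channel, as in earlier posts
    b = frame_b[:, :, 1].astype(np.float64)
    diff = (a - b) ** 2                      # pixelwise squared error
    mask = (diff > thresh).astype(np.uint8) * 255
    kernel = np.ones((5, 5), np.uint8)
    # close small gaps in the outline so a flood fill stays inside the actor
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)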
[Link]:https://storage.googleapis.com/kaggle-forum-message-attachments/2073/Devel02_K_3Action.jpg [Link]:https://storage.googleapis.com/kaggle-forum-message-attachments/2074/Devel02_null.jpg [Link]:https://storage.googleapis.com/kaggle-forum-message-attachments/2075/Devel02_K_3.jpg [Link]:https://storage.googleapis.com/kaggle-forum-message-attachments/2076/devel02_K_16.jpg",1,bronze,1 Comment,Fri Mar 30 2012 00:39:36 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1654,/competitions/GestureChallenge,46th /mingot,Data aggregation and primary key,"Hi, As I understand from: ""We divide each session into multiple instances, where each instance describes an impressed ad under a certain setting (i.e., with certain depth and position values). We aggregate instances with the same user id, ad id, query, and setting in order to reduce the dataset size."", in ""training.txt"" userid+adid+queryid+depth+position should be a primary key. But that is not true; approximately 1% of the instances are duplicated in that sense. For example, the following one is duplicated:
$ cat training.txt | grep 21258213 | grep 3566940
1 1658343530815135762 21258213 2298 3 2 7246 4307 7183 16230 3566940
1 1658343530815135762 21258213 2298 3 3 7246 4307 7183 16137 3566940
1 1658343530815135762 21258213 2298 3 3 7246 4307 7183 16230 356694
Is it a mistake, or am I misunderstanding the problem statement? Thank you.",0,None,2 ,Fri Mar 30 2012 18:35:42 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1656,/competitions/kddcup2012-track2,5th /titanoceanus,Submission on Validation Data,"There are 4,218 observations in the validation dataset, but there are 4,818 essay_id's in the submission template, and required by the submission parser. Can someone advise as to how to submit against the validation dataset of 4,218 while we await the test dataset? Regards, Oceanus",0,None,1 Comment,Fri Mar 30 2012 19:13:12 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1657,/competitions/asap-aes,84th /abhinavkulkarni,Few Questions,"Please go through the following points: user_profile.txt has, for every user, a list of 'tag-IDs' he/she entered in his/her profile. item.txt has, for every item, an 'item category' of the form 'a.b.c.d' and a list of 'keyword IDs'. user_key_word.txt has, for every user, a list of 'keyword-IDs' (along with their weights) extracted from tweets of the user. Now, I have the following questions: It is explicitly mentioned that 'keyword-IDs' from user_key_word.txt and 'keyword-IDs' from item.txt are from the same vocabulary. However, we have two more sources of textual data (encoded as integers, of course): 'tag-IDs' from user_profile.txt and the 'item category' of the form a.b.c.d from item.txt. Are they also from the same vocabulary as the former two? To clarify this with an example, suppose a user entered 'sports' as one of the tags in his/her profile, an item (say Roger Federer) is about 'sports' so there are entries about that in item.txt, and a user tweeted about 'sports' and that was extracted as one of the keywords in user_key_word.txt. Now, would the same ID occur wherever 'sports' occurs in the above description? user_profile.txt has some fields with gender=3, e.g.:
238037 1977 3 1 0
328969 1900 3 12 0
352738 2009 3 23 0
665962 1900 3 1 0
742805 1900 3 2 63099;508510;36744;544748;19793;27234;145290
793982 1900 3 9 0
992176 2010 3 20 0
1484399 1900 3 3 0
It is mentioned that gender would be either 0, 1 or 2. How do we interpret this case? In item.txt the item categories are of the form a.b.c.d. However, some of the lines have three- or two-level categories such as 1.2.1 or 1.5. How do we interpret such a scenario? Which level of the category should be assumed to be missing, i.e., in the case of 1.2.1, was the data actually 1.2.1. or .1.2.1? In item.txt, do category IDs at different levels correspond to each other? E.g., suppose two items have categories 1.2.3.4 and 5.3.7.2. Now 2 and 3 appear in the two categories at different levels (2 appears at the 2nd and 4th levels resp., while 3 appears at the 3rd and 2nd levels). Do they possibly correspond to the same category, such as 'sports' or 'politics'? Thanks!",0,None,1 Comment,Fri Mar 30 2012 19:59:05 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1658,/competitions/kddcup2012-track1,None /cbusch,Binning,Could anyone comment on why the leaders' solution transformed age at first claim into multiple binary variables? It seems more appropriate for a gbm-based linear regression to have turned it into a single scale variable by using the median age for each band. This is the code excerpt from the leaders' solution:
UPDATE Members SET age_05 = CASE WHEN ageATfirstclaim = '0-9' THEN 1 ELSE 0 END
UPDATE Members SET age_15 = CASE WHEN ageATfirstclaim = '10-19' THEN 1 ELSE 0 END
UPDATE Members SET age_25 = CASE WHEN ageATfirstclaim = '20-29' THEN 1 ELSE 0 END
UPDATE Members SET age_35 = CASE WHEN ageATfirstclaim = '30-39' THEN 1 ELSE 0 END
UPDATE Members SET age_45 = CASE WHEN ageATfirstclaim = '40-49' THEN 1 ELSE 0 END
UPDATE Members SET age_55 = CASE WHEN ageATfirstclaim = '50-59' THEN 1 ELSE 0 END
UPDATE Members SET age_65 = CASE WHEN ageATfirstclaim = '60-69' THEN 1 ELSE 0 END
UPDATE Members SET age_75 = CASE WHEN ageATfirstclaim = '70-79' THEN 1 ELSE 0 END
UPDATE Members SET age_85 = CASE WHEN ageATfirstclaim = '80+' THEN 1 ELSE 0 END
UPDATE Members SET age_MISS = CASE WHEN ageATfirstclaim IS NULL THEN 1 ELSE 0 END,0,None,5 ,Sat Mar 31 2012 00:02:12 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1660,/competitions/hhp,266th /drshabbir,"""0"" Daysinhospital are more than 70% ","When we try predicting all days as ""0"", our error is much less, about 0.22. Please let me know: can we predict ""0"" as Daysinhospital??",0,None,2 ,Sat Mar 31 2012 10:45:03 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1661,/competitions/hhp,None /blukee,New Fish. Need some guidance on the competition.,"Hello everyone, as the topic says, I don't quite understand the rules in this competition. We have training data which is labeled, but the test data is unlabeled. How can I evaluate my model if the test set data is unlabeled? Cheers",0,None,4 ,Sat Mar 31 2012 19:42:47 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1662,/competitions/bioresponse,468th /seanryan,Final Data: depth normalization values,I cannot find the depth normalization values for the final data (i.e. maximum and minimum depth values).
/xingzhao,april fools day joke?,Scores submitted today are much lower than expected. Is this an April Fools' Day joke?,1,None,1 Comment,Mon Apr 02 2012 03:36:04 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1666,/competitions/kddcup2012-track1,5th /gerigory,About the evaluation formula: Which one is correct?,"First, I'd like to give some example data as follows (there are 10 items to be recommended to A, and 3 items to be recommended to B): User & Accepted Item Recommendation A (1,2,3,4,5) 1,10,3 A (1,2,3,4,5) 1,3,10 B (1,2) 2 B (1,2) 3,2,1 About the AP evaluation, there are three explanations: 1. The first explanation is from the evaluation page [Link]:http://www.kddcup2012.org/c/kddcup2012-track1/details/Evaluation, which gives the formula as follows: ap@n = Σ k=1,...,n P(k) / (number of items clicked in m items) where if the denominator is zero, the result is set to zero; P(k) means the precision at cut-off k in the item list, i.e., the ratio of the number of clicked items up to position k over the number k, and P(k) equals 0 when the k-th item is not followed upon recommendation; n = 3 as this is the default number of items recommended to each user in our recommender system. For example, (1) If among the 5 items recommended to the user, the user clicked #1, #3, #4, then ap@3 = (1/1 + 2/3)/3 ≈ 0.56 (2) If among the 4 items recommended to the user, the user clicked #1, #2, #4, then ap@3 = (1/1 + 2/2)/3 ≈ 0.67 (3) If among the 3 items recommended to the user, the user clicked #1, #3, then ap@3 = (1/1 + 2/3)/2 ≈ 0.83 According to this formula and the data given at the top, the AP calculation should give: AP (1/1)/3=1/3 (1/1+2/2)/3=2/3 (1/1)/1=1 (1/2+2/3)/3=7/18 2. The second explanation is extracted from a topic in the forum: [Link]:http://www.kddcup2012.org/c/kddcup2012-track1/forums/t/1574/about-the-evaluation ""The denominator is the total number of recommendations that the user has accepted. Thus, if the user accepted only one recommendation and you recommend that first, then your AP@3 is (1/1)/1 = 1.0. However, if the user accepted two recommendations and you only recommended one of the two then your AP@3 is (1/1)/2 = 0.5."" by [Link]:http://www.kddcup2012.org/users/993/ben-hamner AP@3 (1/1+2/3)/5=1/3 (1/1+2/2)/5=2/5 (1/1)/2 =1/2 (1/2+2/3)/2=7/12 3. The third explanation is from the PDF file given on the evaluation page: [Link]:http://sas.uwaterloo.ca/stats_navigation/techreports/04WorkingPapers/2004-09.pdf; the result should be: AP@3 (1/1+2/3)/2=5/6 (1/1+2/2)/2=1 (1/1)/1 =1 (1/2+2/3)/2=7/12 Could anyone tell me exactly which of the three explanations is correct?",0,None,1 Comment,Mon Apr 02 2012 11:00:38 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1667,/competitions/kddcup2012-track1,647th /warden,why has the task confused so many people?,"Do we just predict the 'rec_log_test.txt' result, or can we recommend some items outside of rec_log_test? According to the officials, the test data is just from 'rec_log_test.txt', so our mission is just to predict? Is that right?",0,None,3 ,Mon Apr 02 2012 15:00:16 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1668,/competitions/kddcup2012-track1,252nd
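For the evaluation-formula question above, a short sketch that makes the first (official) reading concrete; whether that reading is the correct one is exactly what the post is asking, so treat this as illustration only.

```python
# AP@n under explanation 1: P(k) is zero when the k-th item is not clicked,
# and the denominator is the number of clicked items.
def ap_at_n(clicked, recommended, n=3):
    if not clicked:
        return 0.0
    hits, score = 0, 0.0
    for k, item in enumerate(recommended[:n], start=1):
        if item in clicked:
            hits += 1
            score += hits / k
    return score / len(clicked)

print(ap_at_n({1, 3, 4}, [1, 2, 3, 4, 5]))  # (1/1 + 2/3)/3 ~ 0.56, as in example (1)
```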
/soufanom,"Are (and, or, a, the, etc..) in the Words Files ?!!","Hi, I would like to know if the words that appear in the query, ad's title, ad's description, and advertiser's keywords may contain what is equivalent to the following words: (and, or, a, the, there, in, etc...). Those words might have a very high frequency compared to others and still not play a major role in the model. Regards, Othman",0,None,2 ,Mon Apr 02 2012 16:10:03 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1669,/competitions/kddcup2012-track2,109th /booya25932,Please officially confirm: are the actuals top-coded or not?,"I suppose not, but some might assume, based on the ""All 15s Benchmark"", that the actuals in the error calculation are top-coded to 15 like the training data. There is no statement to that effect in the rules or evaluation page, and having searched this forum I did not see a definitive statement one way or the other. One might think that since it is beyond the 95th percentile, it would have minimal effect, but every little bit helps and it is a simple question. So can someone please officially confirm this one way or the other (or point me to a prior post if I missed it)? Thanks!",0,None,1 Comment,Mon Apr 02 2012 17:17:13 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1671,/competitions/hhp,422nd /ladderrunner,user_ids in training.txt and userid_profile.txt are different,"Can someone please explain why the number of distinct user_ids in the training data set is less than the number of user_ids in userid_profile.txt. We have: training.txt - has 22023546 distinct user_ids userid_profile.txt - has 23669283 distinct user_ids 989532 231598 user_ids in training.txt are missing from userid_profile.txt. Can someone elaborate on why not all user_ids exist in the userid_profile file, and why there are 1645737 + 231598 989532 extra user_ids in userid_profile (for the testing data set?) Thank you, -Alex",0,None,5 ,Tue Apr 03 2012 18:01:25 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1675,/competitions/kddcup2012-track2,None /balazsgodeny,Deadline,Do I understand correctly that the deadline for submitting solutions on the validation set is this Friday midnight (UTC)? Thanks.,0,None,3 ,Tue Apr 03 2012 20:18:06 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1676,/competitions/GestureChallenge,42nd /zygmunt,Matlab/Octave code for LogLoss,"I believe this would be Matlab code for log loss. Translated from R code by Sashi and Alec Stephenson. function ll = log_loss( actual, predicted ) eps = 0.01; predicted = min( max( predicted, eps ), 1 - eps ); ll = -1 / numel( actual ) * sum( actual .* log( predicted ) + ( 1 - actual ) .* log( 1 - predicted ) );",0,None,7 ,Tue Apr 03 2012 22:13:51 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1677,/competitions/bioresponse,290th /pierluigimartini,Is descriptor data really normalized?,"The information states that: ""The remaining columns represent molecular descriptors (d1 through d1776), these are calculated properties that can capture some of the characteristics of the molecule - for example size, shape, or elemental constitution. The descriptor matrix has been normalized."" What exactly is the normalization applied to the descriptor data? I would guess that for each row (each molecule), the descriptor vector is normalized (either with the usual vector 2-norm, or the 1-norm, i.e. the sum of absolute values (they are all positive in this dataset anyway)). However, neither one of those produces a constant norm for each molecule, so I am confused! thanks, PLM.",0,None,3 ,Wed Apr 04 2012 01:03:58 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1679,/competitions/bioresponse,416th
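A NumPy port of the Matlab/Octave log_loss above, with the same 0.01 clipping; a minimal sketch rather than the official scorer.

```python
import numpy as np

def log_loss(actual, predicted, eps=0.01):
    actual = np.asarray(actual, dtype=float)
    predicted = np.clip(np.asarray(predicted, dtype=float), eps, 1 - eps)
    return -np.mean(actual * np.log(predicted)
                    + (1 - actual) * np.log(1 - predicted))

print(log_loss([1, 0, 1], [0.9, 0.2, 0.6]))  # ~0.28
```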
/yanirseroussi,What constitutes external data?,"According to the rules, ""Participants are free to use the provided features or to implement their own features. The use of manually extracted features or any other external data is not permitted."" So if I have a feature extraction method that was trained on external data, is that not allowed? It seems a bit strict, as any new feature implementation would rely on some source of external knowledge (which is based on external data...).",0,None,1 Comment,Wed Apr 04 2012 01:09:02 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1680,/competitions/awic2012,3rd /higgstachyon,Use of dataset for research beyond KDD Cup,"Dear Admin, Given how rich the dataset is, can we use it for research whose objectives are not the same as those of KDD Cup '12? Specifically, if I were to publish a paper using the data here, how do I cite it? Thanks!",0,None,5 ,Wed Apr 04 2012 03:36:02 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1681,/competitions/kddcup2012-track1,None /abuzar0,Newbie: purpose of optimized_value_benchmark and other files,"Dear All, I am not aware of the purpose of optimized_value_benchmark, rf_benchmark, svm_benchmark and uniform_benchmark",0,None,3 ,Wed Apr 04 2012 14:24:29 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1683,/competitions/bioresponse,None /luistp001,Advice for blending models,"Hello, This is not a question about the contest, but I am new to data science, so please can someone give me some advice on the following? Suppose you have several models and you want to blend them. What is better for the final prediction? - To average the predictions of each model. - To train a new model with the predictions as features for the final prediction. - Or to train the best model but using the predictions of the other models as features for the final prediction. What would you suggest? Thanks",0,None,4 ,Wed Apr 04 2012 23:06:57 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1685,/competitions/asap-aes,13th /brandniemann,Alternate Data,"There is so much missing data that I am exploring use of the Ambulatory Health Care Data on Emergency Department Length of Stay. http://semanticommunity.info/HPN_Health_Prize/Ambulatory_Health_Care_Data http://semanticommunity.info/HPN_Health_Prize/Emergency_Department_Length_of_Stay After searching through all of this, I found NAMCS and NHAMCS data can also be downloaded from the Inter-University Consortium for Political and Social Research (ICPSR): http://www.icpsr.umich.edu/index.html I am also analyzing Semantic Medline. http://semanticommunity.info/A_NITRD_Dashboard/Semantic_Medline",0,None,1 Comment,Thu Apr 05 2012 00:25:20 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1686,/competitions/hhp,None
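Returning to the descriptor-normalization question above: a quick way to settle it empirically is to compare row norms against per-column ranges. If every column's minimum is ~0 and maximum is ~1, the scaling was done per column (min-max), which would explain why no row norm is constant. The file and column names (train.csv with descriptor columns D1..D1776) are assumptions here.

```python
import numpy as np
import pandas as pd

X = pd.read_csv("train.csv").filter(regex="^D").to_numpy()  # assumed layout

row_norms = np.linalg.norm(X, axis=1)
print("row 2-norms: min %.3f, max %.3f" % (row_norms.min(), row_norms.max()))
print("column minima lie in [%.3f, %.3f]" % (X.min(axis=0).min(), X.min(axis=0).max()))
print("column maxima lie in [%.3f, %.3f]" % (X.max(axis=0).min(), X.max(axis=0).max()))
```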
/jerryhouse,Why are the items accepted not in the sns list of users?,"This is the rec_log information of user 2253017: | user_id | item_id | result | rec_time |+---------+---------+--------+------------+| 2253017 | 1606717 | 1 | 1320756494 || 2253017 | 1760350 | 1 | 1320756494 || 2253017 | 1606902 | 1 | 1320756566 || 2253017 | 1760410 | 1 | 1320756566 || 2253017 | 562574 | 1 | 1320756566 || 2253017 | 461710 | 1 | 1320757001 |+---------+---------+--------+------------+ and this is the sns information of user 2253017: +---------+---------+| id | d_id |+---------+---------+| 2253017 | 1655644 || 2253017 | 1760396 || 2253017 | 1760405 || 2253017 | 2257091 || 2253017 | 436442 || 2253017 | 535463 |+---------+---------+ I am confused: when an item is accepted by the user, it should mean that the item is followed by the user, but the data does not look like that. Does anyone have some ideas?",0,None,2 ,Thu Apr 05 2012 09:55:06 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1688,/competitions/kddcup2012-track1,None /jerryhouse,Why is an item accepted and rejected many times?,The log information of user 535463 on item 1606902 is shown below: +---------+---------+--------+------------+| user_id | item_id | result | rec_time |+---------+---------+--------+------------+| 535463 | 1606902 | -1 | 1319200087 || 535463 | 1606902 | 1 | 1319200123 || 535463 | 1606902 | -1 | 1319881289 || 535463 | 1606902 | -1 | 1319947849 || 535463 | 1606902 | 1 | 1319947984 |+---------+---------+--------+------------+ It is confusing to see the user accept and reject the same item so many times. How does this happen?,0,None,2 ,Thu Apr 05 2012 10:00:41 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1689,/competitions/kddcup2012-track1,None /tudor1m,Final Evaluation,"Hi, I am another newbie in the community. Is the test file test.csv the actual file against which we have to run our models, with those results used for the final evaluation, or will the contest organizers provide another file? Thank you,",0,None,11 ,Thu Apr 05 2012 14:01:18 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1690,/competitions/bioresponse,476th /marinewu,no rec_log_test file is provided,"The dataset of track 1 that we have downloaded only contains 6 files, and the rec_log_test.txt file is not included. Please check and solve this problem. Thanks",0,None,1 Comment,Fri Apr 06 2012 17:01:55 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1691,/competitions/kddcup2012-track1,629th /stevenwudi,the decryption key ,"Hi, How can we find the decryption key, since it's already 7 April (GMT)? Thanks",0,None,3 ,Sat Apr 07 2012 01:49:45 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1692,/competitions/GestureChallenge,38th /iguyon,Final evaluation phase,"The deadline for submitting code has expired and we are making available the decryption key for the final evaluation data, which can be downloaded from the data page: 4lkc221: The data may be uncompressed and decrypted with Winzip (a free trial version may be downloaded from the Winzip website).
We encourage every participant to submit final evaluation results. Don't be discouraged if your validation data results are not among the best. It is common for people to overfit the validation data.",0,None,2 ,Sat Apr 07 2012 02:30:56 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1693,/competitions/GestureChallenge,None /van5136749,Keywords hash function ,"Hello, I would like to know what hashing function was used to translate the keywords (in the query, title etc) to hashes. Or if that isn't possible, please tell us the probability that two different words will end up with the same hash. I mean, it is safe to assume that different hashes in the additional data files correspond to different keywords, right?",0,None,1 Comment,Sat Apr 07 2012 12:49:18 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1695,/competitions/kddcup2012-track2,153rd /teaserebotier,Question regarding rule # 9,"Rule # 9 states: Participants will not use data other than that provided to estimate their model. Does this apply to the original training and testing sets? If I read Dan's post correctly, the ""new"" training set is from a different time period than the ""old"" training set. Does the model have to be trained exclusively on the ""new"" training/testing sets, or can it use both the old and the new?",0,None,1 Comment,Sun Apr 08 2012 08:23:32 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1696,/competitions/benchmark-bond-trade-price-challenge,37th /felixbarbalet,Multiple submissions from same person?,"Looking at the leaderboard - why are there some teams which appear to be very similar, with multiple entries? Isn't this against the rules of the competition (hence the daily limit on entries)? Just wondering is all (no offence meant to any of these teams). Eg: zju-icad - same username, different user id, multiple submissions on the leaderboard http://www.kddcup2012.org/users/37901/zju-icad http://www.kddcup2012.org/users/37939/zju-icad Other examples of similar usernames from the leaderboard: yuntian yuntian1 yuntian2 yuntian3 yuntian4 yuntian5 xugang xugang0822 xugang822 shanpao sanguosha majia2 [Link]:http://www.kddcup2012.org/users/37920/majia2017 majia2016 [Link]:http://www.kddcup2012.org/users/37877/majia2 majia2017 [Link]:http://www.kddcup2012.org/users/37938/majia2021 [Link]:http://www.kddcup2012.org/users/37931/majia2019 feixue,feixue1,2,3,4,5,6,7...",0,None,2 ,Sun Apr 08 2012 08:42:03 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1697,/competitions/kddcup2012-track1,None /ahassaine,Deadline extension?,I wish I had more time to spend on this competition. Is it possible to have a deadline extension? Or is that against Kaggle rules?",0,None,1 Comment,Mon Apr 09 2012 07:54:29 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1698,/competitions/emvic,17th
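No answer on the hash function itself, but the collision probability asked about in the keywords post above can at least be bounded with the usual birthday approximation. The hash-space size N is an assumption; the organizers have not said how the IDs were produced.

```python
# P(at least one collision) ~ 1 - exp(-n*(n-1) / (2*N))
# for n distinct words hashed uniformly into N buckets.
import math

def p_collision(n_words, n_buckets):
    return 1 - math.exp(-n_words * (n_words - 1) / (2 * n_buckets))

print(p_collision(1_200_000, 2**64))  # ~4e-8: negligible for a 64-bit hash
```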
/warden,Crashing,"Using C++, I loaded the SNS and follow lists into memory, and now new can't allocate any more memory - does anyone have good suggestions? Both tables are stored as linked lists. I'm also less and less convinced my idea works. My initial idea was that the score of an item in the test set is related to how the user's friends follow it, like Renren recommending mutual friends; but after writing the code and looking at the data, I found that a single person's SNS list is tiny - I had assumed there would be at least 200 entries. Frustrating. The task is to delete the -1 entries from the examples, since the 1 entries do not need to be deleted. But I tried several models and performance actually dropped, which means some of the entries I deleted were accepted in the end. Also, I think that under the stated requirements we cannot add more recommended items, because the test data is only the test file; if an item I recommend is not in the test file, then even if the model gives it a small distance, there is nothing to be done since it is not in the test file. I'm not sure whether my understanding is correct?",0,None,2 ,Mon Apr 09 2012 09:34:14 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1699,/competitions/kddcup2012-track1,252nd /benhamner,Award Ceremony,"We will be inviting contest participants to come to an award ceremony around May 9 (probably in Washington DC) to be recognized for their work and meet some of the current vendors of Automated Essay Scoring engines. We will cover the travel expenses for one person from each of the winning teams to attend the award ceremony (coming from within the continental US). Please take a moment to fill out this brief survey in preparation for the award ceremony. https://docs.google.com/a/benhamner.com/spreadsheet/viewform?formkey=dHd3SkRBRk0td19Bby1peFVyUDNVa1E6MQ#gid=0 Thanks, and good luck as we near the final stretch of the contest!",0,None,3 ,Mon Apr 09 2012 12:04:10 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1701,/competitions/asap-aes,None /yixiang,Essay Topic,"Hi, I have a quick and dumb question. I know the essay topics are given for each set of the essays, but will the essay topic be considered as an input to our model? Thank you.",1,None,2 ,Tue Apr 10 2012 04:41:28 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1704,/competitions/asap-aes,None /twofirst,Where do these user_keywords come from?,"Hi~ I learnt that a user_keyword is extracted from the tweets/retweets/comments of the user. But I found some strange records like this: A user, user_id:1780050, has no tweets/retweets/comments, but has many user_keywords, as follows: 234:0.2916;100:0.2804;87:0.2439;151:0.243;703:0.2187;320:0.2153;379:0.2144;251:0.2048;125:0.204;443:0.191;1117:0.1554;6161:0.1484;874:0.1476;215:0.1432;3383:0.1406;8215:0.138;1676:0.1345;3588:0.1302;1975:0.1302;1429:0.1267;1795:0.1241;1015:0.1241;106:0.1233;626:0.118;291:0.1172;269:0.1172;172:0.1172;632:0.1163;149:0.1154;2109:0.1137;2047:0.1137;9563:0.1076;427:0.1076;1643:0.1068;901:0.1033 In addition, there is no record about his followees and followers. So I wonder where these user_keywords come from? Thank you~",0,None,2 ,Tue Apr 10 2012 13:40:56 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1705,/competitions/kddcup2012-track1,None /solari,Where is the test data?,"Two submissions have been made already, but where is the test data? Thank you~",0,None,2 ,Wed Apr 11 2012 07:31:40 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1706,/competitions/kddcup2012-track2,128th /benhamner,Release of Test Data and Launch of Competition,"We're excited to be able to fully launch this competition! The test data may now be downloaded from the [Link]:https://www.kddcup2012.org/c/kddcup2012-track2/data, along with some basic single-feature benchmarks and the corresponding code to create them.
Sample python code for the evaluation metric along with associated test cases has been provided as well. Note that the AUC metric (as implemented in the scoreClickAUC function) is the only one being used to evaluate this track. Also, note that the test set and submission files are sizeable (there are 20297594 test samples). It may take several minutes to upload your submission, and an additional minute or two to evaluate it once it has been uploaded. We strongly encourage you to compress your submissions (.7z, .zip, .rar, or .gz) before uploading them. Please post any questions you may have on the data or the structure of the competition in the forums. Bon courage!",0,None,6 ,Wed Apr 11 2012 10:43:29 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1708,/competitions/kddcup2012-track2,None /chefele,Final Evaluation / Timeline Questions,"I have some questions about the timelines for the end of the contest: 1. The submission instructions say that ""During the last two weeks of the model training period, you will be able to upload your models to Kaggle."" So when does that start? (Two weeks back from April 23rd was April 8th...). Also, will we upload our models via the submissions page? 2. Will leaderboard submissions be suspended after April 22nd? Or will they be allowed?",0,None,12 ,Thu Apr 12 2012 04:58:33 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1711,/competitions/asap-aes,2nd /morenoh149,svm question (newbie question),I was wondering why I haven't heard of anyone using an SVM for this challenge. I tried using one on this dataset but didn't get good results (actually worse than the benchmark). I was wondering if anyone could explain why this is? I understand the data doesn't become linearly separable by any of the common kernel tricks. But why would performance be worse after doing a kernel trick? Could anyone explain a bit of the theory here? Thank you.,0,None,1 Comment,Thu Apr 12 2012 05:42:15 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1712,/competitions/WhatDoYouKnow,None /gaborfodor,Final Leaderboard,Will we get a cleaned final leaderboard after the competition is finished? I am really interested in how many teams are trying to solve the problem, not how many different email addresses make submissions.
e.g.: rank 98 ↓23 score 0.34220 (11 entries); 99 ↓23 0.34217 (6 entries); 100 ↓23 0.34217 (7 entries); 101 ↓22 0.34210 (2 entries); 102 ↓22 0.34201 (9 entries); 103 ↓22 0.34192 (1 entry) [Link]:http://www.kddcup2012.org/c/kddcup2012-track1/leaderboard,0,None,1 Comment,Thu Apr 12 2012 14:09:05 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1713,/competitions/kddcup2012-track1,55th /kurak38,bug in python scoreKDD,"Traceback (most recent call last): File ""scoreKDD.py"", line 238, in main() File ""scoreKDD.py"", line 230, in main auc = scoreClickAUC(num_clicks, num_impressions, predicted_ctr) File ""scoreKDD.py"", line 122, in scoreClickAUC auc = auc_temp / (click_sum * no_click_sum) ZeroDivisionError: float division by zero when all clicks equal 0. Traceback (most recent call last): File ""scoreKDD.py"", line 238, in main() File ""scoreKDD.py"", line 230, in main auc = scoreClickAUC(num_clicks, num_impressions, predicted_ctr) File ""scoreKDD.py"", line 101, in scoreClickAUC reverse=True) MemoryError Almost 2 GB allocated but still some memory free - why does this cost so much memory?",0,None,2 ,Thu Apr 12 2012 14:16:44 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1714,/competitions/kddcup2012-track2,77th /ccccat,modified version of the terms and conditions ,Is it possible to post the modified version of the terms and conditions related to this competition in a more readable format than a zip file with a bunch of xml files?,0,None,5 ,Thu Apr 12 2012 21:07:28 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1715,/competitions/bioresponse,1st /benhamner,Model Submission,"You may now upload your models to our server by attaching a model file as part of the submission process. The model should be contained within an archive and include all the code and data necessary to run your model on unseen samples. It should contain a README with explicit instructions on how to do this. Ideally, running your model on new samples will entail running a script (or a function from the MATLAB / R command lines) that accepts a path to the test set and an output file path as input parameters. In the event that your model uses any form of stochastic feature extraction, classification, or regression, set the random seed to a constant so that the results are repeatable. In the README, please include information on the platform you used to run your models as well as the estimated execution time. In order to be eligible for prizes, your model must be submitted prior to the release of the test set & be able to reproduce your results on both the validation and test sets.",5,None,18 ,Thu Apr 12 2012 21:53:53 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1716,/competitions/asap-aes,None
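A minimal pattern for the repeatability requirement in the model-submission post above: fix every random source the pipeline touches at the top of the script. The seed value itself is arbitrary.

```python
import random
import numpy as np

SEED = 0
random.seed(SEED)     # Python's own RNG
np.random.seed(SEED)  # NumPy's global RNG
# scikit-learn estimators also accept an explicit seed, e.g.
# RandomForestClassifier(random_state=SEED)
```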
/jamesxli1,LogLoss versus Hit Ratio,"I have noticed that there has been some confusion and criticism of the LogLoss performance metric. I too think it is not the best metric for this competition, as many classification methods just try to get the best hit ratio, whereas LogLoss extremely penalizes prediction misses. For instance, if your submission contains only 0 and 1, a single missed prediction will totally mess up your score. Nevertheless, for people using binary predictors, there is a simple way to convert a hit ratio H, i.e. correct-predictions/total-samples, to a LogLoss L: L = - (H * log(1-e) + (1-H)*log(e)) where e is the small number used to represent the no-response state (and 1 - e for the response state). So, in order to minimize L for a given hit ratio H, you need to set e to the value 1-H. This means that if you know your hit ratio H, you should use (1-H, H) to represent the non-response and response states instead of 0 and 1 when submitting results. Of course, the hit ratio is not known before you have made a submission. You might estimate H with some kind of cross-validation. But if you have made a submission with an e and got back an L reported from the Leaderboard, you can calculate the real H using the above formula. And using the calculated H, you can make a second submission to improve your score, for the leaderboard at least. Since the LogLoss reported on the Leaderboard is not computed on the final real data, you won't know the real LogLoss until the end of the competition. Yet, using the ""optimization"" method described here, you can definitely improve your score on the leaderboard (and probably also in the final evaluation).",1,bronze,2 ,Thu Apr 12 2012 22:31:48 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1717,/competitions/bioresponse,380th /iguyon,Fill out FACT SHEETS,"As per the [Link]:http://www.kaggle.com/c/GestureChallenge/details/Rules, to be part of the official final ranking and become eligible for prizes, you must fill out a short survey ( [Link]:https://docs.google.com/a/chalearn.org/spreadsheet/viewform?formkey=dHhUY3d2NDlkcEgwUlNJMmZDWTBCa0E6MQ) about your method. The deadline for submitting fact sheets is April 20, 2012.",0,None,4 ,Fri Apr 13 2012 18:59:56 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1720,/competitions/GestureChallenge,None /kurak38,rank by AUC only?,"solution file: 0.5,10 0.7,10 0.5,10 0.1,10 submission file: 0.5 0.7 0.5 0.1 gives AUC = 0.727273; submission file 0.5 0.8 0.5 0.1 gives AUC = 0.727273. I can't get more than that on the AUC result. Changes in the submission affect the AUC only when they change the order of lines relative to the solution file. And this is consistent with the python scoring code, but it doesn't measure performance, as was mentioned before on the forum: the perfect classifier doesn't have AUC=1.0. How do you want to achieve a fair ranking? Another thing is that the training data has 95% clicks=0 and the test data is far from that, as you can see from the score of ad_id_benchmark, which in my opinion is ridiculous. As I understand it, it gives answers based on ad id history. How can you build a classifier based on a unique property? It is a classical example of overfitting: given new data it will have 0% correctness.",0,None,2 ,Fri Apr 13 2012 23:15:23 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1721,/competitions/kddcup2012-track2,77th
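A quick numeric check of the claim in the LogLoss-versus-hit-ratio post above: for a binary predictor with hit ratio H, submitting (1-H, H) instead of (0, 1) minimizes the resulting LogLoss.

```python
# L(e) = -(H*log(1-e) + (1-H)*log(e)); the minimizer should be e = 1 - H.
import numpy as np

H = 0.8
e = np.linspace(0.001, 0.999, 9999)
L = -(H * np.log(1 - e) + (1 - H) * np.log(e))
print(e[L.argmin()])  # ~0.2, i.e. e = 1 - H as derived in the post
```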
/ziyaowei,Are there any caveats in the submission system on Kaggle?,"Hi, I am experiencing some problems with the submission system. I just submitted my 4th entry, but the system recorded that I submitted twice. If anyone could check: the files are identical, the submission times only differ by one second, and I only submitted once. I don't mean to ask anyone to cancel my submission, but I am just concerned: are there any tricks / bugs in the submission system that could cause this? Or maybe a problem with my system / browser that caused this? Thanks!",0,None,1 Comment,Sun Apr 15 2012 04:55:54 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1724,/competitions/bioresponse,581st /zapper0,compute platforms,This is a pretty basic question but is there a reasonably priced cloud computing platform suitable for an individual doing data analysis? I can't use the machines at my workplace and don't really want to purchase something comparable if there's a reasonable rental option.,0,None,2 ,Sun Apr 15 2012 18:51:08 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1725,None,None /qsicanada,Registrant vs Entrant,"I was hoping to be able to use the data for a slightly different purpose and potentially enter the contest, but after I read the Acceptance form, I see that (potentially) even being a registrant might mean my IP becomes the property of HPN... can anyone comment / clarify this for me? Thanks. John O'",0,None,1 Comment,Sun Apr 15 2012 21:53:48 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1726,/competitions/hhp,None /amatsukawa,One model for all sets?,"I just wanted to confirm something. We are supposed to train ONE model for all essays, correct? Not one model for each of the sets (same model, just trained on a subset of the data corresponding to each set of essays)? I ask because each set may have its own strong predictors, even if we are using the same model template. A trivial example: say we look for the word ""relativity"" in papers and use that as a predictor. This may be a good predictor for a set of scientific papers, but probably not in general.",0,None,1 Comment,Sun Apr 15 2012 22:51:19 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1728,/competitions/asap-aes,135th /pawelkasprowski,Just before the end,"Dear Participants, The competition is coming to an end. We'd like to thank all competitors for their efforts - especially the best ones! :) The number of submissions assures us that the competition's aim was interesting and challenging for kagglers. Because the main purpose was to establish some standards for how to analyze eye movements, we'd like to ask you ALL for information about the methodologies that you used.
Please complete a survey available here: [Link]:http://www.kasprowski.pl/emvic/kaggle_survey.php Information about competitors that completed the survey will be published during the BTAS conference and on the www.emvic.org web page. Unfortunately, the company that promised to give a prize is still processing our request. But we still hope that we will have something for the best submitters, so please leave us your contact information in the survey! Thanks a lot to the whole Kaggle staff, especially to Ben for help in organizing the competition!",0,None,4 ,Mon Apr 16 2012 00:19:26 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1729,/competitions/emvic,None /benhamner,Competition Wrapup,"Congratulations to IRIG, Killian, and Dorothy for placing in the top three in this competition, and to all the other participants! We would like to invite any participant who placed highly or thinks they came across something particularly novel or interesting with this data to submit a post to the Kaggle blog. Follow our standard ""How I Did It"" format for your posts (see http://blog.kaggle.com/2012/01/26/mind-over-market-the-algo-trading-challenge-4th-place-finishers/ for an example of an excellent one), and feel free to add or remove sections as necessary. Please send your posts to margit.zwemer [at] kaggle and CC me ( ben [at] kaggle ). As Pawel said, please complete the [Link]:http://www.kasprowski.pl/emvic/kaggle_survey.php. Thanks for competing in this contest, and helping determine the feasibility of eye movements for identifying and authenticating people. I look forward to seeing you in future contests!",0,None,1 Comment,Mon Apr 16 2012 03:45:41 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1730,/competitions/emvic,None /benhamner,Competition Wrapup,"Congratulations to Wayne, Andrew & Lewis, and Yanir for placing in the top three in this competition, and to all the other participants! We would like to invite any participant who placed highly or thinks they came across something particularly novel or interesting with this data to submit a post to the Kaggle blog. Follow our standard ""How I Did It"" format for your posts (see [Link]:http://blog.kaggle.com/2012/01/26/mind-over-market-the-algo-trading-challenge-4th-place-finishers/ for an example of an excellent one), and feel free to add or remove sections as necessary. Please send your posts to margit.zwemer [at] kaggle and CC me ( ben [at] kaggle ). Thanks for competing in this contest, and helping determine how to identify Arabic writers based on their handwriting. I look forward to seeing you in future contests!",0,None,7 ,Mon Apr 16 2012 03:51:06 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1731,/competitions/awic2012,26th /suvor38361,A question about the data set,"Hi, everybody! Help me, please. I'm not sure I got the right data (HHP_release3.zip). When opening a data file (Members.csv for example), Excel says ""The file is not loaded completely""!? Each data file contains exactly 65535 records, but some MemberIDs repeat, and others are absent!! Then why is the number of records the same for every file? I'm afraid I did not get the complete data.
Thanks..",0,None,5 ,Mon Apr 16 2012 08:50:22 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1732,/competitions/hhp,737th /ahassaine,Methods description,"Dear all, Congratulations to the winning team and all other participants! As mentioned in the rules, an article about this competition will be published in the ICFHR2012 proceedings. We will be very grateful if you could send us your name, affiliation and a description of your method along with references to publications (if available). We will be interested in hearing about what you tried, what didn’t work, and what ended up working. Also, if you have used the features we provided, please mention this in your description. You might post this either directly on this forum or by sending an email to hassaine (at) qu.edu.qa Thanks again for participating, Ali",2,None,1 Comment,Mon Apr 16 2012 09:32:59 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1733,/competitions/awic2012,None /djeddichawki,Private Leaderboard,"I think there's a bug in the private leaderboard: the identification rate recorded by my system is 91.275%, which means that 208.107 documents have been identified correctly. Do you think that this result is correct?",0,None,5 ,Mon Apr 16 2012 09:44:25 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1734,/competitions/awic2012,6th /redsfan,Creating algorithms 101,"Ok, I'm brand new here without any math or statistics courses for 30 years. But I'm really interested in the Kaggle notion of creating an algorithm to predict behavior. My basic question is, what field of math have I stumbled into? Is this calculus, statistics, algebra, all of the above? What types of college refresher courses in math and/or computer science should I look into? I'm a librarian, and I have access to plenty of data (at least I think it's plenty) on what types of items library users check out and return or don't bring back. There's got to be some kind of algorithm to be used here, but I don't know what tools I need to start. Thanks!",0,None,3 ,Mon Apr 16 2012 15:49:00 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1737,None,None /quatrain,Error function in SGD,"Hello all, I'm just curious about the objective function. The MAP we are evaluated on is more a rank error measure than a quadratic error, but the only thing that worked for my models is to minimize the MSE over the training dataset. My best model is a kind of AFM (asymmetric factor model). Have any of you used techniques other than MSE to train factor models?",0,None,8 ,Mon Apr 16 2012 22:17:54 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1738,/competitions/kddcup2012-track1,20th /sghosh,how to use the logloss metric,"I am totally new to machine learning and hence my questions might seem absolutely naive to most of you. I am listing my questions below. 1. In the log loss definition, N is the number of samples. Does N mean the number of features or the number of molecules? Again, since I am new to this field, this question might look very dumb. 2. Is the main idea of this kind of approach to find logloss values of the test data assuming a true value (1 or 0), and then iterate the true value such that the logloss value matches the one that I calculated from the training file?
The problem with this method is matching the logloss value of each row of the test data to several logloss values of the training data set. Not sure if I was able to explain the question. 3. Can you please suggest any online machine learning literature or book that is free to download and discusses the logloss metric for predicting responses? Thanks, Ghosh",0,None,5 ,Mon Apr 16 2012 22:44:47 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1739,/competitions/bioresponse,658th /del=8276820997c79039,What Machine Learning Framework can be used,"Hello Isabelle, I was wondering what approach is relevant to this problem. Assume that we have solved the problem of action segmentation. What we then have is a master dataset which consists of multiple data sets (one per user), each with a limited amount of data (~150 labeled data points per user. I have considered all the segmented action videos). The aim is to learn a classifier for each of these small datasets such that every new classifier learns from the entire data set or from the previous classifiers. In the end we have a system that can churn out a classifier given just one data point per label for a new classifier. Have I summarized the problem correctly? I am not aware of what machine learning frameworks exist to address this kind of problem. The closest is Transfer Learning. Can you please give me some hints on how you would approach this problem and what machine learning frameworks might be applied to solve it? I am not interested in the competition as much as in the machine learning framework. Thank you for providing the data and also answering all the questions so far. Regards Hemanth",0,None,1 Comment,Tue Apr 17 2012 10:22:47 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1741,/competitions/GestureChallenge,None /jfister,Runtime Assurances,"As I prepare our model for smooth running, I'm anticipating potential issues and wanting to head them off by clearly understanding what is expected in terms of the runtime environment. I've installed Ruby codebases on several slightly different Linux platforms and still encounter non-trivial setup issues, so I have some concerns about whether folks (potentially new to Ruby) will be able to do so w/o a hitch. Can we be assured that the proper versions of libraries and runtimes will be installed (e.g., Ruby 1.9.2)? In some ways this is like a contract: we provide the code, but have contractual expectations about the runtime environment. How much responsibility do we have to ensure the runtime is set up properly? And what can we do to help the process?",0,None,2 ,Tue Apr 17 2012 15:53:50 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1743,/competitions/asap-aes,3rd /cucumberguagua,user-item pairs in test and training,"Does anyone know whether the (user, item) pairs in the testSet appear in the trainingSet? (Ideally, they shouldn't.) Also, do individual users and individual items in the testSet appear in the trainingSet? I read from another post that ""some users in testing data are not in the training data"" [Link]:http://www.kddcup2012.org/c/kddcup2012-track1/forums/t/1445/possible-official-validation-set. So what about the items in the testSet?
Thanks very much!",0,None,1 Comment,Wed Apr 18 2012 16:11:11 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1749,/competitions/kddcup2012-track1,267th /sriharij,new user,"I have a few questions on submissions: 1. In what format do we need to submit the scored data (test data) (csv or other formats)? 2. Do we need to submit just the predicted probabilities, or the predicted ""activity"" as well? What variable name should we give the probabilities? 3. If I submit predictions today and, let's say, I come up with a better model tomorrow, can we resubmit the new data/model? 4. Can we use commercially available software packages to develop models? Thanks",0,None,3 ,Thu Apr 19 2012 03:40:30 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1752,/competitions/bioresponse,488th /dsweet,rf_benchmark.py contains R and Python modules?,"I'm confused by the rf_benchmark.py file. It appears to call both python and R in the same file. python - ""from sklearn.ensemble import RandomForestClassifier"" R - ""import csv_io"" I've only tried this in python (which doesn't recognize the ""csv_io"" module) but I know sklearn is a python module. What am I missing?",0,None,2 ,Fri Apr 20 2012 15:33:23 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1755,/competitions/bioresponse,None /andreas,Bizarre Difference in training/test set,"I tried to use KNN (with euclidean distance in feature space), and I'd typically get a 0.51 training error. Then I used it on the test set, and boy, did it lose... KNN wasn't able to perform much better than random guessing, and did much worse than uniform probability. I set out to discover why, and surprisingly, the distances between the test set and the training set don't go below 6 units, whereas the distances within the training set and within the test set do so easily. I might still be doing something wrong, but overall KNN seems not that valuable....",3,bronze,9 ,Fri Apr 20 2012 16:03:03 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1756,/competitions/bioresponse,436th /cckk3333,Negative click?,I have found some strange instances, as follows:
-1 43 9680905456514508800 20724306 33923 2 2 24171 16856 197502 167212 20561 -1 21 5120683440510468096 8676724 1268 1 1 150 40 45 13 129283 -1 12 14340390157469403136 21163927 23808 3 1 23518 14976 38973 136 0 -1 108 17586747310878498816 20561491 1998 2 2 6847 23247 21566 1592 0 -1 79 12057878999086460928 20192676 27961 1 1 11231 7659 9839 99 0 -1 34 14340390157469403136 3126903 23777 2 1 47657 26500 44157 40493 0 -1 69 15689252236861136896 21176564 24130 3 2 11 96 1229 608 0 -1 13 126550245966758304 21243951 36263 2 1 7578 4807 20681 179 0 -1 12 8802768287123154944 21644405 26278 3 1 8029 1370 2892 3164 0 -1 14 4203081172173603840 4427101 28647 2 1 15549 39625 221055 194055 0 -1 48 12972158645927618560 10869499 29128 1 1 3082 2791 3015 20863 0 -1 18 15145480155589095424 21183708 23807 2 1 13941 152348 284962 8 0 -1 62 18414892642356570112 21222486 36185 3 2 625 835 68105 61049 0 -1 41 13756257544627677184 8180996 24354 1 1 5628 5619 10744 13593 0 -1 13 14340390157469403136 4428860 23808 3 1 11077 13115 12945 12777 0 -1 21 15145480155589095424 21156726 23807 2 1 464 354 6803 8 0 -1 81 11363457806790746112 20137552 26278 2 1 2259 57 124 176 0 -1 15 12057878999086460928 20163493 27961 2 2 104930 55815 131130 128342 0 -1 93 1729963849377387520 9583861 23637 1 1 413 2826 17579 145 0 -1 46 12057878999086460928 20222693 27961 1 1 107646 326 215 319 0 -1 74 11315908569713166336 20311334 18070 2 1 845 4573 2109 1190 0 -1 15 15145480155589095424 21183716 23807 3 1 256 374 441 8 0 What does it mean?,0,None,2 ,Sat Apr 21 2012 07:22:58 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1757,/competitions/kddcup2012-track2,13th /jeremyatia,Data disposition,"Hi, when you lay out the data for corresponding variables (like trade_size, trade_size_last{1}, ...), if you look at the top left of a cell you can see the same value. Why? Thanks [Link]:https://storage.googleapis.com/kaggle-forum-message-attachments/2207/trade_size.csv",0,None,1 Comment,Sat Apr 21 2012 20:59:50 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1758,/competitions/benchmark-bond-trade-price-challenge,None /stellar,minimization function,"Hi Admin, I might have missed the answer to my questions in a previous topic, but I want to confirm that the data columns we are allowed to use in the prediction function are cols E through BI (as displayed in excel), i.e. all columns from 'Current Coupon' to 'curve_based_price_last10' inclusive. I also want to confirm that we are minimizing ∑_i weight_i * |trade_price_i - f(X_i)|, where f is the prediction function, X is the vector of data from columns E to BI, and i refers to row i. Does that make sense? (Sorry, I can't get the symbols the way I want them, but it should be clear enough.) thanks",0,None,5 ,Sun Apr 22 2012 06:48:10 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1759,/competitions/benchmark-bond-trade-price-challenge,191st /darlingmew,Question about the code,"A friend told me that this competition requires us to use matlab or R. However, I didn't see the requirement on the website. Can I use Eviews, or use both Eviews and matlab?",0,None,2 ,Sun Apr 22 2012 09:53:45 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1760,/competitions/benchmark-bond-trade-price-challenge,252nd /zaythedatascientist,About Results and User Keyword,What is the meaning of Result = 1?
Does it mean the user clicked the item name (to check the profile), or is actually following the item? Can someone explain how the user keywords are calculated mathematically? I think it might be helpful when making assumptions if we know the details. Thanks.,0,None,1 Comment,Sun Apr 22 2012 11:00:10 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1761,/competitions/kddcup2012-track1,641st /salimali,Quality Control on test set,"Ben, A couple of things we noticed in the data that you might want to check on the test set. 1. The resolved score is supposed to be the max of rater1/rater2. This was not always the case for sets 5 & 6. 2. There were duplicate essays in the train set and they actually had different resolved scores. 3. There were essays in the train set that also appeared in the valid set.",0,None,11 ,Sun Apr 22 2012 14:12:03 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1762,/competitions/asap-aes,2nd /yogurt,A lot of Australians in data mining?,"Given that Australia only has a population of 22 million, I see a lot of Australians involved in data mining and statistical analysis. Is data mining very popular in Australia?",0,None,6 ,Sun Apr 22 2012 18:11:57 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1763,None,None /zygmunt,How to do blending,"From the results of different competitions, blending seems like an obvious way to go. I would like to learn how to blend. My idea comes from the KDD Cup 2010 writeup by Toscher and Jahrer, and is roughly as follows: you have a bunch of classifiers (models); you take each of them and perform cross-validation on a training set; for each classifier, collect predictions from each fold of CV. These predictions will be one column in a blender training set, B_train. Train each classifier on the full training set and get predictions for a test set. These predictions will be one column in a blender test set, B_test. Train a blender on B_train; get predictions for B_test. Those are the end product. Here come the questions: Is this correct? How many classifiers do you need for blending? Do you put some other data into B_train, or just CV predictions? What classifier do you use as a blender (linear, NN...)?",3,bronze,6 ,Sun Apr 22 2012 21:59:15 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1765,/competitions/bioresponse,290th /jfister,Labelling Data,"Ben, Just wanted to publicly verify that labelling data using previous submissions is not allowed. Instead, ""Source of the labels should be generated on-the-fly for the general model training"". Is that still correct? Thanks,",0,None,1 Comment,Sun Apr 22 2012 22:54:51 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1766,/competitions/asap-aes,3rd /dgboy2000,When is the next phase of the competition?,Thanks for putting on a great contest! We are excited for phase 2: grading short answer responses. Can you please let us know when phase 2 will begin?,0,None,1 Comment,Mon Apr 23 2012 02:13:59 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1767,/competitions/asap-aes,5th /benhamner,Test Data Released,"Hi all, The test data has now been uploaded. Please run your models on this data and upload the submission as soon as possible. While you technically have until April 30 to do so, it will make scheduling and booking flights for the winners easier if you upload your results well before then. When you're making a submission, it should have 4854 rows (or 4855 with a header), not the number it says on the submissions page. Once you make the submission, it will say that you scored 0.0000 on the public set. This is fine, and it means your submission was parsed correctly. Thanks for your participation in this contest so far! Ben",0,None,21 ,Mon Apr 23 2012 19:01:12 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1768,/competitions/asap-aes,None
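A compact scikit-learn sketch of the blending recipe asked about in the "How to do blending" post above (out-of-fold predictions as the blender's training features); the logistic blender is a placeholder choice, not a recommendation.

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.linear_model import LogisticRegression

def stack(models, X_train, y_train, X_test):
    # B_train: one column of out-of-fold CV predictions per base model
    b_train = np.column_stack([
        cross_val_predict(m, X_train, y_train, cv=5,
                          method="predict_proba")[:, 1] for m in models])
    # B_test: each base model refit on the full training set
    b_test = np.column_stack([
        m.fit(X_train, y_train).predict_proba(X_test)[:, 1] for m in models])
    blender = LogisticRegression().fit(b_train, y_train)
    return blender.predict_proba(b_test)[:, 1]
```

As for the open questions there: even two or three diverse base models usually help, extra raw features can be appended to B_train, and a simple linear blender is a common safe default.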
/benhamner,Please Select Your Final Submission,Please select your final submission for scoring once you submit it. This will make it easier for us to check / verify on the backend.,0,None,3 ,Tue Apr 24 2012 03:46:34 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1769,/competitions/asap-aes,None /squaredloss,Debriefing,"This was my first Kaggle competition and I must say I had a BLAST. I think a couple things made this competition especially interesting. I'd love to hear what other people thought (and tried). Features: There was practically no limit to the different features and techniques you could try. I imagine a lot of these competitions involve a fairly strict feature set and winning is a matter of hyper tuning and mega blending. I never got any amazing traction beyond length/spelling/grammar features and LSA, but it was fun to try things out. Wish I had showed up a little earlier to the competition and had more time. Ordinal response: Having various ordinal ranges to compare side-by-side was interesting. On some of the larger ranges (sets 7 and 8) pure regression worked well, on some shorter ranges (set 3) I found pure classification worked better, while most were amenable in some way to true ordinal methods. Some blend of all three was optimal for me in the end. The response also had observed heterogeneous variances. I never got any traction with weighted methods (giving samples where rater 1 and 2 disagreed less weight), but I'd be interested in hearing if other people did. I do think it was unfortunate that we were optimizing for an obviously non-optimal outcome. A lot of the human ratings were bad, and more than a few were completely nonsensical, as has been mentioned elsewhere in this forum. A lot of the larger errors in my models were coming from essays that (to me) seemed mis-rated. There was also something unsatisfying about measuring for proxies and correlates of good writing, instead of focusing on what actually makes a good essay. Not sure how you avoid these problems though. Looking forward to more Kaggling in the future, time (and girlfriend...) permitting. P.S. I built a python framework for rapid machine learning prototyping (using pandas, scikit and rpy2) as I worked on this competition; it'll be on github shortly. Hopefully any fellow python hackers out there will find it useful :)",3,bronze,3 ,Tue Apr 24 2012 05:07:25 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1770,/competitions/asap-aes,120th /graphlab,KDD CUP using GraphLab,"Hi everyone!
Following the great excitement of last year's KDD Cup, where solutions based on [Link]:http://graphlab.org (CMU's open source machine learning framework) won the [Link]:http://bickson.blogspot.com/2012/04/kdd-cup-2011-reflection.html in track 1 (and were also used in the 1st place solution), we continue this tradition by providing tools for competitors in this year's contest. We have added a parser library, which prepares the training data by splitting it into meaningful training and validation sets, and splits the test data as required. We provide scripts for packaging the solution and generating linear features based on the training and validation data. Currently, our team is in 6th place on track 2: http://bickson.blogspot.com/2012/04/kdd-cup-update.html . Unfortunately, as the contest is heating up, we have not yet had time to polish our tools. This is why we would love to get any feedback from our users. We promise to give quick and high-priority support to anyone who wants to try it out. Best, Danny Bickson, Project Scientist, Carnegie Mellon University",1,bronze,1 Comment,Tue Apr 24 2012 08:56:32 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1771,/competitions/kddcup2012-track1,141st /benhamner,Public Leaderboard Performance Over Time,Made a quick plot of the public leaderboard performance over the course of the competition - thought y'all would be interested. [Link]:https://storage.googleapis.com/kaggle-forum-message-attachments/2254/Automated Essay Scoring.png,4,bronze,22 ,Wed Apr 25 2012 10:27:56 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1775,/competitions/asap-aes,None /byang1,Understanding the AUC,"I don't understand the AUC metric in this contest. The paper referenced on the evaluation page only talks about positive and negative examples, and here we have clicks and impressions. How is the AUC applied here?",0,None,7 ,Wed Apr 25 2012 18:55:25 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1776,/competitions/kddcup2012-track2,None /teaserebotier,Substantial difference in held out estimates and leaderboard score,"I'm having difficulty reconciling internal and leaderboard estimates of mean weighted error. It's not a consistent ratio either, but the leaderboard estimate is about 10% higher on average. My latest internal estimate was obtained by removing entirely 1/10th of the training set and naming it the testing set before re-running the entire process from scratch... I can understand overfitting, but this is not a hyperparameter optimization; it's just a one-shot post-training estimate. Do I understand the metrics wrong? I read: Performance evaluation will be conducted using mean absolute error. Each observation will be weighted as indicated by the weight column. This weight is calculated as the square root of the time since the last observation, scaled so that the mean weight is 1. Which for me corresponds to the following formula in MATLAB code: error_estimate = sum( abs(predictions - withheld_answers) .* weights ) / sum(weights); (By the way, I don't understand why weights had to be scaled by an arbitrary constant from sqrt(test_data(:,2)+1). The constant cancels out in the error computation.) What am I getting wrong? Are others meeting the same discrepancy?",0,None,12 ,Thu Apr 26 2012 00:02:09 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1778,/competitions/benchmark-bond-trade-price-challenge,37th
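One plausible reading of the click/impression AUC asked about above, consistent with the auc_temp / (click_sum * no_click_sum) expression quoted earlier from scoreKDD.py: a row with c clicks out of m impressions counts as c positives and m - c negatives, all sharing the row's predicted pCTR. This is an inference from the scoring snippet, not the official code.

```python
def click_auc(clicks, impressions, predicted_ctr):
    rows = sorted(zip(predicted_ctr, clicks, impressions), reverse=True)
    auc_temp = click_sum = no_click_sum = click_seen = 0.0
    for _, c, m in rows:
        no_click = m - c
        auc_temp += no_click * (click_seen + c / 2.0)  # ties within a row count 1/2
        click_seen += c
        click_sum += c
        no_click_sum += no_click
    # undefined (division by zero) if there are no clicks or no non-clicks,
    # which matches the ZeroDivisionError reported earlier in this thread
    return auc_temp / (click_sum * no_click_sum)
```

Ties between different rows with identical predictions are ignored here for brevity.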
Is anyone else seeing the same discrepancy?",0,None,12 ,Thu Apr 26 2012 00:02:09 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1778,/competitions/benchmark-bond-trade-price-challenge,37th /acgrama,"""best practice"" for tackling a problem","I am relatively new to ML, and I am curious how you usually tackle a new problem (be it Kaggle or some other data set). Do you 1) start playing on it with functions of an existing ML library (like pyML, or liblinear, etc), or 2) implement from scratch your own home-brew algorithm as a variation of a certain method? Thanks!",0,None,2 ,Thu Apr 26 2012 12:53:52 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1781,None,None /bhaskher,Visualizing Class Boundary for Logistic Regression.,"Hi, I am able to visualize how linear regression works, e.g. the W and how it minimizes the squared error. The fact that MLE is the same as least-squared error for regression helps to visualise what is going on. When we plug it into a sigmoid and use it for classification, the MLE is slightly different. It minimises the misclassification (I think...). I am a bit confused about decision boundaries for logistic regression and SVM. Can someone please help? When drawing an approximate decision boundary for logistic regression, what are the key points? In what way is it different from SVM (perhaps sensitivity to outliers)? Thanks Shekhar",0,None,2 ,Thu Apr 26 2012 17:33:08 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1782,/competitions/mlpr-challenge,34th /aggelosgkiokas,Evaluation Measures,"Dear all, I was thinking that evaluation measures should take into account the number of times that a user listened to a song, i.e. if a user listened to song X more times than song Y, an algorithm that outputs only X should be ranked better than an algorithm that outputs only Y. Maybe a practical measure could be the Discounted Cumulative Gain (DCG) or the normalized DCG. Let i=1..T index the total results from an algorithm. DCG at rank k is:
DCG(k) = rel(1) + sum(rel(i)/log2(i)), i=2..k
where rel(i) denotes the relevance of song i to the target user. rel(i) could be something like:
rel(i) = (#listens of song ranked i) / (#total_listens from user)
Normalized DCG is nDCG(k) = DCG(k)/IDCG(k), where IDCG(k) is the ideal ranking at k results, i.e. the best DCG that can be achieved at k results. Then a mean value can be computed from nDCG() for all k=1..T for each user. Finally, compute the mean value across all users. Another formulation could be a modification of mAP as described in the challenge description paper. M(u,i) can be replaced by M'(u,i) in Eq. 2, where M'(u,i) is the #listens of user u for song i. Then n_u should be replaced with the total listens for user u. Is this modification valid? Any other measure suggestions? I am looking forward to hearing your comments/suggestions. Best regards, Aggelos Gkiokas",0,None,3 ,Thu Apr 26 2012 18:04:32 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1783,/competitions/msdchallenge,18th /brianmcfee,Looking for a team?,"So you want to build a music recommender, but not by yourself? Use this thread to find partners and build teams. Good luck!
--Brian",0,None,2 ,Thu Apr 26 2012 18:37:36 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1784,/competitions/msdchallenge,None /aleczopf,Allowable Data Sources,"Are there specific criteria for additional data sources that could be used to perform model training? For example, if I had proprietary data about tastes, or collected that data myself as part of this challenge, would such a source be allowed? Would I be required to make such data public during or after the competition?",0,None,3 ,Thu Apr 26 2012 19:15:21 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1785,/competitions/msdchallenge,None /eigenvector,Limit on submissions?,"Hi Folks, I'm new to Kaggle, and I have a question that I couldn't find the answer to. Is there a limit on the # of submissions one can make towards the same competition? In the rules for the Heritage Health Prize, I noticed that they limit one submission per calendar year, but I couldn't find a similar rule in the Benchmark Bond Trade challenge. However, some of the postings in this forum allude to the fact that there may be a limit. Could someone please clarify this? Thank you!",0,None,2 ,Fri Apr 27 2012 00:25:34 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1786,/competitions/benchmark-bond-trade-price-challenge,None /vladimirbelikov,Getting Started Documentation,I can't download [Link]:http://www.kaggle.com/c/msdchallenge/download/MSDChallengeGettingstarted.pdf -- it just returns me to the same page. Is it my mistake or a server-side problem?,0,None,7 ,Fri Apr 27 2012 10:35:12 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1788,/competitions/msdchallenge,None /jfister,Award Ceremony Expectations?,"Ben, I was wondering if you would be able to provide additional info about the award ceremony that would allow us to make a better assessment of the value in making the trip. For anyone in the top 3 or for those living nearby, the decision is an easy one. But for anyone travelling at a distance it becomes more dicey. For example, it could be that Kaggle has simply invited vendors and contestants, but there are no clear intentions (besides informal networking), nor is it clear which vendors will be there. On the other extreme, it could be an event jointly planned by vendors and Kaggle to facilitate the vendors' potential hiring, licensing, or consulting with contestants in the near future. Do you know which of these scenarios is closer to reality? Do you know which vendors will be attending, and what their possible intentions are? Any representatives from state departments or standardized testing services? Any other info that would be helpful to us? Sorry if I'm prying too much, it's just that the trip could be very costly for many and the description is so vague that it's difficult to know if the trip has no value, or if it's incredibly important. I'm sure any info you can provide would be very much appreciated.",6,silver,2 ,Fri Apr 27 2012 17:41:42 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1789,/competitions/asap-aes,3rd /martinoleary,Open source solutions,"Seeing as there's no prize at stake in this contest, I had an idea that I would develop a solution ""out in the open"", writing about it as I go along, and putting everything in a GitHub repository for all to see.
This would be a full attempt at solving the problem, not a simple benchmark or tutorial. I'm conscious though that not everybody would like to see public solutions, as these are likely to lead to lots of copycat solutions filling the leaderboard. I'd like to get a sense of how people feel about this: would people rather that I kept my work to myself, or shared it with everybody?",7,silver,8 ,Fri Apr 27 2012 22:29:02 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1790,/competitions/msdchallenge,37th /matthew41243,Do you really mean the data is,"Do you really mean the data is in rows by date? Or is that a typo? If in rows then we have to pivot it, which seems like a less than productive use of everybody's time.",0,None,2 ,Fri Apr 27 2012 23:22:06 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1791,/competitions/dsg-hackathon,None /dchudz,Welcome!,"Data will be uploaded at noon UTC, 5am PT, 8am ET, etc. tomorrow and the contest will last for 24 hours in hackathon locations around the world and also remotely from wherever you like. Note that some information about the data, etc. may change before the hackathon start (though additions are more likely than other sorts of change).",0,None,16 ,Fri Apr 27 2012 23:26:07 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1792,/competitions/dsg-hackathon,22nd /elistats,Data description complete?,"I assume measurements happen at different sites and not a single site. Are locations provided (e.g. corresponding to EPA sites, I expect)? I didn't see that in the description.",0,None,18 ,Fri Apr 27 2012 23:38:26 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1793,/competitions/dsg-hackathon,8th /rambeaux,Climate/Weather Data API,"Hi All, The following website appears to have an open API for extracting localised weather history and general climatic patterns: http://www.ncdc.noaa.gov/cdo-web/webservices Now the challenge is in working out how to extract the information!",0,None,5 ,Sat Apr 28 2012 06:17:26 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1794,/competitions/dsg-hackathon,100th /dchudz,Leaderboard by Location,"For this hackathon, we have leaderboards by location. When you form your team (the first time you make a submission), you can choose your site's location (or ""remote"" if you're not working from one of the hackathon sites). You will then show up on your site's leaderboard, and your site will be shown by your team name on the main leaderboard. If you're a Kaggle regular, you'll notice that's new for us. We're intrigued by the idea!",0,None,2 ,Sat Apr 28 2012 06:43:25 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1795,/competitions/dsg-hackathon,22nd /elgringo,Chunks with missing positions,"Dear management, in the data, I found 10 chunks that do not end at position 192. Moreover, none of the 192 positions is available for all 210 chunks. Is that something you can do something about? (Also: when this happens, I start predicting at 193 as in your sample submission, right?) (Edit: there are 12 chunks that do not end at position 192. Two of them - 94 and 153 - have no data at all!)
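(For anyone wanting to reproduce this check, a rough pandas sketch; the TrainingData.csv filename and the chunkID / position_within_chunk column names are assumptions based on the data description:)

import pandas as pd

train = pd.read_csv('TrainingData.csv')  # assumed filename
last = train.groupby('chunkID')['position_within_chunk'].max()
print(last[last != 192])           # chunks that do not end at position 192
print(train['chunkID'].nunique())  # how many chunks have any data at all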
Thanks",0,None,1 Comment,Sat Apr 28 2012 16:09:19 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1796,/competitions/dsg-hackathon,14th /dchudz,submission limits,"""Each team will be able to make 8 submissions on Saturday, and 8 submissions on Sunday (where days are defined as periods from midnight to midnight UTC).""",0,None,2 ,Sat Apr 28 2012 16:44:26 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1798,/competitions/dsg-hackathon,22nd /aaron13738,Wind Speed and Direction Data,"Two questions about the wind data: The data format describes the data as: WindDirection..Resultant_1 (direction the wind is blowing from given as an angle, e.g. a wind from the east is ""90"") WindDirection..Resultant_1018 (direction the wind is blowing from given as an angle, e.g. a wind from the east is ""90"") WindSpeed..Resultant_1 (wind speed)_(site number) WindSpeed..Resultant_1018_(site number) So it looks like there are two different measures, 1 and 1018. What is the difference between these? It looks like we are supposed to be able to have wind speed (but not direction?) data for each site, but in the file I see only: ""WindDirection..Resultant_1"",""WindDirection..Resultant_1018"",""WindSpeed..Resultant_1"",""WindSpeed..Resultant_1018"", Any clarification would be appreciated.",0,None,1 Comment,Sat Apr 28 2012 16:49:20 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1799,/competitions/dsg-hackathon,35th /aaron13738,Site locations of Sensors,"It looks like there are 9 meteorological sensors, but only 3 of them overlap with the pollutant sensors. Can you post the locations of the other six sensors? The IDs are 14,22,52,76,3301 and 6005. -edit- It looks like the wind speeds are also unique sites, so add 1 and 1008 to that list.",0,None,3 ,Sat Apr 28 2012 16:54:23 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1800,/competitions/dsg-hackathon,35th /mlearn,Submissions,What does it mean when a submission's score is pending? It's been like that for over 5 minutes. Also I'm feeling a bit stupid as I've made a prediction when I should have said 1e-6. Is there any way you can get an error reported in this case as I've now wasted one of my submissions for the day. Thanks.,0,None,1 Comment,Sat Apr 28 2012 17:22:13 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1801,/competitions/dsg-hackathon,46th /elistats,Just having fun with graphics (studying wind),Taylor said this looks best in Chrome for some reason... [Link]:http://euler.stat.yale.edu/%7Etba3/hack/movie2/,12,gold,1 Comment,Sat Apr 28 2012 17:24:48 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1802,/competitions/dsg-hackathon,8th /philipskokoh,Weekday on test data?,I cannot find weekday on the test data (SubmissionZerosExceptNAs.csv). Am I missing something?,0,None,6 ,Sat Apr 28 2012 17:29:48 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1803,/competitions/dsg-hackathon,42nd /ulrich,Standardised target variables?,"Hey all, we cannot reproduce mean=0, std=1 in the target variables as the description suggests. Can anyone confirm this?
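(A quick way to check; a minimal pandas sketch, where the TrainingData.csv filename and the target_* column naming are assumptions:)

import pandas as pd

train = pd.read_csv('TrainingData.csv')  # assumed filename
targets = [c for c in train.columns if c.startswith('target_')]
# if the targets were standardised, these should all be ~0 and ~1
print(train[targets].mean().describe())
print(train[targets].std().describe())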
Cheers, Ulrich",0,None,2 ,Sat Apr 28 2012 18:18:56 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1805,/competitions/dsg-hackathon,20th /chrisraimondi,Would others find data size useful for competition summary page?,"Not sure what others think, but in the section that lists $$$, time left, and number of teams - I wonder if others would find the size of the data set useful. Obviously there is more than one way to measure data size, but something that would give people a clue might be nice (or perhaps in the future on a more detailed summary page). Nothing urgent, but it just occurred to me. It could also be given in the form of # of included features, by number of training examples. Also in the section where you can download the data - for the REALLY large data sets - it might be nice to have a small sample (say the first 1000 rows) of the data set available so you can get an idea without downloading the whole thing, or maybe even have tags like you are doing for users such as ""will fit into excel 2007"" or ""will fit into excel 2010""",0,None,2 ,Sat Apr 28 2012 18:39:40 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1806,None,None /mgalle,Missing chunks in train data,"Test data contains chunk ids 94 and 153, but these chunks do not appear in the training data. Is this correct? What does this mean?",0,None,6 ,Sat Apr 28 2012 18:42:43 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1807,/competitions/dsg-hackathon,59th /joecamel,Starting points in MIR,"My team is participating in the challenge as part of our course project in IR. We are novices in music information retrieval, so we're seeking advice on starting points in the field. If you know some good papers, books or other resources which might be useful for the challenge, we would appreciate it. For example, I see that the book by Òscar Celma is mentioned on the webpage.",0,None,3 ,Sat Apr 28 2012 22:38:36 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1810,/competitions/msdchallenge,16th /barrenwuffet,Use of predicted columns,"I'm a bit unsure about what types of inputs we're allowed to use. Since most of the columns aren't in the test data, are we allowed to use predicted versions of the columns that are present in the training data, but not in the test set? For instance, 'solar_radiation' is in the training data but not the test data. Can we use the training data to predict solar_radiation levels in the test data and then use the predicted value to predict the target values in the test data?",0,None,1 Comment,Sun Apr 29 2012 00:06:54 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1812,/competitions/dsg-hackathon,23rd /johnsonbizint,Means is not working,So we replicated the exact file the R code produces using MySQL tables and SQL statements. Both our file and the R file produce an outrageous mean absolute error. What the hell are we doing wrong? The submission file with just 0s gives us a better score than the mean by chunk and hour.
[Link]:https://storage.googleapis.com/kaggle-forum-message-attachments/2398/submission_1.csv,0,None,1 Comment,Sun Apr 29 2012 02:13:45 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1813,/competitions/dsg-hackathon,91st /sheac38395,"Submitting all zeros scores way worse than ""AllZeros""","I've tried submitting a relatively naive solution (based on hourly averages) and wound up with results comparable to ""SubmissionAllZerosEvenNAsVeryBadScore.csv"". I tried again, submitting all zeros except for NAs. Virtually the same score: four thousand ""MAE units"" worse than it should be. Any ideas why I can score so poorly with those submissions? Has anyone else had this problem but resolved it? I'm not really interested in scoring on par with ""SubmissionZerosExceptNAs.csv"", but if I can't even submit that properly, my odds of getting my more sophisticated solution to score well are pretty limited.",0,None,6 ,Sun Apr 29 2012 03:46:14 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1814,/competitions/dsg-hackathon,106th /matthew41243,More missing data,"As an example, Chunk 129 has dropped records between 35 and 83, and many of the chunks are incomplete. This is making it very difficult to model. Any words of wisdom?",0,None,1 Comment,Sun Apr 29 2012 03:56:15 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1815,/competitions/dsg-hackathon,None /notbatman,Submission Stuck in pending for last 5 mins.,Never happened before. Can the admins please check what's happening.,0,None,1 Comment,Sun Apr 29 2012 06:20:56 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1816,/competitions/dsg-hackathon,33rd /mikel1,How many unique songs in Taste Profile dataset?,"http://labrosa.ee.columbia.edu/millionsong/tasteprofile states 384,546 unique MSD songs in the Taste Profile dataset. I also find 384,546 unique songs. The number is correct.",0,None,3 ,Sun Apr 29 2012 12:38:11 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1817,/competitions/msdchallenge,14th /jpetterson,Incorrect end time in leaderboard page,"20 minutes ago the leaderboard page was showing ""15 minutes"" in the ""Ends"" box, and as a consequence I suspended my experiments and submitted everything I had at that time. To my surprise, after getting to ""0 minutes"" it went up to ""60 minutes"" again. Did anyone else notice that? I still had things to try, but now I've run out of submissions... James",3,bronze,3 ,Sun Apr 29 2012 13:09:57 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1818,/competitions/dsg-hackathon,2nd /smcinerney,General approaches to partitioning the models?,"Now that the competition is over, it would be interesting to share overall approaches:
A) MODELING APPROACH: As I see it, you could in principle build individual models broken out by some or all of:
- 39 target/site combinations (or subcategories of, e.g. highly diurnal or seasonal vs lowly)
- 10 position_within_chunk values {+1,2,3,4,5,10,17,24,28,72 hrs}. Many people used one short-term and one long-term model.
- 7 weekdays (might as well arbitrarily label these based on starting weekday; we are predicting for +8..11 days later)
- 12 months (based on month_most_common). Or at least sets of months. Monthly seasonality definitely varies by target(/site).
- 24 values of hour (or else you might have renumbered to start at midnight, or sunrise, which varies by month...)
and there may be other creative criteria... To use 39*10*7*12*24 models would obviously be massive overkill (and the result would not be explanatory); it seems like most teams stuck to a small handful of models (typically 2 or 3). What were your approaches to partitioning the modeling?
A2) WEATHER DATA & MODEL: There were 9 meteorological sites. Only 30.9% (11690/37821) of rows had at least 30 out of 40 meteorological features non-NA. (Although site 14 was useless, and only site 52 had near-complete temperature data.) Did anyone build predictive weather models? How did people map the values from meteorological sites to pollution sites? How did you handle NAs?
A3) DEANONYMIZING THE INDIVIDUAL TARGETS: (into NO2, fine particulates etc.), and/or piecewise modeling their underlying production mechanisms? Has anyone got the list of targets?
B) VALIDATION SET: The training set (8 days) could be partitioned into 6 days training + 2 days validation set (or 7+1), taking the NAs into account (e.g. the last 4 days must not be NA). This can then be used to validate models by scoring with overall MAE. That MAE could be broken out by {target-site, position_within_chunk, weekday, month_most_common} to get insight into where the worst contributions to MAE were coming from, and then tweak or further subdivide those models (or weights), and iterate until the MAE on the validation set seems acceptable (don't overfit!), and then make a submission (and verify that the test MAE also improved, or else discard the changes).
C) DATA CLEANING, AND HANDLING OF NAs: Any creative approaches? We did not spend time on this but a couple of teams (in NYC) seem to have spent huge time on it. Conversely, was there selection bias if you simply ignored all chunks with >25% missing data? All insights welcome...",4,bronze,21 ,Sun Apr 29 2012 16:08:17 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1821,/competitions/dsg-hackathon,48th /mlearn,Leaderboard progression,I attach a picture of the leaderboard progression over time.,1,bronze,2 ,Sun Apr 29 2012 17:56:38 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1823,/competitions/dsg-hackathon,46th /mlearn,Hackathon thoughts,"So what did people think of the intense hackathon nature of this competition? I've experienced the ""remote"" option and I'd be keen to hear from the on-site people. I thought it was fun to get the extra pressure / action, however without proper video feeds / chatrooms it felt hard to be fully in touch. Kudos to the Canberra crew for setting something up but unfortunately few people joined in. As a ""remote"" person I found it hard to get enough time to fully take part - managed a stint near the beginning (even got to #1 on the leaderboard for a short while) but then needed to do other things. Tried to get into it again the next day but didn't have enough time to learn the required new tricks before the competition timed out. So despite having enjoyed myself I may stick to the longer competitions in future as there's more time for learning and doing around the rest of life's activities.",0,None,4 ,Sun Apr 29 2012 18:04:31 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1824,/competitions/dsg-hackathon,46th /zachmayer,Cross-validated vs.
leaderboard error,What sort of differences have people been observing between their training set cross-validated error and leaderboard error?,0,None,12 ,Sun Apr 29 2012 23:10:01 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1825,/competitions/bioresponse,45th /sheac38395,Access to MAE Scoring Code,"I was never able to get my submission formatted properly, but I'd still like to keep working on a solution. In order to compare my results to those of others in the contest, I'd need the MAE scoring code that's used. Can this be made available to teams? Thanks, Shea",0,None,2 ,Mon Apr 30 2012 04:13:50 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1826,/competitions/dsg-hackathon,106th /alecstephenson,Thanks Melbourne,"A big thanks to Melbourne organizers Yuval Marom and Rory Winston, and sponsors Melbourne Institute, Lift Analytics and Huntel Global for keeping us well fed and watered and putting on a great event in Melbourne that went down to the last few minutes. Thanks to all participants and congrats to feeling_unlucky. We've discovered a whole new bunch of Melbourne kagglers. Remember, turn right. :)",1,bronze,9 ,Mon Apr 30 2012 09:20:45 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1827,/competitions/dsg-hackathon,10th /intaka,Recommendations vs Predictions,"In the FAQ, it says: You're not really measuring music recommendation! True. We're measuring the ability to predict the songs a user listens to, given a subset of observations. More details in our [Link]:http://www.columbia.edu/%7Etb2332/Papers/admire12.pdf, but in short this setting is the best offline proxy for music recommendation available. I have read the AdMire paper but I am still not clear on the distinction being made between recommendations and predictions. Please clarify. Thanks.",0,None,4 ,Mon Apr 30 2012 14:17:12 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1829,/competitions/msdchallenge,17th /dchudz,what the target variables really were,"For anyone who's curious:
PARAMETER_CODE  PARAMETER_DESC                           measured_quantity
42101           Carbon monoxide                          target_8
42401           Sulfur dioxide                           target_4
42406           SO2 max 5-min avg                        target_3
42601           Nitric oxide (NO)                        target_10
42602           Nitrogen dioxide (NO2)                   target_14
42603           Oxides of nitrogen (NOx)                 target_9
44201           Ozone                                    target_11
81102           PM10 Total 0-10um STP                    target_5
88305           OC CSN Unadjusted PM2.5 LC TOT           target_15
88306           Total Nitrate PM2.5 LC                   target_2
88307           EC CSN PM2.5 LC TOT                      target_1
88312           Total Carbon PM2.5 LC TOT                target_7
88403           Sulfate PM2.5 LC                         target_8
88501           PM2.5 Raw Data                           target_4
88502           Acceptable PM2.5 AQI & Speciation Mass   target_3",0,None,4 ,Mon Apr 30 2012 19:22:00 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1830,/competitions/dsg-hackathon,22nd /mmaut41696,Python code for log loss,"For my fellow newbs, a sample utils.py:
from math import log

def log_loss(predicted, target):
    if len(predicted) != len(target):
        print('lengths not equal!')
        return
    target = [float(x) for x in target]  # make sure all float values
    predicted = [min([max([x, 1e-15]), 1 - 1e-15]) for x in predicted]  # within (0,1) interval
    return -(1.0/len(target)) * sum([target[i]*log(predicted[i]) +
                                     (1.0-target[i])*log(1.0-predicted[i])
                                     for i in xrange(len(target))])

if __name__ == '__main__':  # if you run at the command line as 'python utils.py'
    actual = [0, 1, 1, 1, 1, 0, 0, 1, 0, 1]
    pred = [0.24160452, 0.41107934, 0.37063768, 0.48732519, 0.88929869,
            0.60626423, 0.09678324, 0.38135864, 0.20463064, 0.21945892]
    print(log_loss(pred, actual))",6,bronze,5 ,Mon Apr 30 2012 23:24:38 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1831,/competitions/bioresponse,338th /viveksha,Congratulations!,"Congratulations to the winners! And, thanks to the sponsors and Kaggle for a terrific contest. There was clear differentiation from the rest of the pack as well as between the winners. I'd love to know what insights and methods worked for people, and what didn't.",1,None,21 ,Tue May 01 2012 03:04:19 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1833,/competitions/benchmark-bond-trade-price-challenge,16th /waynezhang,Profile picture display,"I set up Gravatar, and I can see the picture through it. But on Kaggle, the picture displays on some Windows 7 computers, but not on some other Windows 7 machines or on Macs. Does anyone know why? Thanks!",0,None,9 ,Tue May 01 2012 03:25:01 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1834,None,None /mikel1,Placing transformed dataset on a public server?,I have transformed the Million Song Dataset for easy analysis by a statistical software package. Am I allowed to upload the transformed dataset to a public server for access by anyone?,0,None,2 ,Tue May 01 2012 09:00:40 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1835,/competitions/msdchallenge,14th /byang1,question about some data IDs,"I didn't go thru the 9GB file to check for this, but the information page tells us we have the following fields in the data: 4. AdID ... 9. KeywordID: a property of ads. This is the key of 'purchasedkeyword_tokensid.txt'. 10. TitleID: a property of ads. This is the key of 'titleid_tokensid.txt'. 11. DescriptionID: a property of ads. This is the key of 'descriptionid_tokensid.txt'. This seems to say some ads may have different keywords, titles, and descriptions. Is this true?",0,None,1 Comment,Tue May 01 2012 09:09:33 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1836,/competitions/kddcup2012-track2,None /teaserebotier,Did anybody use the structure of weights (or time_last1),"Do a histogram of the weights and you'll see what I mean. The weights have a strong multimodal distribution, separated into sets corresponding to a period of about 26 hours (not 24...). Within each set of weights the median deviation between curve-based price and actual price goes up linearly with time... I added that to my model mix and gained a tiny bit, but I was wondering what's the phenomenon behind that. My computer is busy right now but I'll make and post graphs later. If you have Octave/MATLAB you can use the attached code to regenerate the graphs; the variables are self-explanatory...
% train_data is the training set, Ntrain x 61
% each graph plots a ""blob"", that is, all the data in the graph comes from trades within one of the modes of the multimodal distribution of delays between THE trade (indexes 1-11) and trade-1 (indexes 12-16 in the data line)
% each graph cuts that mode into 10 finer slices of time and plots the mean or median absolute error
% the first 5 blobs, which comprise the vast majority of weight, show a strong relationship between time in the blob and abs(price - curve_based)
% the last two graphs put together all blobs to illustrate the fact that the relationship gets a reset every blob
[Link]:https://storage.googleapis.com/kaggle-forum-message-attachments/2423/studyblobs.m",0,None,2 ,Tue May 01 2012 10:04:22 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1837,/competitions/benchmark-bond-trade-price-challenge,37th /ddminer,What tools do you use to analyze it?,Those text files are huge... What tools do you use to analyze them? Can I analyze these files with SPSS 14.0 Clementine? I am just participating in the KDD Cup for my school assignment; I haven't analyzed these types of data files in class. So I wonder what tools you guys use?,0,None,5 ,Tue May 01 2012 10:33:43 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1838,/competitions/kddcup2012-track1,None /peterphillips,Final model scoring,I notice that not all entries have been scored on the private leaderboard (most still say 0.00000). Are they still being scored or does this mean something went wrong with our submission?,0,None,1 Comment,Tue May 01 2012 13:18:59 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1839,/competitions/asap-aes,107th /benhamner,Congratulations and DC Conference,"Congratulations to everyone who took part in this contest, and especially the top 3! This was a very challenging problem, and all your results are impressive. Preliminarily, the three winning teams are: (1) Jason, Momchil, and Stefan; (2) Phil, Chris, Will, Eu Jin, and Bo; (3) Vik and Justin. We're working to validate these as soon as possible to confirm this. So far, Jason, Momchil, and Stefan's code is looking good, which presumably means everyone else is good as well. We'd like to invite anyone with a final QWK above 0.75 to participate in the TILSA conference on May 9 (see https://www.kaggle.com/c/asap-aes/forums/t/1789/award-ceremony-expectations and https://www.kaggle.com/c/asap-aes/forums/t/1701/award-ceremony for more information). Please email Lynn (Lynn at openedsolutions.com) and CC me (ben at kaggle) ASAP if you'd like to come. I'll be in Washington DC Tuesday May 8 (arriving at night) - Sunday. Hope to see many of you there!",0,None,4 ,Tue May 01 2012 18:24:56 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1840,/competitions/asap-aes,None /jothy17269,"Memory issue: ""Cannot allocate vector of size...""",I am a newbie in this data mining field. I have started working on this competition to learn new things. My computer is a 32-bit machine with 3.24 GB RAM. I am facing memory issues while training the data set.
Can anyone please help me with this?,0,None,4 ,Wed May 02 2012 11:40:16 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1841,/competitions/kddcup2012-track2,156th /activegalaxy,Visible versus Hidden,When the test triplets were split into visible and hidden halves was there any reordering of the songs for each user? I'm curious if these user/song lists could contain time sensitive preferences and if that was altered upon splitting. Thanks.,0,None,1 Comment,Wed May 02 2012 18:07:50 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1843,/competitions/msdchallenge,119th /danglaser,Thanks!,"I want to thank everyone who worked on this competition, from those who downloaded the data just to check it out to those that made multiple submissions per week and were active on the forums. We appreciate the interest in our competition and all the effort that was put into your solutions. The discussion on the forum was great to see and I'm enjoying reading about your methods in the ""Congratulations"" thread. We hope to run more challenges in the future, so please let us know your thoughts about this competition. What did we do well? What could we improve on? How was our competition interesting/fun/informative? Thanks, Dan",0,None,1 Comment,Thu May 03 2012 00:07:23 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1844,/competitions/benchmark-bond-trade-price-challenge,None /dajre41911,UserId and ItemId,"I get that UserId = ItemId; however, in the cases when the ItemId is ""an organization, or a group"", does that mean that the organization or group can be a singular user? I guess what I'm asking is: are there sometimes multiple users assigned under a single Item, or is it always 1:1?",0,None,1 Comment,Thu May 03 2012 07:05:56 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1846,/competitions/kddcup2012-track1,318th /imsrch,Is Mahout of use in this track?,Is Mahout of use in this track? Has anybody tried it?,0,None,2 ,Thu May 03 2012 07:11:57 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1847,/competitions/kddcup2012-track1,None /mashhoori,The thing that everyone in track1 wonders!,Does anybody have any idea why there is such a huge discrepancy between the scores of the first two ranks and other scores?,0,None,1 Comment,Thu May 03 2012 10:42:08 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1848,/competitions/kddcup2012-track1,27th /workingknowledge,"Any estimates of the errors in ""Activity"" or D[i]?","The description of the data set implies that ""Activity"" was estimated by some experimental process (e.g., in vivo or in vitro measurements). Are there any estimates of the rate of Type I & Type II errors in this Activity data? Similarly, are there any error models for any of the molecular descriptors?
Although descriptors based on elemental constitution might be error-free (assuming testing used 100% pure samples), other descriptors such as size, shape, or other chemical properties might be subject to calculation or measurement errors.",0,None,5 ,Thu May 03 2012 16:29:46 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1849,/competitions/bioresponse,417th /liwoliht,Allowed to use leaderboard score to improve model,"Hi, I wonder if it is allowed to use the scores found on the leaderboard to systematically extract the true classes for test-data entries? E.g. one might change the predicted value for one test-data entry at a time to see if the score improves or not. Thanks",0,None,14 ,Thu May 03 2012 16:44:02 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1850,/competitions/bioresponse,55th /dajre41911,"Keywords, Categories and Tags","I am a little confused about Keywords, Categories and Tags.
Keywords: I understand that UserKeywords and ItemKeywords use the same vocabulary. For example, if 9 = Computer then a 9 in UserKeyword and a 9 in ItemKeyword would both mean Computer. One question I have is how are the ""weights"" determined for user keywords? Is this weight determined from all of that user's tweets? As in the keywords they tend to post most often? The other question I have about keywords is about the item keywords. How are these keywords generated? Are they the same keywords as those of the user that posted them? Or are they generated from the words posted alongside the item?
Tags: Tags, from my understanding, are words that a user chose to describe themselves, such as climbing or programming. My question about tags is: are there any other fields that share the same vocabulary as tags?
Categories: My question about categories is similar. Are there any other fields that share the same vocabulary as categories?
Extra: My last question, which I'll throw in here too for completeness (I asked it in another thread): are organizations and groups identified by a single UserId?",0,None,1 Comment,Fri May 04 2012 06:05:47 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1853,/competitions/kddcup2012-track1,318th /onlineproductsaleshost,Welcome from the competition host,"Hello all, Thanks so much for your interest in our data competition. We look forward to seeing the algorithms data scientists around the world come up with. Please direct any questions you have regarding the competition my way! Thanks, Jason",1,bronze,9 ,Sat May 05 2012 01:27:15 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1856,/competitions/online-sales,None /ovgu12,days in hospital y1,"Hello everyone, here is a newbie question: assuming I have claims for Y1, can I calculate exactly the days in hospital for Y1, and how can I do that?",0,None,2 ,Sat May 05 2012 03:46:10 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1857,/competitions/hhp,288th /sashikanthdareddy,NaNs,What do NaNs in outcome variables mean? Have the products been discontinued?,0,None,5 ,Sat May 05 2012 12:32:29 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1859,/competitions/online-sales,None /mikel1,Mean average precision and the order of the 500 songs,"In a submission, does the order in which the 500 songs are listed for a user change the Mean Average Precision?
It looks like the order of the songs in each user's list is immaterial, so that 400 songs correctly included in the list for a user produces a precision of 0.8 to average into the MAP. Also, does the order in which the songs are listed for a user in kaggle_visible_evaluation_triplets.txt have meaning?",0,None,6 ,Sat May 05 2012 12:43:01 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1860,/competitions/msdchallenge,14th /chrisbrew,Odd wording in description of evaluation.,"The mention of ""editors"" and ""edits"" in the description of the evaluation is a touch mysterious. Is this wording left over from some other competition, for something like prediction of how often Wikipedia pages get changed? It would all make sense if ""editors"" were replaced by ""months"" and ""edits"" by ""monthly sales figures"". Can someone clarify?",0,None,3 ,Sat May 05 2012 16:09:48 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1862,/competitions/online-sales,69th /fredm42251,code,">Winning participants are required to provide code. Is there a specific programming language the code must be in, or can I use whatever I like?",0,None,1 Comment,Sun May 06 2012 08:14:51 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1863,/competitions/online-sales,None /idrach55,n in the Evaluation Function,"Hello, I found the Evaluation Function to be a bit vague. n is described as ""n is the total sales in the (public/private) data set"". Is that the number of sales in a given month for a product, or for the whole year? Also, is a score given for each month, or for the whole year? Would p be the prediction for the i-th month? Any information would be much appreciated.",0,None,2 ,Sun May 06 2012 08:31:31 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1864,/competitions/online-sales,164th /ejlok1,Evaluation metric code,"Here's the R evaluation metric code if anyone's interested:
RMSLE <- function(P, A) {
  sqrt(sum((log(P + 1) - log(A + 1))^2) / length(A))
}",2,None,2 ,Sun May 06 2012 14:30:24 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1865,/competitions/online-sales,5th /datajunkie1,AUC or MAE,"Which criterion is used for evaluation, AUC or MAE? In one of the posts it's mentioned that both will be used, but in the Information section I see only AUC mentioned (as described in Algorithm 3 in Tom Fawcett's paper). Please clarify. Thanks",0,None,1 Comment,Mon May 07 2012 20:21:09 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1869,/competitions/kddcup2012-track2,139th /randomforestfanatic,Can clustering help with data mining?,"So, this question applies to this particular problem (predicting a biological response) but it's also something I've been curious about in general. My question is: has anyone seen improvement in models by first clustering the data into groups and then building separate models on each group? I have a bit of background in marketing/customer analysis, and it seems like the typical approach in that field is to first segment the customers and then build individual models for each population. To me, it seems like it would be more profitable to use all of the data to train one model (such as a random forest, boosting tree, etc.) and not worry about clustering.
But, I could be wrong and that's why I'm asking for thoughts!",0,None,4 ,Mon May 07 2012 21:44:38 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1870,/competitions/bioresponse,71st /jinglei,Will the rec_log_test.txt with the right values be released?,Will the rec_log_test.txt with the right values of the ‘Result’ field be released when the competition ends?,0,None,4 ,Tue May 08 2012 11:09:16 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1872,/competitions/kddcup2012-track1,None /blindape,Has anyone found a useful variable?,It is a bit frustrating not to be able to find a simple association. GBM convergence crashes! Deviance grows with more trees... Really a hard data set. A lot of variables are almost constant. Any clues from the promoters about the nature of the variables would be welcome.,0,None,2 ,Tue May 08 2012 12:54:46 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1873,/competitions/online-sales,168th /chrissumner,Welcome,"We're excited to launch this, our first competition! The aim of this competition is to determine the best models to predict the personality traits of Machiavellianism, Narcissism, Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism based on Twitter usage and linguistic inquiry. As an organization, one of our research goals is to understand just how well personality can be predicted by activity on social network sites such as Twitter. There are many sensational headlines stating that social network activity can predict personality, but there's scant research into predictive model performance. We want to answer questions such as ""Are employers who pre-screen based on social networking making a gross mistake?"". Your participation in this research will help drive future important research on personality prediction. Finally, we will imminently be releasing a second competition focusing on just one personality trait, Psychopathy. Please do look out for that competition. Thanks and good luck, Chris",0,None,9 ,Tue May 08 2012 20:16:27 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1875,/competitions/twitter-personality-prediction,None /mkline55,Patient 41073844 LengthOfStay,"I realize the provided data is ridiculously poor, and that's probably why the competition has to run so long, since every team has to come up with their own methods for handling this problem. Why wasn't the data cleaned up? That seems to be a more difficult problem than the supposed solution being sought. In year 2, Member ID 41073844 appears to have a LengthOfStay value between 4 and 8 weeks per month (never longer) and sometimes an additional 2-4 weeks. It's ridiculous. Yes, I can handle the data in a manner I determine to best represent what it should have been. I've read the posts related to the quality of the data. But the real question is how does Heritage handle the data for year 4? Do we assume Heritage has correct numbers, and not junk, and if so, why give us junk for the prior years and distract from their goal? In calculating results, would Heritage have used the total of 1009 days for this one member in year two, and said my estimate is a little short?
Gee, I thought they would only be in the hospital for 365 days.",0,None,1 Comment,Tue May 08 2012 21:16:53 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1876,/competitions/hhp,453rd /eigenvector,Submission Limits?,"Hi All, I'm new to this forum/competition and I'm a bit confused about the limit on the # of submissions. The rules state that: ""8. ENTRY SUBMISSIONS ... Entrants may submit one (1) Entry during each calendar day (UTC) of the Competition Period, beginning on May 4, 2011. Team Entries must be submitted by the Team leader."" So we're only allowed 1 entry per year? I've been looking at the leaderboard for the past few days, and it appears that some teams are able to submit much more frequently than that. What am I missing? Thanks ARP",0,None,2 ,Tue May 08 2012 21:27:01 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1877,/competitions/hhp,180th /delley11,Leaderboard on June 1st,"Dear all, I have a question about the leaderboard on June 1st. In the description, it says ""In the result file, the lines corresponding to the lines from validation dataset will be used to score for the ranking on the leaderboard during the competition except the last day (June 1, 2012), and the lines corresponding to the lines from testing dataset will be used for the ranking on the leaderboard on the day of June 1, 2012, and for picking the final winners."" Does it mean that we can see the AUC of the 58% testing data on June 1, while we can still submit new entries on the same day? Looking forward to your answer. Thanks. Best regards, Delley",0,None,2 ,Wed May 09 2012 09:10:09 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1879,/competitions/kddcup2012-track2,33rd /mikel1,Submit from a website?,"Folks: I am on a slow internet connection, so my submissions time out. My website is on a fast connection. If I upload my submission file to my website, is there any way I can type the URL of the submission file into the submission box on the Submissions webpage?",0,None,2 ,Wed May 09 2012 12:17:24 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1880,/competitions/msdchallenge,14th /sidav40256,Can anybody give some hints on how to model the new users?,Can anybody give some hints on how to model the new users that are in the test set but not in the training set? Many thanks in advance!,0,None,6 ,Wed May 09 2012 13:18:21 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1881,/competitions/kddcup2012-track2,60th /activegalaxy,MCAP Definition,What is your definition of MCAP on the leaderboard?,0,None,2 ,Wed May 09 2012 16:21:01 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1882,/competitions/twitter-personality-prediction,None /zstats,Categorical variables,"Can you clarify the non-binary categorical variables? For instance, Cat_4 appears to have 529 unique integer values, ranging from 1 to 1544.
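(These counts can be reproduced with a quick check; a minimal pandas sketch, where the TrainingDataset.csv filename is an assumption:)

import pandas as pd

train = pd.read_csv('TrainingDataset.csv')  # assumed filename
vals = train['Cat_4'].dropna().unique()
print(len(vals), vals.min(), vals.max())  # number of distinct values and their range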
Does the ordering or the exact numerical value have meaning, or are the integers simply being used as arbitrary labels?",0,None,2 ,Thu May 10 2012 05:02:38 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1884,/competitions/online-sales,29th /chipmonkey,Abandoned teams?,"Four of the top 20 leaderboard entries (plus #21 in fact) were posted back in 2011, if I read the leaderboard correctly... most of them just before the Milestone 1 cutoff. I can think of plenty of reasons that would legitimately happen, but I find it kind of surprising, to get that close so soon and then disappear? Where'd you guys go?! Or do I misunderstand the date stamp? Oh well. The forums have been slow lately so I thought I'd point it out.",0,None,13 ,Thu May 10 2012 05:58:08 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1885,/competitions/hhp,63rd /tafkas,More information about the variables,"Can you provide more information about the variables? For example, why are they grouped: AVar1 - AVar13, LVar1 - LVar239, FVar1 - FVar85?",0,None,7 ,Thu May 10 2012 16:20:57 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1886,/competitions/twitter-personality-prediction,None /skyline,Question(s) about Data,"Hello! Maybe you will think that I am crazy, but I thought that we had the following files: train.csv - data on which we should train our models; test.csv - the file we should use to predict the real output; AND optimized_value_benchmark, rf_benchmark, svm_benchmark, uniform_benchmark to use as outputs for the test set! But now, after reading some topics on this forum, I understand that we don't have any outputs for the test data! Is that right? We have only samples of how we should make our submissions, is that right? But I still don't understand what optimized_value_benchmark means, where it seems that all cases are the mean of svm_benchmark. So, the main question is ""How can we use these benchmark files?"" I really thought that I should use these files (for example, I used svm_benchmark) to understand the quality of my model before making a submission. Thank you for the answers and sorry for my English!",0,None,1 Comment,Thu May 10 2012 19:38:57 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1887,/competitions/bioresponse,354th /randomforestfanatic,Question about the process of ensemble learning,"This question may require a rather long explanation, so if someone could direct me to a reference that would be much appreciated as well! Anyways, I'm wondering about the accepted practices in ensemble learning. I just attempted to do what I thought would be a good approach for this problem: build many different models (logistic regression, elastic net, random forest, boosted trees, SVM; also using different values for the tuning parameters on these models) and determine their predictive accuracy using cross-validation (5-fold). I computed a logloss for each model (using the hold-out data sets), then built final models: new models (with the same tuning parameters) trained on the entire training set, which I used to predict on the test data set. My final prediction was a weighted average of these models (where the weights were proportional to 1/logloss of each model on the validation sets).
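(To make the weighting concrete, a minimal Python sketch of an inverse-logloss weighted average; the model names and data structures here are purely illustrative, not the poster's actual code:)

def blend_predictions(preds, cv_logloss):
    # preds[name] is a list of predicted probabilities on the test set;
    # cv_logloss[name] is that model's cross-validated log loss.
    weights = {name: 1.0 / cv_logloss[name] for name in preds}
    total = sum(weights.values())
    n = len(next(iter(preds.values())))
    return [sum(weights[name] * preds[name][i] for name in preds) / total
            for i in range(n)]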
I also tried combining these predictions using a random forest (trained on the entire training data set) and using that to then predict on the test data set. (Sorry if this doesn't make sense, I'd be glad to provide more details if I'm not explaining this well.) However, what surprised me is that my model didn't perform too well on the leaderboard; it didn't even beat the random forest benchmark. Am I doing something wrong in this process? Does anyone have a good reference on blending (that would be easy for a newbie like me to understand)? Thanks for the help!",7,bronze,15 ,Fri May 11 2012 16:36:37 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1889,/competitions/bioresponse,71st /intaka,Submission predictions start in column 1 or 2?,"The instructions on the Submission process page say: ''' Your entry must: be in CSV format (can be in a zip/gzip/rar/7z archive) have your prediction in column 2 have exactly 110,000 rows ''' Do the predictions start in column 1 or column 2?",0,None,5 ,Fri May 11 2012 19:25:32 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1890,/competitions/msdchallenge,17th /meanregression,Monthly sales values,"How were sales amounts rounded in the first 12 columns? For example, if the sales in Month 11 after release were: 300 for Product A, 2100 for Product B, and 2700 for Product C, how would Outcome_M11 read for these three rows?",0,None,1 Comment,Sat May 12 2012 07:58:53 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1894,/competitions/online-sales,170th /salimali,Date_2 seems odd,There are some products where there are nearly 8 years between the product being announced (date_2) and it being launched (date_1). Is this correct?,1,bronze,3 ,Sat May 12 2012 08:54:36 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1895,/competitions/online-sales,9th /meanregression,Duplicate columns?,"Quan_13 and Quan_14 are exactly the same. Is this expected behavior? Also, Quan_28 has 3 distinct values: 0, 1 and 2. Please confirm that this is a quantitative variable.",1,bronze,6 ,Sat May 12 2012 10:00:24 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1896,/competitions/online-sales,170th /harriken,Black Magic,"Hey Xavier Conort, aka Gxav, could you share with us how you always manage to get the top spot with only one (1) submission?",0,None,3 ,Sat May 12 2012 11:27:01 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1897,/competitions/online-sales,25th /boxu142789,Binary Variables,"Hi there, In the description, binary variables are defined as variables with values 0 and 1, representing having the feature and not having the feature respectively. However, in many of the columns, some variables are observed to have two distinct values such as 0 and x, where x is another integer value. Could these variables be interpreted as binary variables as well? Thanks",0,None,5 ,Sun May 13 2012 06:23:05 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1898,/competitions/online-sales,None /ccccat,Disappeared location,"Is there a specific reason you removed location from profiles? I do not think it is a good idea. I, personally, prefer to form teams with members who are geographically close to me.
Also it may be important information for potential employers.",0,None,8 ,Sun May 13 2012 17:01:06 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1899,None,None /skyline,Question(s) about final log loss output,"Hello! I have a question about the log loss error (output) and the data. When I want to know the error of my model, I take the svm_benchmark file and my model's output vector of 2501 numbers. Then I can obtain the log loss error from these two vectors with the evaluation formula. After that I make a submission and I get the same error (+- 0.01 - 0.03). I know that the error on the leaderboard uses only ~600 points of the 2501. Is all of this correct? And the second important question. As I understand it, we can't get an error less than ~0.519 with svm_benchmark, because if we take two svm_benchmark files and use one of them as the real probabilities and the other as the model output, we will get an error of ~0.519. So how can we get an error less than that, when even with MAE = 0 we have such a bad error? I understand that we don't use MAE here, but I have also tried to understand what number we must have in each row to minimize the log loss function, and I obtained the same numbers!! That means that MAE must be minimized first, but the error can't be less than ~0.519. How can that be?..",0,None,7 ,Sun May 13 2012 18:08:17 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1900,/competitions/bioresponse,354th /meanregression,Quant_28,"Please confirm that Quant_28 is quantitative. There are only three distinct values (0, 1 and 2). Edit: I assumed that 'Quant' is the same as 'Quan', that is, both are meant to represent quantitative variables. Is this assumption correct?",0,None,1 Comment,Sun May 13 2012 23:07:21 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1901,/competitions/online-sales,170th /chrissumner,Welcome,"We're excited to launch this competition, which is closely related to our first competition "" [Link]:http://www.kaggle.com/c/twitter-personality-prediction"" (which is still running). The aim of this competition is to determine the best model(s) to predict the personality trait of Psychopathy based on Twitter usage and linguistic inquiry. As an organization, one of our research goals is to understand just how well personality can be predicted by activity on social network sites such as Twitter. Following on from a paper titled "" [Link]:http://www.cbc.ca/fifth/37/episodes/murderhewrote/images/Hancock%20Woodworth%20&%20Porter%20(2011)Hungry%20Like%20The%20Wolf%20-%20The%20Language%20of%20the%20.pdf"" (Hancock et al, 2011), we saw a headline titled "" [Link]:http://www.dnaindia.com/world/report_can-twitter-help-expose-psychopath-killers-traits_1598342"" and sought to examine the question. We have conducted a statistical analysis and identified numerous significant relationships, but we now want to understand the performance of predictive models. You will likely find the comments in our other competition's [Link]:http://www.kaggle.com/c/twitter-personality-prediction/forums helpful. This is our first set of competitions and we expect to learn as much about framing data for data science competitions as we already have about Psychopathy. We appreciate that the obfuscated variables may be frustrating, and we're currently investigating that further. We will take comments on board and look to either improve the competition or release follow-on competitions.
/meanregression,Quant_28,"Please confirm that Quant_28 is quantitative. There are only three distinct values (0, 1 and 2). Edit: I assumed that 'Quant' is the same as 'Quan', that is, both are meant to represent quantitative variables. Is this assumption correct?",0,None,1 Comment,Sun May 13 2012 23:07:21 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1901,/competitions/online-sales,170th
/chrissumner,Welcome,"We're excited to launch this competition, which is closely related to our first competition "" [Link]:http://www.kaggle.com/c/twitter-personality-prediction"" (which is still running). The aim of this competition is to determine the best model(s) to predict the personality trait of Psychopathy based on Twitter usage and linguistic inquiry. As an organization, one of our research goals is to understand just how well personality can be predicted by activity on social network sites such as Twitter. Following on from a paper titled "" [Link]:http://www.cbc.ca/fifth/37/episodes/murderhewrote/images/Hancock%20Woodworth%20&%20Porter%20(2011)Hungry%20Like%20The%20Wolf%20-%20The%20Language%20of%20the%20.pdf"" (Hancock et al., 2011), we saw a headline titled "" [Link]:http://www.dnaindia.com/world/report_can-twitter-help-expose-psychopath-killers-traits_1598342"" and sought to examine the question. We have conducted a statistical analysis and identified numerous significant relationships, but we now want to understand the performance of predictive models. You will likely find the comments in our other competitions' [Link]:http://www.kaggle.com/c/twitter-personality-prediction/forums helpful. This is our first set of competitions and we expect to learn as much about framing data for data science competitions as we already have about Psychopathy. We appreciate that the obfuscated variables may be frustrating, and we're currently investigating that further. We will take comments on board and look to either improve the competition or release follow-on competitions. The results of this competition will be shared at [Link]:http://www.defcon.org/html/defcon-20/dc-20-index.html in Las Vegas this summer and used in a paper. The challenge with this competition is that it's an imbalanced data set, with a majority of people scoring low on the scale of Psychopathy (we expected this). Your participation in this research will help us highlight Online Privacy issues and help drive future important research on personality. Thank you, Chris",0,None,1 Comment,Mon May 14 2012 18:31:07 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1902,/competitions/twitter-psychopathy-prediction,None
/activegalaxy,Meta-Contest: Leaderboard Upper Bound,"Some of the Kaggle contests generate leader scores which, when plotted against time, show good correspondence with a curve of the form A - exp(a + b*t), where t is time and A, a, and b are free parameters. Example contests are the AWIC and ASAP (piecewise exponential). This contest appears to have the same quality, as is shown in the attached graph. Here the leading scores, not including the benchmark, and the model are shown. The vertical line is the current date relative to the start. This seems to indicate that the utility of the contest for the sponsor has been consumed, and all that is left is to make minor improvements and become the winner. This model gives an upper bound of A = 0.864. Would anyone else like to propose an upper bound for this contest? [Link]:https://storage.googleapis.com/kaggle-forum-message-attachments/2500/TwitPersonality.JPG",1,None,3 ,Mon May 14 2012 21:54:14 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1903,/competitions/twitter-personality-prediction,None
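For anyone who wants to reproduce that kind of extrapolation, a sketch of the fit with scipy; the leaderboard history below is made-up placeholder data, not the actual numbers behind the attached graph:

import numpy as np
from scipy.optimize import curve_fit

def bound_model(t, A, a, b):
    # For b < 0 the exponential term decays, so A is the asymptotic
    # upper bound that the leading score approaches as t grows.
    return A - np.exp(a + b * t)

t_days = np.array([1.0, 5, 10, 20, 40, 60])              # days since launch
scores = np.array([0.67, 0.70, 0.74, 0.79, 0.83, 0.85])  # best score so far
params, _ = curve_fit(bound_model, t_days, scores, p0=(0.86, -1.6, -0.05))
print("fitted upper bound A =", params[0])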
/utdiscant,Problem downloading the DescriptiveStats.pdf,"I have problems downloading the DescriptiveStats pdf from the front page of the competition. When I click the link, nothing happens.",0,None,1 Comment,Mon May 14 2012 22:59:13 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1904,/competitions/twitter-psychopathy-prediction,None
/benoitplante,Quantitative vs Categorical.,"Just to be sure I understand correctly. Quan3=100 is twice Quan3=50. Like, for example, $100 and $50, or 100 hours and 50 hours. Cat2=100 is different from Cat2=50, but there is no information about size. It could be 100=red and 50=blue. But if the product is blue it can't be red. Am I right?",0,None,5 ,Tue May 15 2012 03:40:24 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1905,/competitions/online-sales,8th
/fdyangdq,what will MAP be when a user in fact refused all recommendations?,"Who can tell me? When a user A in fact refused all recommendations (all items are -1) and my model also predicts such a result, what is the MAP for user A? Will it be 1? Or should I neglect such a user when computing the total MAP? Moreover, still for such a user A, is his MAP 0 if my model predicts that A accepts one or more recommendations? Thank you!",0,None,1 Comment,Tue May 15 2012 03:48:57 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1906,/competitions/kddcup2012-track1,148th
/alexxanderlarko,Evaluation,"My sample R code: err1 <- function(obs, pred) { tst <- data.frame(obs, pred); tst <- tst[order(tst$pred, decreasing = TRUE), ]; mean(cumsum(tst$obs) / cumsum(obs[order(obs, decreasing = TRUE)])) }",13,gold,5 ,Tue May 15 2012 05:57:25 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1908,/competitions/twitter-psychopathy-prediction,59th
/leustagos,Same members on many teams,"Kaggle guys, can you check if there are distinct teams belonging to the same users? It's very odd that some teams always improve their scores in blocks, always near each other... Thanks a lot!",0,None,3 ,Tue May 15 2012 17:08:34 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1909,/competitions/kddcup2012-track2,5th
/silvia3,access to the database,"I can't download the files needed for this work: neither kaggle_user, nor kaggle_songs, nor kaggle_visible_evaluation_triplets. It takes me directly to changing or editing my profile.",0,None,1 Comment,Tue May 15 2012 18:58:12 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1911,/competitions/msdchallenge,None
/alphaxomega,Matlab data,"Hi, According to [Link]:http://labrosa.ee.columbia.edu/millionsong/pages/getting-dataset, do I need to massage the existing SQL data into Matlab, or is that already done? I simply want to avoid running scripts for hours and then finding out there already exist Matlab arrays to index into. The song-level similarity gives some score; the lyrics bag-of-words provided by [Link]:http://labrosa.ee.columbia.edu/millionsong/musixmatch doesn't do that. Are there existing ""scores"" for that somewhere?",0,None,1 Comment,Tue May 15 2012 21:00:24 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1912,/competitions/msdchallenge,None
/masilvav,Timeout,"Hi. Recently I've tried to make a submission, but I had a timeout error... I'm on a bad internet connection, I know. Although I couldn't upload the file, my try was counted as my second entry of the day, and now I cannot upload any file for 8 hours :( Any suggestion? PS: I think that if you can't upload, Kaggle must not count it as a try.",0,None,2 ,Wed May 16 2012 18:12:11 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1915,/competitions/twitter-psychopathy-prediction,42nd
/sedielem,Order of users in the taste profile subset,"The (user, song, count) triplets in the taste profile subset text file are ordered by user, but I was wondering if there is any other regularity in their ordering. I rather arbitrarily split this dataset into a training and evaluation set to get a rough idea of how solutions will perform before submitting them. Evaluating the getting started submission (songs by popularity) on this split gives a considerably lower mAP value (~0.001) than the one displayed on the leaderboard (~0.023). I also implemented the 'same artist - greatest hits' baseline from the paper. The results for this are much more comparable: ~0.061 on my split, and ~0.065 on the leaderboard.
The latter seems to indicate that my mAP calculation code is okay (no serious flaws at least), so the result for the 'songs by popularity' baseline is somewhat surprising. This led me to think that perhaps the order of users in the taste profile subset is not random. Is this the case? It's as if the users that I split off have particularly 'alternative' tastes (i.e. they don't listen to popular songs).",0,None,4 ,Wed May 16 2012 19:21:37 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1916,/competitions/msdchallenge,21st
/mattfrancis,Big gap at the top,"The top 4 teams have a clear lead over the rest of the pack. There must be some similar trick they have all discovered that leaps them over the barrier at ~0.413 that the teams below them are trapped at. I wonder what that is? There's still a long way to go, but if this was reflected in the final sprint I'd feel sorry for the 1 team out of 4 that misses out on the moolah, given how stark the difference in scores is between the breakaway group and the peloton.",1,None,20 ,Thu May 17 2012 23:50:08 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1917,/competitions/bioresponse,35th
/milestone,why not use zero-one loss (or hinge loss)?,"Zero-one loss should be the most natural loss function for the general classification problem, while the log loss function is associated with the generalized linear (logistic) model. Is there any particular reason that log loss is used here?",0,None,4 ,Fri May 18 2012 00:49:28 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1918,/competitions/bioresponse,None
/dslate,Timed-out submissions,"Recently I made a couple of submissions that, after some period of time, received a message from the server that the submission timed out and I should resubmit it. In the first case I did, but ended up with duplicate submissions and an extra charge to my daily quota. The second time I noticed that the submission showed up with the message ""Pending"", and after some more time it received a score, so no resubmission was necessary. It would make more sense for the server to delete the submission after issuing the time-out message, to avoid duplicate submissions. The time-out message suggests that the submission really was lost, and not just waiting for a score.",0,None,4 ,Fri May 18 2012 09:49:13 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1921,/competitions/kddcup2012-track2,27th
/brucewu,How to generate the validation set?,"I'm so confused. I split the train set according to time information (GMT+8), and reluctantly used RMSE (the result in the validation set is -1 or 1) to evaluate the model.
How to extract the validation set from rec_log_train.txt?",0,None,5 ,Fri May 18 2012 14:39:36 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1923,/competitions/kddcup2012-track1,114th
/iguyon,10 Free Kinects to new challenge entrants,"Dear gesture enthusiasts, If you did NOT enter round 1 of the gesture challenge, enter round 2 and win one of 10 free Kinects offered by Microsoft. Be among the first 10 NEW people who make an entry that outperforms, on validation data, the benchmark entry (for which we provide the Matlab code): Principal Motion 0.39912 (see http://www.kaggle.com/c/GestureChallenge2/Leaderboard). Claim your free Kinect by sending a screenshot of the leaderboard identifying your winning entry to: events@chalearn.org Good luck! Isabelle",1,bronze,3 ,Fri May 18 2012 18:11:22 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1924,/competitions/GestureChallenge2,None
/qhfgva," ""average precision"" algorithm [noob alert]","So is it assumed that I should create my own ""average precision"" calculator so that I can track my progress before submitting? I'd rather avoid submitting every little tweak. Obviously, if there are libraries (R, Octave, Python) I'd just as soon use one of those. dustin",1,None,2 ,Fri May 18 2012 22:40:45 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1926,/competitions/twitter-personality-prediction,None
/bobdeaton,Submission format,"Does a submission consist of just the myID column, sorted by the computed psychopathy score in descending order?",0,None,4 ,Sun May 20 2012 22:03:17 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1929,/competitions/twitter-psychopathy-prediction,77th
/bogdan90,Submission error,"Hello, When I try to submit the solution, I get the following error: ""Submission must contain a minimum of 12 columns (Line 1)"". I always get that error, even though I have 519 rows x 13 columns, and I have tried to submit a .csv, a .zip archive containing the .csv, and even a .zip containing a text file, and I still can't get it to work. Can someone please help? Thanks, Bogdan90",0,None,1 Comment,Sun May 20 2012 22:37:30 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1930,/competitions/online-sales,359th
/mattfrancis,More Humane display of scores,"Maybe it's just me, but I find it difficult to conceptualise leaderboard scores easily, since they are such small numbers. When someone posts in the forums about obtaining a '0.01' improvement by some method, my brain keeps telling me that is tiny and insignificant, when of course in general it is not. I've almost managed to train my brain to pre-process anything related to a leaderboard score by multiplying by 1000; then things become more comprehensible somehow. Does anyone else think the leaderboards would be easier to digest if the displayed values were already multiplied in this way?",0,None,1 Comment,Mon May 21 2012 02:14:09 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1931,None,None
/mikel1,Lessons so far ....,Here's what I have discovered .... When predicting which songs a user downloaded .... 1. Users with many downloads are less helpful 2. Songs with many downloads are less helpful 3. Clustering is less helpful 4.
Nearest neighbors can be helpful provided that the neighbors are really near (at least two downloaded songs in common),2,bronze,2 ,Mon May 21 2012 04:50:11 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1932,/competitions/msdchallenge,14th
/glider,Multiple Accounts Frozen,"If you received an email last week regarding a KDD account that has been flagged as a multiple account and did not respond to us within the grace period, your account has now been frozen. If you believe we have made an error in freezing your account, please contact [Link]:mailto:compliance@kaggle.com.",1,bronze,2 ,Mon May 21 2012 18:48:07 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1935,/competitions/kddcup2012-track1,None
/kubokonecny,Bug in databatch.m,"Hi, I am learning to work with the sample code you have provided, and I believe I found a bug. It is in the databatch.m file, in function goto(this, num) (line 131). The problem is in the condition: if num~=this.current_index && num>0 && num... It causes problems when databatch.invidx isn't the same as databatch.subidx. For example, when you send only the test set for testing, like you do in main.m. In this case, it doesn't evaluate the first video of the test set, but evaluates the first video of the train set instead, because current_index isn't changed from its initial value in the first loop. I think the right condition is: if this.current_index~=this.subidx(num) && num>0 && num... Your score using the Principal Motion algorithm is 0.39912. I scored 0.37866 only thanks to the change in this condition.",0,None,2 ,Mon May 21 2012 20:43:49 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1937,/competitions/GestureChallenge2,22nd
/ruandaffue,Heritage Algorithm vs Existing Algorithms: How do they differ?,"Hi there, I've been doing some extensive research on the potential value of conceptually applying the Heritage algorithm in a healthcare provider setting (hospitals, etc.). From a data analytics perspective, the scope for developing this algorithm undoubtedly exists. However, various similar algorithms (with the purpose of identifying patients at risk of admission or preventing 'unnecessary hospitalizations') have been developed and implemented across the globe for a number of years now. Because some of these algorithms are disease-based, they tend to have much higher accuracies than the Heritage 'accuracy threshold' of 0.4. How does the Heritage algorithm then differ, if at all, from these algorithms? Regards",0,None,4 ,Mon May 21 2012 23:16:23 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1938,/competitions/hhp,None
/salimali,how random is the leaderboard sample?,Does the leaderboard select a random sample of rows and then use all 12 months for those rows in calculating the leaderboard score? Or does it just sample points completely randomly?,0,None,4 ,Wed May 23 2012 10:55:13 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1943,/competitions/online-sales,9th
/sidav40256,How to select my best 5 results as the final ones?,"Hi, According to the following info, I don't know how to choose my best 5 results as the final ones. Is there some functionality of Kaggle that we can use to do this? Thank you very much!
---------------------------------- Note: You can select up to 5 submissions that will be used to calculate your final leaderboard score. If you do not select them, up to 5 entries will be chosen for you based on your most recent submissions.",0,None,3 ,Wed May 23 2012 12:08:37 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1944,/competitions/kddcup2012-track2,60th
/yellowbelly,Evaluation of RMSLE,"The submission is given in the form of a table consisting of 519 observations of 13 variables (one id and 12 monthly values). Is it correct to assume that n in the evaluation formula is 6228 (519 * 12) in the final evaluation, i.e. every single cell is counted? Or is the evaluation done column- or row-wise, i.e. n is equal to 12 or 519? Regards",0,None,3 ,Wed May 23 2012 14:59:21 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1945,/competitions/online-sales,None
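Assuming the per-cell reading of the metric above is the right one (n = 519 * 12 = 6228), the computation is straightforward; the sketch below also bears on the "what is the point of the logarithms" question a few posts down — the log1p damps the contribution of errors on large sales figures:

import numpy as np

def rmsle(predicted, actual):
    # Root mean squared logarithmic error over every (product, month) cell;
    # flattening a 519 x 12 matrix of predictions gives n = 6228 terms.
    predicted = np.asarray(predicted, dtype=float).ravel()
    actual = np.asarray(actual, dtype=float).ravel()
    return np.sqrt(np.mean((np.log1p(predicted) - np.log1p(actual)) ** 2))

# Being off by 1000 units matters far less on a big month than a small one:
print(rmsle([101000], [100000]))  # ~0.01
print(rmsle([3000], [2000]))      # ~0.41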
/shaka43699,all rows in sample_submission_using_training_column_mean file are identical,"Two questions: 1) Why are all rows in the sample_submission_using_training_column_mean file identical? 2) Why does the filename contain 'mean'? Does it indicate something?",0,None,4 ,Wed May 23 2012 16:21:36 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1946,/competitions/online-sales,None
/smartersoft,What year did this data start?,"What year did this data start? I'm not sure I've seen any reference to the real time period this data was collected over. For example, is Y1 = 2005 for all members? Or is it multiple vintages of data? (i.e. Y1 can be 2005 or 2006 or 1999, etc.) Thanks. EDIT: The same question applies to DSFS. I think I already know the answer but would like confirmation.",0,None,4 ,Wed May 23 2012 20:01:51 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1947,/competitions/hhp,796th
/benoitplante,Accepted tools,"In the Rules section, it says: ""the participants will be asked to certify that they use a Kinect sensor and the Microsoft Software Development Kit (SDK)"" and ""They will need to use a Microsoft Kinect sensor (to the exclusion of any other sensor) and the Microsoft officially released Software Development Kit (to the exclusion of any other driver software)."" So we are obligated to develop our program using the Microsoft SDK? Why is there sample code done with Matlab?",0,None,2 ,Wed May 23 2012 21:44:40 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1948,/competitions/GestureChallenge2,29th
/smartersoft,What am I missing? Why so many submissions?,"As far as I can tell, people/teams that continuously submit are just randomizing their input variables slightly and randomly changing weights. For the life of me I can't figure out how this would help HPN achieve its goal. As soon as the actual target changes, all top-ranking submissions would be useless. Is this just a game of submitting as many random variations as you can and hoping you are the closest? Am I missing something? Why would anyone submit more than a few entries?",0,None,14 ,Thu May 24 2012 16:52:32 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1949,/competitions/hhp,796th
/dfg125630,Date's format,"When do the Date_1 and Date_2 day numbers start from? 01/01/1970? Thanks",0,None,1 Comment,Thu May 24 2012 22:50:04 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1950,/competitions/online-sales,None
/mlandkdd,What are the rules for finalizing team composition?,"Dear organizers, The official competition rules state that ""One week before the end of the challenge, the team leaders will have to declare the composition of their teams."" We assume that this means the team composition cannot be changed after 5/25/2012. Is that right? If not, what is allowed --- creating accounts, adding members, removing members, merging with other teams, ...? By the way, what is the exact rule for team merging? There seems to be a fine line between team merging and using multiple accounts. Thank you.",0,None,3 ,Thu May 24 2012 23:46:50 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1951,/competitions/kddcup2012-track2,None
/allenict,what's the submission format?,"Hi, I don't know the format. Is it a double between 0 and 1, or just 0/1? Thanks!",0,None,2 ,Fri May 25 2012 08:53:59 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1953,/competitions/kddcup2012-track2,142nd
/ericmichael,Evaluation algorithm generated an invalid score.,"I have already made successful submissions. When I use Gradient Boosting on this data set, however, and submit the results, I get this error: ""Evaluation algorithm generated an invalid score."" Can anyone give me some insight as to why this might be occurring?",0,None,1 Comment,Fri May 25 2012 22:08:29 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1955,/competitions/online-sales,142nd
/benoitplante,RMSLE ,What is the point of the logarithms in the error function? You want to make big errors less important?,0,None,2 ,Sat May 26 2012 03:00:30 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1956,/competitions/online-sales,8th
/allankamau,Duplicate user_id+item_id records having different results in rec_log_train,"This could be very late to ask, but why are there different recommendation follow-up results for the same user_id and item_id? Yes, these records may have different timestamps, but the timestamp in our case is not useful, as I understand the true ranking is based on recommendation records (in rec_log_test) having a timestamp larger than those found in the rec_log_train dataset.",0,None,1 Comment,Sat May 26 2012 09:41:30 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1957,/competitions/kddcup2012-track1,644th
/salimali,cross validation errors,Is anybody getting cross validation errors way different from the leaderboard scores? I got a cv error of 0.50899 but only achieved 0.61975 on the leaderboard. Have I done something wrong or are others finding the same?,0,None,14 ,Sat May 26 2012 11:31:45 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1958,/competitions/online-sales,9th
/muyu1984,About the entities in item.txt,"Hello everyone and administrators, I have a question about the entities in item.txt. Are the items chosen by human beings? And are the categories of the items assigned by persons? Or is this work done by machine?
Thanks!",0,None,2 ,Sun May 27 2012 12:36:26 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1960,/competitions/kddcup2012-track1,None /gkoundry,Validation scores vs. the Leaderboard,"I'm finding that the scores I'm getting by cross validation are not really correlating to what I'm getting from the leaderboard. I've been in enough competitions on Kaggle to know that there is always some variance between validation sets and the test set, but in general lower validation scores translate into an improvement on test set scores. With this competition, it seems that my validation scores have no realtion at all to test set scores. Which means I have to choose between ignoring the leaderboard and trusting in cross validation or trying to blindly develop algorithms. I'm assuming that this is mainly due to the small size of the test set (the leaderboard only represents 30% of 1172 records). Another factor seems to be the non-continuous nature of the scoring metric (average precision) where small changes in prediction algorithms result in large changes in score. If I'm not mistaken, when this contest ends there is going to be a considerable reshuffling of leaderboard rankings. Are other people having the same problem? Is anyone else approaching this competition differently from past ones?",0,None,4 ,Sun May 27 2012 16:27:35 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1961,/competitions/twitter-psychopathy-prediction,52nd /chaosdecoded,"I have some questions regarding Licensing here, please.","Hello I am new to Kaggle, I would like to try myself in this challenge, but I would like to know some more about the rules. "" By accepting an Award, each Winner agrees to grant a worldwide, perpetual, irrevocable and royalty free license to the Competition Host to use the winning Entry and any Model used or consulted by the Winner in generating the winning Entry in any way the Competition Host thinks fit. This license will be non-exclusive."" So let's say I will use SVM combined with some tweaks (all using R project and it's packages) and some additional my code (some tweaks to data pre processing etc) , and let's say I will win the contest... I now have to give the code to the Host and explain what and how I did it to win ? Is this right ? So, let's say I'd like to take part in another challenge here at Kaggle, and let's say I will be able to use almost same code, same tweaks, SVM and r Project packages to win the other one ? Can I share the code and method with the seconda Challenge Host ? Can I use the code and methods I developed in first challenge in the second challenge ? What if 10 years from now I have my company, that somehow will be looking at the biological responses to different substances, And let's say this company would use again same code with some modifications ? Would this legal ? Thank you for your advices, Does anybody has any experience in situations like these ?",0,None,6 ,Sun May 27 2012 18:35:07 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1962,/competitions/bioresponse,228th /esla43459,Please visit our data-mining blog!,"Hi Everybody, This summer, I am teaching a data-mining class. In the class, we cover basic techniques in data mining. Students, as part of class homework, should submit their prediction for HHP. We try to publish our underestanding of the problem in our blog. 
Please visit us at: http://machinelearningsummer.blogspot.com/ http://machinelearningsummer.blogspot.com/2012/05/course-description.html Kind Regards, Esla",0,None,7 ,Sun May 27 2012 20:15:44 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1963,/competitions/hhp,1344th
/weilinear,3rd party library/packages/code,"Can we use code publicly available on the web for extracting features? Many features need to be extracted in the way described in some published papers, or using some libraries like OpenCV. Can we use those to build the prototype for participating in this competition? As far as I know, some of the features are patented even though code is available. I am not sure how Kaggle will restrict the use of third-party code. Best Regards,",0,None,1 Comment,Mon May 28 2012 04:32:37 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1964,/competitions/GestureChallenge2,10th
/bradybenware,Handling of categorical attribute data,"This is likely a dumb question, but I guess I won't learn without asking a few of them... I am experimenting with the sklearn RandomForestClassifier and some basic feature engineering. My first attempt at feature engineering was to find all the columns in the training data set that appear to take on a small set of discrete values and create a new attribute (0,1) for each of the unique values. My thought was that this looked like categorical data and really should not be treated as a continuous variable. So for example there is a column with values (0, 0.125, 0.25, ..., 0.875, 1). Perhaps 0.125 is really not any closer to 0.25 than it is to 0.875. So, in my modified data set I keep the original column and then create 9 new columns. The second new column, for example, has a value of '1' when the original column value is equal to 0.125, and '0' otherwise. I repeated this procedure for all columns which had 10 or fewer unique values. This increased the number of features to over 5000. With this approach I am seeing about a 0.004 difference on the leaderboard. However, I suppose it is possible that this increase comes just from different input conditions and is not at all related to having these new features. I have a few questions: 1. Does the sklearn RandomForestClassifier already handle categorical data, such that my efforts here are silly? 2. Is anyone else experimenting with this and seeing significantly different results? 3. Are there other ways to deal with categorical data? I was thinking that a Gray code binary representation might be interesting. Thanks!",1,None,5 ,Mon May 28 2012 17:57:47 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1966,/competitions/bioresponse,50th
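On question 1 above: scikit-learn's tree models treat every feature as an ordered numeric value (splits are thresholds), so they do not handle categorical columns natively, and the indicator expansion is not silly. A pandas sketch of the expansion described in the post; the 10-level cutoff mirrors the post, and the column names are whatever the training csv provides:

import pandas as pd

def expand_low_cardinality(df, max_levels=10):
    # Keep each original column and append one 0/1 indicator per distinct
    # value of every column with at most max_levels unique values.
    out = df.copy()
    for col in df.columns:
        if df[col].nunique() <= max_levels:
            dummies = pd.get_dummies(df[col], prefix=col).astype(int)
            out = pd.concat([out, dummies], axis=1)
    return out

# Hypothetical usage, assuming 'Activity' is the response column:
# train = pd.read_csv('train.csv')
# expanded = expand_low_cardinality(train.drop(columns=['Activity']))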
Logistic regression: D302, D823, D78, D603, D571, D469, D721, D41, D783, D490, D632, D144, D3, D720, D872, D649, D158, D665, D187, D87, D182, D88, D951, D631, D739, D596, D647, D172, D747, D688, D129, D607, D889, D666, D506, D504, D60, D681, D217, D715, D141, D480, D912, D269, D175, D789, D237, D154, D855, D10, D74, D64, D14, D660, D286, D856, D595, D96, D803, D310, D367, D85, D913, D231, D403, D81, D844, D489, D276, D440, D900, D49, D69, D806, D21, D663, D659, D311, D690, D928, D100, D901, D164, D334, D839, D479, D73, D155, D146, D762, D580, D19, D35, D436, D885, D664, D149, D414, D133, D887 Random Forest: D27, D14, D66, D10, D106, D469, D88, D5, D87, D7, D18, D2, D16, D107, D95, D9, D78, D21, D64, D8, D103, D91, D20, D84, D89, D104, D6, D86, D19, D105, D177, D17, D200, D15, D26, D951, D747, D911, D146, D74, D182, D196, D32, D100, D61, D99, D56, D102, D47, D204, D217, D660, D67, D25, D659, D187, D30, D90, D158, D31, D175, D69, D75, D173, D11, D46, D80, D131, D48, D209, D70, D208, D201, D181, D739, D45, D76, D198, D101, D218, D126, D60, D152, D607, D71, D43, D38, D83, D202, D62, D207, D49, D3, D39, D596, D65, D180, D68, D55, D54,",4,bronze,6 ,Tue May 29 2012 20:05:23 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1969,/competitions/bioresponse,92nd
/medial,Final score - how to submit ?,"Hi, I would be happy to have a clear and exact explanation of how and when one should submit the file for the rest of the test set (on which the final score is calculated). Thanks",0,None,3 ,Tue May 29 2012 21:48:29 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1972,/competitions/kddcup2012-track1,6th
/xiaocongliang,Is it still possible to submit on June 1?,"Could someone tell us whether it ends before the midnight between May 31 and June 1, or between June 1 and June 2? Thanks.",2,bronze,2 ,Wed May 30 2012 07:58:11 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1973,/competitions/kddcup2012-track1,3rd
/datalev,RF and boost tree,"Has anyone obtained an individual model that can reach <0.425 by using RF or boosted trees with selected raw variables only? How many variables should be put into an RF with only 3700 records? It seems that without feature engineering it's difficult to reach the top 20.",0,None,17 ,Wed May 30 2012 16:57:05 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1974,/competitions/bioresponse,125th
/char144275,How to delete a member in a team?,Can anybody tell me how to delete a member from a team?,0,None,7 ,Thu May 31 2012 04:52:27 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1975,/competitions/kddcup2012-track2,3rd
/imran2,Anyone interested in joining teams ?,"My current score (0.414) comes from creating an ensemble of weak learners, so I'd be particularly interested in partnering with someone who has a strong individual learner or a significant performance gain from feature engineering. I've attached a copy of my cross-validation outputs from the training set, so people can try combining my results with theirs and see if they can generate improved results.
Email me if you're interested (imranghory@gmail.com) [Link]:https://storage.googleapis.com/kaggle-forum-message-attachments/2535/crossvalidation.csv",4,bronze,6 ,Thu May 31 2012 18:41:25 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1976,/competitions/bioresponse,7th
/wfczcsbuaa,"still want the results scored, is it possible?","Hi, I was not able to submit my results on time, but I would still like them scored, not for the leaderboard. Is this possible?",0,None,2 ,Fri Jun 01 2012 08:14:50 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1977,/competitions/kddcup2012-track2,None
/tqchen,Delete the Duplicated Teams,"I am reposting this here so that it gets more attention. Currently there are many teams on the leaderboard which are duplicated. For example: team [Link]:https://www.kddcup2012.org/c/kddcup2012-track1/leaderboard#0.42784 with a user named ""tired"" which is closed. These accounts may be multiply registered accounts closed by their owners. Cleaning them from the leaderboard would help other players know their current position clearly, and it's very important.",1,bronze,1 Comment,Fri Jun 01 2012 13:02:17 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1980,/competitions/kddcup2012-track1,1st
/josephinazx,"When the competition is over, can anybody tell how to achieve a score larger than 0.42?","Steffen said that he used a single FM model to achieve a score of 0.41509 on track 1. I wonder how to do that. What does your best single model score on the leaderboard? What is the most useful information, or which are the most useful rules, in the dataset?",0,None,6 ,Fri Jun 01 2012 17:32:59 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1982,/competitions/kddcup2012-track1,45th
/speedup,Submissions last time,"Hello, My team made its first submission today, Fri, 01 Jun 2012 23:49:26, but we can't see our team's name in the results... The website says it is still calculating our score; does that mean we will appear afterwards? It has kept calculating for a long time now... Should we be worried?",0,None,2 ,Sat Jun 02 2012 02:28:49 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1984,/competitions/kddcup2012-track2,153rd
/kddqiang,A Clean Leaderboard?,"Some teams have already closed their accounts; when will the leaderboards be cleaned?",0,None,4 ,Sat Jun 02 2012 05:09:39 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1985,/competitions/kddcup2012-track1,13th
/eraviv,leaderboard results,"Hi all, I am a bit worried.. :) The leaderboard results.. can anyone explain how these are calculated? Is it some kind of ratio? Or do their methods indeed manage to pinpoint the sales with such a small margin.. Regards",0,None,1 Comment,Sat Jun 02 2012 15:42:10 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1986,/competitions/online-sales,139th
/chaosdecoded,What are the other files ?,"Besides test.csv and train.csv, there are a couple of other files in the Data ""section"".
What are they?",0,None,1 Comment,Sun Jun 03 2012 04:44:02 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1987,/competitions/bioresponse,228th
/chaosdecoded,Question about the Submission and test data.,Are our submissions evaluated against categories 0/1 or probabilities (range 0-1)?,0,None,1 Comment,Sun Jun 03 2012 05:00:40 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1988,/competitions/bioresponse,228th
/iotcasc,Will the true dataset be released?,"I want to know whether the true dataset for the test will be released; we hope to do more research on Track 1.",0,None,1 Comment,Mon Jun 04 2012 10:22:04 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1989,/competitions/kddcup2012-track1,94th
/lamkelf,Final Submission question,"A newbie question. The final standing will be based on 75% of the test set. Do we submit our own code and let Kaggle run it for us, or will we be given the data a day or two prior to June 15 to let us make our ultimate submissions? If it is in the FAQ please let me know. Big thanks!!",0,None,2 ,Mon Jun 04 2012 14:48:27 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1990,/competitions/bioresponse,43rd
/benhamner,Welcome,"We are very excited to launch our newest product, Kaggle Recruiting Competitions, with Facebook! For this competition, you may demonstrate your machine learning skills by recommending users to follow in an anonymized, directed social graph. Facebook will review the code and methods that the top participants use and offer interviews to the best ones. Kaggle's been very successful through hiring some of our top participants, and we wanted to extend this capability to other organizations seeking top data scientists. We hope you have fun with this new addition to the hiring process, where you may succeed based on your skills and abilities instead of how polished your resume is.
Please let us know if you have any questions about the nature of the competition or the data!",0,None,10 ,Tue Jun 05 2012 22:57:48 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1991,/competitions/FacebookRecruiting,367th
/jscheibel,a little clarity,"I'm new to this style of scoring so I want to make sure. (1) If among the 5 items recommended to the user, the user clicked #1, #3, #4, then ap@3 = (1/1 + 2/3)/3 ≈ 0.56. (2) If among the 4 items recommended to the user, the user clicked #1, #2, #4, then ap@3 = (1/1 + 2/2)/3 ≈ 0.67. (3) If among the 3 items recommended to the user, the user clicked #1, #3, then ap@3 = (1/1 + 2/3)/2 ≈ 0.83. Shouldn't that be: (1) If among the 5 items recommended to the user, the user clicked #1, #3, #4, then ap@3 = (1/1 + 2/3 + 3/4)/3 ≈ 0.81. (2) If among the 4 items recommended to the user, the user clicked #1, #2, #4, then ap@3 = (1/1 + 2/2 + 3/4)/3 ≈ 0.91. (3) If among the 3 items recommended to the user, the user clicked #1, #3, then ap@3 = (1/1 + 2/3)/2 ≈ 0.83. *edit* (And if not, could someone explain why the last term was dropped in the first 2 but not in the 3rd? Again, I'm just new to this style of scoring and was looking for clarity.)",0,None,1 Comment,Tue Jun 05 2012 23:43:35 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1992,/competitions/FacebookRecruiting,None
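A likely answer to the ap@3 question above: positions beyond the cutoff k are simply not scored, so in examples (1) and (2) the click at position #4 falls outside the top 3 and its term never appears, while in (3) all clicks are within the top 3. A small Python reference implementation consistent with those worked examples (items are arbitrary ids; 'actual' is the set of truly clicked items):

def apk(actual, predicted, k=3):
    # Average precision at k: only the first k predictions are scored, and
    # each correct one contributes (correct so far) / (1-based position).
    predicted = predicted[:k]
    score, hits = 0.0, 0
    for i, p in enumerate(predicted):
        if p in actual and p not in predicted[:i]:
            hits += 1
            score += hits / (i + 1.0)
    return score / min(len(actual), k) if actual else 0.0

# Example (1): clicks at positions 1, 3 and 4 of five recommendations;
# position 4 is cut off by k=3, leaving (1/1 + 2/3)/3.
print(apk({'a', 'c', 'd'}, ['a', 'b', 'c', 'x', 'y']))  # ~0.56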
/weiwunyc,Submissions page does not show the scores ,"On my ""Submissions"" page, the scores for each of the submissions are not displayed. I don't know if it is just a problem with my account or if other people have the same problem. I am participating in another competition, and the submissions page for that competition is normal, showing the scores for all the past submissions.",0,None,2 ,Wed Jun 06 2012 00:10:14 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1993,/competitions/bioresponse,53rd
/jorgenhorstink,social graph dates,"Hi, Is it possible to get the dates when the nodes joined the network, and the date when each directed edge was created? It would be helpful to understand the state of the network over time and when edges were created. This is helpful for determining how strong a connection is. Time is a very relevant factor. cheers, Jorgen",0,None,2 ,Wed Jun 06 2012 00:49:51 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1994,/competitions/FacebookRecruiting,None
/stevenhnoble,Any more information on how the edges were removed?,"Can you share at all how the edges were removed, or how many were removed? This would be really useful for generating data for a fitness test.",0,None,2 ,Wed Jun 06 2012 02:37:21 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1995,/competitions/FacebookRecruiting,None
/waynezhang,cross validation,"I have no idea how to conduct cross validation for this task. Deleting more edges? Can anyone give some hints? Thanks.",0,None,3 ,Wed Jun 06 2012 04:27:29 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1996,/competitions/FacebookRecruiting,None
/raviteja,SAS/SQL code for c value or AUC of ROC ,"This code can be used for calculating the AUC of the ROC (the c value) in SAS: /* dataset sasuser.validation with predicted response */ proc rank data=sasuser.validation out=rank_out; var Predicted_response; /* predicted response variable */ ranks Pr_rank; /* rank variable */ run; proc sql; select SUM(response=1) as tv, (SUM(Pr_rank*(response=1)) - .5*(calculated tv)*(calculated tv+1)) / ((calculated tv)*(COUNT(response)-(calculated tv))) as auc from rank_out; quit;",0,None,2 ,Wed Jun 06 2012 12:16:28 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/1999,None,None
/ajschumacher,Good first step: make graph more commutative,"Hi! So, this graph is directed, meaning you can have A -> B without having B -> A. For kicks, I did a solution where my only predictions were those that made the graph more commutative. That is, if in the training set A -> B but not B -> A, and I had to predict for B, I predicted A. When there were more than ten such possible predictions, I just took the first ten that happened to be in the list. For some entries in the test set, I didn't make any prediction at all. I was wondering if I would get a score of zero, since this is such a simple technique and maybe the data was designed so that all good predictions didn't exist going either direction in the training set. But instead I got a score of 0.61523, which is currently good enough for third place. So: I suggest as a first step just introducing all the missing reciprocal edges that you can - then work on making a smart algorithm for your remaining guesses. No more low scores!",22,gold,10 ,Wed Jun 06 2012 20:56:59 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/2000,/competitions/FacebookRecruiting,199th
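A sketch of that reciprocal-edge heuristic in Python. The file layout is assumed (train.csv with a source_node,destination_node header, test.csv with one source_node per row, and submission rows of a source_node followed by space-separated destination_nodes) — adjust if the actual headers differ:

import csv
from collections import defaultdict

follows = defaultdict(set)    # node -> set of nodes it follows
followers = defaultdict(set)  # node -> set of nodes following it

with open('train.csv') as f:
    reader = csv.reader(f)
    next(reader)  # skip header
    for src, dst in reader:
        follows[src].add(dst)
        followers[dst].add(src)

with open('test.csv') as f, open('submission.csv', 'w', newline='') as out:
    reader, writer = csv.reader(f), csv.writer(out)
    next(reader)  # skip header
    writer.writerow(['source_node', 'destination_nodes'])
    for (src,) in reader:
        # followers that src does not follow back; first ten, arbitrary order
        candidates = [n for n in followers[src] if n not in follows[src]][:10]
        writer.writerow([src, ' '.join(candidates)])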
/ajschumacher,Alternate explanation of Mean Average Precision,"The provided [Link]:http://www.kaggle.com/c/FacebookRecruiting/details/Evaluation is fine and all, but I will explain it differently here. (I would love to hear any improvements/corrections!) Consider one row of test.csv. You have a source_node, and you have to predict an ordered list of up to ten destination_nodes. Facebook and Kaggle know, for that source_node, some number of correct destination_nodes. For this row (one list of predictions) we do a sum and then divide to standardize. For every prediction, is it correct? If it isn't correct, you get no points for that prediction. If it is correct, you get a number of points equal to the number of correct predictions up to and including this one, divided by the position of this prediction in the list. For example: prediction 1 wrong (no points); 2 right (1/2); 3 right (2/3); 4 wrong; 5 right (3/5); 6 wrong; 7 wrong; 8 wrong; 9 right (4/9); 10 wrong. Note that order matters; if the correct answers in this example had been in positions 1, 2, 3, and 4, the sum at this point would be 4. The number you divide by is the number of points possible. This is the lesser of ten (the most you can predict) and the number of actual correct answers that exist. For this source_node, maybe there are only four possible correct predictions. Then divide by four. This makes the maximum possible average precision for every line of the test set equal to one. So average precision is the ""average"" of the ""precision"" at every position in the list of predictions. (Oh - and if there are no correct answers or you make no predictions, then the average precision is just zero.) Is it bad to submit a lot of predictions for every test source_node? No. There is no harm in using all ten guesses per test source_node, even if that source_node has fewer than ten correct destination_nodes. However, since order matters, it is important to put your best guesses first. Does every test source_node have correct destination_nodes? It seems likely, but it is nowhere promised. It doesn't particularly matter, but if there are no correct answers, then everybody gets a zero for that row. All the test source_nodes at least exist somewhere in the training data, but not necessarily as source_nodes. Does order really matter? Okay: if all your predictions are correct, then order doesn't matter. The only bad thing is having an incorrect prediction before a correct prediction. The ""Mean"" in ""Mean Average Precision"" is just how all the individual (per test data set row) average precision scores get combined. Mean means mean. More thoughts? Also: my code to implement this: [Link]:https://gist.github.com/2891017",54,silver,8 ,Wed Jun 06 2012 22:06:50 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/2002,/competitions/FacebookRecruiting,199th
/wcukierski,Graph-based Features for Supervised Link Prediction,(alternate title: shameless self-promotion...) Competitors may find this paper helpful. Good luck to all! [Link]:https://storage.googleapis.com/kaggle-forum-message-attachments/2594/supervised_link_prediction.pdf,27,gold,7 ,Thu Jun 07 2012 01:49:41 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/2003,/competitions/FacebookRecruiting,None
/jinglei,How long will the submission system run?,How long will we be able to submit results now that the competition has completed?,0,None,1 Comment,Thu Jun 07 2012 03:31:42 GMT+0200 (heure d’été d’Europe centrale),https://www.kaggle.com/discussions/questions-and-answers/2004,/competitions/kddcup2012-track1,None
/andrew37,BFS Benchmark error,"In the BFS benchmark code, next_node should be added to looked_at at some point in the while loop, eg: while queue and len(visited)