Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed because of a cast error
Error code:   DatasetGenerationCastError
Exception:    DatasetGenerationCastError
Message:      An error occurred while generating the dataset

All the data files must have the same columns, but at some point there are 8 new columns ({'date', 'title', 'medal', 'question', 'url_competition', 'pseudo', 'nbr_comment', 'vote'}) and 5 missing columns ({'medal_com', 'vote_com', 'pseudo_com', 'answer', 'date_com'}).

This happened while the csv dataset builder was generating data using

hf://datasets/Raaxx/Kaggle-post-and-comments-question-answer-topic/kaggle_post.csv (at revision 081222a5e3aedeb04b84abde27abf98e130fe602)

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 585, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2256, in cast_table_to_schema
                  raise CastError(
              datasets.table.CastError: Couldn't cast
              pseudo: string
              title: string
              question: string
              vote: int64
              medal: string
              nbr_comment: string
              date: string
              url_post: string
              url_competition: string
              rank_competition_comment: string
              -- schema metadata --
              pandas: '{"index_columns": [{"kind": "range", "name": null, "start": 0, "' + 1440
              to
              {'pseudo_com': Value(dtype='string', id=None), 'answer': Value(dtype='string', id=None), 'vote_com': Value(dtype='int64', id=None), 'medal_com': Value(dtype='string', id=None), 'date_com': Value(dtype='string', id=None), 'url_post': Value(dtype='string', id=None), 'rank_competition_comment': Value(dtype='string', id=None)}
              because column names don't match
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1321, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 935, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2013, in _prepare_split_single
                  raise DatasetGenerationCastError.from_cast_error(
              datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
              
              All the data files must have the same columns, but at some point there are 8 new columns ({'date', 'title', 'medal', 'question', 'url_competition', 'pseudo', 'nbr_comment', 'vote'}) and 5 missing columns ({'medal_com', 'vote_com', 'pseudo_com', 'answer', 'date_com'}).
              
              This happened while the csv dataset builder was generating data using
              
              hf://datasets/Raaxx/Kaggle-post-and-comments-question-answer-topic/kaggle_post.csv (at revision 081222a5e3aedeb04b84abde27abf98e130fe602)
              
              Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)


Column                      Type
pseudo_com                  string
answer                      string
vote_com                    int64
medal_com                   null
date_com                    string
url_post                    string
rank_competition_comment    string
/antgoldbloom
Hi Matt, The reason we prevent participants from submitting an unlimited number of times is because otherwise: (a) our servers may not be able to handle all the traffic; and (b) it would be easier to decode the portion of the test dataset that's used to calculate the public leaderboard. The technique you describe, often referred to as cross-validation, is very sensible and we encourage others to use it. Anthony
0
null
Sat Aug 07 2010 07:04:01 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/54
217th
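Anthony mentions cross-validation only in passing; a minimal R sketch of the idea, assuming a data frame with a Score column and a competitor-supplied predict_fn (both hypothetical, not from the thread):

rmse <- function(obs, pred) sqrt(mean((obs - pred)^2))

cross_validate <- function(dat, predict_fn, k = 5) {
  fold <- sample(rep(1:k, length.out = nrow(dat)))   # random fold labels
  scores <- sapply(1:k, function(i) {
    train <- dat[fold != i, ]
    held  <- dat[fold == i, ]
    rmse(held$Score, predict_fn(train, held))        # fit on k-1 folds, score the held-out fold
  })
  mean(scores)
}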
/chrisraimondi
Keep in mind also - the months get bigger as you go along - the last five months are actually 17% of the games. Also - you have to make sure you convert to player months when calculating RMSE - they are probably very correlated, but they won't match... I got pretty close with my Player Month RMSE using the last 5 months, but after pulling it down to the last three - it is a ways off.
0
null
Sat Aug 07 2010 07:04:01 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/54
121st
/mattieshoes
A good tip... I was thinking I might try reversing the months and using months 1-5 as a testbed to avoid that. But perhaps I'll just use 98-100 or something instead of 96-100. I've noticed my results don't correlate very well to the ones I get when I submit, so I'll have to keep playing around with it.
0
null
Sat Aug 07 2010 07:04:01 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/54
49th
/jeffsonas
Chris - Can you please explain what you mean about converting to player months? Are you contrasting that with applying RMSE on a game-by-game basis? By the way, when we were discussing the contest last week with Mark Glickman, he said that when evaluating game-by-game outcomes RMSE would have been more appropriate with normally distributed outcomes, but for binary outcomes (or binary with ties, like chess), it's better to use the mean of -(y*log(E) + (1-y)*log(1-E)) per game, where y is the game outcome (0, 0.5, 1) and E is the expected/predicted score, which is more connected to the binomial distribution for game outcomes. I don't know that this really helps, but if for some reason your training methodology requires that you score game-by-game, perhaps the above formula would work better than direct RMSE. I should point out that the reason we didn't go with just RMSE on a game-by-game basis was that I was concerned it would be biased unfairly toward approaches having expected score very close to 0.5, since you gain the benefit of the squared difference and there are all those draws in chess.
0
null
Sat Aug 07 2010 07:04:01 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/54
137th
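The per-game loss Jeff quotes is easy to compute directly. A minimal R sketch of that formula; the eps clamp is my addition to keep log() finite, not part of his description:

# Mean of -(y*log(E) + (1-y)*log(1-E)) per game, with y in {0, 0.5, 1}
# and E the predicted score.
game_logloss <- function(y, E, eps = 1e-12) {
  E <- pmin(pmax(E, eps), 1 - eps)   # clamp to avoid log(0)
  mean(-(y * log(E) + (1 - y) * log(1 - E)))
}

game_logloss(y = c(1, 0.5, 0), E = c(0.6, 0.5, 0.3))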
/jeffsonas
Possibly if you tried to identify the characteristics of the players in the test dataset, and see if the populations differ at all between training and test, and applied a consequent filter to the games in the training testbed, your results might correlate better from training to test. Of course it would be a tradeoff since you would then have less data. I didn't take this approach when developing my Elo Benchmark approach, but perhaps I should have. On the other hand, it would have been impossible to "blind" myself, since I do know exactly what steps I took to create the test dataset.
0
null
Sat Aug 07 2010 07:04:01 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/54
137th
/mattieshoes
I'm going by player/month in my RMSE calculation (summing the scores and predictions per player, per month, then doing the squared error, etc.). However, there's always the possibility that I've got a bug. Using the first 95 months as training and the last 5 as a test, I get these results: All black wins: 1.071. All white wins: 0.988. All draws: 0.735. Average score: 0.733. My best result in my testing shows up as 0.821, but when I submitted it, it came in at 0.718, 2nd place thus far. That's pretty wildly off. I'll investigate further, perhaps see what I get with the formula that Glickman suggested. :-) Oh, incidentally, the Evaluation page shows a predicted score of 0.53 and an actual score of 2.0, and a squared error of 2.18. 2.0 - 0.53 = 1.47, and 1.47^2 = 2.16, not 2.18. I'm sure it's just a rounding thing, it just threw me for a second when I was verifying that mine works.
0
null
Sat Aug 07 2010 07:04:01 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/54
49th
/chrisraimondi
Thanks for those figures - when I get a chance I will attempt to verify my code. I had some running in excel (still trying to get the hang of using R) - and I must have messed it up there - I am getting 0.87889 for all white wins and 1.36 as black. That sure seems wrong - and I am pretty sure it was different before :)
0
null
Sat Aug 07 2010 07:04:01 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/54
121st
/jeffsonas
Matt - I think it sounds like a bug in your testing. If you submitted something that did that well, then either your system was really doing better than 0.821 on your testbed, or you got very lucky. Seems like a bug is more likely. I'm sure you have seen the clustering on the leaderboards of people submitting simple solutions like all draws or a constant score value, and those are scoring more like 0.88
0
null
Sat Aug 07 2010 07:04:01 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/54
137th
/mattieshoes
Yes, I've seen it, and I know my numbers seem way off... I guess what I was hunting for was a correct RMSE number for something we've already got the data for, like predicting 1.0 for every game across the entire training set gives you an RMSE of ____. Then I could verify that my code is broken and be relatively sure when I've got it fixed. I thought I had it working before, but I accidentally erased EVERYTHING yesterday and rewrote it earlier today.
0
null
Sat Aug 07 2010 07:04:01 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/54
49th
/jeffsonas
I imported the training dataset into a Microsoft SQL Server database (that's where I do all my work) and so it is relatively easy for me to confirm these RMSE calculations across just the training dataset. Across the training dataset, months 96-100 only: All draws: 0.735443. Average score (54.565% for White, 45.435% for Black): 0.733063. White wins: 0.988078. Black wins: 1.071470. Across the training dataset, all months: All draws: 0.797270. Average score (54.565% for White, 45.435% for Black): 0.794281. White wins: 1.063790. Black wins: 1.160176. Please note that this is doing the Player Month RMSE calculation in my database, and is not necessarily the same exact thing the website is doing.
0
null
Sat Aug 07 2010 07:04:01 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/54
137th
/chrisraimondi
Thanks so much Jeff for doing that! Matt - my Excel code is beyond hope - but I was actually able to get it to work in R. I have only been coding since ~February - so it probably looks awful - but here it is if you want it (I am pretty sure merging isn't the proper way - but it works)...

==== Start
####
###
### Player Month Testing
###
####

rmse <- function(obs, pred) sqrt(mean((obs - pred)^2))

chess.training <- read.csv("training_data_chess.csv")
chess.sub <- chess.training[chess.training$Month > 95, ]
x <- chess.sub
other.dummy <- .54565

# Figure out separate score for white and black for each game
month.player.w <- paste(x$Month, "x", x$White, sep = "")
month.player.b <- paste(x$Month, "x", x$Black, sep = "")
white.scores <- x$Score
black.scores <- abs(1 - x$Score)
dummy <- rep(other.dummy, length(white.scores))
print(length(black.scores))

white.dummy.ww <- 1
black.dummy.ww <- 0
white.dummy.bw <- 0
black.dummy.bw <- 1
white.dummy.d <- .5
black.dummy.d <- .5
white.dummy.other <- dummy
black.dummy.other <- abs(1 - dummy)

df <- data.frame(month.player.w, month.player.b, white.scores, black.scores,
                 white.dummy.ww, black.dummy.ww, white.dummy.bw, black.dummy.bw,
                 white.dummy.d, black.dummy.d, white.dummy.other, black.dummy.other)
# Data frame above should end up being the same number of rows as there are
# games in the set being analyzed

white.ag <- aggregate(cbind(df$white.scores, df$white.dummy.ww, df$white.dummy.bw,
                            df$white.dummy.d, df$white.dummy.other),
                      by = list(df$month.player.w), sum)
colnames(white.ag) <- c("Player.Month", "white.scores", "white.dummy.ww",
                        "white.dummy.bw", "white.dummy.d", "white.dummy.other")
black.ag <- aggregate(cbind(df$black.scores, df$black.dummy.ww, df$black.dummy.bw,
                            df$black.dummy.d, df$black.dummy.other),
                      by = list(df$month.player.b), sum)
colnames(black.ag) <- c("Player.Month", "black.scores", "black.dummy.ww",
                        "black.dummy.bw", "black.dummy.d", "black.dummy.other")
# but that can be looked up with player months

# Now make data.frame with player months
left.side <- c(month.player.w, month.player.b)
left.side <- sort(left.side)
left.side <- unique(left.side)
left.side <- data.frame(left.side)
colnames(left.side) <- ("Player.Month")
# Now there should be a data.frame with the same number of rows as there are
# unique player months
print(nrow(left.side))  ## This many player months

w.m <- merge(left.side, white.ag, by.x = "Player.Month", by.y = "Player.Month", all.x = TRUE)
w.m <- merge(w.m, black.ag, by.x = "Player.Month", by.y = "Player.Month", all.x = TRUE)
w.m[is.na(w.m)] <- 0
w.m$actual.score <- w.m$white.scores + w.m$black.scores
w.m$all.ww <- w.m$white.dummy.ww + w.m$black.dummy.ww
w.m$all.bw <- w.m$white.dummy.bw + w.m$black.dummy.bw
w.m$all.d <- w.m$white.dummy.d + w.m$black.dummy.d
w.m$other.dummy <- w.m$white.dummy.other + w.m$black.dummy.other

##
rmse(w.m$actual.score, w.m$all.d)
rmse(w.m$actual.score, w.m$other.dummy)
rmse(w.m$actual.score, w.m$all.ww)
rmse(w.m$actual.score, w.m$all.bw)
==== End

> rmse(w.m$actual.score, w.m$all.d)
[1] 0.7354431
> rmse(w.m$actual.score, w.m$other.dummy)
[1] 0.7330631
> rmse(w.m$actual.score, w.m$all.ww)
[1] 0.9880782
> rmse(w.m$actual.score, w.m$all.bw)
[1] 1.071470
0
null
Sat Aug 07 2010 07:04:01 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/54
121st
/mattieshoes
Hrm... I get the right RMSE numbers (I verified all 8 provided by Jeff), so that's not the issue. So I still get 0.82 when I run my current best scheme using 1-95 as training and 96-100 as test, even though it comes out under 0.72 when I submit it using 1-100 and 101-105. If I hadn't submitted it, I'd have thought it was worse than guessing draws across the board. When I use the full 100 in training and 96-100 as the test (the last 5 months in both places), the RMSE shows up around 0.44, which is kind of what you'd expect if my system was working -- super-accurate because the test data is in the training data. So I have no explanation as to why the results I get are so odd. I made an alteration to the scheme and got similar results in both places... 0.811 on my local test, 0.716 when submitted. My own C# code for RMSE, also ugly:

public double CalculateRootMeanSquareError()
{
    int[] games = new int[testing.HighestNumberedPlayer + 1];
    double[] score = new double[testing.HighestNumberedPlayer + 1];
    double[] predicted = new double[testing.HighestNumberedPlayer + 1];
    double sum = 0;
    int count = 0;
    int currentMonth = testing.data[0].Month;

    for (int i = 0; i < testing.data.Length; i++)
    {
        // end of month accounting
        if (currentMonth != testing.data[i].Month)
        {
            for (int j = 0; j < games.Length; j++)
            {
                if (games[j] > 0)
                {
                    count++;
                    sum += Math.Pow(score[j] - predicted[j], 2);
                }
            }
            // reset arrays for new month
            games = new int[testing.HighestNumberedPlayer + 1];
            score = new double[testing.HighestNumberedPlayer + 1];
            predicted = new double[testing.HighestNumberedPlayer + 1];
            currentMonth = testing.data[i].Month;
        }
        // tallying
        games[testing.data[i].White]++;
        score[testing.data[i].White] += testing.data[i].Score;
        predicted[testing.data[i].White] += testing.data[i].PredictedScore;
        games[testing.data[i].Black]++;
        score[testing.data[i].Black] += 1.0 - testing.data[i].Score;
        predicted[testing.data[i].Black] += 1.0 - testing.data[i].PredictedScore;
    }

    // end of final month accounting
    for (int j = 0; j < games.Length; j++)
    {
        if (games[j] > 0)
        {
            count++;
            sum += Math.Pow(score[j] - predicted[j], 2);
        }
    }
    return Math.Pow(sum / count, 0.5);
}
0
null
Sat Aug 07 2010 07:04:01 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/54
49th
/chrisraimondi
Hmm - I can't read C# code well enough, but also - it is only being checked against 10%, and that probably messes it up some. If you are doing well against your hold-out set and 2nd on the leaderboard - that is a good sign :)
0
null
Sat Aug 07 2010 07:04:01 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/54
121st
/johnlucas0
Like Matt, I've been finding that the score for simple algorithms can vary quite dramatically depending upon which part of the dataset you score them on. For example, someone said in another thread that assigning 0.545647 to every game gives a score of 0.88257 on the leaderboard. When I score the same 'algorithm' against all 11182 games in months 96-100, I get an RMSE of 0.7330631. But when I score it against all 53871 games in months 1-95, I get an RMSE of 0.809077. Assuming these figures are right, the rather depressing conclusion is that the performance of simple algorithms like this has a big random component on this size of dataset, so there may be quite a lot of luck in the leaderboard positions. Would be very grateful if anyone can verify or correct my calculation, by the way.
0
null
Sat Aug 07 2010 07:04:01 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/54
22nd
/jeffsonas
Confirmed, when I check the training dataset for months 96-100 with that "algorithm", I get 11,182 games and RMSE=0.733062, and for months 1-95 I get 53,871 games and RMSE=0.809076. By the way for months 91-95 it is 8,651 games and RMSE=0.751848. I expect the discrepancy is because the portion of the test dataset used to calculate the leaderboard is small, and so it may have different characteristics than the entire training dataset.
0
null
Sat Aug 07 2010 07:04:01 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/54
137th
/axelscheffner
Hi Chris, here is some R code which could be useful for you:

RMSE <- function(subdat) {
  score.df <- merge(with(subdat, aggregate(SC, list(MO, WH), sum)),
                    with(subdat, aggregate(1 - SC, list(MO, BL), sum)),
                    by = c("Group.1", "Group.2"), all = TRUE)
  SC.MO <- apply(score.df[, 3:4], 1, sum, na.rm = TRUE)
  res.df <- merge(with(subdat, aggregate(RES, list(MO, WH), sum)),
                  with(subdat, aggregate(1 - RES, list(MO, BL), sum)),
                  by = c("Group.1", "Group.2"), all = TRUE)
  RES.MO <- apply(res.df[, 3:4], 1, sum, na.rm = TRUE)
  sqrt(mean((SC.MO - RES.MO)^2))
}

Given the data with

dat <- read.csv(<your path to training_data.csv>)
colnames(dat) <- c("MO", "WH", "BL", "RES")

you can do things like

dat$SC <- 0.545647
RMSE(dat)                        # -> 0.7942807
RMSE(subset(dat, MO %in% 1:95))  # -> 0.8090766

or

sdat <- subset(dat, MO %in% 96:100)
sdat$SC <- 0.5
RMSE(sdat)                       # -> 0.7354431

Hope that helps, Axel
0
null
Sat Aug 07 2010 07:04:01 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/54
161st
/johnlucas0
Thanks for confirming that Jeff - it's good to know my RMSE calculation is working (even if not much else is yet!)
0
null
Sat Aug 07 2010 07:04:01 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/54
22nd
/chrisraimondi
Axel, Thanks for the R function - works great!
0
null
Sat Aug 07 2010 07:04:01 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/54
121st
/levi2107
Matt, be thankful you're getting better results when you submit... I get an RMSE of 0.66459 with my test set, and 0.70912 on the real data after I submit :( I did check my RMSE code (which is slightly uglier than your code :) and I get the same numbers Jeff shared. Also, in my testing I found that if you only use a random 10% sample of players for your RMSE, the results do vary quite a bit. Somewhere around +/- 0.04 for me.
0
null
Sat Aug 07 2010 07:04:01 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/54
66th
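levi2107's observation that scoring a random 10% sample moves the result by roughly ±0.04 can be explored with a quick resampling sketch in R; sq_err is assumed to hold per-player-month squared errors from a calculation like those above:

# Spread of the RMSE when only a random 10% of the player-months is scored.
subsample_rmse <- function(sq_err, frac = 0.10, n_rep = 1000) {
  replicate(n_rep, {
    idx <- sample(length(sq_err), ceiling(frac * length(sq_err)))
    sqrt(mean(sq_err[idx]))
  })
}

spread <- subsample_rmse(sq_err)     # sq_err: assumed per-player-month squared errors
quantile(spread, c(0.025, 0.975))    # typical range of the subsampled RMSE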
/chris2418
Would like to clarify one aspect of this "out of sample" testing. Settling on the idea of using months 96-100 as the sample, I too have been able to match Jeff's and others' numbers. For instance, the "All draws" scenario gives an RMSE of 0.73544 over 11182 games. However, I want to verify that in the competition games between players who have not played before are excluded, and that therefore one should exclude these games from the backtest. I make it only 8404 games between players who have already played a game, and the "draws" RMSE for those games is 0.71470. To me the other 33% or so of games would constitute little more than guesses, with one (or two) players who have not played. These types of games will not be in the competition test set? And hence they constitute adding "white noise" to the out-of-sample test. My point is that this probably favours certain strategies in the backtest - such as an average score for white wins. But in the competition this won't help at all, so it makes sense for participants to exclude these games when calculating their own RMSE scores?
0
null
Sat Aug 07 2010 07:04:01 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/54
112th
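One plausible reading of the reduction chris2418 describes, sketched in R with the column names from Axel's snippet above (MO, WH, BL): keep only the months 96-100 games whose players both appear in months 1-95.

# Players active in months 1-95, then the reduced months 96-100 backtest.
seen <- with(subset(dat, MO <= 95), unique(c(WH, BL)))
backtest <- subset(dat, MO >= 96 & WH %in% seen & BL %in% seen)
nrow(backtest)   # chris2418 reports 8404 such games under his definition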
/jeffsonas
Chris - I realize that the ideal setup in general for competitions like this would be for the population characteristics to be identical between the training set and the test set. However there is a problem with this approach, in that the pool of active players is always growing. So there will always be new players entering the mix each month, and this would be true of the five months of the test set as well. It would have spoiled the competition, in my opinion, had I left the test set completely intact (i.e., matching the same characteristics as the training set), because significantly more than half of the games would have included at least one player who was either unrated or whose rating would be based on very few games. Therefore I made the decision at the last moment (before launching the contest) to reduce the games in the test set so that most of those players would be excluded. I had to draw the line somewhere, and I am reasonably happy with the line where it ended up. It is quite possible that there are players that you will need to predict results for, without a rating available (in your particular scheme). However there are a number of possible rating schemes, some of which are relatively inclusive, and others which are relatively restrictive. For instance I believe if you did a massive simultaneous calculation across all games in the training set, calculating relative performance ratings for all players across the 100-month period, and letting it converge down to stable ratings, these ratings would be meaningful (in that they all operate on a single closed pool of players) and you would then have some information available with which to predict results. We already have observed that all players in the test set are covered by the closed pool of players in the training set. On the other hand, if you start by creating seed ratings from a smaller closed pool identified across the first N months of the training set (where N<100), and let players accumulate new ratings in the same way that they would in the FIDE Elo system, then it is quite possible that you will have some unrated players needing predictions in the test set. This is likely something like 2%-10% of the games in the test set, but of course the precise percentage depends upon the inclusiveness of your approach, and that is why there was no perfect place to draw the line. The quantity of applicable games in the test set is far smaller than the 60%-70% that it originally would have been, but still an annoyance, I know. It will therefore be necessary for well-performing systems to do a good job of predicting those games as well, and I would indeed suggest using the training set to help develop a method for this. I hope you can understand and accept that I don't want to provide exact details of how I reduced the games in the test set, because such a revelation would provide additional information about players who were or weren't included, considering that I did incorporate the FIDE ratings of the players into my reduction algorithm.
Therefore I do encourage you to try to identify particular characteristics of the players who are included in the test set, and I agree that if you are using the 96-100 month range to evaluate your possible systems, it would be wise to apply some sort of corresponding reduction to the training data in months 96-100 and to evaluate your system only over the reduced set of results. My personal opinion is that it would be better to evaluate your possible systems over a longer stretch of time than the final five or six months. One of the reasons I included such a long duration of results (more than eight years) was because I know there are potentially three important durations to consider - the span of months with which to calculate initial ratings, the span of months to let different rating systems operate actively and thereby differentiate themselves from each other, and finally the span of months where the relative performance of those previously-differentiated rating systems can be evaluated. Certainly I would have gone with a longer stretch than the five months of 101-105 for evaluating submissions, if not for the challenge that we couldn't give you the results of those five months with which to calculate further ratings. This is not a problem you would face when training your system on the training dataset, so I would suggest potentially going with a longer stretch than just five or six months.
0
null
Sat Aug 07 2010 07:04:01 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/54
137th
/alexxanderlarko
Use a mixture of different models (linear regression, neural networks). Choose the best model by using the wave criterion. The theoretical grounding of the criterion is based on Bayes' theorem and the methods of cybernetics and synergetics. See the article "Performance criterion of neural networks learning", published in Optical Memory & Neural Networks, Vol. 17, No. 3, pp. 208-219. DOI: 10.3103/S1060992X08030041. http://www.springerlink.com/content/t231300275038307/?p=0c94471924774e8894973ad3c0d391a7&pi=0
0
null
Thu Apr 29 2010 01:13:08 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/1
28th
/colingreen
My first thoughts on this problem are that it would be ideal for Geoff Hinton's deep belief networks: http://www.scholarpedia.org/article/Deep_belief_networks. They are generally competitive at tasks such as high-level feature detection in images and audio. Quite possibly, though, there isn't enough data available to use this approach - not sure. Is anyone using this approach?
0
null
Thu Apr 29 2010 01:13:08 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/1
40th
/rajstennajbarrabas
The theory of deep belief networks is probably a correct description of information encoded in the human brain, but it's not a complete theory. As such, you will have difficulty using it to solve any problem, much less this one. Deep belief network theory fails because it does not imply how many layers or variables there should be. Certainly, someone could implement a network with a chosen number of layers and variables that "seem right". Depending on how well your model represents the correct encoding, the results will show anywhere from random to perfect correlation... and there are many, many more configurations with random correlation than perfect correlation. (That a wrong feature in the model will push the output towards randomness is the big number one reason why machine algorithms require a ton of data and then only show weak results. For comparison, the human brain will correlate from three samples and is usually correct.) As for this particular problem, it's not clear that deeply hidden features with complex dependencies are even needed. Chances are basic observations about the data will be sufficient. Additionally, from an information-theoretic point of view, well over half of the information needed to find a correlation was moved into the test set where we can't see it. This makes finding correlations extremely difficult, and any which are found will probably be weak at best. (Future events may prove me wrong, but I have good evidence for this last statement.)
0
null
Thu Apr 29 2010 01:13:08 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/1
4th
/colingreen
Going by Hinton's last two Google Tech Talks, recent results have been very strong. Yes, the topology is manually selected, but that's not a show-stopper; quite clearly less structure forces generalisation, and one can plot the curve of how performance changes with the number of nodes and use that to make educated guesses about the approximate location of a sweet spot. I take the point that deep features may not be useful here, but that's not a certainty. However, I do very much doubt there is enough data to find anything other than 'surface' features. On the last point - it was an interesting aspect of the Netflix Prize that folks were able to use the available test data in the training algorithm despite not having the hidden part of the data. I believe that all of the top teams integrated this data to improve their models of the data overall. I wonder if a similar approach could be used here. E.g. deep belief nets model the data initially without any need (in this HIV task) for the responder flag. For Netflix this was perhaps an unexpected loophole which remained open (probably because the workaround is to submit an algorithm rather than predictions, which would have been unworkable).
0
null
Thu Apr 29 2010 01:13:08 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/1
40th
/rajstennajbarrabas
I believe I understand deep belief nets well enough to implement one to look for correlations in the response data. My first real submission was analogous to a belief net with two layers. It got very good results on the training data - around 68% on the first attempt - but failed miserably on the testing data. After spinning my wheels for several weeks trying to tweak the model to better fit the data, I finally wrote some code to detect selection bias in the test set. It turns out that the test set suffers from enormous selection bias, and this is why I believe that any model based on the training set will only loosely predict the test data. Here's an example. Plotting RTrans length (actually, the "tail" length after the main body) shows a section which is 100% Non-Responded. There are 74 patients in this section, and the chance of this happening by accident is less than 1 in 25 million. Ordinarily this would be a strong indicator of patient response which should be incorporated into the model. There are 24 such patients in the test set, and if the correlation holds all should be set to non-respond. I don't have access to the test data, but I *believe* that selection bias has put all of the "responded" patients from this sub-population into the test set. Meaning, there is no way to measure the relevance of this feature from the training data alone. I have identified a couple of these anomalies. My current system works within the selection bias and is having much better results.
0
null
Thu Apr 29 2010 01:13:08 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/1
4th
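The "1 in 25 million" style of figure can be sanity-checked with a binomial tail in R; the sketch below assumes an illustrative responder rate, since the actual rate isn't given in the thread:

# Probability that 74 independent patients are all non-responders.
p_responder <- 0.20                        # assumed rate, purely illustrative
dbinom(0, size = 74, prob = p_responder)   # (1 - p)^74, about 6.7e-08 at this rate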
/colingreen
Thanks Rajstennaj. That's a very odd bias, and given your experience with poor initial test results I don't doubt it. It would be interesting to get some feedback on this from the organisers, perhaps after the competition has finished. I guess it's slightly unfortunate that the novel techniques required to compete on a task like this are overshadowed by the need to do some simpler statistical analyses before you get started proper, but that's real world data for you :)
0
null
Thu Apr 29 2010 01:13:08 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/1
40th
/colingreen
Rajstennaj, congrats on your progress so far. 76+ is a very promising and interesting score. Looking forward to learning more about your approach(es) after the end of the competition.
0
null
Thu Apr 29 2010 01:13:08 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/1
40th
/rajstennajbarrabas
Heh. I've got almost 10,000 lines of code so far in Perl (a highly expressive language) and 6 separate models which don't work on the test data. Don't get discouraged by my apparent MCE. It is entirely likely my approach will be declared invalid by the contest organizer. My system identifies weaknesses of the contest data rather than doing real prediction. A more interesting result from the leaderboard is "team sayani", who got an extremely good score in three attempts. (!) Alex (of "team sayani") sent me his paper outlining his method. I haven't read it yet - I'm waiting for the contest to end - but I bet he's the one to beat. When he comes back from vacation, that is. Grzegorz Swirszcz sent me two of his papers - they were an interesting read. He's won two other data prediction competitions. His papers introduced me to "ensemble selection" - a concept of which I was unaware. There's been very little discussion about the contest in general, but I know that the people here are accessible and easy to talk to. And there's lots of opportunity to learn new stuff by talking to people. It's a pity people don't talk more. I wonder what techniques John Chouchoumis (team hcj) is using?
0
null
Thu Apr 29 2010 01:13:08 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/1
4th
/brucetabor
Rajstennaj, I can't say I've spent as much time on this contest as you, but it has become clear to me there is serious selection bias in the choice of training and test sets. Another obvious example: 80 of the training set are missing PR data entirely - none of the test set are missing ALL PR data. I have also seen clusters of PR sequences in the test set that do not occur in the training set. This lack of random assignment to training and test sets is disappointing. We are not looking just for features associated with response but also for patterns of bias.
0
null
Thu Apr 29 2010 01:13:08 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/1
23rd
/colingreen
Dividing the data into equally representative sets is always going to be problematic for such small numbers of records (in comparison to, say, the Netflix Prize, which had half a million user accounts and 17000 movies). I suppose my main concern surrounds how much overfitting can occur to the public portion of the test set (quite a lot, it would seem) and whether this goes towards the final score. Strictly speaking we should only be scored on the hidden portion (which may be the case), since we have no idea what the biases are in that set and therefore cannot make assumptions. Of course, some may get lucky or may make educated guesses about the biases in that set.
0
null
Thu Apr 29 2010 01:13:08 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/1
40th
/brucetabor
Hi Colin, "Dividing the data into equally representative sets is always going to be problematic ..." It just has to be randomised, or randomised subject to constraints on the numbers of cases/controls assigned to the test and training groups. Machine learning algorithms and statistical learning techniques rely on the training set being REPRESENTATIVE of the test set. Otherwise there's no point in learning.
0
null
Thu Apr 29 2010 01:13:08 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/1
23rd
/brucetabor
I'll briefly post a summary of my 20 hours (or so) of work on this. I'm a statistician, so I prefer to stick to proven methods. I've used logistic regression with stepwise variable selection and the Bayesian Information Criterion (BIC) to stop overfitting. I've used simple receiver operating characteristic (ROC) concepts to choose cut points. I've also cross-validated my entire model-building process using n-fold or leave-one-out cross-validation. These are robust, reliable techniques that: 1) almost always work, 2) prevent overfitting (statistics has a concept called "inference" that is enormously valuable here), 3) minimise effort, 4) provide a good guide to model performance on a new test dataset representative of the training dataset. I used the statistical package R, for those who are interested. My conclusion is that the test dataset is NOT representative of the training dataset. I absolutely agree with Rajstennaj that the dataset partitioning was not random and is highly biased. My first fitting attempts used only viral load and CD4 count (both modelled as logs). After using viral load, CD4 count is not a useful predictor. I chose cut-points on the ROC to maximise accuracy on the test dataset, where cases and controls are equally numerous (roughly where sensitivity + specificity is a maximum). I got an MCE (actually accuracy) of 62.5%. My next attempt was what took all the time. I'm not very used to translating sequences (or rather aligning them), and I discovered a few errors in my work through Cory's contributions. Thanks Cory! Then I simply looked for amino acids associated with disease outcome, created indicator variables for the various "alleles" at these sites, and then ran the variable selection (keeping viral load and CD4 count as possible variables to choose from). My final models had only 8 variables, including viral load. At the appropriate cut-point they predicted an accuracy of 79% - of course this won't happen, as the model is built on that dataset. But the cross-validated accuracy was 76% (projected to a dataset balanced in cases and controls). This is what statistical theory would predict I would get if the test dataset were representative of the training dataset. I again got an MCE of 62.5%. Noting that the 80 patients with missing PR sequence, all in the training dataset, could not have been randomly assigned, I deleted them and tried building the model again. I still got an MCE (accuracy) of 62.5%. I am forced to the same conclusion as Rajstennaj. There is serious selection bias, which makes the problem unsuitable for statistical or machine learning approaches. One is forced to "go fishing" for bias, and that means submitting solutions based on hypothesised bias and observing the outcome. Great as a game, but of little use in problems that concern me. So in my view Rajstennaj deserves to win. It is just a shame that the kind of research paper that will come out of this is not the kind Will intended - nor can it be - as the dataset has not been randomly partitioned. I suspect those who have done better with the train-test approach (e.g. sayani) have been in some sense lucky, in that they chose feature-outcome sets more randomly distributed between the two datasets - viral load is one such predictor. It got me an accuracy (MCE) of 62.5% by itself.
0
null
Thu Apr 29 2010 01:13:08 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/1
23rd
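A minimal R sketch of the pipeline Bruce describes (stepwise logistic regression with a BIC penalty); the column names are hypothetical, since his exact variables aren't listed:

# Column names (Resp, VL, CD4, site71, site90) are invented for illustration.
train$logVL  <- log10(train$VL)    # viral load on a log scale
train$logCD4 <- log10(train$CD4)   # CD4 count on a log scale

null_fit <- glm(Resp ~ 1, family = binomial, data = train)
fit <- step(null_fit,
            scope = Resp ~ logVL + logCD4 + site71 + site90,  # candidate terms
            direction = "both",
            k = log(nrow(train)))  # BIC penalty; the default k = 2 gives AIC
summary(fit)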
/colingreen
Hi Bruce, I appreciate you describing your approaches and discoveries. It will be interesting to see what the outcome of the competition will be if there really are no other features (or significant features in comparison to the biases) in the data. Are there features that are consistent across test & training data that the leaders could be basing their predictions on? Probably not, based on what I've read. I think it's early days for Kaggle, and my hope would be that folks don't get disheartened - that the organisers can learn from this and improve the design of future competitions. It's an iterative process. Also I'd be interested in learning how your (and others') progress on modelling the training data compares with the current best techniques described in the literature. Colin.
0
null
Thu Apr 29 2010 01:13:08 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/1
40th
/solorzano
Training and test data are clearly hand-picked and/or biased somehow. It's like a "trick question" in a quiz. I can see how that makes a contest more challenging, but if the purpose is to produce useful results, this just hinders the task at hand.
0
null
Thu Apr 29 2010 01:13:08 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/1
21st
/corygiles
Well, it's unfortunate that the contest turned out this way, but I *really* hope that when the contest is over, the top people on the leaderboard (Rajstennaj, Flying Pig, sayani) will find some way to test their algorithms in a way that's more comparable with the other literature on the subject. For example, cross-validation on the whole set. Hopefully those results could be posted or at least linked to on Kaggle -- I'm very curious how much of the increase in MCE here is merely an artifact and how much is a genuine advance in the field. Rajstennaj's MCE in particular would be astonishing if anywhere close to accurate.
0
null
Thu Apr 29 2010 01:13:08 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/1
11th
/antgoldbloom
The public leaderboard is only indicative because competitors can use information on their score to get information on a portion of the test dataset. The final results are a) quite different and b) better reflect actual performance.
0
null
Thu Apr 29 2010 01:13:08 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/1
null
/rajstennajbarrabas
My MCE values are not accurate. I'm not predicting from the data, I'm working the competition system. Basically, I know how many entries are used to calculate the public MCE and where they are. To get around the selection bias, I plan to determine the correct public MCE and use the extra information to make a more accurate prediction based on the test data. It's a long shot. I've known from early on that I won't be winning the competition, but the analysis led to interesting results which should inform future contests. For example, if you have 100 yes/no questions, how many guesses are needed to determine the answers with certainty? In the worst possible configuration you can get to certainty with about 70 guesses on average (i.e. 70% of the number of questions). Most situations are not worst possible; these require fewer guesses. The reduction from 100% to 70% was important - with it I can just squeak by determining the entire public MCE and have 6 submissions left over. I know all but 9 values of the public MCE, and I'm guaranteed to discover at least one new value with each new submittal. An interesting statistic: if you sort the test data on Viral Load and set the first half to "Responded", you get a public MCE of 61.0577. If this holds across the entire test set, then I can expect 100% of 30% of the data, and 61.0577% of 70% of the data, for a grand total of 72.6% MCE. Not enough to win :-( I suspect Alex ("team sayani") will be the winner.
0
null
Thu Apr 29 2010 01:13:08 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/1
4th
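Rajstennaj's blended estimate is a weighted average of the known public portion and the viral-load guess on the remainder; assuming exactly a 30/70 split, the arithmetic lands at about 72.7%, close to the 72.6% he quotes:

# 30% of the test set known exactly, the remaining 70% predicted
# at the 61.0577% accuracy of the viral-load sort.
0.30 * 1.00 + 0.70 * 0.610577   # = 0.7274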
/chrisraimondi
This has certainly been an interesting problem. The sets are clearly not random (as has been pointed out numerous times). However, even with that knowledge - it has been difficult to turn that to an advantage. I have broken up the combined data into five different groups and have been trying to come up with separate solutions/theories. Of course - this is made more difficult in that one of the groups appears nowhere in the prediction set - and another is present in the prediction set, but not in the area Rajstennaj mentions that will allow me to get feedback by way of score changes. RT184 is supposed to be very important - and it is in the training set, but there are only 7 variation instances in the entire prediction set in the scoring area. Took me a few times before I figured out why my "improved" method was having either no effect - or a negative one. I don't think 72.6% is that bad (and you have already mentioned some info that leads me to believe an educated guess would get you closer to ~78%). Some of this reminds me of that brain teaser about having 7 coins, a balance, and three weighings to guess which one is fake...
0
null
Thu Apr 29 2010 01:13:08 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/1
1st
/judowill
Tanya, Good to hear from you. Since we plan to publish at least the top entry in a peer-reviewed manuscript, we would need a reproducible description of the algorithm. If you can describe it in pseudo-code, I'm sure I could implement it in Python for the actual publication. Even if it takes weeks to search the feature-space without your supercomputer, that's okay. However, if even the algorithm is proprietary and the company isn't willing to allow its publication, then I'm sorry but you're outta luck. -Will
0
null
Thu Apr 29 2010 17:48:46 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/2
null
/antgoldbloom
Hi Tanya, Kaggle will maintain a rating system. If you win but you're ineligible for prize money, you will still get a strong rating. Anthony
0
null
Thu Apr 29 2010 17:48:46 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/2
null
/judowill
The difference in size between the RT and PR protein segments is correct. These are two different genes in the HIV-1 genome. There are instances in this dataset in which there is no RT or no PR sequence (I've taken out any patients that are missing both). This does not mean that the gene is missing; it means that they didn't measure it. So you'll have to make sure your classifier is robust to missing features. And it's an ippon seoi-nage... my morote is terrible for both myself and my uke ;)
0
null
Fri Apr 30 2010 16:32:29 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/3
null
/arnausanchez
I have a question: how do you infer that RT is 1476 chars long? The field length goes from 579 to 1482 in the training data. And what's the reason for having so many incomplete RTs? Sequencing errors?
0
null
Fri Apr 30 2010 16:32:29 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/3
null
/judowill
The dataset is limited by the total number of HIV-1 samples with sequence and therapeutic information in the public domain. And this dataset has virtually all of them. The entire dataset is skewed towards non-responders, since the data is mostly from the 1980s-1990s, when most therapies were less effective than they are today. The dataset was intentionally split to ensure that the testing dataset is 50-50 responders vs. non-responders; the training set is everything that remains. This was done to ensure that competitors don't "artificially" lower their "responder" prediction rate because they know the dataset is biased. Some biological datasets are even more biased than this one (protein-protein interaction datasets are highly biased towards "negative" data). A common way to ensure that you have a robust classifier is to test against a 50-50 training set in which you have artificially created an even distribution of positive and negative events.
0
null
Fri Apr 30 2010 17:40:57 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/4
null
/brucetabor
Is the test dataset drawn from the same population as the training dataset? Otherwise the predictions made will not be valid.
0
null
Fri Apr 30 2010 17:40:57 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/4
23rd
/dalloliogm
The individuals in the dataset are all human, were all recently infected by HIV-1, and received the same treatment. Welcome to the problems faced in science... you can't really know whether the two samples come from the same population, e.g. whether the viruses that infected them all come from the same strain, or whether the individuals have similar genomes... there is a lot of information that you will never know.
0
null
Fri Apr 30 2010 17:40:57 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/4
null
/dalloliogm
'N' means that a base can be either A, C, G or T, and problems in the sequencing process have not permitted identifying which one is the correct base for sure. In the same way, Y means a pyrimidine, which is C or T. You can find the correct IUPAC convention here: http://www.dna.affrc.go.jp/misc/MPsrch/InfoIUPAC.html
0
null
Sat May 01 2010 13:00:57 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/5
null
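The ambiguity codes dalloliogm describes amount to a small lookup table; a partial sketch in R (the full table is at the linked page):

# Excerpt of the IUPAC nucleotide ambiguity codes: each code maps
# to the set of bases it may stand for.
iupac <- list(
  A = "A", C = "C", G = "G", T = "T",
  R = c("A", "G"),             # purine
  Y = c("C", "T"),             # pyrimidine
  N = c("A", "C", "G", "T")    # any base
)
iupac[["Y"]]   # "C" "T"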
/jstreet
I'm in a similar position. I'm comfortable with the biological concepts, but the machine learning is all new to me. Judging from your other post, it looks like we're both intending to use Python as well. It's not exactly the ideal skills match, but perhaps there is still some scope for cross-fertilization of ideas. Get in touch if you're interested. Email sent to jonathan @ the domain in my profile should reach me.
0
null
Sat May 01 2010 13:56:47 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/6
98th
/rajstennajbarrabas
Don't get hung up because you don't know machine learning. Machine learning won't get you anything you don't already know about the problem. Machine learning is used to predict patterns in future data given past data, and it's nothing more than an application of concepts you already know. To use a machine learning system, you would feed it patients one at a time and have it learn to classify respond and non-respond based on prior data. Once trained, you would have it try to classify the test data. The algorithm would have poor results at the beginning and get better with training. In this case, we know all the data at the outset, so it makes no sense to train. It will be far more efficient and accurate to just take all the data at once and look for significant features. That's only what a machine learning algorithm would do anyway, but over time and incrementally. For example, gradient descent (a machine learning algorithm) is equivalent to linear regression. Since you have all the data, you can get the same output by just calculating the linear regression. It will be much more effective to just rummage around in the whole dataset using percentages and statistical inference.
0
null
Sat May 01 2010 13:56:47 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/6
4th
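Rajstennaj's claim that gradient descent on squared error and linear regression land in the same place is easy to check on made-up data; a minimal R sketch:

set.seed(1)
x <- rnorm(100)
y <- 2 + 3 * x + rnorm(100)

coef(lm(y ~ x))                       # closed-form least squares

b <- c(0, 0)                          # intercept and slope for gradient descent
for (i in 1:5000) {
  resid <- y - (b[1] + b[2] * x)
  b <- b + 0.1 * 2 * c(mean(resid), mean(resid * x))   # step down the squared-error gradient
}
b                                     # essentially the same coefficients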
/alexxanderlarko
Use a mixture of different models (linear regression, neural networks). Choose the best model by using the wave criterion. The theoretical grounding of the criterion is based on Bayes' theorem and the methods of cybernetics and synergetics. See the article "Performance criterion of neural networks learning", published in Optical Memory & Neural Networks, Vol. 17, No. 3, pp. 208-219. DOI: 10.3103/S1060992X08030041. http://www.springerlink.com/content/t231300275038307/?p=0c94471924774e8894973ad3c0d391a7&pi=0
0
null
Sat May 01 2010 13:56:47 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/6
28th
/rajstennajbarrabas
I don't have access to SpringerLink. Can you post a link to your paper that can be read for free? (Alternately - can someone hook me up with SpringerLink somehow?)
0
null
Sat May 01 2010 13:56:47 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/6
4th
/alexxanderlarko
Write to me. I shall send it to you by e-mail.
0
null
Sat May 01 2010 13:56:47 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/6
28th
/koheiokamura
Thank you for sharing
0
null
Wed Apr 20 2022 13:01:38 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/7
null
/dalloliogm
Look at the description in the Data section: "These sequences are from patients who had only recently contracted HIV-1 and had not been treated before." Moreover, I think you can assume that the prescribed dosage regimen was followed correctly.
0
null
Tue May 04 2010 11:32:54 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/8
null
/judowill
The dataset contains many different therapies. I have ensured that there is an equal proportion of therapies distributed between the testing and training datasets. For example, if the training dataset is 30% AZT users, then the testing dataset is also 30% AZT users. The difficulty with breaking out the dataset by therapeutic intervention is that the "cocktail" of drugs given to each patient is not universal. There are 13 drugs which are given in combinations of 1-3 at a time. If I limited the dataset to the largest drug only (AZT in this case), then you'd be stuck with a training and testing dataset of barely 200 patients. There have been numerous publications which can "to some degree" predict which therapies will work for which patients... based on their viral genome. The more interesting question is "Are there markers which indicate good progression independent of the therapy chosen?" I arranged the dataset to facilitate that question. As far as the dosages given to the patients... even I don't know that information. I can only assume that doctors were prescribing the proper dosage. There is an issue with patient "compliance"... many of the earlier drugs made you pretty sick (and were very expensive), and so patients would take less than the recommended dosage so it would last longer. If the study directors noticed that patients were being non-compliant, then they'd often drop them from the study (since they make the numbers worse), but I have no data indicating the level of compliance. Hope that helps, Will
0
null
Tue May 04 2010 11:32:54 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/8
null
/judowill
Good to hear from you, Paul. I saw your entry just a little while ago, and I noticed that you listed yourself as a clinical virologist. If you'd like to talk about your methods, I'd be glad to hear them. If you're interested in writing a guest post for my blog on this competition, feel free to drop me an e-mail. Thanks, Will
0
null
Wed May 05 2010 06:00:15 GMT+0200 (Central European Summer Time)
https://www.kaggle.com/discussions/questions-and-answers/9
null
/rajstennajbarrabas
You probably know this, but in case you don't, here's some info which might help your analysis. The nucleotide sequence is used by the cell to build a protein, and proteins are made up of a string of amino acids. The cell structure (ribosome) takes the nucleotide sequence in groups of three letters. Each grouping of three letters indicates which is the next amino acid to attach to the growing chain which makes up the protein. Once the chain is complete, the protein is let go and it folds over and around itself to make a single molecule with a specific shape. (I'm glossing over some details. The folding actually happens as the string is being created, and there may be other steps in the process, such as chopping off sections after the protein is made.) For example, the following nucleotide sequence:

CCTCAAATCACTCTTTGGCAACGACCCCTCGTCCCAATAAGGATAGGG...

will be interpreted by the ribosome as this:

CCT CAA ATC ACT CTT TGG CAA CGA CCC CTC GTC CCA ATA AGG ATA GGG ...

and the protein generated will be this:

Proline + Glutamine + Isoleucine + Threonine + ...

The lookup tables for these translations can be found in numerous places on the net under the term "Genetic Code". There are 4 possible nucleotides in 3 positions within such a triplet, giving a total of 64 possible codons. Three of these mean "Stop", one of these is conditionally "Start", and each of the others indicates one of 20 amino acids. This means that there is redundancy in the genetic code: both TTT and TTC encode Phenylalanine, so two nucleotide sequences which differ in the same position by these codons will generate identical proteins. Furthermore, some common-sense logic can be applied to the sequences. If there is a codon with missing information "TG?" and it's in the middle, the unknown codon is almost certainly not "TGA", because you won't normally have a "STOP" codon in the middle of the sequence. If you are going to do correlations on the nucleotide sequence as a generic string, you can first translate the sequence into a string of amino acids and work with *that* as a string. This will automatically match the redundancies in the genetic code and result in a string 1/3 as long. Any biologists who note an error in the previous, please reply with a correction.
0
null
Thu May 06 2010 15:56:24 GMT+0200 (heure d’été d’Europe centrale)
https://www.kaggle.com/discussions/questions-and-answers/10
4th
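A minimal Python sketch of the translation step described above, building the standard genetic code table with the usual TCAG ordering trick (Biopython's `Seq.translate` would be the more robust route):

```python
from itertools import product

BASES = "TCAG"
# Standard genetic code: the 64 codons in TCAG order map onto this string
# of one-letter amino acids ('*' marks a stop codon).
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {"".join(c): a for c, a in zip(product(BASES, repeat=3), AA)}

def translate(nucleotides: str) -> str:
    """Translate a nucleotide string codon-by-codon; 'X' for ambiguous codons."""
    codons = [nucleotides[i:i + 3] for i in range(0, len(nucleotides) - 2, 3)]
    return "".join(CODON_TABLE.get(c, "X") for c in codons)

print(translate("CCTCAAATCACT"))  # -> "PQIT" (Pro, Gln, Ile, Thr)
```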
/dalloliogm
Rajstennaj, what you have written is correct.

I don't believe it is very useful to look at the k-mer distribution; it is better to concentrate on variability at certain positions.
0
null
Thu May 06 2010 15:56:24 GMT+0200 (heure d’été d’Europe centrale)
https://www.kaggle.com/discussions/questions-and-answers/10
null
/rajstennajbarrabas
A new version is available with some enhancements and a minor bug fix: [Link]:http://www.okianwarrior.com/Enjoys/Kaggle/Images/HIV.zip. A complete description of the changes is included with the package.

BootstrapMethod.pl - This will run TestMethod.pl 50 times with different train and test sets, then calculate the mean MCE. Useful if your method has a random component to it. (Per the [Link]:http://en.wikipedia.org/wiki/Bootstrapping_%28statistics%29 on bootstrapping; a rough sketch of the idea follows below.)

PlotData.pl - This will generate several data files from the training data, which can then be displayed using gnuplot. Included are several gnuplot scripts to get you started viewing the data in interesting ways.

Enjoy, and let me know if you find problems.
0
null
Fri May 07 2010 22:52:31 GMT+0200 (heure d’été d’Europe centrale)
https://www.kaggle.com/discussions/questions-and-answers/12
4th
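The original scripts are Perl; a rough Python equivalent of the repeated train/test evaluation described above, assuming a hypothetical `fit_and_mce(train, test)` that trains your method and returns its misclassification error:

```python
import random
import statistics

def repeated_mce(records, fit_and_mce, n_runs=50, test_frac=0.2, seed=0):
    """Run the method on n_runs random train/test splits and average the MCE."""
    rng = random.Random(seed)
    scores = []
    for _ in range(n_runs):
        shuffled = records[:]
        rng.shuffle(shuffled)
        cut = int(len(shuffled) * test_frac)
        test, train = shuffled[:cut], shuffled[cut:]
        scores.append(fit_and_mce(train, test))
    return statistics.mean(scores), statistics.stdev(scores)
```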
/corygiles
To contribute to The Cause, here are all of the training and test instances translated from DNA to amino acids, and aligned to the same reading frame as the consensus sequences. Also included is some basic proteomics data such as molecular weight, pI, and percentage of helix, turn, and sheet segments, derived from ProtParam ([Link]:http://expasy.org/tools/protparam.html).

You can download the data here: http://dl.dropbox.com/u/3966882/hiv/alignments_and_proteomics.csv

For the non-biologists: a consensus sequence is sort of the "average" or "standard" sequence within a database of different sequences, and variations from this consensus are called polymorphisms. (See the small sketch below.) So in the above file, there are several hyphens within the training/test sequences where the consensus sequence has an amino acid but the training/test sequence has a deletion there.

The consensus sequences for protease and reverse transcriptase are here: http://dl.dropbox.com/u/3966882/hiv/consensus.txt
0
null
Fri May 07 2010 22:52:31 GMT+0200 (heure d’été d’Europe centrale)
https://www.kaggle.com/discussions/questions-and-answers/12
11th
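A consensus sequence can be computed column-by-column from an alignment. A minimal sketch, assuming the sequences are already aligned to equal length:

```python
from collections import Counter

def consensus(aligned_seqs):
    """Most common residue in each column of an alignment ('-' = gap)."""
    columns = zip(*aligned_seqs)
    return "".join(Counter(col).most_common(1)[0][0] for col in columns)

print(consensus(["PQIT", "PQIS", "PQVT"]))  # -> "PQIT"
```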
/rajstennajbarrabas
Cory's proteomics data is now included in the quickstart package. (His distinctiveness was added to the collective.)

A function to read and add the proteomics to the patient data is included, and all sample programs load the new data. BasicStats.pl prints simple statistics based on the proteomics - look there to see how to access the new data. (It's straightforward, though.)

The new version is [Link]:http://www.okianwarrior.com/Enjoys/Kaggle/Images/HIV.zip.
0
null
Fri May 07 2010 22:52:31 GMT+0200 (heure d’été d’Europe centrale)
https://www.kaggle.com/discussions/questions-and-answers/12
4th
/jstreet
The site for the quickstart package seems to be down.
0
null
Fri May 07 2010 22:52:31 GMT+0200 (heure d’été d’Europe centrale)
https://www.kaggle.com/discussions/questions-and-answers/12
98th
/benholloway
Working for me now
0
null
Fri May 07 2010 22:52:31 GMT+0200 (heure d’été d’Europe centrale)
https://www.kaggle.com/discussions/questions-and-answers/12
null
/rajstennajbarrabas
That's my home server - I turn it off at night [EDT] sometimes. If it doesn't work, try again 12 hours later.
0
null
Fri May 07 2010 22:52:31 GMT+0200 (heure d’été d’Europe centrale)
https://www.kaggle.com/discussions/questions-and-answers/12
4th
/jstreet
Working fine for me as well now. Thanks.
0
null
Fri May 07 2010 22:52:31 GMT+0200 (heure d’été d’Europe centrale)
https://www.kaggle.com/discussions/questions-and-answers/12
98th
/del=91cf5c8be09f685f
Thanks Raj for the quickstart package and Cory for the proteomics data.I noticed for Cory's proteomics data two entries are missing just thought I would add them in ifthey have not been mentioned already.Training data Patient id 659 PR sequenceCCTCAGATCACTCTTTGGCAACGACCCGTCGTCACAGTAAAGATAGGGGGGCAACTAAAGGAAGCTCTATTAGATACAGGAGCAGATGAYACAGTATTAGAAGACATGAATTTGCCAGGAAGATGGAAACCAAAAATGATAGGGGGAATTGGAGGTTTTGTCAAAGTAAGACAGTATGATCAGGTACCTATAGAATTTTGTGGACGTAAAACTATGGGTACAGTATTAGTAGGACCTACACCTGTCAACGTAATTGGAAGRAATCTGTTGACTCAGATTGGGTGCACTTTAAATTTTTranslationPQITLWQRPVVTVKIGGQLKEALLDTGADXTVLEDMNLPGRWKPKMIGGIGGFVKVRQYDQVPIEFCGRKTMGTVLVGPTPVNVIGXNLLTQIGCTLNFTest data Patient id 674 RT sequenceCCCATTAGTCCTATTGAAACTGTRCCAGTAAAATTAAAGCCAGGAATGGATGGCCCAAGAGTTAAACAATGGCCATTGACAGAAGAAAAAATAAAAGCATTAGTAGAAATTTGTACAGAAATGGAAAAGGAAGGAAAAATTTCAAAAATTGGGCCTGAAAATCCATACAATACYCCAGTATTTGCCATAAAGAAAAAGGACAGTTCYANNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNTTAGATAAAGACTTCAGAAAGTATRCTGCATTCACCATACCTAGTGTGAACAATGAGACACCAGGGATTAGATATCAGTATAATGTGCTTCCACAGGGATGGAAAGGATCACCAGCAATATTCCAAAGTAGCATGACAAAAATCCTAGAGCCTTTTAGAAAACAAAATCCAGACATAGTTATCTATCAATACATGGATGATTTGTATGTAGGATCTGACTTAGAAATAGGGCAGCATAGAACAAAAATAGAGGAACTGAGAGATCATCTATTGAAGTGGGGACTTTACACACCAGACMAAAAACAYCAGAAAGAACCTCCATTCCTTTGGATGGGTTATGAACTCCATCCTGATAAATGGACAGTACAGCCTATAGTGCTGCCAGAAAAAGACAGCTGGACTGTCAATGACATACAGAAGTTAGTGGGAAAATTGAATTGGGCAAGTCAGATATATCCAGGGATTAAAGTAAGGCAATTATGTAAACTCCTTAGGGGAACCAAAGCACTAACAGAAGTAGTACCATTAACAGAAGAAGCAGAGCTAGAACTGGCAGAAAACAGGGAGATTYTAAAAGAACCAGTACATGGAGTGTATTATGACCCAACAAAAGACTTAATAGCAGAAATACAGAAACAGGGGCTAGGCCAATGGACATATCAAATTTATCAAGAACCATTTAAAAATCTGAAAACAGGAAAGTATGCAARAATGAGGRGTGCCCACACTAATGATGTAAARCAACTAACAGAGGYGGTRCAAAAAATAGCCACAGAAAGCATAGTAACATGGGGAAAGACTCCTAAAYTTAAATTACCCATACAGAAAGAAACATGGGAGGCATGGTGGACAGAGTATTGGCARGCCACCTGGATTCCTGARTGGGAGTTTGTCAATACCCCTCCCTTAGTGAAATTATGGTACCAGTTAGAGAAAGAACCYATAGTAGGAGCAGAAACTTTCTATGTAGATGGGGCAGCTAATAGGGAAACTAAATTAGGAAAAGCAGGATATGTTACTGACAGAGGAAGACAAAAAGTTGTCTCCCTAACGGACACAACAAATCAGAAGACTGAGTTACAAGCAATTAATCTAGCTTTNTranslationPISPIETXPVKLKPGMDGPRVKQWPLTEEKIKALVEICTEMEKEGKISKIGPENPYNXPVFAIKKKDSXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXLDKDFRKYXAFTIPSVNNETPGIRYQYNVLPQGWKGSPAIFQSSMTKILEPFRKQNPDIVIYQYMDDLYVGSDLEIGQHRTKIEELRDHLLKWGLYTPDXKXQKEPPFLWMGYELHPDKWTVQPIVLPEKDSWTVNDIQKLVGKLNWASQIYPGIKVRQLCKLLRGTKALTEVVPLTEEAELELAENREIXKEPVHGVYYDPTKDLIAEIQKQGLGQWTYQIYQEPFKNLKTGKYAXMRXAHTNDVXQLTEXXQKIATESIVTWGKTPKXKLPIQKETWEAWWTEYWXATWIPXWEFVNTPPLVKLWYQLEKEXIVGAETFYVDGAANRETKLGKAGYVTDRGRQKVVSLTDTTNQKTELQAINLAXThanks,Jack
0
null
Fri May 07 2010 22:52:31 GMT+0200 (heure d’été d’Europe centrale)
https://www.kaggle.com/discussions/questions-and-answers/12
85th
/antgoldbloom
More research... enjoy

Love thy Neighbor, Love thy Kin: Voting Biases in the Eurovision Song Contest [Link]:http://papers.econ.ucy.ac.cy/RePEc/papers/1-2006.pdf

Geography, culture, and religion: Explaining the bias in Eurovision song contest voting [Link]:http://wwwhome.math.utwente.nl/%7Ewwwtw/publications/2006/1794.pdf

A Hybrid System Approach to Determine the Ranking of a Debutant Country in Eurovision [Link]:http://wwwhome.math.utwente.nl/%7Ewwwtw/publications/2006/1794.pdf

Googling Eurovision [Link]:http://www.eurovisionamerica.com/wp-content/uploads/paper1.pdf

Expert Judgment Versus Public Opinion – Evidence from the Eurovision Song Contest [Link]:http://www.eco.rug.nl/%7Ehaanma/esc.pdf
0
null
Wed May 12 2010 06:39:30 GMT+0200 (heure d’été d’Europe centrale)
https://www.kaggle.com/discussions/questions-and-answers/16
null
/rajstennajbarrabas
Some parts of the sequence are highly conserved - they cannot change much (or occasionally, at all) without modifying the function of the protein.

Look at the same position in all the other sequences. If they all have the same codon, or overwhelmingly most have the same codon, then it's likely that the stop codon is a data error and you can assume it's the overwhelmingly likely case.

If the codon is in a position which varies widely across all the other samples, then it's in a position which is *not* highly conserved, which means that it's unlikely to matter.

Also, if you suspect an error in transcription and have more than one possible solution (perhaps all the other sequences have one of two possibilities in that position), you can look at all possible cases of what the codon might be, and then look at the shapes and sizes of the corresponding amino acids.

For example, suppose you have an error and the possible replacements are Threonine or Tryptophan. If the corresponding codons in the other samples are all Alanine and Serine, then Threonine is the best guess. (Tryptophan is big and bulky; the others are small and similar.)

Note that in all this, you are finding the most likely answer, not the correct answer.

Any biologists who note an error in the previous, please post a correction.
0
null
Tue May 18 2010 18:53:05 GMT+0200 (heure d’été d’Europe centrale)
https://www.kaggle.com/discussions/questions-and-answers/18
4th
/rajstennajbarrabas
Also of note: if you read Sébastien's paper (from the post about string kernels), they specifically discounted samples that had coding errors. (A one-line filter for this convention follows below.)

From his paper: "Sequences containing #, $ or * were eliminated from the dataset. The signification of these symbols was reported by Brian Foley of Los Alamos National Laboratory (personal communication). The # character indicates that the codon could not be translated, either because it had a gap character in it (a frame-shifting deletion in the virus RNA), or an ambiguity code (such as R for purine). The $ and * symbols represent a stop codon in the RNA sequence. TAA, TGA or TAG are stop codons."

http://www.retrovirology.com/content/5/1/110
0
null
Tue May 18 2010 18:53:05 GMT+0200 (heure d’été d’Europe centrale)
https://www.kaggle.com/discussions/questions-and-answers/18
4th
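If you want to follow the same convention, a minimal Python filter for translated sequences carrying those marker symbols (the symbols are taken from the paper quoted above; the example sequences are hypothetical):

```python
translated_seqs = ["PQIT", "PQ#T", "PQI*"]  # hypothetical examples

# Drop any translated sequence containing the '#', '$' or '*' markers.
clean = [seq for seq in translated_seqs if not set("#$*") & set(seq)]
print(clean)  # -> ['PQIT']
```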
/rajstennajbarrabas
On the subject of ambiguous data, here are some of my thoughts.

First of all, the sequences have to be aligned. Once aligned, vast numbers of columns will be highly conserved vertically.

Given that, we can talk about a particular column within all sequences, with a column being a 3-nucleotide codon. Some sequences are longer/shorter, so some sequences will have blanks at a particular column.

A codon is a triplet of (one of four) nucleotides, making a total of 64 possible codons. These code for 20 amino acids, so there is some redundancy. A particular amino acid is likely to be encoded by more than one codon; some have six codons.

A first pass might consider duplicate codings as the same. Since they encode the same amino acid, both encodings will generate chemically identical results. One could go through the data and replace all synonyms with some chosen base coding. This will eliminate some of the variation.

Next, consider a particular column. If the column has the same codon in both data sets, it has no predictive power, so it can be eliminated - cut from the sequence. This will shorten the string and make certain computations easier (such as string kernels). (A sketch of both passes follows below.)

I've got about 2 more pages of thoughts on the matter. Anyone else want to comment?
0
null
Tue May 18 2010 18:53:05 GMT+0200 (heure d’été d’Europe centrale)
https://www.kaggle.com/discussions/questions-and-answers/18
4th
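A minimal sketch of the two passes described above, rebuilding the standard codon table and assuming codon-aligned, equal-length sequences:

```python
from itertools import product

BASES = "TCAG"
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {"".join(c): a for c, a in zip(product(BASES, repeat=3), AA)}

def codons(seq):
    return [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)]

def collapse_synonyms(seq):
    """Pass 1: replace each codon by its amino acid, merging synonymous codons."""
    return "".join(CODON_TABLE.get(c, "X") for c in codons(seq))

def drop_invariant_columns(seqs):
    """Pass 2: remove columns that are identical across all sequences."""
    keep = [i for i, col in enumerate(zip(*seqs)) if len(set(col)) > 1]
    return ["".join(s[i] for i in keep) for s in seqs]

aa_seqs = [collapse_synonyms(s) for s in ["CCTCAAATC", "CCACAAATA", "CCTCAGATC"]]
print(aa_seqs)                                        # -> ['PQI', 'PQI', 'PQI']
print(drop_invariant_columns(["PQI", "PQV", "PQI"]))  # -> ['I', 'V', 'I']
```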
/dalloliogm
No, I have translated the sequences with the tool transeq from EMBOSS, and I have found some stop codons in the middle of some sequences.

There is not a specific strand for which all the sequences are completely coding, but with strand 1 you can translate all the sequences except a few.

The sequences of the PR protein for individuals 51, 188, 612, 665 and 785 contain a stop codon in the middle. I was asking what the best approach is to handle these cases. Should I consider that the genotype of these sequences is null for all the nucleotides after the stop codon, or should I keep them, ignoring the stop?

@ [Link]:../../../../../Rajstennaj-Barrabas/Profile : thank you for your feedback, but please, let's try to keep the discussion on the stop codons here, and let's open new discussions for other topics in this forum.
0
null
Tue May 18 2010 18:53:05 GMT+0200 (heure d’été d’Europe centrale)
https://www.kaggle.com/discussions/questions-and-answers/18
null
/rajstennajbarrabas
I'm sorry if my post seemed off-topic. Let me try an example.

[Link]:http://www.okianwarrior.com/Enjoys/Kaggle/Images/RTrans.html is a list of RTrans sequences for all individuals in the study. Scroll down to patient 408. (I've included an excerpt below.)

The stop codon for patient 408 is at position 12 (shown in red) in the sequence. Looking at that position in all patients, I note that the vast majority of them seem to be AAG. Counting the codons in that position results in the following (a code sketch of this census follows below):

AAA: 71
AAC: 2
AAG: 1451
AAR: 32
AAS: 2
ARG: 3
MAG: 1
TAG: 1 <- Stop codon
TTA: 1
WAG: 1
---: 127 <- No codon

Most of these are AAG analogues. For example, the "R" in AAR above represents A or G, so those 32 entries could reasonably be AAG as well. (And of course, AAA and AAG are synonyms for the same amino acid.)

My conclusion: given the values in the other samples, and knowing that a stop codon won't appear in the middle, it's reasonable to assume that the stop codon is a data error and that the most likely correct value is AAG.

The second thing to note is that this end of the sequence is "ragged" among all the patients. Many of the sequences begin after this position - they have no codon here. This would imply that this position in the sequence is not especially critical to the function of the protein, which gives us circumstantial evidence to believe that changing the TAG to AAG is OK, because it won't matter much.

An excerpt from the (very long) HTML file mentioned above:
0
null
Tue May 18 2010 18:53:05 GMT+0200 (heure d’été d’Europe centrale)
https://www.kaggle.com/discussions/questions-and-answers/18
4th
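A sketch of the codon census described above, assuming codon-aligned sequences (with "---" standing in for a missing codon, as in the table; the example sequences are hypothetical):

```python
from collections import Counter

def codon_census(seqs, position):
    """Count the codons appearing at one aligned codon position (0-based)."""
    start = position * 3
    counts = Counter(s[start:start + 3] if len(s) >= start + 3 else "---"
                     for s in seqs)
    return counts.most_common()

seqs = ["CCTAAG", "CCTAAA", "CCTTAG", "CCT"]  # hypothetical aligned sequences
print(codon_census(seqs, 1))  # -> [('AAG', 1), ('AAA', 1), ('TAG', 1), ('---', 1)]
```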
/rajstennajbarrabas
Here's a question for you. I found 9 stop codons among 7 patients in the Reverse Transcriptase sequence, but none in the Protease sequence. Your post notes fewer stop codons, in the Protease sequence?

The IDs seem to match yours - am I using the wrong data? Are RT and PR reversed in the data files?

--Train Data--
51 : RT( 10) = TAA
188: RT( 37) = TGA
612: RT(203) = TAA
665: RT(288) = TAA
665: RT(291) = TAA
665: RT(294) = TAA
785: RT( 27) = TAA

--Test Data--
408: RT( 12) = TAG
437: RT(209) = TAG
0
null
Tue May 18 2010 18:53:05 GMT+0200 (heure d’été d’Europe centrale)
https://www.kaggle.com/discussions/questions-and-answers/18
4th
/dalloliogm
Hi, sorry for the delay in answering.

Don't worry, we have the same results: there are three stop codons in sequence 665, toward the end of the sequence, and I didn't calculate anything on the test data. So in total I have 5 sequences with premature stop codons in the training data, just like you.
0
null
Tue May 18 2010 18:53:05 GMT+0200 (heure d’été d’Europe centrale)
https://www.kaggle.com/discussions/questions-and-answers/18
null
/dalloliogm
OK, your idea to treat those cases as sequencing errors is nice, but at least in 665 they are probably not errors, since there are three stop codons in close positions. Let me think about it.
0
null
Tue May 18 2010 18:53:05 GMT+0200 (heure d’été d’Europe centrale)
https://www.kaggle.com/discussions/questions-and-answers/18
null
/rajstennajbarrabas
For the case of patient 665, note that the length of the RT sequence is not a strict multiple of three. Examining the alignment of the tail end of the sequence against the other sequences, I note that it would line up very well and eliminate the stop codons if an extra nucleotide were inserted.

Comparing against other sequences, I edited my input data as follows:

AAAGTAAAGSATTATGTAAACTCRTTAGGGGAACCAAAGCACTAACAGAAGTAATACCATTAACA",5,78
AAAATAAGGCAATTATGTAAACTCCTTAGGGGAGCCAAAGCATTAACAGAAGTAATACAGTTAACGAAAGAAGCAGAG",3.3,447

became:

AAAGTAAAGSAATTATGTAAACTCRTTAGGGGAACCAAAGCACTAACAGAAGTAATACCATTAACA",5,78
AAAATAAGGCAATTATGTAAACTCCTTAGGGGAGCCAAAGCATTAACAGAAGTAATACAGTTAACGAAAGAAGCAGAG",3.3,447

That's just my take. Also, this is at the ragged-end section which is not strongly conserved, so chances are good that any changes I make are not important.

What are your thoughts on this?
0
null
Tue May 18 2010 18:53:05 GMT+0200 (heure d’été d’Europe centrale)
https://www.kaggle.com/discussions/questions-and-answers/18
4th
/dalloliogm
The problem is that a deletion of 1 base is also a possibility in nature, and given the fact that we are talking about HIV, it wouldn't be so strange.

The description of the data in this competition doesn't say anything about the quality of the sequences, and I am not sure whether we can argue that there are errors in there. I thought we could assume that the sequences are right, especially given the fact that this is not a real-data problem. From another point of view, the only thing we know is that HIV is highly variable and accumulates a lot of mutations, and in the case of 665 the deletion is toward the end of the sequence and likely to have no consequences on the protein structure.
0
null
Tue May 18 2010 18:53:05 GMT+0200 (heure d’été d’Europe centrale)
https://www.kaggle.com/discussions/questions-and-answers/18
null
/judowill
I just wanted to chime in here about the "stop codon" issue and the discussion of sequencing errors.

These are sequences from real patients (not simulated sequences). There is an issue with possible sequencing errors, but this is an unlikely explanation for finding these stop codons. Sequencing errors usually result in ambiguous characters like 'N', 'Y', etc.

In my research I tend to artificially remove these sequences since they are difficult to interpret. It's unclear whether this is actually a mis-sense mutation, a sequencing error, or a poor sampling of the "quasispecies" ([Link]:http://gateway.nlm.nih.gov/MeetingAbstracts/ma?f=102178611.html). I included them in the dataset in case anyone had a brilliant idea on how to deal with them.

Hope that helps,
Will Dampier
0
null
Tue May 18 2010 18:53:05 GMT+0200 (heure d’été d’Europe centrale)
https://www.kaggle.com/discussions/questions-and-answers/18
null
/syzygy
Is the correct interpretation of HIV variability and the quasispecies issue as follows?

With respect to the "validity" of individual nucleotides/codons: since the HIV virus generates many variants in a single infected patient in the course of one day, it would be correct to view the single sequence associated with each patient as a sample from an ever-changing population of virus within that patient.

If that is true, these independent-variable data could be viewed as a sample of the patient's viral population, or as possessing errors-in-variables. As such, I'm tempted to try not to discard the entire data point. Rather, it would seem to me that a predictive model that recognizes the viral plasticity would be preferred.

On a side note, Will's quasispecies reference states "that the molecular biologist will be able to provide a molecular description of HIV induced disease seems remote" - has that view changed over the intervening 20 years?

Thanks in advance for educating a non-biologist.
0
null
Tue May 18 2010 18:53:05 GMT+0200 (heure d’été d’Europe centrale)
https://www.kaggle.com/discussions/questions-and-answers/18
null
/rajstennajbarrabas
In the old days, humans did sequencing by looking at (images of) spots on gels. My guess is that the contest data could be of the older sort and prone to human data-entry errors. (There are only 12 of these in the entire dataset, and all of them are in sequence areas which are unlikely to matter.)

As far as ambiguous codons go, my take is that since the genome is so plastic, any sample will necessarily contain multiple genotypes. In that model, an ambiguous codon might represent both species at once. For example (a sketch of expanding such codons follows below):

ARA <- AAA Lysine
       AGA Arginine

Both genotypes in the original sample in roughly equal numbers would cause this type of ambiguity. A good correlation method should give weight to both possibilities.

Any biologists want to confirm or deny?
0
null
Tue May 18 2010 18:53:05 GMT+0200 (heure d’été d’Europe centrale)
https://www.kaggle.com/discussions/questions-and-answers/18
4th
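A sketch of expanding an ambiguous codon into the concrete codons it could represent, using the standard IUPAC nucleotide ambiguity codes (a minimal illustration; how to weight the possibilities is left to the model):

```python
from itertools import product

# Standard IUPAC ambiguity codes mapped to the bases they stand for.
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "AG", "Y": "CT", "S": "CG", "W": "AT", "K": "GT", "M": "AC",
         "B": "CGT", "D": "AGT", "H": "ACT", "V": "ACG", "N": "ACGT"}

def expand(codon):
    """All concrete codons an ambiguous codon could represent."""
    return ["".join(p) for p in product(*(IUPAC[base] for base in codon))]

print(expand("ARA"))  # -> ['AAA', 'AGA']  (Lysine or Arginine)
```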
/dalloliogm
[Link]:http://www.csie.ntu.edu.tw/%7Ecjlin/libsvm/ (LIBSVM), with interfaces to Python and other languages. (A usage sketch follows below.)
0
null
Mon May 24 2010 11:23:11 GMT+0200 (heure d’été d’Europe centrale)
https://www.kaggle.com/discussions/questions-and-answers/19
null
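For illustration, a minimal SVM classification sketch in Python via scikit-learn's `SVC`, which wraps LIBSVM internally (an assumption on my part; the post only links LIBSVM itself). The feature matrix here is a hypothetical stand-in for whatever sequence encoding you choose:

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical toy features: rows = patients, columns = encoded sequence features.
X = np.array([[0, 1], [1, 1], [0, 0], [1, 0]])
y = np.array([1, 1, 0, 0])  # 1 = responder, 0 = non-responder (made up)

clf = SVC(kernel="rbf", C=1.0)  # RBF kernel, the same family LIBSVM defaults to
clf.fit(X, y)
print(clf.predict([[0, 1]]))
```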
/jstreet
I also prefer using Python. I haven't used any of the following (yet), but they might give you some additional options to look into:

[Link]:http://pybrain.org/
[Link]:https://mlpy.fbk.eu/
[Link]:http://pyml.sourceforge.net/

You're probably already aware of it, but there is also [Link]:http://biopython.org/wiki/Main_Page which, while not a machine learning library, should be useful.
0
null
Mon May 24 2010 11:23:11 GMT+0200 (heure d’été d’Europe centrale)
https://www.kaggle.com/discussions/questions-and-answers/19
98th
/jstreet
You might want to send this to the developers directly using the 'contact us' at the bottom right of the screen as well. I contacted them a couple of hours ago about a small bug and got a reply back saying it had been fixed minutes later. I doubt they can fix all bugs that quickly, and feature requests will also take longer, but they're definitely responsive to feedback.
0
null
Mon May 24 2010 17:19:35 GMT+0200 (heure d’été d’Europe centrale)
https://www.kaggle.com/discussions/questions-and-answers/20
98th
/rajstennajbarrabas
I'll second that. Anthony, the Kaggle person who deals with site feedback, is very accessible and open to suggestions. And he doesn't get angry or put out even if your complaints are snarky. :-) (I know this because I've sent 20 e-mails over the last month suggesting improvements and pointing out things.)

You can contact him from the "ask us directly" link under the help page.
0
null
Mon May 24 2010 17:19:35 GMT+0200 (heure d’été d’Europe centrale)
https://www.kaggle.com/discussions/questions-and-answers/20
4th
/dalloliogm
Thank you for answering me. I will send them an email when I have time, but I also like it when feedback is visible to everyone.

Do you agree that it may not be convenient for someone to collaborate in the forum, and that collaboration should be encouraged more?
0
null
Mon May 24 2010 17:19:35 GMT+0200 (heure d’été d’Europe centrale)
https://www.kaggle.com/discussions/questions-and-answers/20
null
/judowill
Giovanni,

I've been trying to stimulate discussion about techniques as much as possible; however, I seem to be shouting into the dark, as you can see from the empty forum threads on "Technique discussion". I was envisioning that people would have public repos that others commented on, modified, etc., but alas. Apart from a mention of "String Kernels", which have yet to make an appearance on the leaderboard ;), and a quickstart package made by Rajstennaj, there hasn't been much discussion.

It seems people are willing to discuss questions about the data, since that's helpful to everyone, but exact implementations are lacking. Maybe this post will encourage some people :)

Will
0
null
Mon May 24 2010 17:19:35 GMT+0200 (heure d’été d’Europe centrale)
https://www.kaggle.com/discussions/questions-and-answers/20
null
/antgoldbloom
Giovanni,

Thanks for your feedback. Using the forum to give feedback is a good idea; it allows others to see and comment on suggestions. We might set up a proper feedback forum, but for the moment this topic will have to suffice. I also agree that the forum is a bit clunky. However, we have a large list of feature requests and only limited resources for the moment - it might take us some time to address this. Apologies.

I don't think the prize money in this competition is that relevant (the prize is relatively small). Correct me if I'm wrong, but I think contestants are driven by intrinsic factors. A "karma" system that rewards forum posts is a good idea. Again, apologies for any delay in implementing this; there are lots of features on our "to do" list.

Anthony
0
null
Mon May 24 2010 17:19:35 GMT+0200 (heure d’été d’Europe centrale)
https://www.kaggle.com/discussions/questions-and-answers/20
null
/rajstennajbarrabas
Any public collaboration would reduce a team's chance of winning the contest.

Presumably, a solution requires discovering a set of features with predictive value. Also presumably, these are hard to find, so it's likely that any one team will only find a subset of all predictive features. A team will get no benefit from making a feature publicly known, and doing so risks making another team's score better (if the other team was unaware of the feature). This is a game-theoretic result: the Nash equilibrium is for no team to make features publicly known.

On the other hand, there is some incentive for teams to collaborate privately. Two teams which are #2 and #3 on the leaderboard could connect in private and agree to share their findings. If they agree to split the prize, then they increase their chances of getting a 50% reward, which is better than their individual 0% chance of getting the entire reward. (This will be true for any set of teams which does not include first place.)

Collaboration itself takes time and effort, and it's unclear to me whether $250 is worth the trouble. Most people will probably just lose interest rather than make a concerted effort to win. If you want people to collaborate, then you should set up the system goals to encourage it. Perhaps a prize for the most prolific or best collaboration effort, or something similar.

Note that there is an incentive for the winning team to tell you all the features they discovered, but no incentive for 2nd or 3rd place or any of the others. If your goal is to discover new features for science, the contest setup is not optimal.

That being said, the flip side is to consider the goals from the point of view of the entrants. I imagine that most people have entered the contest with the single goal of winning. There's nothing wrong with this, but note that with 28 entrants on the leaderboard (currently), there is a strong likelihood that any individual team will not win. Many of these teams haven't made an entry in the last week; some only made one entry. If the only goal is to win the contest, most teams will quickly conclude that they won't be the winner, or that the payoff is not worth the effort, and so on. I expect many teams will eventually drop out.

On the other hand, if you have goals which can be met by *entering* the contest - if your goals can be met in the process and not in the destination - then you will most likely see it through to the end. I'm in the latter category: I had a number of goals which could be met by just entering the contest, plus one goal to win. (Hence why I'm outspoken in the forum.)
0
null
Mon May 24 2010 17:19:35 GMT+0200 (heure d’été d’Europe centrale)
https://www.kaggle.com/discussions/questions-and-answers/20
4th
/maverick
Guys, while I am a newbie on the site, it feels like the site is extremely slow. Not sure what kind of servers/network you are on, but you should definitely look at improving the response times.
0
null
Mon May 24 2010 17:19:35 GMT+0200 (heure d’été d’Europe centrale)
https://www.kaggle.com/discussions/questions-and-answers/20
null
/antgoldbloom
Manish, thanks for the feedback. The site is hosted on an Amazon EC2 server on the east coast of America. It's a fast server, but the site has been more popular than we expected. We're currently working on speeding up the site by reducing the number of database queries. We may have to implement auto-scaling if the site keeps growing so rapidly.

Anthony
0
null
Mon May 24 2010 17:19:35 GMT+0200 (heure d’été d’Europe centrale)
https://www.kaggle.com/discussions/questions-and-answers/20
null
/antgoldbloom
Just made a change which should speed things up. Let me know if it has made a difference for you.
0
null
Mon May 24 2010 17:19:35 GMT+0200 (heure d’été d’Europe centrale)
https://www.kaggle.com/discussions/questions-and-answers/20
null
/jstreet
> The site is hosted on an Amazon EC2 server on the east coast of America. It's a fast server but the site has been more popular than we expected.

As problems go, that's a good one to have. Congratulations.

> We're currently working on speeding up the site by reducing the number of database queries. We may have to implement auto scaling if the site keeps growing so rapidly.

Most of the pages on the site are fairly static, so caching those database queries should make a massive difference. The forum is the most dynamic location, and even there you're getting 10 to 100 reads for every write.

Would it not be quite difficult to dynamically scale the database? You would need to start the new database, copy across the complete database from the master to the slave, and then re-route the database queries. Given the fickle nature of visitors from social media sites (where I guess most of your spikes in traffic originate), auto-scaling could be useful for the Apache servers once as many of the database reads as possible are cached.

Speaking of Apache, a common approach to squeezing a bit more performance out of a web server is to stick nginx in front of Apache to serve the static content and act as a reverse proxy. Have you considered this? You could also try serving your static resources with S3 or CloudFront. The bandwidth charges appear to be the same as EC2 (although the Asian CloudFront edge locations are more expensive) and it would relieve the pressure on your servers, particularly with social media spikes when your visitors will have unprimed caches.
0
null
Mon May 24 2010 17:19:35 GMT+0200 (heure d’été d’Europe centrale)
https://www.kaggle.com/discussions/questions-and-answers/20
98th
/antgoldbloom
Jonathan, thanks for your feedback (x2). We're currently working on caching database queries. There are a lot of good suggestions here that we'll try before autoscaling.
0
null
Mon May 24 2010 17:19:35 GMT+0200 (heure d’été d’Europe centrale)
https://www.kaggle.com/discussions/questions-and-answers/20
null
/jstreet
No worries. Semi-intelligent sounding suggestions are easy. It's actually implementing them which is the hard bit. Good luck!
0
null
Mon May 24 2010 17:19:35 GMT+0200 (heure d’été d’Europe centrale)
https://www.kaggle.com/discussions/questions-and-answers/20
98th
/rajstennajbarrabas
This is an excellent resource, thank you.

I am wondering, though: is this within the scope of the competition? Are we allowed to use it? To win the competition, a prediction method would have to justify the weight given to each component of the decision process. Are we allowed to say "this piece is weighted highly because of that external data"?

Cory's proteomics data appears to be information calculated from the given sequences - molecular weight, for instance. That's probably OK for the contest. Fontanelles disqualified themselves by having specialized information.

Will - could you post a reply clarifying the issue?
0
null
Mon May 24 2010 17:21:18 GMT+0200 (heure d’été d’Europe centrale)
https://www.kaggle.com/discussions/questions-and-answers/21
4th
/judowill
I'm perfectly happy with using outside information like known HIV-1 resistance mutations, functional annotations or anything else you can think of.
0
null
Mon May 24 2010 17:21:18 GMT+0200 (heure d’été d’Europe centrale)
https://www.kaggle.com/discussions/questions-and-answers/21
null
/rajstennajbarrabas
I am dismayed by Will's response.

This is no longer "no knowledge of biology is necessary to succeed in this competition"; the results will be dominated by companies and experts which have gleaned information from patients outside of the dataset. For example, in the database cited: 63568 RT sequences, 63842 Protease sequences. This database has many patients outside of the contest dataset, and experts have been poring over it, making their conclusions publicly available. (As for example [Link]:http://wanglab.ucsd.edu/html/publications/HIV_drug_resistance_PNAS.pdf.)

I'd like to make my own conclusions from the data. That's what the contest is about. This seems at odds with the statement of the contest: "This contest requires competitors to predict the likelihood that an HIV patient's infection will become less severe, given a small dataset and limited clinical information."
0
null
Mon May 24 2010 17:21:18 GMT+0200 (heure d’été d’Europe centrale)
https://www.kaggle.com/discussions/questions-and-answers/21
4th
/dalloliogm
Hi Rajstennaj, I understand you, but consider that this is the common problem faced by bioinformaticians every day. To do bioinformatics, you have to know both biology and computer science, otherwise it is very difficult to obtain useful results. This is the reason why I came to this forum to look for help: a good scientist knows that big problems cannot be solved by a single mind; you have to interact with people with different skills if you want to obtain real results.

You can also approach this competition without knowing anything of biology. I think it will be very interesting to see if programs written without a priori knowledge of the problem will perform better than those that make use of this information. The information stored in that database is derived from observations made with respect to certain HIV therapies, and it is not certain that it will be applicable to the therapy studied in this competition.
0
null
Mon May 24 2010 17:21:18 GMT+0200 (heure d’été d’Europe centrale)
https://www.kaggle.com/discussions/questions-and-answers/21
null
/judowill
Rajstennaj,

I don't see the distinction between knowing the translation matrix of nucleotides to amino acids and finding a database which implies that specific regions are more important than others. The Stanford database (and their automated annotation webtool) referenced above is in the top Google results when you search for "HIV therapy prediction techniques". I imagined people would stumble upon this website (or any other that could be found from a simple Google search) and use the results as just another featureset in their prediction methods. I would be surprised by any machine-learning researcher who didn't do even a general survey of current techniques, available datasets, and data transformation and normalization methods that apply to the field.

The other reason I'm not worried about the knowledge of mutation regions is that the techniques that solely use this data barely reach 65% accuracy on this dataset. By reducing the information in the sequence to a vector of ~20 binary calls (most of which have negligible correlation with the response variable) you will ultimately have difficulty fitting a model ... trust me, I've tried.
0
null
Mon May 24 2010 17:21:18 GMT+0200 (heure d’été d’Europe centrale)
https://www.kaggle.com/discussions/questions-and-answers/21
null
/dalloliogm
Anyway, I take the opportunity to say that the link given to the lanl.gov database in the Background section of this competition is wrong. The right link should be http://www.hiv.lanl.gov/content/sequence/HIV/mainpage.html, and once you are on that site, be sure to look at the Sequence Compendium: http://www.hiv.lanl.gov/content/sequence/HIV/COMPENDIUM/compendium.html

I have already contacted the authors of the lanl.gov database and they told me that it is no longer maintained. It is better to use Stanford's: http://hivdb.stanford.edu/

I will tell the maintainers of the lanl.gov database about this website; let's see if they will come to this forum.
0
null
Mon May 24 2010 17:21:18 GMT+0200 (heure d’été d’Europe centrale)
https://www.kaggle.com/discussions/questions-and-answers/21
null
End of preview.

This dataset contains 10,000 Kaggle posts and 60,000 comments on those posts, all from the question-answer topic.

Data Fields

kaggle_post

  1. 'pseudo', The question author's username.
  2. 'title', The title of the post.
  3. 'question', The question's body.
  4. 'vote', The number of votes on the post (voting on Kaggle is similar to liking).
  5. 'medal', The author's medal under the Kaggle progression system (https://www.kaggle.com/progression), which awards medals to users based on their performance.
  6. 'nbr_comment', The number of comments on the post.
  7. 'date', The post date.
  8. 'url_post', The post URL; use it to link posts with the comment dataset.
  9. 'url_competition', If the question is related to a competition, the competition URL.
  10. 'rank_competition', The author's rank in that competition.

kaggle_comment

  1. 'pseudo_com', The answer author's username.
  2. 'answer', The answer's body.
  3. 'vote_com', The answer's number of votes (likes).
  4. 'medal_com', The answerer's medal under the Kaggle progression system (https://www.kaggle.com/progression), which awards medals to users based on their performance.
  5. 'date_com', The answer date.
  6. 'url_post', The URL of the post the answer belongs to; use it to join with the post dataset (see the sketch after this list).
  7. 'rank_competition', The answer author's rank in the competition.
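A minimal pandas sketch of joining the two files on 'url_post' (the post file name comes from the error message above; the comment file name is an assumption, so adjust both to your local paths):

```python
import pandas as pd

posts = pd.read_csv("kaggle_post.csv")        # post file named in the cast error above
comments = pd.read_csv("kaggle_comment.csv")  # hypothetical name for the comment file

# Each comment row carries the URL of its parent post, so a left join
# attaches the post's title and question to every comment.
merged = comments.merge(
    posts[["url_post", "title", "question"]], on="url_post", how="left")
print(merged.head())
```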

Data scraped by Mathieu Duverne in August 2023.
