Hi, I think this is due to bad data entry; the "Rio"s in the dataset have different meanings. You can check the Wikipedia article on the states of Brazil. The following entries need attention:
- Rio = Rio de Janeiro
- Rio = Rio Grande do Norte
- Rio = Rio Grande do Sul
- Mato Grosso = Mato Grosso
- Mato Grosso = Mato Grosso do Sul
- Paraiba = Paraíba
- Paraiba = Paraná
That makes sense! However, how might we determine which Rio is which?
Hi, I believe there is a bug with the completion checking feature. I have filed a report for Functions and Getting Help in this thread, which has gotten an official Kaggle response. As the course completion feature was recently rolled out, the team is in the process of making completion checking more accurate. I am also stuck at 38%; there are likely 4 more questions to be checked in exercise 2. Hope this helps, and do chime into the feedback thread above!
Hi, I am facing the same issue for the Data Visualization course. Screenshot attached for reference. I have completed all the exercises, but completion is only shown for the first two modules.
As the notebook starts with "Learn feature engineering and feature selection techniques", I think I'm in the right place. Nice and varied heatmaps and beautiful swarmplots. Thanks, Sahib.
Thanks for the support
I don't understand what you mean. What should we do?
OK, I understand. I don't think anyone is doing that in this competition, because the submission kernel will fail if you don't submit the full data, right? And if you want to submit the private data as zeros, you first need to spend some time figuring out which data is private and which is public (you need to store the public test set first), so it takes effort - and what's the real benefit? Only slightly lower GPU time usage on Kaggle?
If you think of the positions as having some hierarchy, some of these are just being recorded at different levels. I'll try to list them in their hierarchy as best as possible.

Offense
- Offensive Line (OL): These are the guys on the line of scrimmage whose job it is to block the defense. If it is a passing play, they are blocking to give the QB a clean pocket to throw from without getting sacked. On a run play, they are trying to create lanes for the ball carrier to run through cleanly. Except in rare situations, guys on the OL cannot catch passes and generally do not run the ball. Typically there are 5 OL in on a play, although sometimes you might get a Jumbo formation or something that has one or more extra OL (or even DL in rare situations) to help block more.
  - Center (C): The middle of the OL, as the name implies. The center also has the job of snapping the ball to the QB.
  - Guard (G, OG): These are the guys on either side of the Center.
  - Tackle (T, OT): These are the guys on the ends of the offensive line, outside of the Guards.
- Wide Receiver (WR): These are the guys who run out and catch passes from the QB. They'll still be somewhat involved on running plays as downfield blockers.
- Tight End (TE): Somewhat in between a WR and an OL, these guys can go out and catch passes but are much more involved in blocking. They'll typically line up just outside the Tackles, and there are typically 0, 1, or 2 of them in on a play.
- Running Back (RB): These are the guys typically running the ball, although more and more they are also good at catching passes either behind the line of scrimmage or shortly beyond it. They also help protect the QB by being an additional blocker on passing plays sometimes.
  - Halfback (HB): A smaller, faster RB. Typically when people say "Running Back", this is what they mean.
  - Fullback (FB): A bigger, slower RB who typically runs ahead of the HB to help create a better running lane. FBs can be ballcarriers themselves, however. Not all teams have a FB on their roster or use them regularly.
- Quarterback (QB): This is the guy who is in charge of receiving the snap and then either passing it or handing it off for a running play. QBs are sometimes ballcarriers in running plays, either by design or because they can't find anyone open and want to try getting yards by running instead. A frequent planned running play for a QB is a "QB Sneak", where the QB will immediately push through the center of the line, basically falling forward. This usually results in a fairly small distribution of yardage outcomes - usually -1 to 1 yards.

Defense
- Defensive Line (DL): These are the guys trying to get to the quarterback and/or ballcarrier. They are typically some of the biggest guys on the field.
  - Nose Tackle (NT): A defensive lineman who sits on the middle of the line.
  - Defensive Tackle (DT): A defensive lineman who is in the interior of the defensive line.
  - Defensive End (DE): A defensive lineman who is on the outside of the defensive line.
- Linebackers (LB): Linebackers are behind the defensive line. Their job is to help stop the run, help cover intermediate passing routes, and sometimes put pressure on the QB. Their sub-positions are the hardest to define, because they vary widely by the type of defense that team employs.
  - Inside Linebacker (ILB)
  - Middle Linebacker (MLB)
  - Outside Linebacker (OLB)
- Defensive Backs (DB): These guys are at the back of the defense, mostly protecting against passing plays, but they'll come up to stop runners and are often the last defense against a runner who has managed to break through the first few yards of the defense.
  - Cornerback (CB): These are your fastest guys on the field and they'll be covering receivers.
  - Safeties (SAF, S): These are a little bigger and often a little slower than CBs (oftentimes they are former CBs who have lost a step or two), but are still largely responsible for covering receivers. They also often end up being the furthest back from the line of scrimmage - as their name implies, part of their job is being the last line of defense.
    - Free Safety (FS): This safety is typically more involved in pass coverage and being the last line of defense.
    - Strong Safety (SS): This guy will typically line up on the "Strong" side of the play (i.e., the one with an extra TE or OL if there is one) and be more involved in stopping run plays.

Some of the exact uses of each position will vary from play to play and team to team, but hopefully that gives you some idea of what they all mean.
Thank you very much! I'll give it a good read and go watch some more games =)
For a Google company they have had a lot of dead time for me over the last year - though never longer than an hour or so when I could get nothing done. Submissions very often seem to go to sleep - I close my browser and reopen it, and most times my submissions are there. I was doing a submission last month with a teammate watching the leaderboard (we were trying blends) - he saw the submission as soon as it was done, but I had to close and reopen to see it - so part of the issue seems to be browser cache, etc., and maybe not 100% Kaggle's fault.
I am sincerely sorry... It's been a long time since I was this addicted, and I completely forgot to click on the submit button. Crazy indeed, really. I must apologize for that mistake.
Nice kernel! I tried running this in Colab, but I get the following error:
```
TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>()
     40
     41
---> 42 model.compile(optimizer = Adam(learning_rate = LR),
     43               loss = 'binary_crossentropy',
     44               metrics = ['acc', tf.keras.metrics.AUC()])

1 frames
/usr/local/lib/python3.6/dist-packages/keras/optimizers.py in __init__(self, lr, beta_1, beta_2, epsilon, decay, amsgrad, **kwargs)
    455     def __init__(self, lr=0.001, beta_1=0.9, beta_2=0.999,
    456                  epsilon=None, decay=0., amsgrad=False, **kwargs):
--> 457         super(Adam, self).__init__(**kwargs)
    458         with K.name_scope(self.__class__.__name__):
    459             self.iterations = K.variable(0, dtype='int64', name='iterations')

/usr/local/lib/python3.6/dist-packages/keras/optimizers.py in __init__(self, **kwargs)
     77             if k not in allowed_kwargs:
     78                 raise TypeError('Unexpected keyword argument '
---> 79                                 'passed to optimizer: ' + str(k))
     80         self.__dict__.update(kwargs)
     81         self.updates = []

TypeError: Unexpected keyword argument passed to optimizer: learning_rate
```
I didn't realize that I didn't paste the error I encountered. I updated my original post.
Hi, running the script train001.sh, it appears to me that the validation dataset is not created and has 0 elements. In fact, from the log I get the following:
```
mode: train
workdir: ./model/model001
fold: 5
batch size: 28
acc: 1
model: se_resnext50_32x4d
pretrained: imagenet
loss: BCEWithLogitsLoss
optim: Adam
dataset_policy: all
window_policy: 2
read dataset (665414 records)
applied dataset_policy all (665414 records)
use default(random) sampler
dataset_policy: all
window_policy: 2
read dataset (0 records)
applied dataset_policy all (0 records)
use default(random) sampler
train data: loaded 665414 records
valid data: loaded 0 records
```
Also, in the file main.py I see that the function valid is never called. I am probably missing something, but to my eyes it appears that this code should not work. Please correct me where I am wrong.
Hi, let me report my debug outputs. When I run `python3 -m main train ./conf/model001.py --fold 5 --gpu 0 --n-tta 5`, main.py runs epoch 0 correctly. After that, line 134
```
val = run_nn(cfg.data.valid, 'valid', model, loader_valid, criterion=criterion)
```
jumps into an error:
```
File "/media/alberto/50C03782C0376D7A/RSNA/main.py", line 205, in run_nn
    'loss': np.sum(losses) / (i+1),
UnboundLocalError: local variable 'i' referenced before assignment
```
To my eyes it is because len(loader_valid.dataset) == 0, as visible in the above log. Because of that, line 166
```
for i, (inputs, targets, ids) in enumerate(loader):
```
never runs, and the final error reports the index 'i' as not assigned. To my eyes len(loader_valid.dataset) should be != 0, but I did not see where it is populated. I do believe I recognized my mistake: on the command line of main.py I put fold=5, but that is wrong. I need to use fold=0, 1, 2, 3, or 4. Is it used as the validation portion for all epochs, am I right?
Did anyone probe the public/private data split? Is it a time-based split or random? I didn't find information about it in the competition description.
A logical split for this would be by building_id - so maybe we're looking at only 22% of the buildings. How hard is that assumption to probe?
You can get them down to only 3/4 of a GB by dropping precision to 4 decimals - as noted in the overview, Kaggle uses only 4. I use this:
```
# need to format res to only 4 decimals to reduce memory - Kaggle evaluation only uses 4 digits
res = np.round_(res, decimals=4, out=None)
```
Kaggle will also accept compressed files - zipped, my 3/4 GB gets down to a much more data-friendly 0.16 GB.
> File Format: Your submission should be in CSV format. You can upload this in a zip/gz/rar/7z archive, if you prefer.
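A minimal sketch of that rounding-plus-compression workflow; the dummy data and the row_id/meter_reading column names are placeholders, not from the original post:
```python
import numpy as np
import pandas as pd

# Dummy predictions just to make the sketch self-contained; in practice `res`
# would be your model output and `row_ids` would come from the test set.
row_ids = np.arange(5)
res = np.random.rand(5) * 100

# Keep only 4 decimals, since Kaggle's evaluation truncates to 4 anyway.
res = np.round_(res, decimals=4, out=None)

# Write a gzipped CSV - Kaggle accepts zip/gz/rar/7z archives for submissions.
submission = pd.DataFrame({"row_id": row_ids, "meter_reading": res})
submission.to_csv("submission.csv.gz", index=False, compression="gzip")
```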
Got to read the fine print :)
You can get them down to only 3/4 of a GB by dropping precision to 4 decimals - as noted in the overview, Kaggle uses only 4. I use this:
```
# need to format res to only 4 decimals to reduce memory - Kaggle evaluation only uses 4 digits
res = np.round_(res, decimals=4, out=None)
```
Kaggle will also accept compressed files - zipped, my 3/4 GB gets down to a much more data-friendly 0.16 GB.
> File Format: Your submission should be in CSV format. You can upload this in a zip/gz/rar/7z archive, if you prefer.
https://www.kaggle.com/c/ashrae-energy-prediction/data
> sample_submission.csv - A valid sample submission. All floats in the solution file were truncated to four decimal places; we recommend you do the same to save space on your file upload. There are gaps in some of the meter readings for both the train and test sets. Gaps in the test set are not revealed or scored.
I don't understand what you mean. What should we do?
Fast submission is where you make predictions on the public test set, upload the submission file as an external dataset, and submit that to Kaggle (with a little modification to the code so that it goes through the unseen test data without error). It is fast, but gives a 0 score on the private test set. Edit - I took a lot of time to reply 😅
You can get them down to only 3/4 of a GB by dropping precision to 4 decimals - as noted in the overview, Kaggle uses only 4. I use this:
```
# need to format res to only 4 decimals to reduce memory - Kaggle evaluation only uses 4 digits
res = np.round_(res, decimals=4, out=None)
```
Kaggle will also accept compressed files - zipped, my 3/4 GB gets down to a much more data-friendly 0.16 GB.
> File Format: Your submission should be in CSV format. You can upload this in a zip/gz/rar/7z archive, if you prefer.
Silly question but where does it say that only 4 decimals are used?
I don't understand what you mean. What should we do?
Fast submission is computing public test predictions offline and then only submitting public test predictions to Kaggle. When you do this, the corresponding private LB score is 0.0.
About 3/4 of the way thru the V1 course on fastai - in your opinion will I get too confused if I continue thru the course on V1 and fork your V2 kernel ?
Thanks - started to watch the walkthroughs last week thinking I might just skip V1 but realized you do need to attend 1st grade before you go to high school. Will be forking your script and will blame you if I get confused :)
If you change your loss functions the pre-trained model should do even better and they will converge to 0.1.
something like a weighted_crossentropy_loss?
> some of the buildings spend more energy cooling than heating. I had assumed it would be the opposite.

I take it you've never paid utility bills before ;-). Summer months are always more expensive than winter months. Cooling is a lot more difficult to do than heating and a lot less efficient, from an energy-transfer standpoint.
Well said.
I have only worked on the edges of building energy and design over the years, working for a glass company making great glass for big buildings, but I know that for many "office" type buildings all the electrical gear, lights, etc. in a building generate a lot of heat, and that the cooling load is much larger than the outside temperatures would suggest. Even my house built in 1941 shows that effect - when I am running all 4 of my PCs (dual GPU for each) I am easily at 2000 watts or more. So when in full Kaggle mode I only see my heater kick on when the outside temperature drops below 50F. So when you look at the meter reading for electrical, remember that the load you see is generating heat as it operates all the gear in the joint.
That's probably part of the story indeed. I still can't wrap my head around why, e.g. for building 2, the meter readings increase as you go from 0 to 15 degrees C outdoor temperature. Would buildings use AC when it's 10 degrees outside?
Looking forward to an interesting competition! Some Kagglers in another thread expressed concern regarding how 'production usable' models out of this competition would be if they incorporated weather information - in reality, only forecasts would be known at inference time, yet we're seemingly given accurate actual weather information for up to two years into the future. How do we reconcile this? If weather truly impacts energy usage, then model output should correlate with weather, and without an accurate two-year future forecast, ASHRAE won't be able to do much with the results.
In 2009 I retired and was thinking about a new city/state to live in. Being a data-driven kind of guy, I downloaded all the weather data for the USA for a decade, then looked for a city where it never got too hot or too cold. When you plot a decade's worth of temperatures for a city, the curve is very repetitive. So making a model of the next decade of weather for a city is pretty much only an issue of getting the data set. You can't model climate change, of course, but otherwise it's not that big a task for design purposes. I would assume that such a data set can in fact be purchased, so folks designing buildings know what cooling and heating loads will occur.
Awesome work! Your work is cross-pollinating!
Thanks Felipe. Didn't think someone would ever put so much effort into CQ500!
Just out of curiosity, if it is ASHRAE - Great Energy Predictor III, where are Predictor I and Predictor II?? 🙄
As Clayton has mentioned elsewhere - Kaggle has the floppy-disk-era data available for you, to see if today's methods can beat the 90's.
Yes. In an interactive kernel, if you view the 1801 test images in the folder test_images, those are the images for your public leaderboard score. When you click submit on your kernel, your code loads a new folder of test_images which contains 5403 = 1801 + 3602 images. The new folder includes 3602 images that we have not and can not see. Your code processes the new images and your private score is computed from those new images.
The sample_submission.csv that we can download and view has 7204 rows, which is 1 row per defect per image = 4 * 1801 images. When you submit, your code has access to a new, different sample_submission.csv. This new csv has 21612 rows, which is 4 * 5403, because your code gains access to 5403 test images.
Hi, I have a Ph.D. in Industrial Engineering and I did my thesis about Life Cycle Assessment of buildings. In which the energy consumption has a great importance. I program in R and Python and I'm a beginner in machine learning. I'm from Barcelona and my email is juanmah@gmail.com
Hello Juan Mah. I have 20 years of experience developing in different languages. For artificial intelligence I use FastAI and PyTorch (Python); with these I have managed to complete several Kaggle exercises and match the errors of the top places. I have never competed in an open Kaggle competition. I have the hardware to do GPU computing. I would like to form a team.
Great approach sir ! Congrats ! R is great 🙌 I upvoted ! Appreciate if you can upvote my kernel as well or give me any recommendation for improvement. Kernel : Intro Ashrae Thank you!
Done!
Yes. In an interactive kernel, if you view the 1801 test images in the folder test_images, those are the images for your public leaderboard score. When you click submit on your kernel, your code loads a new folder of test_images which contains 5403 = 1801 + 3602 images. The new folder includes 3602 images that we have not and can not see. Your code processes the new images and your private score is computed from those new images.
the shape is 7204 because 1801 is not the number of rows, it is the number of images and the goal of this competition is to predict 4 masks per image
So far best is NN with CV 0.01300, LB 0.01343 Tree based models seems to be performing significantly worse here (at least on CV for me).
Thanks for sharing. If you don't mind me asking- are you using a tabular NN with engineered features - or something like a graph NN?
resnet-34 single model, single fold, tta LB 0.90283 can't reproduce on local GPU
I use classification and segmentation stages, I use pytorch, I use efficientnet and resnet34. Can't say more at this point about my method :)
Just a guess on my part, but I think the buildings are only in North America, and likely the USA only. Go to the DOE web site for the USA and it would seem that the data is the result of surveys conducted in the US. So I will be using USA holidays myself.
Thanks. I'll check that site out, but the reason I was confused is because the "Data Description" section for this competition says, "The dataset includes three years of hourly meter readings from over one thousand buildings at several different sites around the world." (bolding is mine)
Yes. In an interactive kernel, if you view the 1801 test images in the folder test_images, those are the images for your public leaderboard score. When you click submit on your kernel, your code loads a new folder of test_images which contains 5403 = 1801 + 3602 images. The new folder includes 3602 images that we have not and can not see. Your code processes the new images and your private score is computed from those new images.
The shape of the sample submission file is (7204, 2). So I think the calculation is that test_images contains 7204 = 1801 + 5403 images.
resnet-34 single model, single fold, tta LB 0.90283 can't reproduce on local GPU
are you using class based thresholds?
So many techniques to master, makes me dizzy... which is sort of fun. Your kernel unites many interesting features. I'll read it a few times. Thanks a bunch
Thanks for the good feedback! In short, I have this one - "The one line of the code for prediction : LB = 0.83253 (Titanic Top 2-3%)": https://www.kaggle.com/vbmokin/titanic-top-3-one-line-of-the-prediction-code
It's downloading now. Google Colab users, please upgrade pip, uninstall kaggle, and then reinstall it with the --upgrade option. If you upgrade directly it doesn't upgrade to 1.5.6; it just keeps using the 1.5.4 version.
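A sketch of the Colab cell this describes (standard pip commands; the exact version you end up with depends on what the upgrade resolves to):
```
!pip install --upgrade pip
!pip uninstall -y kaggle
!pip install --upgrade kaggle
!kaggle --version  # should now report 1.5.6 or later
```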
Did you manage to fix the issue ?
I don't understand what you mean. What should we do?
Thanks, now I understand. Experiments that take more than 1 hour can only be executed this way. My strategy is to always do the full prediction, even if I change a very small thing.
resnet-34 single model, single fold, tta LB 0.90283 can't reproduce on local GPU
2 folds out of 5, more or less clock is ticking, 4 days to go LB 0.91874
Thank you for posting this Konstantin! Your TPU code is very informative, I'd tried a few times to get pytorch_xla working before without success so seeing a pytorch model training on the tpu is most excellent.
Good point, this is something that took me a few tries to get right. I used the Kaggle API to get the data into my GCP instance. I had to request a quota increase from the 500GB SSD limit since the zip file and the unzipped data were larger than this. I agree with Konstantin, you should get the storage you need as it is not the expensive part of this by any means; you can always downsize your GCP disk later.
I don't understand what you mean. What should we do?
There are many benefits. You can work offline (avoid logging into Kaggle), you save time (you don't run your code twice to see the LB score), you save GPU quota allowance (you don't use GPU to submit), and you can run experiments that take longer than 1 hour. (Of course, if those experiments are successful you will need to find a way to do them quicker.) (Note: "fast submission" is where your code loads sample_submission.csv, which has 5403*4 rows, and you merge your offline public LB EncodedPixels predictions into 1801*4 of those rows using ImageId_ClassId. There is no computation and no time cost.) This is why some of the top LB placeholders are talking about whether they can achieve their score in under 1 hour. Many of the high LB scores that we are seeing use more than 1 hour of compute offline with "fast submission". Therefore scores are not as good as they look on the public LB.
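A minimal sketch of that merge, assuming an offline predictions file with ImageId_ClassId and EncodedPixels columns (the file paths here are placeholders, not from the original post):
```python
import pandas as pd

# sample_submission.csv as provided to the re-run kernel: 5403 images * 4 classes.
sub = pd.read_csv("../input/severstal-steel-defect-detection/sample_submission.csv")

# Predictions computed offline for the 1801 public test images (1801 * 4 rows).
offline = pd.read_csv("../input/my-offline-preds/public_predictions.csv")

# Merge the offline masks in; private images simply get empty masks.
sub = sub.drop(columns=["EncodedPixels"]).merge(
    offline[["ImageId_ClassId", "EncodedPixels"]], on="ImageId_ClassId", how="left")
sub["EncodedPixels"] = sub["EncodedPixels"].fillna("")
sub.to_csv("submission.csv", index=False)
```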
Could you explain why you mirror the Y coordinates? `Y_std = ifelse(ToLeft, 160/3 - Y, Y)` - it's a rotation, not mirroring.
It's precisely because it is a rotation:
```
+-----------+     180°     +-----------+
|           |   rotation   | <-X       |
|           | -----------> |           |
|       X-> |              |           |
+-----------+              +-----------+
```
If you don't adjust y, it's just a reflection about the Y-axis.
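A minimal Python sketch of that 180° rotation, assuming the NFL tracking columns X, Y, and PlayDirection (the helper name is just illustrative):
```python
import numpy as np
import pandas as pd

def standardize_play_direction(df: pd.DataFrame) -> pd.DataFrame:
    """Rotate left-moving plays by 180 degrees so every play moves to the right."""
    to_left = df["PlayDirection"] == "left"
    df["X_std"] = np.where(to_left, 120 - df["X"], df["X"])      # field is 120 yards long
    df["Y_std"] = np.where(to_left, 160 / 3 - df["Y"], df["Y"])  # and 160/3 yards wide
    return df
```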
I don't understand what you mean. What should we do?
But what is the benefit of it? You still need to implement the valid submission code later, and if you do it later you have a big chance of introducing some error.
This is the error I get with running in Colab:
Ahh yes... that one... Colab is using an older version of Keras compared to the Kaggle Kernels. Change 'learning_rate' to 'lr'. See this commit on the Keras GitHub.
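A sketch of the fix applied to the compile call from the traceback above (model, LR, and tf come from the original notebook; only the keyword name changes):
```python
from keras.optimizers import Adam

# Older Keras releases (as shipped on Colab at the time) only accept `lr`, not `learning_rate`.
model.compile(optimizer=Adam(lr=LR),
              loss='binary_crossentropy',
              metrics=['acc', tf.keras.metrics.AUC()])
```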
Thank you for posting this Konstantin! Your TPU code is very informative, I'd tried a few times to get pytorch_xla working before without success so seeing a pytorch model training on the tpu is most excellent.
I created an SSD disk with a size of 400 GB, which is enough to hold the unpacked dataset. I said it's not an issue because you can create a large enough disk if you're already on GCP, and its price is small compared to the other components.
In which lines did you determine the scatter colours, In [20], [23] and [24]? I couldn't find it. I loved that matching of colours. Great notebook, Ronald.
Thank you! The colors on those plots are determined by `c=train.Cover_Type`. It assigns a color to each cover type value. I used the default palette, but there are others and I think you can define your own.
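A minimal sketch of that colouring, assuming the Forest Cover Type training frame with its usual columns (the input path and the two features plotted are just examples, not from the notebook):
```python
import matplotlib.pyplot as plt
import pandas as pd

train = pd.read_csv("../input/learn-together/train.csv")  # path is an assumption

# c= maps each Cover_Type value to a colour from the default palette.
plt.scatter(train["Elevation"], train["Horizontal_Distance_To_Roadways"],
            c=train["Cover_Type"], s=5)
plt.colorbar(label="Cover_Type")
plt.xlabel("Elevation")
plt.ylabel("Horizontal_Distance_To_Roadways")
plt.show()
```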
This was a good competition on Kaggle. You can find some interesting notebooks on recommendation here: https://www.kaggle.com/c/data-science-for-good-careervillage
thanks bharat , you are always helpful 👍
Is there still value in converting to KITTI if only the front-facing camera data is retained? What I am not able to grasp in this competition is where to even start.
Use this kernel, it'll convert all camera boxes to KITTI format. Assuming that you are familiar with 2D object detection, this blog can give you a head start.
It's how far, in yards, the player traveled since the previous time frame. Snapshots of location data come in ten times every second, so this equates to how far the player traveled in the previous tenth of a second.
Thank you for replying. 'the player traveled since the previous time frame' - which time frame exactly? Is it:
- BEFORE the play (PlayId) starts (something like preparation), or
- the LAST 1/10 second of the TOTAL play (PlayId), or
- the LAST 1/10 second BEFORE the handoff?
I'm trying to find out the use of this information.
I don't understand what you mean. What should we do?
I believe many people are doing "fast submission". It allows you to work completely offline. You train and infer offline. And then when you want to see your LB score, you click a button and see your LB score in 15 seconds.
I don't understand what you mean. What should we do?
It will not fail. I am doing it.
Hi, I think this is due to bad data entry; the "Rio"s in the dataset have different meanings. You can check the Wikipedia article on the states of Brazil. The following entries need attention:
- Rio = Rio de Janeiro
- Rio = Rio Grande do Norte
- Rio = Rio Grande do Sul
- Mato Grosso = Mato Grosso
- Mato Grosso = Mato Grosso do Sul
- Paraiba = Paraíba
- Paraiba = Paraná
Hi! Looking at how the data is presented to us, and after doing my analysis and location search, I believe Rio would be a combined group (state) covering the Rio de Janeiro region in Brazil. According to this outline:
Nice work! How did you choose the step = 50000?
Thanks ! great approach ! simple and helpful !💪
It's nice work and a great approach, I upvoted! 💪 I'd appreciate it if you could upvote my kernel as well or give me any recommendation for improvement. Simple EDA: https://www.kaggle.com/caesarlupum/catcomp-simple-target-encoding/notebook Thank you!
Great work sir !
I decided to have some fun with the data, as I am very new to Python, and who doesn't love a deep dive into a new dataset? wink

There are 2 HYPOTHESES:
- H1: there were indeed no new houses built (maybe demand in the real estate market soared, and so did the demand for new houses)
- H2: there is a gap in the data / no up-to-date survey

If H1 is correct, the analysis won't be affected and we'll be able to draw some meaningful conclusions. But if its alternative is correct, this would affect any prediction we want to make for the current time.

SHORT PEEK AT THE DATA: Looks like building in Iowa stopped sharply between 2006 and 2010, after a pretty crazy peak. Maybe they moved to apartment buildings? Let's take a closer look. Yep. After a crazy boom in 2005-2006, building of houses started to drop rapidly, to almost none in 2010.

Now, let's look at the demand. Well look at that, who doesn't looove a cute-looking graph? July is the most favourite month to buy out of all! And the seasonality of the data is outstanding, except for 2010. It looks like, starting in June, the trend in seasonality is waaay different than what we've seen before. What are the chances of such a big disruption in the data happening organically? Maybe the prices have gotten way too high? Also, it's pretty weird that, although houses have been built since 1972, the sale data starts from 2006 onwards. Nope, looks like average sale prices have actually decreased in the last 3 years. This should mean people buying more, and when demand rises, supply has room to rise too. So, it should be pretty affordable as a real estate investor to put some cash into making more houses.

CONCLUSION: So maybe it really is a gap in the data. From a little search on the Googles, it doesn't look like something major happened in Iowa in 2010. And with the economy rising so fast in the last few years, it would be pretty weird that people didn't find ANY room to build in the state. 😅
Hello Andrada, I don't agree with your conclusion for hypothesis 1. If an event occurs with such an impact, we can't make meaningful conclusions for the future from the historical data in my opinion.
It could be something with your batch sizes. The csv that you can download is produced when your notebook runs the first time using a test_image directory of 1801 files. The csv that gets submitted to Kaggle is produced when your notebook runs the second time using a test_image directory of 5403 files. If your batch size is even then one image is not being predicted. Because Python reads filenames from directories randomly, each run you avoid predicting a different image. Perhaps during the first run the missing image isn't important but during your second run the missing image is important.
IIRC at this point I was using keras
It could be something with your batch sizes. The csv that you can download is produced when your notebook runs the first time using a test_image directory of 1801 files. The csv that gets submitted to Kaggle is produced when your notebook runs the second time using a test_image directory of 5403 files. If your batch size is even then one image is not being predicted. Because Python reads filenames from directories randomly, each run you avoid predicting a different image. Perhaps during the first run the missing image isn't important but during your second run the missing image is important.
Interesting, do you use Keras? With PyTorch, the DataLoader by default loads the last batch even if it is smaller than the batch size.
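A quick illustration of that default (drop_last=False), with a toy dataset standing in for the real test set:
```python
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.arange(10).float())  # 10 samples, batch_size 4

# drop_last defaults to False, so the final smaller batch (2 samples) is still returned.
loader = DataLoader(dataset, batch_size=4, shuffle=False)
print([len(batch[0]) for batch in loader])  # [4, 4, 2]
```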
Hi!) Hmm, your answer is correct, but for some reason the checker doesn't accept it. Have you tried restarting the session? I ran your query (with the original formatting) and didn't get any errors.

> But I can't understand what this code segment does and how it influences the result.

It limits the amount of data processed by your query to 10 GB (if this limit is exceeded, the query will fail) and converts the results to a pandas DataFrame. The query_to_pandas_safe() function you use in your solution does the same thing.

> I looked for it in the tutorial section but couldn't find any mention of any of these methods.

The last section of the second tutorial briefly discusses possible ways of estimating query costs. For more info, read the docs.
😊
Thanks for sharing !👍
Thanks for support ! Have a good day sir !
Great work, upvoted! Any kernel with plotly just instantly looks so much better! I recently also did an EDA (on housing sale in King County) Can you maybe take a look at it and upvote it if you like it?
Thanks, Plotly really does stand out! Yes, sure, I'll take a look : )
Thanks for sharing the kernel.
Waiting for your helpful and great approach! 👈
Thanks for this post! I can add another tip: Using Colab with 25 GB of RAM :smile:
Nice! What are the specs and run time limit of colab? Is it better than Kaggle kernels only in terms of RAM?
THANK YOU!!!
Waiting for your helpful and great kernel! 👈
THANK YOU!!!
Thanks (Goku 💯 )! I'm very happy for this ! 😀
thanks... upvoted. Btw how would you handle the missing values??
In **LightGBM**, for example, the trees and cutoffs, including NAs, are established during training, where we have the targets. See this: http://www.r2d3.us/visual-intro-to-machine-learning-part-1/
Thanks for your sharing, you're really a genius. Could you tell me the explanation of the following code in your misc.py, please?
```
return {attr: cast(getattr(dicom, attr)) for attr in dir(dicom) if attr[0].isupper() and attr not in ['PixelData']}
```
Much thanks~
It's supposed to get all the metadata from the pydicom object. Because all the metadata attributes start with a capital letter, this does the job. It's not a very clean way to do it, though.
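A self-contained sketch of what that comprehension does; here `cast` is just a pass-through stand-in for the repo's helper, and the wrapper function name is illustrative:
```python
import pydicom

def dicom_metadata(path, cast=lambda value: value):
    """Collect every CamelCase DICOM attribute except the raw pixel data."""
    dicom = pydicom.dcmread(path)
    return {attr: cast(getattr(dicom, attr))
            for attr in dir(dicom)
            if attr[0].isupper() and attr not in ['PixelData']}
```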
I have a CV score of 0.92009 and an LB score of 1.09. I'm using a standard 3-fold validation. I have some ideas for different validation schemes. However my gut tells me that getting a stable CV/LB score difference is down to using features that generalize well across years. Previous experience tells me that the specifics of the validation scheme don't matter.
Thanks! That's great.
thanks... upvoted. Btw how would you handle the missing values??
Thanks for the question! For my first approach I just fill NaN = -999 for the 4 features with the most missing values:
```
Feature          Total      Percent
floor_count      16709167   82.652772
year_built       12127645   59.990033
age              12127645   59.990033
cloud_coverage   8825365    43.655131
```
And the others I ignore because I use a LightGBM model. So, from what I understand, LightGBM will ignore missing values during a split, then allocate them to whichever side reduces the loss the most. See this:
> Manually dealing with missing values will often improve model performance. It sounds like if you set missing values to something like -999, those values will be considered during a split (not sure though). And of course if you impute missing values and the imputation is directionally correct, you should also see an improvement as long as the factor itself is meaningful.
This topic is extremely helpful.
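A minimal sketch of that fill, assuming a merged ASHRAE training frame with the columns listed above (the file path is a placeholder):
```python
import pandas as pd

train = pd.read_csv("train_merged.csv")  # placeholder path for the merged data

# Fill the four sparsest features with a sentinel; leave the remaining NaNs
# for LightGBM to route natively at each split.
cols_to_fill = ["floor_count", "year_built", "age", "cloud_coverage"]
train[cols_to_fill] = train[cols_to_fill].fillna(-999)
```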
Hi, running the script train001.sh, it appears to me that the validation dataset is not created and has 0 elements. In fact, from the log I get the following:
```
mode: train
workdir: ./model/model001
fold: 5
batch size: 28
acc: 1
model: se_resnext50_32x4d
pretrained: imagenet
loss: BCEWithLogitsLoss
optim: Adam
dataset_policy: all
window_policy: 2
read dataset (665414 records)
applied dataset_policy all (665414 records)
use default(random) sampler
dataset_policy: all
window_policy: 2
read dataset (0 records)
applied dataset_policy all (0 records)
use default(random) sampler
train data: loaded 665414 records
valid data: loaded 0 records
```
Also, in the file main.py I see that the function valid is never called. I am probably missing something, but to my eyes it appears that this code should not work. Please correct me where I am wrong.
The valid function can be used when you want to do validation using a snapshot and save the predictions to a file. The train function does validation every epoch. Regarding the error, it's hard to say anything based on the information you provided. Could you provide a diff?
I like your thinking... I'm doing that, but on steroids. My pre-processing routine follows these steps:
1) Group all data by PlayId where NflId == NflIdRusher, and engineer features about the rusher & the overall play
2) Group all data by PlayId and engineer features about all the play's participants (offense, defense, distances, times, spaces, etc.)
3) Group all data by GameId and PlayId and engineer features about the game at the moment of that specific play
4) Normalize everything
I'm still focused on (2)... Doing (1) got me a ~0.0145 score... now that I'm growing the features engineered in (2), I'm at ~0.0139. The more features I add, the better it gets... Now at 55 pre-processed features, with an idea list of 10 more to implement. From the 'distances' idea that you shared, I came up with some features (see the sketch below):
- distance from the rusher to the quarterback
- distance from the rusher to the 1st defender
- distance from the rusher to the defense centroid
- # of defenders within a radius of 2, 4, 6, 8, 10+ yards of the rusher
- etc...
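A hedged sketch of the rusher-distance features from the list above, using the NFL tracking columns (the function name and the radius buckets are illustrative, not from the original post):
```python
import numpy as np
import pandas as pd

def rusher_distance_features(play: pd.DataFrame) -> pd.Series:
    """Distance-based features computed from one play's 22 player rows."""
    rusher = play.loc[play["NflId"] == play["NflIdRusher"]].iloc[0]
    defense = play.loc[play["Team"] != rusher["Team"]]
    dists = np.hypot(defense["X"] - rusher["X"], defense["Y"] - rusher["Y"])

    feats = {
        "dist_to_nearest_defender": dists.min(),
        "dist_to_defense_centroid": np.hypot(defense["X"].mean() - rusher["X"],
                                             defense["Y"].mean() - rusher["Y"]),
    }
    for r in (2, 4, 6, 8, 10):
        feats[f"defenders_within_{r}yd"] = int((dists <= r).sum())
    return pd.Series(feats)

# Usage: features = train.groupby("PlayId").apply(rusher_distance_features)
```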
I think the "VIP hint" (http://www.lukebornn.com/papers/fernandez_ssac_2018.pdf) from Michael is a very good way to conceptualize a neutralized defender. I just implemented it but I still have to figure out how to make the best of it...
High scoring public kernels were posted. They broke the leaderboard. A bad marketing tactic in my opinion.
True about Severstal's point of view. However we'll probably get more out of this competition given the highly noisy segmentation labels. I think it will give people better understanding of which loss functions and approaches are good for such noisy labels. So experimentation should be promoted.
What a kernel! Thank you! I just have a question about the following code you wrote in the beginning part:
```python
train_monthly = train_monthly.sort_values('date').groupby(['date_block_num', 'shop_id', 'item_category_id', 'item_id'], as_index=False)
train_monthly = train_monthly.agg({'item_price': ['sum', 'mean'], 'item_cnt_day': ['sum', 'mean', 'count']})
```
Could you tell me the difference between 'item_cnt_day_sum' and 'item_cnt_day_count'? I think they should be the same?
You're welcome. It's just a matter of the aggregation function applied to the group: 'item_cnt_day_sum' uses a 'sum' function and gives the sum of 'item_cnt_day' for each group, while 'item_cnt_day_count' uses a 'count' function and gives the number of rows in each group.
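To make the difference concrete, a small sketch with toy data (column names follow the kernel's aggregation):
```python
import pandas as pd

# Two daily sale rows for the same shop/item in one month.
toy = pd.DataFrame({"date_block_num": [0, 0], "shop_id": [1, 1], "item_id": [7, 7],
                    "item_cnt_day": [3, 2]})

agg = toy.groupby(["date_block_num", "shop_id", "item_id"], as_index=False).agg(
    {"item_cnt_day": ["sum", "count"]})
print(agg)
# item_cnt_day sum   = 5  (total units sold that month)
# item_cnt_day count = 2  (number of daily rows aggregated)
```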
It is better to use automation for tuning the parameters - see my kernel: https://www.kaggle.com/vbmokin/diabete-prediction-20-tuned-models
Thank you very much. I thought the same, but it was just for a homework assignment and we weren't allowed to use automatic tools for tuning. (We were just told to try different parameters and note the differences.)
High scoring public kernels were posted. They broke the leaderboard. A bad marketing tactic in my opinion.
I think from Severstal's point of view they just want to have a good model for their industry. They don't care so much how it is created; they just want it for their work. They pay lots of money to the winner, so I don't think anyone is going to publish high-score models for fun. I agree he wrote lots of great ideas here, and currently he is probably very busy thinking about how to be nr 1 and not 2 or 3 :) I also learned a lot from this competition, and even if the results turn out bad for me I will still have my knowledge of PyTorch and all the experiments I did. IIRC in Porto Seguro I had my own models and someone published a high-score model on the forum. I submitted two solutions: my own, and an ensemble of my own and the public one. Of course my own was better on private.
There is no sense in checking correlation with IDs - they're only random numbers, not variables... Anyway, nice charts!
thank you for your feedback
Hey, thank you for sharing the script. I have a question here:
```python
str_type = ['ProductCD', 'card4', 'card6', 'P_emaildomain', 'R_emaildomain', 'M1', 'M2', 'M3', 'M4', 'M5', 'M6', 'M7', 'M8', 'M9', 'id_12', 'id_15', 'id_16', 'id_23', 'id_27', 'id_28', 'id_29', 'id_30', 'id_31', 'id_33', 'id_34', 'id_35', 'id_36', 'id_37', 'id_38', 'DeviceType', 'DeviceInfo']
```
I turned features like card1, card2, card3, addr1 and addr2 to category type before feeding them into the LGB model throughout the competition, and now I finally find that my model is overfitting on these features. But I am still curious whether it's OK to let the LGB model learn these features as numerical. I always do df['category_feature'].astype('category') when I think a feature is not numeric. Looking forward to your reply. I have read almost all your posts. Thank you so much!
Really useful, gave my model some boost, thank you Chris!
I have a CV score of 0.92009 and an LB score of 1.09. I'm using a standard 3-fold validation. I have some ideas for different validation schemes. However my gut tells me that getting a stable CV/LB score difference is down to using features that generalize well across years. Previous experience tells me that the specifics of the validation scheme don't matter.
LightGBM with default parameters, 300 trees, and some decent features.
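A minimal sketch matching that description; the toy data stands in for your own engineered features, and the regression setup is an assumption, not stated in the post:
```python
import lightgbm as lgb
import numpy as np

# Toy data so the sketch runs as-is; swap in your own feature matrix and target.
rng = np.random.default_rng(0)
X_train, y_train = rng.random((1000, 20)), rng.random(1000)
X_valid = rng.random((200, 20))

# Default parameters, 300 trees - as described above.
model = lgb.LGBMRegressor(n_estimators=300)
model.fit(X_train, y_train)
preds = model.predict(X_valid)
```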
> some of the buildings spend more energy cooling than heating. I had assumed it would be the opposite.

I take it you've never paid utility bills before ;-). Summer months are always more expensive than winter months. Cooling is a lot more difficult to do than heating and a lot less efficient, from an energy-transfer standpoint.
Wouldn't that depend on your climate? My understanding is that the energy used is proportional to the temperature difference between inside and outside. So if you set your thermostat at 18, you'd use more energy to heat up from 5 deg C than to cool down from 25. Although there might be some efficiency differences between your heating and cooling systems. I wonder if, for some buildings, some of the heating is not included in the meters. Maybe they use a fossil fuel source for heating that is not accounted for? For example, building 227 uses less energy (averaged per hour) when it's 0 degrees C outside than when it's 20
High scoring public kernels were posted. They broke the leaderboard. A bad marketing tactic in my opinion.
Apologies for the long reply. Sorry, maybe I didn't convey it properly: posting a high-scoring public kernel plays psychologically with people who are trying the problem through experimentation and learning many things along the way. If you post an ensemble and a huge number of people simply use it, then your hard work feels diminished while you are trying to learn. Compared to that, Heng posted very good ideas and working code with many experimental options. This is my first time on Kaggle and I learnt a lot just from reading his code and ideas, even though I couldn't use it as I'm more familiar with TensorFlow. In my opinion, that's the kind of sharing mentality we should have, not one that looks like a catalyst advertisement. P.S. - I guess now, at the end of the competition, it's time for me to prepare an ensemble for my model. :)
Frog brother, I want to see you always in first place, because of your passion and sharing spirit!
Can't agree more.
Fun fact. Officials also have numbers! https://www.latimes.com/sports/nfl/la-sp-nfl-ask-farmer-20161001-snap-story.html Now that I think of it, the position of referees on the field would be nice to have, especially since they are considered to be part of the field and can sometimes impact the path taken by players.. but I guess they aren't wearing shoulder pads to track and I'm sure having that data would be of little impact. 😄
We aren’t given any information about referee position in this dataset. I wouldn’t worry about it.
great! to be tested!
glad to be of help mate :)
Validation: 1.25 LB: 1.25
Wow. What's your validation technique?
I have a CV score of 0.92009 and an LB score of 1.09. I'm using a standard 3-fold validation. I have some ideas for different validation schemes. However my gut tells me that getting a stable CV/LB score difference is down to using features that generalize well across years. Previous experience tells me that the specifics of the validation scheme don't matter.
Which model are you using ?
Baval, you used folium 👍. I liked the pie chart with a hole in the middle, and the catplot. I think I'm going to try some of your code and give you your deserved credit in my next works, OK?
sure and thanks for the appreciation:)
You can do oversampling, but remember that while oversampling you should not balance the classes 50%-50%. You can use the SMOTE oversampling technique; check out the SMOTE documentation: https://imbalanced-learn.readthedocs.io/en/stable/generated/imblearn.over_sampling.SMOTE.html
60-40 or 75-25 would be preferable
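A small sketch of that advice, using imbalanced-learn's SMOTE with a sampling_strategy that targets roughly a 75-25 split rather than full balance (the toy data and the exact ratio are illustrative):
```python
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE

# Toy imbalanced data (roughly 95%-5%).
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=42)

# sampling_strategy is the desired minority/majority ratio after resampling;
# 1/3 gives roughly a 75-25 class split instead of forcing 50-50.
smote = SMOTE(sampling_strategy=1/3, random_state=42)
X_res, y_res = smote.fit_resample(X, y)
```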
You can do oversampling, but remember that while oversampling you should not balance the classes 50%-50%. You can use the SMOTE oversampling technique; check out the SMOTE documentation: https://imbalanced-learn.readthedocs.io/en/stable/generated/imblearn.over_sampling.SMOTE.html
Thank you! What proportion do you suggest based on my description?
Validation: 1.25 LB: 1.25
nice try! hahaha
Well done! Your works are really detailed. P.S. I found out you're from London and I'm going to London for a trip in December 😄. Hello World!
Thanks. London is a very nice city. It is a great place to visit.
I don't understand what you mean. What should we do?
What is "fast submission"?
Not sure if you are aware (and apologies in advance if you are, but perhaps this might help someone else), but there are very specific rules governing which player positions are allowed to wear which jersey numbers. Broadly, the "bigger" guys tend to occupy positions that are obligated to wear larger uniform numbers. In other words, it's not surprising that there is a correlation between PlayerWeight and JerseyNumber. https://en.wikipedia.org/wiki/Uniform_number_(American_football) This is really awesome work and hugely beneficial to someone like myself who is just starting out in DS and trying to better understand EDA and feature engineering. Much appreciated and obviously upvoted. 🙌
Thanks for saving us some time :)
You can use a join for this instead. E.g., using your namings:
```python
rushers.columns = ['PlayId', 'BallX', 'BallY']
train = train.join(rushers.set_index('PlayId'), on='PlayId')
train['DistanceToBall'] = ((train['X'] - train['BallX'])**2 + (train['Y'] - train['BallY'])**2)**0.5
```
Thanks. That has massively sped up my code!
I don't understand what you mean. What should we do?
Sorry for not being clear. I wanted to remind the people who are using fast submission to stop doing that and run their models on the private test dataset. Failing to do so will result in 0 score on the private LB.
Great EDA. This clearly shows what this competition is about. Thanks for posting. (The top bar graph would be more clear if it used the labels 1, 2, 3, 4 instead of 0, 1, 2, 3).
Can you tell me how to use the .scn file?
thanks for sharing
thanks! could you upvote if you like it, please? ^^
Nice kernel! I tried running this in Colab, but I get the following error:
```
TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>()
     40
     41
---> 42 model.compile(optimizer = Adam(learning_rate = LR),
     43               loss = 'binary_crossentropy',
     44               metrics = ['acc', tf.keras.metrics.AUC()])

1 frames
/usr/local/lib/python3.6/dist-packages/keras/optimizers.py in __init__(self, lr, beta_1, beta_2, epsilon, decay, amsgrad, **kwargs)
    455     def __init__(self, lr=0.001, beta_1=0.9, beta_2=0.999,
    456                  epsilon=None, decay=0., amsgrad=False, **kwargs):
--> 457         super(Adam, self).__init__(**kwargs)
    458         with K.name_scope(self.__class__.__name__):
    459             self.iterations = K.variable(0, dtype='int64', name='iterations')

/usr/local/lib/python3.6/dist-packages/keras/optimizers.py in __init__(self, **kwargs)
     77             if k not in allowed_kwargs:
     78                 raise TypeError('Unexpected keyword argument '
---> 79                                 'passed to optimizer: ' + str(k))
     80         self.__dict__.update(kwargs)
     81         self.updates = []

TypeError: Unexpected keyword argument passed to optimizer: learning_rate
```
I just noticed that I didn't copy the whole error.
Good job! But KMeans is not often used for large amounts of data, especially data that may still be subject to change, because it wants to work with the entire data set at once. Instead, the MiniBatchKMeans or MeanShift methods are more popular. You may like my visualization of KMeans, MiniBatchKMeans, MeanShift and 8 other clustering methods for the competition "Titanic: Machine Learning from Disaster": https://www.kaggle.com/vbmokin/titanic-top-3-cluster-analysis
Hello Vitalii, Sure, will explore more on the topic, thanks for the note.
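A quick sketch of the MiniBatchKMeans alternative mentioned above (toy data; the cluster count and batch size are arbitrary choices):
```python
from sklearn.cluster import MiniBatchKMeans
from sklearn.datasets import make_blobs

# Toy data standing in for a large feature matrix.
X, _ = make_blobs(n_samples=10000, centers=5, random_state=42)

# Fits on small random batches, so it scales far better than plain KMeans.
mbk = MiniBatchKMeans(n_clusters=5, batch_size=1024, random_state=42)
labels = mbk.fit_predict(X)
```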
Nice kernel! I tried running this in Colab, but I get the following error:
```
TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>()
     40
     41
---> 42 model.compile(optimizer = Adam(learning_rate = LR),
     43               loss = 'binary_crossentropy',
     44               metrics = ['acc', tf.keras.metrics.AUC()])

1 frames
/usr/local/lib/python3.6/dist-packages/keras/optimizers.py in __init__(self, lr, beta_1, beta_2, epsilon, decay, amsgrad, **kwargs)
    455     def __init__(self, lr=0.001, beta_1=0.9, beta_2=0.999,
    456                  epsilon=None, decay=0., amsgrad=False, **kwargs):
--> 457         super(Adam, self).__init__(**kwargs)
    458         with K.name_scope(self.__class__.__name__):
    459             self.iterations = K.variable(0, dtype='int64', name='iterations')

/usr/local/lib/python3.6/dist-packages/keras/optimizers.py in __init__(self, **kwargs)
     77             if k not in allowed_kwargs:
     78                 raise TypeError('Unexpected keyword argument '
---> 79                                 'passed to optimizer: ' + str(k))
     80         self.__dict__.update(kwargs)
     81         self.updates = []

TypeError: Unexpected keyword argument passed to optimizer: learning_rate
```
please share the full error log
Thank you for sharing :)
you are welcome
Nice kernel! I tried running this in Colab, but I get the following error:
```
TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>()
     40
     41
---> 42 model.compile(optimizer = Adam(learning_rate = LR),
     43               loss = 'binary_crossentropy',
     44               metrics = ['acc', tf.keras.metrics.AUC()])

1 frames
/usr/local/lib/python3.6/dist-packages/keras/optimizers.py in __init__(self, lr, beta_1, beta_2, epsilon, decay, amsgrad, **kwargs)
    455     def __init__(self, lr=0.001, beta_1=0.9, beta_2=0.999,
    456                  epsilon=None, decay=0., amsgrad=False, **kwargs):
--> 457         super(Adam, self).__init__(**kwargs)
    458         with K.name_scope(self.__class__.__name__):
    459             self.iterations = K.variable(0, dtype='int64', name='iterations')

/usr/local/lib/python3.6/dist-packages/keras/optimizers.py in __init__(self, **kwargs)
     77             if k not in allowed_kwargs:
     78                 raise TypeError('Unexpected keyword argument '
---> 79                                 'passed to optimizer: ' + str(k))
     80         self.__dict__.update(kwargs)
     81         self.updates = []

TypeError: Unexpected keyword argument passed to optimizer: learning_rate
```
Thank you. I haven't encountered any other errors as of yet.
Thanks for sharing, 😊
I am glad you liked it :)
I have some problems with the graphs. When there are a lot of graphs, the data disappears in some of them. Clicking on a trace redraws the graphs and all the information shows again. It seems to be a problem with the plotly express library. I'm trying to solve it.
Solved! 😄
It's how far, in yards, the player traveled since the previous time frame. Snapshots of location data come in ten times every second, so this equates to how far the player traveled in the previous tenth of a second.
I'm not sure if I understood correctly. Is it the yards the player traveled in the last 1/10 of a second before the handoff? Why is only the last time frame used in Dis? Are other player features also from the last time frame of the play?
thanks for sharing !
✌️
Use a virtual machine with high RAM and work in an IDE instead of a Kaggle kernel.
Many thanks for your response.