I bet it's pretty much similar to the training set...
There is a good possibility of it not being similar to the training set, because the validation score and test score have a big gap for everyone.
So if all site = 0 - is there a country in the world that anyone knows of that has June 4 as a shut down and party holiday?
Wiki is not a very good source - too much monkey business. I found a source - will post the link 8 days before the competition ends :)
The Aristocats are back. Upvoted.
! 😹
How can we know for sure that our inference kernel will work (within 1 hour) on unseen data? Is that already what happens when we click on "Submit to competition"? The kernel might have bugs or memory issues due to new data. Update: It's still not clear to me whether the 1 hour applies on commit or on submit: https://www.kaggle.com/c/severstal-steel-defect-detection/discussion/108038 Some discussions report 3 hours on submit and 1 hour on commit. Submit would include the full private data, so if it runs then we know we don't have a blocking bug. Could you confirm or not? Is it 1 hour on commit or on submit? And does submit already run on all private data (but currently score only on public)? Thanks a lot.
Is it necessary to install packages with pip or can we have the code as part of the dataset?
Okay, thanks, got it. But is OH encoding a column with 2000 unique values appropriate? Won't it be computationally too expensive?
Yeah, actually I tried one-hot encoding. I was using it with RandomForestRegressor but got an error regarding lack of memory. I can try reducing the size. That's an excellent idea, that did not occur to me. Thanks!
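For what it's worth, a minimal sketch of keeping a high-cardinality one-hot encoding manageable by leaving scikit-learn's output sparse; the dataframe and column name below are placeholders, not from this thread:

```python
import pandas as pd
from sklearn.preprocessing import OneHotEncoder

# toy frame standing in for the real data; "category_col" is a hypothetical name
train_df = pd.DataFrame({"category_col": [f"cat_{i % 2000}" for i in range(10_000)]})

# OneHotEncoder returns a sparse matrix by default, so 2000 dummy columns
# cost one non-zero per row instead of a dense 10000 x 2000 array
enc = OneHotEncoder(handle_unknown="ignore")
X_sparse = enc.fit_transform(train_df[["category_col"]])
print(X_sparse.shape, X_sparse.nnz)
```

Scikit-learn's forests accept sparse input, so the encoded matrix can usually be passed to RandomForestRegressor directly.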
If your current LB is 0.898, I suggest that you focus on class 3 and 4 for now. With just class 3 and 4, you should be able to get above LB 0.910 with efficientnetB4 segmentation and classification.
The original file list order changes every time I restart the kernel. So even though I already fixed the random_state of the train_test_split function, the fed-in file lists are different every time I restart the kernel, which means the train/valid data will be different too. Here is the link: https://stackoverflow.com/questions/37245921/python-order-of-os-listdir
What we can do is do all the pre-processing locally and upload the data as a dataset, then load the images directly from that in the kernel.
The submission file contains only the public part of the test data, which affects the public score but will not affect the private score. There is no way to find out what the private part of the test data looks like.
Wonderful visualizations, simple but efficient to know more about the overall distribution of this dataset! Thank you for sharing!
Thanks!🙂
If your current LB is 0.898, I suggest that you focus on class 3 and 4 for now. With just class 3 and 4, you should be able to get above LB 0.910 with efficientnetB4 segmentation and classification.
But you have the list of files in the train data, not in a folder. You should split the train data into train/valid, so how can the order of filenames matter?
It's downloading now. Google Colab users, please upgrade pip, uninstall kaggle and then reinstall it with the --upgrade option. If you upgrade directly it doesn't upgrade to 1.5.6, it just keeps using the 1.5.4 version.
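In a Colab notebook cell that sequence would look roughly like the sketch below (the version numbers are the ones mentioned in the comment above; adjust to whatever pip reports):

```python
# shell commands run from a Colab/Jupyter cell via the ! prefix
!pip install --upgrade pip
!pip uninstall -y kaggle
!pip install --upgrade kaggle

!kaggle --version   # should now report 1.5.6 rather than 1.5.4
```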
I ran into the same problem. I even tried just unzipping the training set and still went over the allotted space.
Thank you for posting this Konstantin! Your TPU code is very informative, I'd tried a few times to get pytorch_xla working before without success so seeing a pytorch model training on the tpu is most excellent.
But the dataset is 150 GB. How do you manage to upload it into GCP or Colab?
Is anyone or any team looking to seriously compete in this competition that I can join with? I can rent cloud GPUs for us to train on via potentially vast.ai.
Hello, my email id is: alphalimma@yahoo.com. However, as said, it would be good to connect on Skype as well. You can find me on Skype (cranjanrishi@gmail.com, user id Ranjan Rishi). Looking forward. Thanks, Rishi
It's nice work and a great approach, I upvoted! 💪 I'd appreciate it if you can upvote my kernel as well or give me any recommendation for improvement. Simple EDA: https://www.kaggle.com/caesarlupum/catcomp-simple-target-encoding/notebook Thank you!
Sure . I'll check it 👍 .
Nice work! how did you choose the step = 50000 ?
Earlier I took log1p of the target value, target = np.log1p(train["meter_reading"]), mainly because our evaluation metric is Root Mean Squared Logarithmic Error. You can find the equation here. As I converted it using log1p, I had to reverse it back to the original value after training using expm1.
Is anyone or any team looking to seriously compete in this competition that I can join with? I can rent cloud GPUs for us to train on via potentially vast.ai.
my email is epeagbas@gmail.com
Nice work
Thank you:)
Nice work! how did you choose the step = 50000 ?
The step number is arbitrary. The main reason is that the kernel RAM was overflowing when I was trying to predict the whole test set at once, so I predicted it in small batches and the kernel survived.
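As an illustration, a minimal sketch of that chunked-prediction idea; `model` and `X_test` are placeholders for whatever estimator and test matrix are in use:

```python
import numpy as np

STEP = 50_000  # chunk size; the exact number just has to fit in RAM

def predict_in_batches(model, X_test, step=STEP):
    """Predict a large test set chunk by chunk instead of all at once,
    so only one chunk's worth of intermediate arrays lives in memory."""
    preds = []
    for start in range(0, len(X_test), step):
        preds.append(model.predict(X_test[start:start + step]))
    return np.concatenate(preds)
```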
Did anyone manage to get the 2018 results (or 2019 for that matter)? I'm a total novice in coding and I was hoping I could download the data in the same way that 2015-17 was provided.
I am in search of this myself! The year of the rain deluge would be a fascinating data set, especially to compare against data from prior years!
If your current LB is 0.898, I suggest that you focus on class 3 and 4 for now. With just class 3 and 4, you should be able to get above LB 0.910 with efficientnetB4 segmentation and classification.
After I checked my code, I found the problem. I had training data leaking into the validation data... I always read the training image filenames from the OS with the os.listdir function, and I thought it would keep the same order of filenames. Turns out I need to sort them every time after I read the filenames.. what an epic fail!! Thank you guys for the advice again! Hope I still have enough time to train and tune the new model.
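For anyone hitting the same thing, a minimal sketch of the fix, assuming the images live in a folder (the folder name here is hypothetical):

```python
import os
from sklearn.model_selection import train_test_split

# os.listdir returns files in arbitrary order, so sort before splitting
filenames = sorted(os.listdir("train_images"))

train_files, valid_files = train_test_split(
    filenames, test_size=0.2, random_state=42
)
```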
I agree with you. My validation and lb also do not seem to correlate, so big shake-up is possible.
That's what I'm concerned about. The public LB has 25% of the test data, equivalent to roughly 1000 images. So by correcting only 4 rows in submission.csv to empty (from a prediction of a non-empty mask), and if that matches the ground truth, we can improve the public LB by 0.001. Given that a lot of people have the same LB scores, a shakeup is inevitable.
Does it mean I need to upload my weights as an h5 file and then read it in my code? Yes. Why are we given some test images? What help do they do at all? You'll need to make a prediction for these images and submit it. Do people normally split the training set to get some data for a validation set? Yes, we usually split data into train and validation, or train on folds.
May I ask how to upload the h5 weights to the submission kernel?
What we can do is do all the pre-processing locally and upload the data as a dataset, then load the images directly from that in the kernel.
When you say unseen private test set, do you mean the private test data is not included in the test data file we have right now? I find the discussions a bit confusing. Since this is not a 2-stage competition, I thought we already have all the required test data but are scored only on the public LB portion before closing. Then on competition end the rest will be scored. Someone correct me if I am wrong.
Yes, Prophet is a nice time series package, but as far as I know, it cannot include other information like count features or weather (unless you treat weather as holidays).
With Prophet, weather can be included here because we know the values for both the training and the testing data. Weather can be considered an additional regressor, as described in the Prophet documentation. Also, here is a notebook showing weather being used to assist forecasting bicycle usage in New Zealand.
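Roughly, Prophet's extra-regressor API works like the sketch below; the column names, horizon, and `train_df` / `known_future_temps` are illustrative placeholders, and the regressor values must be known for the forecast window as well:

```python
from fbprophet import Prophet  # package name circa 2019; newer releases use `prophet`

# train_df needs columns: ds (timestamp), y (target), air_temperature (known for train and test)
m = Prophet()
m.add_regressor("air_temperature")
m.fit(train_df)

future = m.make_future_dataframe(periods=24 * 7, freq="H")
future["air_temperature"] = known_future_temps  # placeholder: temperatures over the forecast window
forecast = m.predict(future)
```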
Does data minification only reduce RAM usage, or does it improve model training/prediction time too?
The spike occurs because when you create your lgb datasets, you're likely passing in a pandas dataframe. LGBM's python api converts everything to float64 IIRC. Try passing in a raw numpy array, and you might not see that occur. There's also a flag you can pass where it'll first write to disk rather than creating another copy of the dataframe in ram.
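A minimal sketch of the numpy-array variant, assuming a pandas frame `train_df` and a list `feature_cols` (both placeholders); the write-to-disk flag mentioned above isn't shown here:

```python
import numpy as np
import lightgbm as lgb

# hand LightGBM float32 numpy arrays instead of the DataFrame itself,
# avoiding an extra float64 copy during Dataset construction
X = train_df[feature_cols].to_numpy(dtype=np.float32)
y = train_df["target"].to_numpy(dtype=np.float32)

dtrain = lgb.Dataset(X, label=y, feature_name=feature_cols, free_raw_data=True)
booster = lgb.train({"objective": "regression"}, dtrain, num_boost_round=100)
```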
It could be something with your batch sizes. The csv that you can download is produced when your notebook runs the first time using a test_image directory of 1801 files. The csv that gets submitted to Kaggle is produced when your notebook runs the second time using a test_image directory of 5403 files. If your batch size is even then one image is not being predicted. Because Python reads filenames from directories randomly, each run you avoid predicting a different image. Perhaps during the first run the missing image isn't important but during your second run the missing image is important.
How do I fix it?
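One common guard against the missing-last-batch issue described above, sketched under the assumption of a Keras-style generator (`model`, `test_generator`, and `test_filenames` are placeholders):

```python
import math

batch_size = 32                                       # illustrative value
steps = math.ceil(len(test_filenames) / batch_size)   # ceil, so the final partial batch is still predicted

preds = model.predict_generator(test_generator, steps=steps)
```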
Learnt a lot from this kernel. You are a very good data analyst, no doubt about it. Thank you!
Thanks ! :)
efficientnet-b2 256x256 w/o 3 windowing preprocess publicLB: 0.069
how many epochs?
The only other weird, annoying thing that happens in my kernels is that at random points it will cut off the last half of my cells, and reloading or stopping and restarting does not bring it back. What I have to do then is fork the notebook to get all the code back.
Have faced this multiple times, it's extremely messed up, sometimes I am only left with the first cell of my notebook.
Have you tried pandas pd.to_datetime()? That function converts column(s) of timestamps (or single ones) like yours into pandas datetime datatype. Then you could e.g. try sth like: ts = pd.to_datetime(your_timestamps_here) print(ts.hour)
Cool! I'm glad that you could sort it out. 👍
resnet-34 single model, single fold, tta LB 0.90283 can't reproduce on local GPU
(Editing posts on Kaggle is still broken.) My solution is classification plus segmentation, but with some twist (different from anything I've read here).
resnet-34 single model, single fold, tta LB 0.90283 can't reproduce on local GPU
It will be a tough decision what to make my final solution. On the last day I had the following situation:
- 3 folds of 5 ---> LB 0.914
- 5 folds of 5 ---> LB 0.913
So I can either overfit to the public LB (which is always a bad idea) and use only 3 folds of 5, or trust validation (which is always a good idea), use all 5 folds of 5, and accept a lower public LB score. To make it worse, my current score is from 1 fold of 5. I see people at the top of the LB with only a few submissions; they have good models and don't have such problems :)
resnet-34 single model, single fold, tta LB 0.90283 can't reproduce on local GPU
What kind of model is it? seg+cls?
Nice work! how did you choose the step = 50000 ?
Why do you use np.expm1(predict) ?
You should try to use strptime:
```
from datetime import datetime
datetime_str = '03/01/2019 12:00:00 AM'
datetime_object = datetime.strptime(datetime_str, '%m/%d/%Y %I:%M:%S %p')
print(datetime_object)
print(datetime_object.strftime('%I:%M:%S %p'))
print(datetime_object.strftime('%I:%M:%S'))
```
output:
2019-03-01 00:00:00
12:00:00 AM
12:00:00
Thank both of you, I solved the problem(Actually you both solved : ) ). I hope you have a good day : )
resnet-34 single model, single fold, tta LB 0.90283 can't reproduce on local GPU
I don't know about your model but this looks like your learning curve in the past few weeks. Well done and looking forward to your solution
Have you tried pandas pd.to_datetime()? That function converts column(s) of timestamps (or single ones) like yours into pandas datetime datatype. Then you could e.g. try sth like: ts = pd.to_datetime(your_timestamps_here) print(ts.hour)
Thank both of you, I solved the problem(Actually you both solved : ) ). I hope you have a good day : )
Hope your Simple Modeling section comes out of Still in progress zone soon
I'm doing an anomaly detection study! 🙌
Hope your Simple Modeling section comes out of Still in progress zone soon
It's coming soon! I hope I've helped you! 🙏 🙌
Hi, thanks for this beautiful kernel. How long does it take this kernel to finish committing on Kaggle?
You're welcome! I do have a personal GPU and I think it is now already running nonstop for over 10 days trying out various ideas. I will take a look at colab...should be a nice addition. Good luck.
resnet-34 single model, single fold, tta LB 0.90283 can't reproduce on local GPU
amazing!
Hi, thanks for this beautiful kernel. How long does it take this kernel to finish committing on Kaggle?
Thank you for your aid. I will try it in Paperspace or in Colab after finishing the steel defect detection competition, and I recommend you do the same if you don't have a personal GPU.
resnet-34 single model, single fold, tta LB 0.90283 can't reproduce on local GPU
6 days before the deadline I found a new, better way. So many ideas, so little time. LB 0.91719
You should try to use strptime:
```
from datetime import datetime
datetime_str = '03/01/2019 12:00:00 AM'
datetime_object = datetime.strptime(datetime_str, '%m/%d/%Y %I:%M:%S %p')
print(datetime_object)
print(datetime_object.strftime('%I:%M:%S %p'))
print(datetime_object.strftime('%I:%M:%S'))
```
output:
2019-03-01 00:00:00
12:00:00 AM
12:00:00
That's why I suggested pd.to_datetime() 😏 See my updated answer and try that code on your columns.
Hi, thanks for this beautiful kernel. How long does it take this kernel to finish committing on Kaggle?
Yes, unfortunately almost all the time is used. With the 30 hours a week GPU limit I can do 2 more versions this week ;-)
Things that might improve the kernel and that (likely) won't affect the run time:
- Changing/adding the augmentation.
- Changing the dropout value.
- Changing the learning rates.
Things that will impact the total run time, so you will have to save time at some other point:
- Adding additional FC layers in the head.
- Starting earlier/later with creating predictions for the final submission.
- Varying the steps_per_epoch slightly.
This kernel still had about 1500 seconds left... so if you can get it even closer to the limit of the kernel, that might give some benefit.
Have you tried pandas pd.to_datetime()? That function converts column(s) of timestamps (or single ones) like yours into pandas datetime datatype. Then you could e.g. try sth like: ts = pd.to_datetime(your_timestamps_here) print(ts.hour)
I have updated my answer. And Alessia's answer is good too! 👍
I have been confused since the beginning of the competition, but I interpreted the rules as: the test dataset given to us, which we can see, is both the private and public data. The public score is calculated based on 33% of this dataset, and every time we submit, the private score based on 100% of this test data is also calculated. There is no separate hidden dataset. But my interpretation could be wrong?
We know exactly nothing about the private dataset. What we see in the submission csv file and in the test images folder will not be used in the final score. We can't probe anything because no output is possible from the kernel run; the score we see is for the public dataset.
You should try to use strptime:
```
from datetime import datetime
datetime_str = '03/01/2019 12:00:00 AM'
datetime_object = datetime.strptime(datetime_str, '%m/%d/%Y %I:%M:%S %p')
print(datetime_object)
print(datetime_object.strftime('%I:%M:%S %p'))
print(datetime_object.strftime('%I:%M:%S'))
```
output:
2019-03-01 00:00:00
12:00:00 AM
12:00:00
Oh, I think this will work, thanks. But do you know how I can apply this method to the column in my data? The column is data["Date"] and the values in it look like "03/01/2019 12:00:00 AM" : )
Have you tried pandas pd.to_datetime()? That function converts column(s) of timestamps (or single ones) like yours into pandas datetime datatype. Then you could e.g. try sth like: ts = pd.to_datetime(your_timestamps_here) print(ts.hour)
If you want, I can send you the link to the data; I think you will find the solution easily.
If your current LB is 0.898, I suggest that you focus on class 3 and 4 for now. With just class 3 and 4, you should be able to get above LB 0.910 with efficientnetB4 segmentation and classification.
I just looked at my submission history and I have LB 0.90239 for a single model (no classification stage) with only resnet34 and bce_dice_loss. Please remember to do postprocessing - remove noise masks from the submission.
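A minimal sketch of that kind of noise removal, assuming binary per-class masks; the pixel threshold is an illustrative value to tune per class on validation data, not one taken from the post:

```python
import numpy as np

MIN_PIXELS = 3500  # illustrative threshold; tune per defect class

def drop_small_masks(mask, min_pixels=MIN_PIXELS):
    """Zero out a predicted binary mask whose area is below the threshold,
    treating tiny predictions as noise rather than real defects."""
    if mask.sum() < min_pixels:
        return np.zeros_like(mask)
    return mask
```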
Till now, that's the only kernel with geolocation (I think). Thanks Carlos Alberto, you did great.
Thank you so much for your support. Greetings
Have you tried pandas pd.to_datetime()? That function converts column(s) of timestamps (or single ones) like yours into pandas datetime datatype. Then you could e.g. try sth like: ts = pd.to_datetime(your_timestamps_here) print(ts.hour)
Actually no, I just want to delete the year, month and day information from my column and turn this form "03/01/2019 12:00:00 AM" into this "12:00:00 AM" or this "12:00:00", but to_datetime() doesn't work, or since I am very new to this area I am doing something wrong : )
wow, Impressive work. I read every single row! thanks for sharing
Thanks! I'm glad you liked it :)
It's nice work and a great approach! I upvoted! Would you please check out my kernel as well? I'd appreciate it if you can upvote my kernel or give me any recommendation for improvement. Thank you! https://www.kaggle.com/billynguyen/using-r-for-eda-modeling
Upvoted, thanks :)
Have you tried pandas pd.to_datetime()? That function converts column(s) of timestamps (or single ones) like yours into pandas datetime datatype. Then you could e.g. try sth like: ts = pd.to_datetime(your_timestamps_here) print(ts.hour)
If I execute this:
```
import pandas as pd
from datetime import datetime

format = '%d/%m/%Y %I:%M:%S %p'
ts = pd.to_datetime("03/01/2019 12:00:00 AM", format=format)

just_the_hour = '%I:%M:%S %p'
print(ts.strftime(just_the_hour))
```
I get this printout:
12:00:00 AM
Is that what you are looking for?
Hi, thanks for this beautiful kernel. How long does it take this kernel to finish committing on Kaggle?
Almost 9 hours :( What suggestions do you have for improving this kernel?
Hi, thanks for this beautiful kernel. How long does it take this kernel to finish committing on Kaggle?
Thanks. This kernel took about 30900 seconds to finish running... so not much time left.
Thanks for this. Have you tried shifting by -90 degrees to see if that improves your LB score?
Yes, I have tried, and it makes my LB score worse. :/
Why is the direction the players are facing so important to you? It does not tell much. They may have been rotating their heads just when the data was collected. Also, they don't have to move in the direction they are looking.
I agree it does not tell much, but it does add value to my model and I'm curious to understand if it could add even more value to the model. “All great events hang by a hair. The man of ability takes advantage of everything and neglects nothing that can give him a chance of success."
Have you tried pandas pd.to_datetime()? That function converts column(s) of timestamps (or single ones) like yours into pandas datetime datatype. Then you could e.g. try sth like: ts = pd.to_datetime(your_timestamps_here) print(ts.hour)
Yes, I tried, but the date values in my data are just strings, so this gives me only "0" values. The date values in my data look like someone wrote them by hand, so I think pandas doesn't know they are dates; it just sees them as normal strings.
"Don't trust the public leaderboard too much, because public and private are also split by time." Do you have any evidence of that? I did an LB split probe and it seems to be a random split. What makes you think it's a time-based split?
4.69^2 = 22.00
1.36^2 = 1.85
2.60^2 = 6.76
(6.76 - 1.85) / (22.00 - 1.85) = 0.24
Given that the last months are winter, 0.24 being a bit higher than 0.22 makes sense. So the split is probably not 78/22 by time.
Just an observation: the buildings from your list all have the same site_id=0.
Right. So my guess is that each site corresponds to a region of the USA. Even if site_id 0 were not part of the USA, I fail to see what the 4th of June corresponds to.
So if all site = 0 - is there a country in the world that anyone knows of that has June 4 as a shut down and party holiday?
2016 On June 7, Kenya went without power for over 4 hours. The nationwide blackout[173] was caused when a rogue monkey entered a power station. Only about 10 million citizens were affected by the outage as the World Bank estimates that only 23% of the country's population have access to electricity.[174] On Thursday, September 1, Hurricane Hermine swept across the big bend area of Florida, directly affecting the state's capital of Tallahassee. Hermine disrupted power for more than 350,000 people in Florida and southern Georgia, many of whom were without power for a week. On September 21, 2016, a full power system collapse occurred on the island of Puerto Rico affected its 3.5 million inhabitants. The power outage, popularly referred to as the "Apagón" (translated as "super outage") has been labeled as the largest in Puerto Rico not caused by an atmospheric event. The outage occurred after two transmission lines, with power running up to 230 kV, failed.[175] On September 28, the 2016 South Australian blackout affected the entire state of South Australia (1.7 million people). It was caused by two tornados that destroyed three critical elements of infrastructure, and the power system protected itself by shutting down.[176] While some politicians and commentators have tried to link this power failure with the state's high mix of renewable energy sources (particularly wind energy), some experts have indicated that the blackout had nothing to do with this.[177] A number of technical reports in the previous 18 months expressed concern that the reliability and security of the power supply in South Australia had decreased following the introduction of substantial wind power, and the consequent withdrawal of major conventional power stations.[178]
So if all site = 0 - is there a country in the world that anyone knows of that has June 4 as a shut down and party holiday?
For a site (a city), a significant power outage can bring down most buildings. Hospitals would have backup generators - not sure if the meter readings would include electricity generated by a backup. But if site_id = 0 had most buildings with low usage on Jun 4, then a power outage could be the reason. So google - major power outages 2016??
I have been confused since the beginning of the competition, but I interpreted the rules as: the test dataset given to us, which we can see, is both the private and public data. The public score is calculated based on 33% of this dataset, and every time we submit, the private score based on 100% of this test data is also calculated. There is no separate hidden dataset. But my interpretation could be wrong?
I think that the 1801 images are not part of the private test dataset; they are the public test dataset. The other 3602 images hidden from us make up the private test dataset, because the submitted kernel's running time is 3x longer than the committed kernel's running time.
It's nice work! I upvoted! Would you please check out my kernel as well? I'd appreciate it if you can upvote my kernel or give me any recommendation for improvement. Thank you! Intro kernel: Introduction: ASHRAE - Great Energy Predictor III https://www.kaggle.com/caesarlupum/ashrae-start-here-a-gentle-introduction Interesting approach! ✔️
Done, great work
nice work! :)
Thank you so much
how did you find the best hyperparameters for xgb and lightgbm ? gridsearch ??
I tried to do my own voting regression using 4 algorithms, and I chose the weights after more than 50 tests. You can see this article about Automate Stacking in Python.
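For reference, a weighted blend like that can be sketched with scikit-learn's VotingRegressor; the four estimators and the weights below are illustrative placeholders (as are X_train / y_train / X_valid), not the ones used in the post:

```python
from sklearn.ensemble import (VotingRegressor, RandomForestRegressor,
                              GradientBoostingRegressor)
from sklearn.linear_model import Ridge
from sklearn.neighbors import KNeighborsRegressor

# four base models blended with hand-tuned weights
ensemble = VotingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
        ("gbm", GradientBoostingRegressor(random_state=0)),
        ("ridge", Ridge(alpha=1.0)),
        ("knn", KNeighborsRegressor(n_neighbors=10)),
    ],
    weights=[0.4, 0.3, 0.2, 0.1],  # chosen by repeated validation runs
)
ensemble.fit(X_train, y_train)
pred = ensemble.predict(X_valid)
```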
Hi I am a student at the University of Chicago and am looking for a team to work on this competition. I have some experience working with R, Python and large data sets.
Hi Nupur, I will like to join your team
Curious if anyone knows the reason, but I've tried to run many kernels here, this one included, using a Google Cloud instance. Latest drivers and everything. Each epoch shows as 20-100 hours, even though the GPU is the same or better than the one Kaggle uses. The GPU shows full memory, running at ~100% as well. Is it somehow a CPU issue? I've used their 'high' CPUs, but still have the problem.
Interesting but annoying issue. I have no experience with using GPUs on Linux, but the performance between the different versions should likely be the same. Maybe you can find/get some support from Google or TensorFlow on this issue. Or post it as a separate question in the discussion forums. Good luck!
Hello everyone! Anyone from the Dallas/Fort Worth area want to work on this project? It's fine if you live somewhere else, we can work remotely too. I am very much interested in this project.
Have you found out why it casts the target to Long when doing .databunch() ?
Found it! when we create our to we can do TabularPandas(df_main, procs, cat_names, cont_names, y_names='age', type_y=Float, splits=splits), I'll modify the kernel
Have you found out why it casts the target to Long when doing .databunch() ?
It's currently on their todo list: https://github.com/fastai/fastai_dev/blob/master/dev/local/tabular/core.py#L137
So if all site = 0 - is there a country in the world that anyone knows of that has June 4 as a shut down and party holiday?
5 minutes of googling did not bring any positive results. Maybe I should try harder :-). Or maybe these outliers have a completely different meaning. We are assuming that holidays must dramatically affect the power consumption. But then why don't we see any changes in the power consumption during Christmas time or around Thanksgiving Day?
About 3/4 of the way through the V1 course on fastai - in your opinion, will I get too confused if I continue through the course on V1 and fork your V2 kernel?
I'd recommend the v2 walkthroughs, at least the tabular one. I'm working on comparison notebooks for each (tabular, image, etc.) and I'll post a kernel when that's done. But everything more or less operates the same, just we now have a pipeline instead of an ImageList etc.
Hi , may I know why np.expm1 is preferable?
The functions np.log & np.exp are used when the argument is guaranteed to be greater than zero. Since the variable can sometimes be zero (10% of the target in our case), np.log1p & np.expm1 are preferred. When y == 0, np.log(y) is not defined, while np.log1p(y) = np.log(y+1) is equal to 0. And of course it's obvious that np.exp(np.log(y)) = y (if np.log(y) exists), while np.expm1(np.log1p(y)) = y. That is why they are used in pairs.
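A tiny illustration of the round trip, with a made-up target array that includes zeros:

```python
import numpy as np

y = np.array([0.0, 10.0, 1500.0])   # meter_reading-style target, zeros allowed

y_log = np.log1p(y)                  # log(1 + y); defined and equal to 0 at y == 0
y_back = np.expm1(y_log)             # exact inverse: exp(y_log) - 1

assert np.allclose(y, y_back)
```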
Kostiantyn - I think that the peak temperature date for most of the USA and Canada falls in July and August - while you're correct on the peak position, the peak BTUs delivered by the sun are higher in July and August. Don't know why, just know the numbers.
Yes, you are correct about the temperature, but in those months the length of a sunny day is less than in June. The difference for July is 0.5-1 hours, for August 1-1.5 hours. So in August every day people should start to use electricity earlier in the evening than in June, for example.
Not sure if this is what you are looking for, but found these: https://datahub.io/dataset/global-garment-supply-chain-data
The files available at this link are not opening.
1.53 for median
It is expected that the mean will perform better (assuming no weird outliers), as the mean minimises MSE while the median minimises MAE.
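A quick numeric check of that claim, on a made-up skewed sample (nothing from the competition data):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.exponential(scale=2.0, size=10_000)      # skewed sample, no extreme outliers

candidates = np.linspace(y.min(), y.max(), 2001)
mse = [np.mean((y - c) ** 2) for c in candidates]
mae = [np.mean(np.abs(y - c)) for c in candidates]

print(candidates[np.argmin(mse)], y.mean())      # constant minimising MSE is ~ the mean
print(candidates[np.argmin(mae)], np.median(y))  # constant minimising MAE is ~ the median
```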
Have you tried pandas pd.to_datetime()? That function converts column(s) of timestamps (or single ones) like yours into pandas datetime datatype. Then you could e.g. try sth like: ts = pd.to_datetime(your_timestamps_here) print(ts.hour)
This is my favourite go-to resource: https://jakevdp.github.io/PythonDataScienceHandbook/ Very well explained, comprehensive, free.
The meter reading is even lower during the end of July. But is the end of July (or August, if there is a month shift) a national holiday there? There is a list of holidays in this discussion.
I guess that some of the low peaks can be random shutdowns and/or measurement errors. If you look at the series you'll see that some buildings have extremely low electricity consumption on dates that look seemingly random.
Is anyone or any team looking to seriously compete in this competition that I can join with? I can rent cloud GPUs for us to train on via potentially vast.ai.
Sure, let's connect somewhere other than this discussion forum. How should I contact you?
I'm back. Deutsch (German) is as hard as Latin. I'm here to help all who ask for my help. Probably not with the code. (don't spread it around) But I'll do my best, compatriot. You and all the Aristocats team from this CatCategoricalChallenge.
All the best for U! 🐱 For now, I'm trying to learn DS and improve my slackline skills hehe
I'm back. Deutsch (German) is as hard as Latin. I'm here to help all who ask for my help. Probably not with the code. (don't spread it around) But I'll do my best, compatriot. You and all the Aristocats team from this CatCategoricalChallenge.
You're great ❤️! I don't speak Deutsch (German), but I have some friends who live in Germany. I have a dream to travel to Canada, London, Germany! For now, I want to finish my master's degree and work hard on learning Data Science. I would be very happy if I ever meet you. Have a good day!
Some building blocks for your experiments. Warning: this code is a bit messy and uncleaned, but you can easily adapt it to my starter kit:
- drn22d deeplab3 softmax
- resnet34 unet plus plus
The submission script you provided always runs out of memory on kaggle kernel. Also the grid search code runs out of memory because it stores all the predicted masks and truth masks as variable. Any way to overcome this problem?
Have you tried pandas pd.to_datetime()? That function converts column(s) of timestamps (or single ones) like yours into pandas datetime datatype. Then you could e.g. try sth like: ts = pd.to_datetime(your_timestamps_here) print(ts.hour)
And I wanted to ask you: how can I practise the datetime library and data analysis algorithms in Python? Do you know any good sources for this? I intend to practise these topics very hard.
Cheers. I couldn't find where the data is coming from. Are you sure it's US [only]?
Did not check your links yet - but do any include annual summary reports that contain the ASHRAE buildings, etc.? I was very modestly involved in data collection several years ago for a building in a DOE survey. It's a decent investment to collect the data, standardize things, etc. for the building owner and data collector. Someone has to pay for that survey effort - I know it's been done in the USA by the DOE. You can find summary reports for the USA covering the years where surveys were made. Next point - plot the temperatures over the 3 years. They are certainly northern hemisphere values. For example, they are not going to match Hong Kong weather patterns. Last point - it's your model. Do it your way :)
Curious if anyone knows the reason, but I've tried to run many kernels here, this one included, using a Google Cloud instance. Latest drivers and everything. Each epoch shows as 20-100 hours, even though the GPU is the same or better than the one Kaggle uses. The GPU shows full memory, running at ~100% as well. Is it somehow a CPU issue? I've used their 'high' CPUs, but still have the problem.
It's the GPU version. Pre-installed instance, I even removed and re-installed it. GPU is showing 100% memory and work use once the model starts running, which is why I'm confused. If it's showing roughly the same with the gpu off for you, there might be a bug or something I'm doing causing the gpu to somehow load it (and run something to show 98-100% use?) then the cpu does it as well. Thanks, that at least gives me something to look into! Thanks for the kernel btw! Edit: Cpu usage is only 21% while the model is being run, gpu is 100%. Definitely some other bug. I'll do a clean install later and retry on an Ubuntu instance. The pre-setup ones are Debian.
Curious if anyone knows the reason, but I've tried to run many kernels here, this one included, using a Google Cloud instance. Latest drivers and everything. Each epoch shows as 20-100 hours, even though the GPU is the same or better than the one Kaggle uses. The GPU shows full memory, running at ~100% as well. Is it somehow a CPU issue? I've used their 'high' CPUs, but still have the problem.
Hi. Not sure what might cause that... but one thing is sure: if it says 20-100 hours for one epoch, it is definitely running only on the CPU. When I'm debugging my notebooks in Kaggle I turn the GPU off to make sure I don't use any of the 30 hours GPU quota... and for one epoch it usually says something like 25 to 27 hours. Could it be the TensorFlow version? Is it the GPU or CPU one?
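One quick way to check which build is installed and whether a GPU is actually visible (API names per TF 1.x / early 2.x of that period; adjust to your version):

```python
import tensorflow as tf

print(tf.__version__)

# TF 1.x / early 2.x style check: True only for the GPU build with a visible device
print(tf.test.is_gpu_available())

# TF 2.1+ equivalent:
# print(tf.config.list_physical_devices('GPU'))
```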
Good job, man 👍
thank you
Thanks for sharing in such high speed. I wrote some another model training example kernel based on your notebook. https://www.kaggle.com/corochann/exploratory-data-analysis-and-catboost-submission
, I see you have got good results with your kernel :)
I am just curious about your isHoliday column, since you didn't populate it correctly. Also how can you fetch the holiday dates for this data?
Why do you think so? In one of the versions I added a link to US holidays.
Hi, I just tried running your exact code in the Kaggle kernel but it produces this error in the Unet chunk: AttributeError: module 'segmentation_models.backbones' has no attribute 'get_preprocessing'. Any idea why this might occur? Thanks!
Thank you so much. I just had a look at both the links you attached, but they both seem to be using the same syntax in the code - without the backbone_name=BACKBONE. Maybe I am missing something here though - any advice would be really appreciated!
Thank you for posting this Konstantin! Your TPU code is very informative, I'd tried a few times to get pytorch_xla working before without success so seeing a pytorch model training on the tpu is most excellent.
Thanks for your replies. Could you tell me how large your GCP disk space is? Thanks. And why did you say "it's not an issue"? Thanks
Hi, I just tried running your exact code in the Kaggle kernel but it produces this error in the Unet chunk: AttributeError: module 'segmentation_models.backbones' has no attribute 'get_preprocessing'. Any idea why this might occur? Thanks!
Hi, it's just a matter of what repository you use: this is the Keras version and this is the PyTorch version. They should have the same API to do the same things, but as you noted, some things are a little different.
Basic question: This competition is kernels-only. I know this means that we can submit through kernels only. The only way to interact with the test set is through the kaggle.competition.nflrush module - is this module available to be used outside kernels, and if so, how?
I've downloaded the docker image (gcr.io/kaggle-images/python, because kaggl/images gave me a tensorflow instruction set error), but I'm not sure what I should do now to access the nflrush module. Can you elaborate?
Nice kernel. The donut plot was a new thing.
thank you!!
upload the notebook
So... the code in stage 1 does NOT need to be uploaded, am I right? Thanks~!
There is a bug in the code above. The log-likelihood computation has to happen before the normalization. In other words, this line self.loglike += log(wp1 + wp2) has to be replaced with self.loglike += log(den)
Good catch, tested and updated it (indeed a better result when the means of both Gaussians are closer together).
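For readers without the kernel open, a minimal sketch of the E-step being discussed; the variable names (wp1, wp2, den, self.loglike) mirror the thread, but the surrounding class structure is assumed rather than copied from the original code:

```python
from math import exp, log, pi, sqrt

def gaussian(x, mu, sigma):
    """1-D Gaussian density."""
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

class TwoGaussianMixture:
    """Two-component 1-D mixture; only the E-step is sketched here."""

    def __init__(self, mu1, sigma1, pi1, mu2, sigma2, pi2):
        self.mu1, self.sigma1, self.pi1 = mu1, sigma1, pi1
        self.mu2, self.sigma2, self.pi2 = mu2, sigma2, pi2
        self.loglike = 0.0

    def e_step(self, data):
        self.loglike = 0.0
        responsibilities = []
        for x in data:
            wp1 = self.pi1 * gaussian(x, self.mu1, self.sigma1)
            wp2 = self.pi2 * gaussian(x, self.mu2, self.sigma2)
            den = wp1 + wp2
            # accumulate the log-likelihood from the unnormalized weights,
            # i.e. before dividing by den (the fix described above)
            self.loglike += log(den)
            responsibilities.append((wp1 / den, wp2 / den))
        return responsibilities
```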
Very nice and useful kernel, I upvoted! Could you also please check my kernel and give feedback? https://www.kaggle.com/wguesdon/heart-disease-eda-and-random-forest-with-r
Thanks bro! I really appreciate it! Your analysis work is really detailed. You could also visit my personal website for more Python and R work: https://xsong.ltd/en
Fun fact. Officials also have numbers! https://www.latimes.com/sports/nfl/la-sp-nfl-ask-farmer-20161001-snap-story.html Now that I think of it, the position of referees on the field would be nice to have, especially since they are considered to be part of the field and can sometimes impact the path taken by players.. but I guess they aren't wearing shoulder pads to track and I'm sure having that data would be of little impact. 😄
Great job, I'm getting a lot of help from you :) Just one question: is there a way to find the referee in the dataset? I wonder if I'm missing some feature here. How do we find the referee, since they have a different jersey number?