Thanks for this post! I can add another tip: Using Colab with 25 GB of RAM :smile:
The runtime limit should be 9h, so the same as Kaggle kernels. The cool thing is that there's no GPU-per-week limitation at the moment. It's also pretty simple to work with Colab and Drive to read and write files. It takes less time than connecting through Kaggle's API.
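For anyone new to that workflow, here is a minimal sketch of mounting Drive in Colab and reading/writing files; the paths are just examples.

```python
from google.colab import drive
import pandas as pd

# Mount Google Drive into the Colab filesystem (prompts for authorization once).
drive.mount('/content/drive')

# Example paths on your Drive -- adjust to wherever you keep the competition files.
df = pd.read_csv('/content/drive/My Drive/kaggle/train.csv')
df.to_csv('/content/drive/My Drive/kaggle/output.csv', index=False)
```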
Hi, I have a Ph.D. in theoretical physics and I am looking for someone to team up with for this competition. I have experience with machine learning techniques in Python and am eager to work on this project.
I would love to team up with you. I have a master's in mathematical finance.
Follow these steps to run the code.
1) Open the file BigQuery-Dataset-Access.md and follow the instructions. To gain access to this competition's dataset on BigQuery, you must:
   1. Join the open Google group provisioned for access to the private dataset. First follow this link: https://groups.google.com/d/forum/bigquery-geotab
   2. Log in with a Google account.
   3. Click on "Join group" in the pop-up that appears.
   4. You will now have access to query the dataset kaggle-competition-datasets.geotab_intersection_congestion.train. NOTE: This is NOT an email distribution list, so once you see "You cannot view topics in this forum" after joining the group, you do not need to take any further action and can proceed to access the dataset.
   5. You can now access the dataset and run your own copy of the starter kernel.
2) Create a Google project as described on this page: https://cloud.google.com/resource-manager/docs/creating-managing-projects
3) Once you log in to the project, find the tile called "Project info" and pick up the Project ID.
4) Replace the Project ID in the code.
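As a rough sketch of step 4, here is how the query might look with the google-cloud-bigquery client once your own Project ID is filled in; the project ID below is a placeholder.

```python
from google.cloud import bigquery

# Replace with the Project ID shown in your GCP console's "Project info" tile.
client = bigquery.Client(project="your-project-id")

query = """
    SELECT *
    FROM `kaggle-competition-datasets.geotab_intersection_congestion.train`
    LIMIT 10
"""
df = client.query(query).to_dataframe()
print(df.head())
```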
In my case, I sign in to Kaggle with one Google account and use another for the BigQuery cloud project. Without explicitly telling it which account to use, it might automatically redirect to the Kaggle account by default. Either open a clean tab/window or explicitly log in with the BigQuery account, and then join the Google group for dataset access permission. Hope this is helpful.
Where does it say that a NaN value of floor_count implies 'ground floor'? I am not able to figure it out. If I look at the definition of floor_count: "floor_count - Number of floors of the building". Doesn't that mean a building with just a ground floor (a single-floored building) should have a count of 1? Am I missing something?
I am currently having an issue. When I click Link an account, it opens a small window that is just blank white, and downloads a file called auth.dms. Then nothing. Using Safari on macOS Catalina if that matters.
How about using Google Chrome? I'm guessing the auth.dms file should maybe be stored in your kernel...
One thing you have to keep in mind is that classical time series models exploit patterns in the data. We do not have enough data to understand the intrinsic patterns of each building. Moreover, forecasting methods are usually best for short-term forecasts because they use a bootstrap process of using previous predictions to make future predictions. The errors will accumulate and the method will necessarily become unreliable after a certain amount of time. General regression models will thus usually outperform standard time series models in this competition. You could, however, mix the two, by using a classical time series model for the short term (e.g. January and February 2017) and a regression model for the long term.
There is no such thing as a "best model" :). You should be spending time thinking about the features, not the model.
, , Congratulations! and thanks for sharing your insights and your approach. I have 2 quick questions if you get a chance to answer. I tried creating UIDs but mine did not involve D1 and I was not confident of my UIDs being right. I would love to understand your approach for creating the UIDs. 1) How did you identify the meanings of D1 and D3? 2) How were you sure "card1, addr1, and D1" were enough for the UID (how did you validate that)? Many Thanks!
Thanks a lot for your responses! Appreciate it very much!
Hi Shawn Yan (and all others on this thread), Sorry that this has some kinks we are still trying to work out. Committing won't affect the completion status, though re-running your notebook interactively should fix it in many cases. To help debug the cases where you still aren't getting 100%, can you tell me whether this is on exercises that you first started before progress tracking went live (Aug 8) or after? Hoping to have this updated soon so you can get full credit. Dan PS For those worried about being invited to the invite-only competition associated with these courses, you won't need to get 100% to get an invite. As long as you've made any progress, we'll send you an invitation.
Hi DanB, Kaggle team, Thanks for looking into this topic. I experience the same issues with the "Intro to SQL" and the "Advanced SQL" courses. Both courses have been started after Aug 8. For "Intro to SQL", I have only 40% completion for the "As & With" exercise and 0% completion for the "Joining Data" Exercise although I have completed all tasks within the exercise. Re-running the notebook one day later also does not seem to solve the problem. For "Advanced SQL", I have 0% completion in the "JOINs and UNIONs" exercise although I have completed the tasks in the notebook. (I'm currently working through the rest of the course, so I don't know if the issue persists for the other exercises as well.) Hope this helps you to identify the bug in the system.
If you think of the positions as having some hierarchy, some of these are just being recorded at different levels. I'll try to list them in their hierarchy as best as possible:

Offense
- Offensive Line (OL): These are the guys on the line of scrimmage, whose job it is to block the defense. If it is a passing play, they are blocking to give the QB a clean pocket to throw from without getting sacked. On a run play, they are trying to create lanes for the ball carrier to run through cleanly. Except in rare situations, guys on the OL cannot catch passes and generally do not run the ball. Typically there are 5 OL in on a play, although sometimes you might get a Jumbo formation or something that has one or more extra OL (or even DL in rare situations) to help block more.
  - Center (C): The middle of the OL, as the name implies. The center also has the job of snapping the ball to the QB.
  - Guard (G, OG): These are the guys on either side of the Center.
  - Tackle (T, OT): These are the guys on the ends of the offensive line, outside of the Guards.
- Wide Receiver (WR): These are the guys who run out and catch passes from the QB. They'll still be somewhat involved on running plays as downfield blockers.
- Tight End (TE): Somewhat in between a WR and an OL, these guys can go out and catch passes but are much more involved in blocking. They'll typically line up just outside the Tackles, and there are typically 0, 1, or 2 of them in on a play.
- Running Back (RB): These are the guys typically running the ball, although more and more they are also good at catching passes either behind the line of scrimmage or shortly beyond it. They also sometimes help protect the QB by being an additional blocker on passing plays.
  - Halfback (HB): A smaller, faster RB. Typically when people say "Running Back", this is what they mean.
  - Fullback (FB): A bigger, slower RB who typically runs ahead of the HB to help create a better running lane. FBs can be ball carriers themselves, however. Not all teams have a FB on their roster or use them regularly.
- Quarterback (QB): This is the guy who is in charge of receiving the snap and then either passing it or handing it off for a running play. QBs are sometimes ball carriers on running plays, either by design or because they can't find anyone open and want to try getting yards by running instead. A frequent planned running play for a QB is a "QB Sneak", where the QB will immediately push through the center of the line, basically falling forward. This usually results in a fairly small distribution of yardage outcomes, usually -1 to 1 yards.

Defense
- Defensive Line (DL): These are the guys trying to get to the quarterback and/or ball carrier. They are typically some of the biggest guys on the field.
  - Nose Tackle (NT): A defensive lineman who sits in the middle of the line.
  - Defensive Tackle (DT): A defensive lineman who is in the interior of the defensive line.
  - Defensive End (DE): A defensive lineman who is on the outside of the defensive line.
- Linebackers (LB): Linebackers are behind the defensive line. Their job is to help stop the run, help cover intermediate passing routes, and sometimes put pressure on the QB. Their sub-positions are the hardest to define, because they vary widely by the type of defense a team employs.
  - Inside Linebacker (ILB)
  - Middle Linebacker (MLB)
  - Outside Linebacker (OLB)
- Defensive Backs (DB): These guys are at the back of the defense, mostly protecting against passing plays, but they'll come up to stop runners and are often the last defense against a runner who has managed to break through the first few yards of the defense.
  - Cornerback (CB): These are your fastest guys on the field, and they'll be covering receivers.
  - Safeties (SAF, S): These are a little bigger and often a little slower than CBs (oftentimes they are former CBs who have lost a step or two), but are still largely responsible for covering receivers. They also often end up being the furthest back from the line of scrimmage; as their name implies, part of their job is being the last line of defense.
    - Free Safety (FS): This safety is more involved in pass coverage and being the last line of defense, typically.
    - Strong Safety (SS): This guy will typically line up on the "Strong" side of the play (i.e., the one with an extra TE or OL if there is one) and be more involved in stopping run plays.

Some of the exact uses of each position will vary from play to play and team to team, but hopefully that gives you some idea of what they all mean.
Thanks for the detailed explanation. Why do you think OLs are in the DefensePersonnel values? Like: 5 DL, 3 LB, 2 DB, 1 OL
and you are his personal advertiser :) sorry for the sarcasm. Agree, kernel master is the toughest to achieve.
Haha. I'm not, but I'm trying to promote myself in a way, since I was the one who interviewed him.
.073 is achievable with b0 224x224. I grouped on patient and used some tricks, though
B0 is one of the EfficientNet architectures (there are B0, B1, ..., B7).
Hi, I am sorry for this dumb question ;) image = np.array([ image1 - image1.mean(), image2 - image2.mean(), image3 - image3.mean(), ]) Do I understand correctly that you subtract a different mean for each individual image? (I have been confused about this point in other competitions too; I thought we should apply channel means calculated from all the data.)
Thanks ! I am a bit relieved that I don’t fundamentally misunderstand something :D
Hi, when running the script train001.sh it appears to me that the validation dataset is not created, since it has 0 elements. In fact, from the log I get the following:

mode: train
workdir: ./model/model001
fold: 5
batch size: 28
acc: 1
model: se_resnext50_32x4d
pretrained: imagenet
loss: BCEWithLogitsLoss
optim: Adam
dataset_policy: all
window_policy: 2
read dataset (665414 records)
applied dataset_policy all (665414 records)
use default(random) sampler
dataset_policy: all
window_policy: 2
read dataset (0 records)
applied dataset_policy all (0 records)
use default(random) sampler
train data: loaded 665414 records
valid data: loaded 0 records

Also, in the file main.py I see that the function valid is never called. Probably I am missing something, but to my eyes it appears that this code should not work. Please correct me where I am wrong.
Thank you for providing more information. --fold 5 is the wrong part. Because it uses 5 folds (n_fold=5) and each fold is counted from 0 to 4, 5 is out of range and it ends up using all the data for training.
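To make that concrete, here is a tiny illustration (my own assumption about how folds might be assigned, just for intuition) of why an out-of-range fold index leaves the validation split empty:

```python
# Illustrative only: assume each record's fold is record_index % n_fold.
n_fold = 5
records = list(range(100))

fold = 5  # out of range -- valid fold indices are 0..4
train = [r for r in records if r % n_fold != fold]
valid = [r for r in records if r % n_fold == fold]
print(len(train), len(valid))  # 100 0 -> everything goes to training, validation is empty
```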
Hi, I am sorry for this dumb question ;) image = np.array([ image1 - image1.mean(), image2 - image2.mean(), image3 - image3.mean(), ]) Do I understand correctly that you subtract a different mean for each individual image? (I have been confused about this point in other competitions too; I thought we should apply channel means calculated from all the data.)
Hi Neuron, Thank you for pointing this out. Now I realize I made a mistake and you are right. I meant to use min-max normalization. For policy == 2, this part is, I think, okay: image1 = (image1 - 0) / 80; image2 = (image2 - (-20)) / 200; image3 = (image3 - (-150)) / 380. But as you mentioned, image = np.array([ image1 - image1.mean(), image2 - image2.mean(), image3 - image3.mean(), ]) does not make sense and should be just image = np.array([image1, image2, image3]). And yes, you can use z-score normalization (calculating the mean and std from all the data) instead of this too.
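Putting that correction together, a minimal sketch of the policy-2 preprocessing as described (min-max scale each windowed channel, then stack); the window offsets and ranges are the ones quoted above, and the function name is illustrative.

```python
import numpy as np

def stack_windowed_channels(image1, image2, image3):
    # Min-max scale each windowed channel using the (offset, range) pairs quoted above.
    image1 = (image1 - 0) / 80
    image2 = (image2 - (-20)) / 200
    image3 = (image3 - (-150)) / 380
    # No per-image mean subtraction -- just stack the three channels.
    return np.array([image1, image2, image3])
```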
Hi, I am sorry for this dumb question ;) image = np.array([ image1 - image1.mean(), image2 - image2.mean(), image3 - image3.mean(), ]) Do I understand correctly that you subtract a different mean for each individual image? (I have been confused about this point in other competitions too; I thought we should apply channel means calculated from all the data.)
No, it's used for different parts of each picture, e.g. brain, bone, blood...
Hi, when running the script train001.sh it appears to me that the validation dataset is not created, since it has 0 elements. In fact, from the log I get the following:

mode: train
workdir: ./model/model001
fold: 5
batch size: 28
acc: 1
model: se_resnext50_32x4d
pretrained: imagenet
loss: BCEWithLogitsLoss
optim: Adam
dataset_policy: all
window_policy: 2
read dataset (665414 records)
applied dataset_policy all (665414 records)
use default(random) sampler
dataset_policy: all
window_policy: 2
read dataset (0 records)
applied dataset_policy all (0 records)
use default(random) sampler
train data: loaded 665414 records
valid data: loaded 0 records

Also, in the file main.py I see that the function valid is never called. Probably I am missing something, but to my eyes it appears that this code should not work. Please correct me where I am wrong.
It seems there are two similar variables: fold in train001.sh and n_fold in model001.py. If you want 5 folds, it's better to modify n_fold in model001.py.
Nice EDA. Thank you for sharing. 👍
Thanks for the support! I'm very happy about this! ✔️
Very useful information. Thanks for sharing👍
You are welcome and good luck
Many thanks for sharing! I was trying to find some guide like this. This is perfect!
You are welcome and good luck
Great Visualization!! 👍
Thank you for the good comments :)
One thing you have to keep in mind is that classical time series models exploit patterns in the data. We do not have enough data to understand the intrinsic patterns of each building. Moreover, forecasting methods are usually best for short-term forecasts because they use a bootstrap process of using previous predictions to make future predictions. The errors will accumulate and the method will necessarily become unreliable after a certain amount of time. General regression models will thus usually outperform standard time series models in this competition. You could, however, mix the two, by using a classical time series model for the short term (e.g. January and February 2017) and a regression model for the long term.
Thanks. Based on your experience and the available data, can you please tell us which might be the best single model to use in this contest? I am thinking about XGBoost and LightGBM; is there any other model we should take into consideration? Thanks.
I receive this error when I try to use the code above, even though I have the latest version of the library:
The following works for me. First turn on internet access for the kernel. Then run these 3 lines of code:
! pip install git+https://github.com/qubvel/segmentation_models.pytorch
import segmentation_models_pytorch as smp
model = smp.Unet('efficientnet-b3', encoder_weights='imagenet', classes=4)
I don't understand what you mean. What should we do?
I learned about GCP interactive notebook last week and have not used it yet. What I have used are the basic GCP instances. And I've used the Kaggle command line utility to download Kaggle datasets. I created an Ubuntu 18.04 instance and SSHed into it. Then I installed Nvidia drivers, CUDA, and TensorFlow-GPU. The beginning of this process is explained here. After following the steps there, you must convert your trial account into a full account to use GPU. (This doesn't cost any money. It converts your free $300 coupon into $300 credit within full account). Next you request a GPU increase quota here. And lastly install CUDA, directions here. In retrospect, installing Nvidia drivers yourself is tricky and it's better to use Google GCP instances with CUDA already installed and/or GCP notebook with GPU. However I have not tried either of these two things yet.
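For the dataset download step, a short sketch using the official Kaggle CLI from Python; it assumes `pip install kaggle` and an API token at ~/.kaggle/kaggle.json, and the competition slug is only an example.

```python
import subprocess

# Download a competition's files to ./data using the Kaggle command-line tool.
subprocess.run(
    ["kaggle", "competitions", "download", "-c", "ashrae-energy-prediction", "-p", "data"],
    check=True,
)
```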
Hi, I think this is due to ambiguous data entry: the "Rio"s in the dataset have different meanings. You can check the list of states of Brazil on Wikipedia. The following entries need attention:
Rio = Rio de Janeiro
Rio = Rio Grande do Norte
Rio = Rio Grande do Sul
Mato Grosso = Mato Grosso
Mato Grosso = Mato Grosso do Sul
Paraiba = Paraíba
Paraiba = Paraná
That's true, I did think about that... That is most likely how it should be approached! : )
I like your analysis and visualizations! A note to add about this dataset, however: there are 3 different states in Brazil that begin with "Rio". It's not just Rio de Janeiro. You will see this is the reason why "Rio" appears in the data 3x more than other states, and the case is similar for Mato Grosso and Paraiba. I also made an analysis of this data if you're interested in taking a look! https://www.kaggle.com/etsc9287/a-quick-analysis-of-forest-fires-in-brazil
Thank you for your comment! I noticed this and will fix it in future versions.
Upvoted! Thanks for sharing.
thank you for your comment
I like your visualizations! When cleaning, something I discovered is that there are 3 different states in Brazil that begin with "Rio". You will see this is the reason why "Rio" appears in the data 3x more than other states, and the case is similar for Mato Grosso and Paraiba. I also made an analysis of this data in R if you're interested in taking a look! https://www.kaggle.com/etsc9287/a-quick-analysis-of-forest-fires-in-brazil Thanks!
thank you for your comment
Thank you for posting this Konstantin! Your TPU code is very informative, I'd tried a few times to get pytorch_xla working before without success so seeing a pytorch model training on the tpu is most excellent.
Right, good point that it's better to download with kaggle API. I used a large HDD for download and unzipping, and then copied data to a smaller SSD to avoid quota increase.
One thing you have to keep in mind is that classical time series models exploit patterns in the data. We do not have enough data to understand the intrinsic patterns of each building. Moreover, forecasting methods are usually best for short-term forecasts because they use a bootstrap process of using previous predictions to make future predictions. The errors will accumulate and the method will necessarily become unreliable after a certain amount of time. General regression models will thus usually outperform standard time series models in this competition. You could, however, mix the two, by using a classical time series model for the short term (e.g. January and February 2017) and a regression model for the long term.
For the moment I've only tried standard k-fold CV with k set to 3. I'll be putting some effort into designing a better CV strategy this week.
I am looking for a team. I'm a graduate student and my research interest is CV. There are eight TITAN Xp GPUs in my lab, so I have enough time and resources. I usually use PyTorch and know little about TensorFlow. If you are interested, you can contact me: dcp9765@126.com
I am a newbie to Kaggle and have access to a GPU. I have been learning ML/DL for over 8 months now. Hope to join the team.
One thing you have to keep in mind is that classical time series models exploit patterns in the data. We do not have enough data to understand the intrinsic patterns of each building. Moreover, forecasting methods are usually best for short-term forecasts because they use a bootstrap process of using previous predictions to make future predictions. The errors will accumulate and the method will necessarily become unreliable after a certain amount of time. General regression models will thus usually outperform standard time series models in this competition. You could, however, mix the two, by using a classical time series model for the short term (e.g. January and February 2017) and a regression model for the long term.
Thanks a lot, sir, for your input. Can you please give your 2 cents on CV? I know stratified k-fold won't work, so should I keep 1 month of data (the last month) as a validation set for CV? Or is there another approach?
So what is the difference between this kernel and the kernel published by ? And why is the score different?
The image preprocessing is different; see code block 6: mean=(0.485, 0.456, 0.406), std=(0.230, 0.225, 0.223). I'd like to know how to tune these parameters in an efficient way.
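For reference, a minimal sketch of where those constants typically plug in, assuming a torchvision-style preprocessing pipeline (the actual kernel may build its transforms differently):

```python
import torchvision.transforms as T

preprocess = T.Compose([
    T.ToTensor(),  # HWC uint8 -> CHW float in [0, 1]
    T.Normalize(mean=(0.485, 0.456, 0.406), std=(0.230, 0.225, 0.223)),
])
```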
EfficientNet-B2, 410x410, single fold, hflip TTA, public LB: 0.064
Yes.
The speed of the prediction loop in this script is unfortunately making it impractical. I don't know how long it takes for the loop to finish, but I know it's more than 54 minutes. I don't think this is going to qualify as "R is now supported". We can make this script one third as impractical if we treat predictions as a vector, but that requires a change to the Python env.predict() function. Two of the three conversions involve the prediction data frame, which is always one row long and does not have to be a data frame. It seems like there is a 0.3-second fixed cost for converting a data frame, regardless of its size, while converting a vector is essentially instant. Unfortunately, if we're going to stick with feeding the test data one play at a time, the third data frame conversion appears to be more difficult to get around. I wonder if we could pass the test dataset play as a list of 48 vectors and convert it; that would probably still take a small fraction of the time it takes to convert a data frame. All these potential changes seem to require a modification to the Python code. I realize that may be a tall order now that the competition is underway, but at the same time I don't think that Kaggle can claim that R is supported when the scoring loop takes forever. Anything over 5 minutes fails to qualify as support, in my opinion.
Actually, I think we can manage without any changes on the Kaggle end. With some tweaks, I managed to reduce the prediction loop run-time from about 1 hour down to 4 minutes. Here is the notebook: https://www.kaggle.com/dmitriyguller/r-starter-notebook-with-15x-faster-prediction-loop.
EfficientNet-B2, 410x410, single fold, hflip TTA, public LB: 0.064
Thanks for sharing! Are you also using 5 epochs and PNG images, as you mentioned before?
def evaluation(data, net, ctx):
    val_acc = 0.0
    hybridize = False
    tbar = tqdm(data)
    for i, batch_data in enumerate(tbar):
        image, mask, label = batch_data
        image = image.as_in_context(ctx)
        mask = mask.as_in_context(ctx)
        label = label.as_in_context(ctx)
        logits = net(image)
        probs = F.sigmoid(logits)
        probs = F.where(probs > 0.5, F.ones_like(probs), F.zeros_like(probs))
        val_acc += F.mean(label == logits).asscalar()  # <-- this line
        if i % 20:
            tbar.set_description(f'val_accs {val_acc/(i+1):.6f}')
    return val_acc * 1.0 / (i + 1)

Thanks for sharing. My question is why you compared label with logits to calculate val_acc. My understanding is that you would have used probs instead of logits. Thank you.
My error, sorry
If you think of the positions as having some hierarchy, some of these are just being recorded at different levels. I'll try to list them in their hierarchy as best as possible:

Offense
- Offensive Line (OL): These are the guys on the line of scrimmage, whose job it is to block the defense. If it is a passing play, they are blocking to give the QB a clean pocket to throw from without getting sacked. On a run play, they are trying to create lanes for the ball carrier to run through cleanly. Except in rare situations, guys on the OL cannot catch passes and generally do not run the ball. Typically there are 5 OL in on a play, although sometimes you might get a Jumbo formation or something that has one or more extra OL (or even DL in rare situations) to help block more.
  - Center (C): The middle of the OL, as the name implies. The center also has the job of snapping the ball to the QB.
  - Guard (G, OG): These are the guys on either side of the Center.
  - Tackle (T, OT): These are the guys on the ends of the offensive line, outside of the Guards.
- Wide Receiver (WR): These are the guys who run out and catch passes from the QB. They'll still be somewhat involved on running plays as downfield blockers.
- Tight End (TE): Somewhat in between a WR and an OL, these guys can go out and catch passes but are much more involved in blocking. They'll typically line up just outside the Tackles, and there are typically 0, 1, or 2 of them in on a play.
- Running Back (RB): These are the guys typically running the ball, although more and more they are also good at catching passes either behind the line of scrimmage or shortly beyond it. They also sometimes help protect the QB by being an additional blocker on passing plays.
  - Halfback (HB): A smaller, faster RB. Typically when people say "Running Back", this is what they mean.
  - Fullback (FB): A bigger, slower RB who typically runs ahead of the HB to help create a better running lane. FBs can be ball carriers themselves, however. Not all teams have a FB on their roster or use them regularly.
- Quarterback (QB): This is the guy who is in charge of receiving the snap and then either passing it or handing it off for a running play. QBs are sometimes ball carriers on running plays, either by design or because they can't find anyone open and want to try getting yards by running instead. A frequent planned running play for a QB is a "QB Sneak", where the QB will immediately push through the center of the line, basically falling forward. This usually results in a fairly small distribution of yardage outcomes, usually -1 to 1 yards.

Defense
- Defensive Line (DL): These are the guys trying to get to the quarterback and/or ball carrier. They are typically some of the biggest guys on the field.
  - Nose Tackle (NT): A defensive lineman who sits in the middle of the line.
  - Defensive Tackle (DT): A defensive lineman who is in the interior of the defensive line.
  - Defensive End (DE): A defensive lineman who is on the outside of the defensive line.
- Linebackers (LB): Linebackers are behind the defensive line. Their job is to help stop the run, help cover intermediate passing routes, and sometimes put pressure on the QB. Their sub-positions are the hardest to define, because they vary widely by the type of defense a team employs.
  - Inside Linebacker (ILB)
  - Middle Linebacker (MLB)
  - Outside Linebacker (OLB)
- Defensive Backs (DB): These guys are at the back of the defense, mostly protecting against passing plays, but they'll come up to stop runners and are often the last defense against a runner who has managed to break through the first few yards of the defense.
  - Cornerback (CB): These are your fastest guys on the field, and they'll be covering receivers.
  - Safeties (SAF, S): These are a little bigger and often a little slower than CBs (oftentimes they are former CBs who have lost a step or two), but are still largely responsible for covering receivers. They also often end up being the furthest back from the line of scrimmage; as their name implies, part of their job is being the last line of defense.
    - Free Safety (FS): This safety is more involved in pass coverage and being the last line of defense, typically.
    - Strong Safety (SS): This guy will typically line up on the "Strong" side of the play (i.e., the one with an extra TE or OL if there is one) and be more involved in stopping run plays.

Some of the exact uses of each position will vary from play to play and team to team, but hopefully that gives you some idea of what they all mean.
Typically, players in offensive positions only ever play on offense, and defensive players on defense. There are some rare exceptions, though, and what you are seeing is one of them. In a situation where running is pretty obvious (short yardage to go, within a yard or two of the endzone, etc.), you want more weight to help push the line toward the other guy, and teams will often bring in more weight to help get an edge. So it isn't unheard of for a defense to bring in extra line help in the form of offensive linemen, or for the offense to line up a defensive lineman to help push. It looks like there are 7 plays in the dataset with that personnel combo, and they are all pretty obvious run-heavy situations: 1 or 2 yards to go, near the endzone, and a jumbo offense package (6 OL instead of the usual 5, or extra TEs to help block).
Excellent kernel.
If you like it, please upvote my kernel.
What I don't understand from this kernel is on what basis you are dropping the Medical History columns. What is the exact meaning of 'biased' here? Does it mean that the value counts should be divided equally among all the categorical values in the column?
I am dropping the medical history columns that are biased, i.e. where more than 80% of the values point to a single data point in the column.
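A minimal sketch of that rule, assuming a pandas DataFrame; the 80% threshold matches the description above and the function name is my own.

```python
import pandas as pd

def drop_dominated_columns(df: pd.DataFrame, threshold: float = 0.8) -> pd.DataFrame:
    # Drop any column where a single value accounts for more than `threshold` of the rows.
    dominated = [
        col for col in df.columns
        if df[col].value_counts(normalize=True, dropna=False).iloc[0] > threshold
    ]
    return df.drop(columns=dominated)
```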
I don't understand what you mean. What should we do?
Where can I read about Kaggle coupons? How much can I do with $300?
I receive this error when I try to use the code above, even though I have the latest version of the library:
Now I have this error; what am I supposed to import?
I receive this error when I try to use the code above, even though I have the latest version of the library:
If you type pip install segmentation-models-pytorch you don't get the latest version; you get the latest version that pip has. Instead, type pip install git+https://github.com/qubvel/segmentation_models.pytorch and you will get the latest version from the GitHub repository, which includes EfficientNet.
Very thorough notebook. If I could make a suggestion, it would be to have some descriptions of what you're seeing and/or some very descriptive titles and subtitles on your plots. Just my opinion!
Thanks again for the support! Have a good day! 🙌
I don't understand what you mean. What should we do?
Google gives first time users a free $300 coupon and Kaggle gives coupons too. Once your coupons run out, then you have to pay with a credit card.
Very thorough notebook. If I could make a suggestion, it would be to have some descriptions of what you're seeing and/or some very descriptive titles and subtitles on your plots. Just my opinion!
Thanks for the insights, sir, and for helping me! I'll describe all the outputs and sections.
I don't understand what you mean. What should we do?
...then you give them your credit card? :)
So far my best is an NN with CV 0.01300 and LB 0.01343. Tree-based models seem to be performing significantly worse here (at least on CV for me).
That’s awesome ! I’m having the opposite experience I do have a strong NN but my best model so far is a LGBM.
I receive this error when I try to use the code above, even though I have the latest version of the library:
Download it from https://github.com/qubvel/segmentation_models.pytorch, then install it with pip.
I receive this error when I try to use the code above, even though I have the latest version of the library:
How do I do that? Could you show me how to install it properly? Thank you.
Could you explain why you mirror the Y coordinates? Y_std = ifelse(ToLeft, 160/3 - Y, Y). It's a rotation, not a mirroring.
Say that you're used to playing on the right side of the field, as in if you're looking down the field towards the opponent's goal. Then when you switch sides, if you don't flip the Y-axis as well, you would find yourself on the left side (along the direction of the field) which would be strange. (Imagine you're the X in the illustration above.) (It still might make sense to use both flipped and non-flipped for augmentation purposes, but it's not certain it will be an improvement given that we're not entirely symmetrical creatures =)
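A small sketch of that standardization in Python (the original snippet is R), assuming the competition's X, Y, and ToLeft columns; flipping both axes amounts to the 180-degree rotation discussed above.

```python
import numpy as np
import pandas as pd

def standardize_coords(df: pd.DataFrame) -> pd.DataFrame:
    # Rotate plays moving left by 180 degrees so every play reads left-to-right.
    df = df.copy()
    df["X_std"] = np.where(df["ToLeft"], 120 - df["X"], df["X"])      # field is 120 yards incl. end zones
    df["Y_std"] = np.where(df["ToLeft"], 160 / 3 - df["Y"], df["Y"])  # field is 160/3 yards wide
    return df
```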
I receive this error when I try to use the code above, even though I have the latest version of the library:
It means you don't have the latest version of the library. Check what you have in site-packages/segmentation_models_pytorch/encoders; you should also have efficientnet.py there.
I don't understand what you mean. What should we do?
First you choose how many CPU cores, how much RAM, and how many GPUs you want. Then you open up a notebook.
Could you explain why you mirror the Y coordinates? Y_std = ifelse(ToLeft, 160/3 - Y, Y). It's a rotation, not a mirroring.
But the direction changes apply only along the X axis; isn't the Y axis always the same?
I don't understand what you mean. What should we do?
If you're low on Kaggle GPU, you can use Google Cloud GPU. They have interactive kernels (web browser based jupyter notebooks) just like Kaggle kernels with GPU. You can run your experiment there and submit from there and get a public LB score without using any Kaggle GPU. Then if you like the public LB score, you run the code in a Kaggle kernel.
some of the buildings spend more energy cooling than heating. I had assumed it would be the opposite. I take it you've never paid utility bills before ;-). Summer months are always more expensive than winter months. Cooling is a lot more difficult to do than heating and a lot less efficient, from an energy transfer standpoint.
The primary use of building 227 is Entertainment/public assembly. We can assume that this building may not be heated or cooled all the time. Did you compare buildings with the same primary use?
Start with a simple competition like MNIST.
I know simple classification using CNNs and NNs, but is that all we need for an image classification competition?
I explain how to install it here without internet.
Thank you.
I wrote about this earlier here. When you do not take into account the angle of the stadium with respect to Earth's parallels, you actually do not know whether the wind is blowing against the players or at their backs, so there is a great chance that you are only introducing noise into your data. There was also a similar thread, and it is allowed to hardcode the angles of stadiums, so it may actually add value to your model, but I am sceptical about this.
Apart from that, isn't this mapping wrong? According to https://en.wikipedia.org/wiki/Wind_direction , Wind direction is reported by the direction from which it originates. By this definition, what I understand is, for example, that 'NNE' and 'From NNE' refer to the same wind direction, not 'SSW' and 'From NNE'.
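To make the definition concrete, a tiny sketch (my own encoding, not the notebook's) that maps the 16-point compass labels to degrees, treating each label as the direction the wind comes from, per the definition cited above:

```python
compass = ["N", "NNE", "NE", "ENE", "E", "ESE", "SE", "SSE",
           "S", "SSW", "SW", "WSW", "W", "WNW", "NW", "NNW"]

# Direction the wind originates FROM, in degrees clockwise from north.
wind_from_degrees = {label: i * 22.5 for i, label in enumerate(compass)}
print(wind_from_degrees["NNE"])  # 22.5
```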
I don't understand what you mean. What should we do?
I needed to buy a GPU just for this competition :) I use Kaggle GPU time only for submissions now; that's why I see no point in saving GPU time. I might try to use the Kaggle GPU now to train more, but the risk is very high that I will run out of submission time before the end of the competition.
This is a great achievement. Awesome job.
Thanks!
This is the error I get when running in Colab:
Thank you!
I don't understand what you mean. What should we do?
Yes. I don't have a local GPU, so this helps me to train a few more epochs per week. This way you can use the kernel with the GPU switched off.
Just a guess on my part, but I think the buildings are only in North America, and likely the USA only. Go to the DOE web site for the USA and it would seem that the data is the result of surveys conducted in the US. So I will be using USA holidays myself.
Yep, I saw what they said: plot out the temperatures on a one-year scale. The shape of the curve is a very good match to data I have worked with several times over the past decade for the USA/Canada. I must admit, however, that I have only looked at the summary plot and not done the plot by the 15 sites. Will add that to my to-do list.
Does the last column of the data, "target", mean that the patient has heart disease? And does "1" mean suffering from heart disease? Thank you very much!!
It means the patient has heart disease.
I don't understand what you mean. What should we do?
So you store the submission in your own dataset, and if sample_submission changes you just stop execution to save GPU time?
I don't understand what you mean. What should we do?
Here is how to do it:
import pandas as pd

# Load the submission saved in your own dataset and the competition's sample submission.
p = pd.read_csv("../input/yourdataset/submission.csv")
sub = pd.read_csv('../input/severstal-steel-defect-detection/sample_submission.csv')
try:
    sub.EncodedPixels = p.EncodedPixels
except:
    # Fall back to empty predictions if the copy fails.
    sub.EncodedPixels = [""] * len(sub.EncodedPixels)
sub.to_csv('submission.csv', index=False)
Yes. In an interactive kernel, if you view the 1801 test images in the folder test_images, those are the images for your public leaderboard score. When you click submit on your kernel, your code loads a new folder of test_images which contains 5403 = 1801 + 3602 images. The new folder includes 3602 images that we have not seen and cannot see. Your code processes the new images, and your private score is computed from those new images.
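A rough sketch (my assumption, not something stated in the thread) of how a kernel can tell which folder it is seeing, simply by counting the test images:

```python
import glob

test_files = glob.glob("../input/severstal-steel-defect-detection/test_images/*.jpg")
# The public folder holds 1801 images; the re-run folder holds 5403 (1801 + 3602).
is_private_rerun = len(test_files) > 1801
print(len(test_files), "test images found; private re-run:", is_private_rerun)
```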
Thanks and .
(605545.1684345007, 'meter_reading'): how is meter_reading one of the features?
Great catch, that is weird! It doesn't show up on the graph just above it, as you can see.
Oops I think the dataset I saved has mismatched files. I'll fix it on Monday. In the meantime, use this notebook to create your own dataset instead of using the pre-prepared one.
OK, the file mismatch in the dataset should be fixed now.
I'll try to use some of your code. I liked the proportions of the charts. Thanks.
Thank you! Let us know if you find anything interesting.
Cheers, I couldn't find where the data is coming from. Are you sure it's US [only]?
The data description mentions: "The dataset includes three years of hourly meter readings from over one thousand buildings at several different sites around the world." The data might be from different parts of the world as well? Thoughts?
I've only worked on the edges of building energy and design over the years, working for a glass company making great glass for big buildings, but I know that for many "office" type buildings all the electrical gear, lights, etc. in a building generate a lot of heat, and that the cooling load is much larger than the outside temperatures would suggest. Even my house built in 1941 shows that effect: when I am running all 4 of my PCs (dual GPUs in each) I am easily at 2000 watts or more. So when in full Kaggle mode I only see my heater kick on when the outside temperature drops below 50F. So when you look at the meter reading for electricity, remember that the load you see is generating heat as it operates all the gear in the joint.
I guess you have to add the heat generated by the occupants; it's actually quite significant! I read 67W of sensible heat per person (I'm not too familiar with what sensible vs. latent heat means in this context).
I don't understand what you mean. What should we do?
How did you download Kaggle datasets in the GCP interactive notebook? I tried to use that service via the command line and browser, and/or through the notebook. Still, I haven't been able to download the datasets successfully.
0.069 - single fold, no tta, 300x300, 80:20 train/val split using the full dataset, custom model.
Yea, I'm using Sigmoid (Brain + Subdural + Bone) Windowing from Gradient & Sigmoid Windowing.
Kaggle is not asleep, I just need to wake up!!!
Are you from Brittany?
(605545.1684345007, 'meter_reading'): how is meter_reading one of the features?
I think it's the meter ID.
I don't understand what you mean. What should we do?
Google Cloud GPU pricing (which gets deducted from your coupon) is $0.45 per hour for a Tesla K80 and $1.46 per hour for a Tesla P100. (Kaggle currently uses the Tesla P100.) The full price list is here. Additionally, add a 2-core CPU for $0.10 per hour, and maybe another $0.10 per hour if you use preconfigured software. To get a free Kaggle coupon, you must pay attention to when a new competition begins. In the first week of some competitions, Kaggle will post information on how the first 300 people can get a free coupon. It usually requires making 1 submission in the competition. (You can just submit the sample_submission.csv.) Once you get the coupon, you can use it for any competition. An example is here in the Fraud comp.
Those are some crazy plays! Sorry, Rob, but think about the happiness experienced by the Michigan fans!
GO BLUE! But yes, if that was my team's ball carrier I'd be losing my mind.
So far my best is an NN with CV 0.01300 and LB 0.01343. Tree-based models seem to be performing significantly worse here (at least on CV for me).
No, I am treating it as a multi-class classification problem.
Hi, I think this is due to ambiguous data entry: the "Rio"s in the dataset have different meanings. You can check the list of states of Brazil on Wikipedia. The following entries need attention:
Rio = Rio de Janeiro
Rio = Rio Grande do Norte
Rio = Rio Grande do Sul
Mato Grosso = Mato Grosso
Mato Grosso = Mato Grosso do Sul
Paraiba = Paraíba
Paraiba = Paraná
I disagree with that; you see, the Rios have different physical locations, so it is not a good idea to combine all the Rios into one group:
Hi, I think this is due to ambiguous data entry: the "Rio"s in the dataset have different meanings. You can check the list of states of Brazil on Wikipedia. The following entries need attention:
Rio = Rio de Janeiro
Rio = Rio Grande do Norte
Rio = Rio Grande do Sul
Mato Grosso = Mato Grosso
Mato Grosso = Mato Grosso do Sul
Paraiba = Paraíba
Paraiba = Paraná
I assume the data follows the alphabetical order of the "state" column, so the first Rio is Rio de Janeiro, the second is Rio Grande do Norte, etc.
Is it legal and within the rules to make mask annotations based on your bbox annotations?
Nice! Soon we should also add masks (polygons on md.ai); we are working on it. You can also make them yourself. According to the licensing rules of qure.ai, any modifications must be made publicly available under the same license; please check for yourself: - https://creativecommons.org/licenses/by-nc-sa/4.0/ - http://headctstudy.qure.ai/dataset To use it in the competition, you probably already know about the External Data Thread.
So far my best is an NN with CV 0.01300 and LB 0.01343. Tree-based models seem to be performing significantly worse here (at least on CV for me).
Is your LGBM solving a univariate regression problem?
0.069 - single fold, no tta, 300x300, 80:20 train/val split using the full dataset, custom model.
If you allow me, did you use any data normalization, like multi-channel windowing?
Why is the direction in which players are facing so important to you? It does not tell you much. They may have been turning their heads just when the data were collected. Also, they don't have to move in the direction they are looking.
I actually think the orientation/direction of motion could be incredibly valuable features. A few things that immediately come to mind: For the OL/DL, looking at their orientation vs. motion tells you who is winning the battle at the line. If the defensive linemen are facing one way but moving another, that is a good indication that the OL is getting a good push, and that bodes well for the run. If it is the offensive linemen who are facing one way and moving another, it means the defensive line is getting a good push against them and probably has a better chance of stopping the run for little or no gain. A defensive player whose orientation is offset 90 degrees from his motion may be moving sideways and unsure of how the play is developing, vs. someone who is driving toward the ball carrier. It can also tell you more about how the downfield blocking is developing. All of this is predicated on orientation/motion directions being consistent and somewhat clean data, which seems in question, and it is a lot harder to figure some of this out than it may seem at first, but I think the positional/velocity data will be really important to winning this competition.
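One simple feature along those lines, as a sketch: the absolute angle between a player's facing and direction of motion, using the tracking data's Orientation and Dir columns (both in degrees); treat this as an illustration rather than a validated feature.

```python
import numpy as np
import pandas as pd

def facing_vs_motion_angle(df: pd.DataFrame) -> pd.Series:
    # Wrap the difference between Orientation and Dir into [0, 180] degrees.
    diff = (df["Orientation"] - df["Dir"]).abs() % 360
    return np.minimum(diff, 360 - diff)
```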
So far ResNet50, 0.0094, w/o any augmentation or TTA.
TTA = Test Time Augmentation
This is really great. I learned a lot!
Thank you! Glad it helped. :)
I'm trying to run your code and am getting an 'X_train not defined' error. Did you only upload portions of the code?
Thanks for letting me know. All the code is there. The code used to run as-is in the Kaggle notebook/kernel; it must be that new versions of the Python libraries are making it error out. I can't use my GPU quota this week to fix it, but I'll fix it next week.
ResNet-34, single model, single fold, TTA: LB 0.90283. Can't reproduce it on a local GPU.
Wow Jacek. You're doing awesome. Great work.
, thank you very much for sharing your knowledge. It really helps us to improve our skills and start to think differently. Thank you. Sincerely. Can you explain why you don't remove the correlated C, D, and M columns? I tried to remove redundant C columns using your methodology but got a huge score decrease (~1.5%) on validation. Why is this approach applicable to the V columns but not to all the others? The second question is: why do you append TransactionDT to ALL the feature blocks on the correlation plots? No correlation was found between this field and any other. You refer to this analysis, in which it was already noted that only TransactionID has a high correlation with TransactionDT. What kind of correlation between TransactionDT and the other features did you expect to find?
Good questions. In general, I try not to remove columns that Kaggle gives us. In all competitions most features are correlated to some degree but a new column (even though it has correlations) will usually add new information that can be important. The V columns are a different situation. We are told that they are made from the raw features C, D, M etc plus some other columns that we don't have. Therefore we are told that they are highly redundant. They don't contain 339 columns of new information. Perhaps they contain 50 columns of new information. Also, models without them can achieve nearly LB 0.960 so this indicates that they don't have much new information. In this special case, it is best to remove most of them so they don't clutter up your model. I added TransactionDT to see which V features are correlated with time.
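A minimal sketch of that last check, assuming the IEEE fraud train_transaction.csv file and one illustrative block of V columns:

```python
import pandas as pd

train = pd.read_csv("train_transaction.csv")

v_block = [f"V{i}" for i in range(1, 12)]  # one example block of V columns
corr = train[v_block + ["TransactionDT"]].corr()

# V features most correlated with time show up at the top of this listing.
print(corr["TransactionDT"].drop("TransactionDT").sort_values(ascending=False))
```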
I recently joined this contest, and have made a few submissions. So far I have not received this error message. I have participated in other Kaggle contests involving images and masks, and I still find it tricky to get the code correct, even involving such trivial matters as handling the width and height in a consistent manner throughout my program. I don't think it matters whether masks overlap with other masks, whether for different images or different cloud types within the same image. But each mask must not overlap itself. So there is probably something wrong with your code that constructs the RLE or formats it for a submission file. There is a notebook called "RLE functions - Run Lenght Encode & Decode", but since it's in Python and I work in R I haven't tested it.
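For anyone implementing this from scratch, here is a short Python sketch of the usual Kaggle-style run-length encoding (start/length pairs over a column-major flattening); double-check the exact convention against this competition's evaluation page before using it.

```python
import numpy as np

def rle_encode(mask: np.ndarray) -> str:
    # Binary mask -> "start length start length ..." with 1-indexed, column-major starts.
    pixels = mask.flatten(order="F")
    pixels = np.concatenate([[0], pixels, [0]])
    runs = np.where(pixels[1:] != pixels[:-1])[0] + 1
    runs[1::2] -= runs[::2]  # convert run end positions into run lengths
    return " ".join(str(x) for x in runs)
```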
Ahhhhh ok wonderful! So this really was my mistake. Thank you so much for pointing out the issue. Sometimes it's hard to see our own mistakes in life without another pair of eyes. I am going to recode for run-start run-length and try again. Have a great day and best of luck with your entry as well!!!
The model in this kernel is scoring well due to data leakage - the Matlab file downloaded includes the test partition of the data from the original MNIST database and is being used to train the model.
Oh! I see. I've tried to adjust this kernel to Kannada MNIST; it's not working.
I find that the average dice for a correctly detected defect image is about 0.70 for validation images, but only 0.60 to 0.625 for public test images. Would any Kaggler like to comment on their findings? Based on the above, having a reliable way to reject FPs is the most important thing. Below is what I think is achievable (precision/recall) for defects 1, 2, 3, 4: 0.95/0.65, 0.80/0.30, 0.925/0.75, 0.98/0.925. [update] Getting 0.675 on the LB test images seems possible in some cases.
Your defect 1 prec/recall is so good!
ResNet-34, single model, single fold, TTA: LB 0.90283. Can't reproduce it on a local GPU.
Back to 4x TTA (hflip + vflip): LB 0.91807.
If your current LB is 0.898, I suggest that you focus on classes 3 and 4 for now. With just classes 3 and 4, you should be able to get above LB 0.910 with EfficientNet-B4 segmentation and classification.
Thanks a lot for the advice!
I find that the average dice for a correctly detected defect image is about 0.70 for validation images, but only 0.60 to 0.625 for public test images. Would any Kaggler like to comment on their findings? Based on the above, having a reliable way to reject FPs is the most important thing. Below is what I think is achievable (precision/recall) for defects 1, 2, 3, 4: 0.95/0.65, 0.80/0.30, 0.925/0.75, 0.98/0.925. [update] Getting 0.675 on the LB test images seems possible in some cases.
I have the same observation for my single-model defect 3 dice score, i.e., ~0.61; my n-fold + TTA can boost the score to ~0.68-0.69.
Hi, may I know why np.expm1 is preferable?
Now I see. Thanks !
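For context, a tiny NumPy illustration of why np.expm1 (paired with np.log1p) is usually preferred for log-transformed targets; this is general floating-point behaviour, not something specific to the kernel discussed.

```python
import numpy as np

x = 1e-10
print(np.expm1(np.log1p(x)))      # 1e-10, negligible round-trip error
print(np.exp(np.log(1 + x)) - 1)  # slightly off from 1e-10 due to precision loss in 1 + x
```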
Have you found out why it casts the target to Long when doing .databunch() ?
It's currently experimental (it's in a notebook whose name starts with an underscore), so there's a bug with the full databunch when doing it for the entire dataset. I'll see if I can get by with a subset (the kernel keeps crashing).
So, for everything at site = 0: does anyone know of a country in the world that has June 4 as a shut-down-and-party holiday?
June 4, 2016 was not listed as a major outage event in the USA by the EIA at the DOE. It could still be an outage, just a localized one. Not sure how big it needs to be before Uncle Sam calls it major.
The Aristocats are back. Upvoted.
The cat is a programmer! 😹