Question,Answer,Category What are the different types of competitions available on Kaggle?,"# Types of Competitions Kaggle Competitions are designed to provide challenges for competitors at all different stages of their machine learning careers. As a result, they are very diverse, with a range of broad types. ## Featured Featured competitions are the types of competitions that Kaggle is probably best known for. These are full-scale machine learning challenges which pose difficult, generally commercially-purposed prediction problems. For example, past featured competitions have included: - [Allstate Claim Prediction Challenge](https://www.kaggle.com/c/allstate-purchase-prediction-challenge) - Use customers’ shopping history to predict which insurance policy they purchase - [Jigsaw Toxic Comment Classification Challenge](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge) - Predict the existence and type of toxic comments on Wikipedia - [Zillow Prize](https://www.kaggle.com/c/zillow-prize-1) - Build a machine learning algorithm that can challenge Zestimates, the Zillow real estate price estimation algorithm Featured competitions attract some of the most formidable experts, and offer prize pools going as high as a million dollars. However, they remain accessible to anyone and everyone. Whether you’re an expert in the field or a complete novice, featured competitions are a valuable opportunity to learn skills and techniques from the very best in the field. ## Research Research competitions are another common type of competition on Kaggle. Research competitions feature problems which are more experimental than featured competition problems. For example, some past research competitions have included: - [Google Landmark Retrieval Challenge](https://www.kaggle.com/c/landmark-retrieval-challenge) - Given an image, can you find all the same landmarks in a dataset? - [Right Whale Recognition](https://www.kaggle.com/c/noaa-right-whale-recognition) - Identify endangered right whales in aerial photographs - [Large Scale Hierarchical Text Classification](https://www.kaggle.com/c/lshtc) - Classify Wikipedia documents into one of ~300,000 categories Research competitions do not usually offer prizes or points due to their experimental nature. But they offer an opportunity to work on problems which may not have a clean or easy solution and which are integral to a specific domain or area in a slightly less competitive environment. ## Getting Started Getting Started competitions are the easiest, most approachable competitions on Kaggle. These are semi-permanent competitions that are meant to be used by new users just getting their foot in the door in the field of machine learning. They offer no prizes or points. Because of their long-running nature, Getting Started competitions are perhaps the most heavily tutorialized problems in machine learning - just what a newcomer needs to get started! - [Digit Recognizer](https://www.kaggle.com/c/digit-recognizer) - [Titanic: Machine Learning from Disaster](https://www.kaggle.com/c/titanic) - Predict survival on the Titanic - [Housing Prices: Advanced Regression Techniques](https://www.kaggle.com/c/house-prices-advanced-regression-techniques) Getting Started competitions have two-month rolling leaderboards. Once a submission is more than two months old, it is automatically invalidated and no longer counts towards the leaderboard. Similarly, your team will drop from the leaderboard if all its submissions are older than two months. 
This gives new Kagglers the opportunity to see how their scores stack up against a cohort of competitors, rather than many tens of thousands of users. If your team is removed from a Getting Started competition due to the rolling expiry and wishes to rejoin, creating a new submission will cause it to show again on the leaderboard. Additionally, the Kaggle [Learn platform](https://www.kaggle.com/learn/overview) has several tracks for beginners interested in free hands-on data science learning from pandas to deep learning. Lessons within a track are separated into easily digestible chunks and contain Notebook exercises for you to practise building models and new techniques. You’ll learn all the skills you need to dive into Kaggle Competitions. ## Playground Playground competitions are a “for fun” type of Kaggle competition that is one step above Getting Started in difficulty. These are competitions which often provide relatively simple machine learning tasks, and are similarly targeted at newcomers or Kagglers interested in practicing a new type of problem in a lower-stakes setting. Prizes range from kudos to small cash prizes. Some examples of Playground competitions are: - [Dogs versus Cats](https://www.kaggle.com/c/dogs-vs-cats) - Create an algorithm to distinguish dogs from cats - [Leaf Classification](https://www.kaggle.com/c/leaf-classification) - Can you see the random forest for the leaves? - [New York City Taxi Trip Duration](https://www.kaggle.com/c/nyc-taxi-trip-duration) - Share code and data to improve ride time predictions",competition What are the different competition formats on Kaggle?,"There are handful of different formats competitions are run in. ## Simple Competitions Simple (or “classic”) competitions are those which follow the standard Kaggle format. In a simple competition, users can access the complete datasets at the beginning of the competition, after accepting the competition’s rules. As a competitor you will download the data, build models on it locally or in [Notebooks](https://www.kaggle.com/notebooks), generate a prediction file, then upload your predictions as a submission on Kaggle. By far most competitions on Kaggle follow this format. One example of a simple competition is the Porto Seguro Safe Driver Prediction Competition [Porto Seguro Safe Driver Prediction Competition](https://www.kaggle.com/c/porto-seguro-safe-driver-prediction). ## Two-stage Competitions In two-stage competitions the challenge is split into two parts: Stage 1 and Stage 2, with the second stage building on the results teams achieved in Stage 1. Stage 2 involves a new test dataset that is released at the start of the stage. Eligibility for Stage 2 typically requires making a submission in Stage 1. In two-stage competitions, it’s especially important to read and understand the competition’s specific rules and timeline. One example of such a competition is the Nature Conservancy Fisheries Monitoring Competition [Nature Conservancy Fisheries Monitoring Competition](https://www.kaggle.com/c/the-nature-conservancy-fisheries-monitoring). ## Code Competitions Some competitions are code competitions. In these competitions all submissions are made from inside of a Kaggle Notebook, and it is not possible to upload submissions to the Competition directly. These competitions have two attractive features. The competition is more balanced, as all users have the same hardware allowances. 
And the winning models tend to be far simpler than the winning models in other competitions, as they must be made to run within the compute constraints imposed by the platform. Code competitions are configured with their own unique constraints on the Notebooks you can submit. These may be restricted by characteristics like: CPU or GPU runtime, ability to use external data, and access to the internet. To learn the constraints you must adhere to, review the Requirements for that specific competition. An example of a code competition is Quora Insincere Questions Classification [Quora Insincere Questions Classification](https://www.kaggle.com/c/quora-insincere-questions-classification). ### Code Competition FAQ **I'm getting errors when submitting. What should I do?** 1. Please see our page on code competition debugging [code competition debugging](https://www.kaggle.com/code-competition-debugging) for tips on understanding and preventing submission errors. 2. First you'll need to write a Notebook which reads the Competition's dataset and makes predictions on the test set. Specifically, have your Notebook write your predictions to a ""submission file"", which is typically a submission.csv file, though some competitions have special formats. See the competition's Evaluation page, or look for sample_submission.csv (or similar) in the Data page for more information on the expected name and format of your submission file. 3. Save a full version of your Notebook by clicking ""Save Version"" and selecting ""Save & Run All"". This saves your code, runs it, and creates a version of the code and output. Once your save finishes, navigate to the Viewer page for your new Notebook Version. 4. In the Notebook Viewer, navigate to the Output section, find and select the submission file you created, and click the ""Submit"" button. **Can I upload external data?** Some competitions allow external data and some do not. If a competition allows external data, you can attach it to your Notebook by adding it as a data source. If a competition does not allow external data, attaching it to your Notebook will deactivate the ""Submit"" button on the associated saved version. **What are the compute limits of Notebooks?** The compute limits of the Notebooks workers are subject to change. You can view the site-wide memory, CPU, runtime limits, and other limits from the editor. Code competitions come in many shapes and sizes, and will often impose limits specific to a competition. You should view the competition description to understand if these limits are activated and what they are. Example variations include: - Specific runtime limits - Specific limits that apply to Notebooks using GPUs - Internet access allowed or disallowed - External data allowed or disallowed - Custom package installs allowed or disallowed - Submission file naming expectations **How do I team up in a code competition?** All the competitions setup is the same as normal competitions, except that submissions are only made through Notebooks. To team up, go to the ""Team"" tab and invite others. **How will winners be determined?** In some code competitions, winners will be determined by re-running selected submissions’ associated Notebooks on a private test set. In such competitions, you will create your models in Notebooks and make submissions based on the test set provided on the Data page. 
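As an illustration, the final cell of such a Notebook usually just writes a correctly formatted submission file. The sketch below is hypothetical — the folder name, column names, and constant baseline are placeholders; always check the competition's Evaluation page and sample_submission.csv for the real format:

```python
import pandas as pd

# Hypothetical paths and column names -- check the competition's Data page
# and sample_submission.csv for the real ones.
test = pd.read_csv('/kaggle/input/your-competition/test.csv')

# A constant baseline prediction just to show the expected shape;
# in practice this is where your trained model's predictions would go.
submission = pd.DataFrame({'id': test['id'], 'target': 0})

# Files written to the working directory appear in the Notebook's Output
# section after Save & Run All, where they can be selected and submitted.
submission.to_csv('submission.csv', index=False)
```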
You will make submissions from your Notebook using the above steps and select submissions for final judging from the “My Submissions” page, in the same manner as a regular competition. Following the competition deadline, your code will be rerun by Kaggle on a private test set that is not provided to you. Your model's score against this private test set will determine your ranking on the private leaderboard and final standing in the competition.",competition How to join a competition?,"Before you start, navigate to the [Competitions listing](https://www.kaggle.com/competitions). It lists all of the currently active competitions. Public competitions are viewable on Kaggle and appear in Kaggle search results. Depending on the privacy and access set by the host, some competitions may be unavailable for you to see or join. If a host set a competition's visibility to private, you would only see the competition's details if they shared a unique URL with you. If you click on a specific Competition in the listing, you will go to the Competition’s homepage. The first element worth calling out is the **Rules tab**. This contains the rules that govern your participation in the sponsor’s competition. You must accept the competition’s rules before downloading the data or making any submissions. It’s extremely important to read the rules before you start. This is doubly true if you are a new user. Users who do not abide by the rules may have their submissions invalidated at the end of the competition or banned from the platform. So please make sure to read and understand the rules before choosing to participate. If anything is unclear or you have a question about participating, the competition’s forums are the perfect place to ask. The information provided in the **Overview tabs** will vary from Competition to Competition. Five elements which are almost always included and should be reviewed are the “Description,” “Data”, “Evaluation,” “Timeline,” & “Prizes” sections. - The **description** gives an introduction into the competition’s objective and the sponsor’s goal in hosting it. - The **data** tab is where you can download and learn more about the data used in the competition. You’ll use a training set to train models and a test set for which you’ll need to make your predictions. In most cases, the data or a subset of it is also accessible in Notebooks. - The **evaluation** section describes how to format your submission file and how your submissions will be evaluated. Each competition employs a metric that serves as the objective measure for how competitors are ranked on the leaderboard. - The **timeline** has detailed information on the competition timeline. Most Kaggle Competitions include, at a minimum, two deadlines: a rules acceptance deadline (after which point no new teams can join or merge in the competition), and a submission deadline (after which no new submissions will be accepted). It is very, very important to keep these deadlines in mind. - The **prizes** section provides a breakdown of what prizes will be awarded to the winners, if prizes are relevant. This may come in the form of monetary, swag, or other perks. In addition to prizes, competitions may also award ranking points towards the Kaggle progression system. This is shown on the Overview page. Ready to join? If the competition allows anyone to join, you should be able to click ""Join"" and accept the competition's rules. If the competition has restricted access, the host will share a private link with you that allows you to join. 
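If you prefer to work locally after accepting the rules on the website, the official Kaggle API can download the competition files for you. A minimal sketch — the competition slug 'titanic' is only an example, and it assumes you have installed the kaggle package and placed an API token at ~/.kaggle/kaggle.json:

```python
from kaggle.api.kaggle_api_extended import KaggleApi

api = KaggleApi()
api.authenticate()  # reads the API token from ~/.kaggle/kaggle.json

# Downloads the data for a competition whose rules you have already accepted.
api.competition_download_files('titanic', path='titanic-data')
```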
Once you have chosen a competition, read and accepted the rules, and made yourself aware of the competition deadlines, you are ready to submit!",competition "How to form, manage, and disband teams in a competition?","Everyone that competes in a Competition does so as a team. A team is a group of one or more users who collaborate on the competition. Joining a team of other users around the same level as you in machine learning is a great way to learn new things, combine your different approaches, and generally improve your overall score. It’s important to keep in mind that team size does not affect the limit on how many submissions you may make to a competition per day: whether you are a team of one or a team of five, you will have the same daily submission limit. When you accept the rules and join a Competition, you automatically do so as part of a new team consisting solely of yourself. You can then adjust your team settings in various ways by visiting the “Team” tab on the Competition page: You can perform a number of different team-related actions on this tab. ## Types of Team Memberships There are two team membership statuses. One person serves as the Team Leader. They are the primary point of contact when we need to communicate with a team, and also have some additional team modification privileges (to be discussed shortly). Every other person in the team is a Member. If you are the Team Leader you will see a box next to every other team member’s name on the Team page that says “Make Leader”. You may click on this at any time to designate someone else on your team the Team Leader. ## Changing your Team Name The team name is distinct from the names of its members, even if the team only consists of a single person (yourself). You can always change your team name to something custom, and other users will see that custom name when they visit the competition leaderboard. Most teams customize their names! Anyone in the team can modify the team name by visiting the Team tab. ## Merging Teams You may invite another team to your team or, reciprocally, accept a merge request from another team. If you propose a merger, the merger can be accepted or rejected by the Team Leader of the other team. If you are proposed a merger, the Team Leader may choose to accept or reject it. There are some limits on when you can merge teams: - Most competitions have a team merger deadline: a point in time by which all teams must be finalized. No mergers may occur after this date - Some competitions specify a maximum team size; you will not be able to merge teams whose cumulative number of members exceeds this cap - You will not be able to merge teams whose combined daily submission count exceeds the total submission limit to that date (daily limit x number of days). All of this can be managed through the Team tab. ## Disbanding a Team Choose your teammates wisely as only teams that have not made any submissions can be disbanded. This can be done through the Team tab",competition How do I make a submission in a competition?,"You will need to submit your model predictions in order to receive a score and a leaderboard position in a Competition. How you go about doing so depends on the format of the competition. Either way, remember that your team is limited to a certain number of submissions per day. This number is five, on average, but varies from competition to competition. ## Leaderboard One of the most important aspects of Kaggle Competitions is the Leaderboard. The Competition leaderboard has two parts. 
- The **public leaderboard** provides publicly visible submission scores based on a representative sample of the test data. This leaderboard is visible throughout the competition. - The **private leaderboard**, by contrast, tracks model performance using the remainder of the test data. The private leaderboard thus has final say on whose models are best, and hence, who the winners and losers of the Competition will be. Neither the subset of test data used for the private leaderboard nor a submission’s performance on it is revealed to users until the competition has closed. Many users watch the public leaderboard closely, as breakthroughs in the competition are announced by score gains in the leaderboard. These jumps in turn motivate other teams working on the competition in search of those advancements. But it’s important to keep the public leaderboard in perspective. It’s very easy to [overfit](https://en.wikipedia.org/wiki/Overfitting) a model, creating something that performs very well on the public leaderboard but very badly on the private one. In the event of an exact score tie, the tiebreaker is the team which submitted earlier. Kaggle always uses full precision when determining rankings, not just the truncated precision shown on the Leaderboard. ## Submitting Predictions ### Submitting by Uploading a File For most competitions, submitting predictions means uploading a set of predictions (known as a “submission file”) to Kaggle. Any competition which supports this submission style will have “Submit Predictions” and “My Submissions” buttons in the Competition homepage header. To submit a new prediction, use the Submit Predictions button. This will open a modal that will allow you to upload your submission file. We will attempt to score this file, then add it to My Submissions once it is done being processed. Note that to count, your submission must first pass processing. If your submission fails during the processing step, it will not be counted and will not receive a score; nor will it count against your daily submission limit. If you encounter problems with your submission file, your best course of action is to ask for advice on the Competition’s discussion forum. If you click on the My Submissions tab you will see a list of every submission you have ever made to this competition. You may also use this tab to select which submission file(s) to submit for scoring before the Competition closes. Your final score and placement at the end of the competition will be determined by whichever selected submission performed best on the private leaderboard. If you do not select submission(s) to be scored before the competition closes, the platform will automatically select those which performed the highest on the public leaderboard, unless otherwise communicated in the competition. ### Submitting by Uploading from a Notebook In addition to file uploads, Kaggle may also allow competition submissions made directly from Kaggle Notebooks. Notebooks are an interactive in-browser code editing environment; to learn more about them, see the [Notebooks documentation](https://www.kaggle.com/docs/notebooks). To build a model, start by initializing a new Notebook with the Competition Dataset as a data source. This is easily done by going to the “Notebooks” tab within a competition’s page and then clicking “New Notebook.” That competition’s dataset will automatically be used as the data source.
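Inside the Notebook, any attached data source is mounted read-only under /kaggle/input. A minimal sketch for finding and loading the files (the file name below is a typical example; the actual names are listed on the competition's Data tab):

```python
import os
import pandas as pd

# List every file from the attached data sources.
for dirname, _, filenames in os.walk('/kaggle/input'):
    for filename in filenames:
        print(os.path.join(dirname, filename))

# Hypothetical file name -- use whatever the Data tab shows for your competition.
train = pd.read_csv('/kaggle/input/your-competition/train.csv')
```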
New Notebooks default to private but can be toggled to public or shared with individual users (for example, others on your team). Build your model and test its performance using the interactive editor. Once you are happy with your model, use it to generate a submission file within the Notebook, and write that submission file to disk in the default working directory (/kaggle/working). Then click ""Save Version"" and select ""Save & Run All"" to build a new Notebook version using your code. Once the new Notebook Version is done (it must run top-to-bottom within the Notebooks platform constraints), navigate to the Notebook Viewer page to see the execution results, then find and select your submission file in the Output section, and you should see a “Submit” button to submit it to the Competition.",competition "What is Data Leakage? ","Data Leakage is the presence of unexpected additional information in the training data, allowing a model or machine learning algorithm to make unrealistically good predictions. Leakage is a pervasive challenge in applied machine learning, causing models to look far more accurate than they really are and often rendering them useless in the real world. It can be caused by human or mechanical error, and can be intentional or unintentional in both cases. Some types of data leakage include: - Leaking test data into the training data - Leaking the correct prediction or ground truth into the test data - Leaking of information from the future into the past - Retaining proxies for removed variables a model is restricted from knowing - Reversing of intentional obfuscation, randomization or anonymization - Inclusion of data not present in the model’s operational environment - Distorting information from samples outside the scope of the model’s intended use - Any of the above present in third party data joined to the training set ## Examples One concrete example we’ve seen occurred in a dataset used to predict whether a patient had prostate cancer. Hidden among hundreds of variables in the training data was a variable named PROSSURG. It turned out this represented whether the patient had received prostate surgery, an incredibly predictive but out-of-scope value. The resulting model was highly predictive of whether the patient had prostate cancer but was useless for making predictions on new patients. This is an extreme example - many more instances of leakage occur in subtle and hard-to-detect ways. An early Kaggle competition, Link Prediction for Social Networks, makes a good case study of this. There was a sampling error in the script that created that dataset for the competition: a > sign instead of a >= sign meant that, when a candidate edge pair had a certain property, the edge pair was guaranteed to be true. A team exploited this leakage to take second in the competition. Furthermore, the winning team won not by using the best machine-learned model, but by scraping the underlying true social network and then defeating the anonymization of the nodes with a very clever methodology. Outside of Kaggle, we’ve heard war stories of models with leakage running in production systems for years before the bugs in the data creation or model training scripts were detected. ## Leakage in Competitions Leakage is especially challenging in machine learning competitions. In normal situations, leaked information is typically only used accidentally. But in competitions, participants often find and intentionally exploit leakage where it is present.
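To see why leaked signals are so tempting to exploit, here is a tiny synthetic sketch in the spirit of the PROSSURG example above (scikit-learn assumed, as in Kaggle's default environment; every name here is made up). A proxy for the target makes cross-validation look near-perfect, even though the feature would never exist for genuinely new patients:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
legit = rng.normal(size=(n, 5))                     # features known before the outcome
y = (legit[:, 0] + rng.normal(scale=2.0, size=n) > 0).astype(int)

# A leaky proxy: recorded only after the outcome is known, so it encodes the answer.
proxy = y + rng.normal(scale=0.01, size=n)

X = pd.DataFrame(legit, columns=[f'f{i}' for i in range(5)])
print('without proxy:', cross_val_score(LogisticRegression(), X, y).mean())

X['leaky_proxy'] = proxy
print('with proxy:   ', cross_val_score(LogisticRegression(), X, y).mean())  # near 1.0
```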
Participants may also leverage external data sources to provide more information on the ground truth. In fact, “the concept of identifying and harnessing leakage has been openly addressed as one of three key aspects for winning data mining competitions” (source paper [source paper](http://www.cs.umb.edu/~ding/history/470_670_fall_2011/papers/cs670_Tran_PreferredPaper_LeakingInDataMining.pdf)). Identifying leakage beforehand and correcting for it is an important part of improving the definition of a machine learning problem. Many forms of leakage are subtle and are best detected by trying to extract features and train state-of-the-art models on the problem. This means that there are no guarantees that competitions will launch free of leakage, especially for Research competitions (which have minimal checks on the underlying data prior to launch). When leakage is found in a competition, there are many ways that we can address it. These may include: - Let the competition continue as is (especially if the leakage only has a small impact) - Remove the leakage from the set and relaunch the competition - Generate a new test set that does not have the leakage present Updating the competitions isn’t possible in all cases. It would be better for the competition, the participants, and the hosts if leakage became public knowledge when it was discovered. This would help remove leakage as a competitive advantage and give the host more flexibility in addressing the issue.",competition How to get started with competitions?,"## Getting Started - The Getting Started Competitions are specifically targeted at new users getting their feet wet with Kaggle and/or machine learning: - Binary classification: [Titanic: Machine Learning from Disaster](https://www.kaggle.com/c/titanic) - Regression: [House Prices: Advanced Regression Techniques](https://www.kaggle.com/c/house-prices-advanced-regression-techniques) - The [Kaggle Learn](https://www.kaggle.com/learn/overview) platform has several tracks for beginners interested in free hands-on data science learning from pandas to deep learning. Lessons within a track are separated into easily digestible chunks and contain Notebook exercises for you to practise building models and new techniques hands-on. It is a great way to start deep diving into data science and quickly get familiar with the field! - What Kaggle has learned from almost 2MM machine learning models on [Youtube](https://www.youtube.com/watch?v=oYNKc_u9Os8). This [data.bythebay.io](http://data.bythebay.io/) talk by Kaggle founder Anthony Goldbloom lays out what Kaggle competitions are all about. - How to (almost) win at Kaggle on [Youtube](https://www.youtube.com/watch?v=JyEm3m7AzkE). In this talk competitor Kiri Nichols summarizes the appeal of Competitions as a data science learner. ## Discussion - [General Discussion](https://www.kaggle.com/discussion): There are six general site Discussion Forums: - Kaggle Forum: Events and topics specific to the Kaggle community - Getting Started: The first stop for questions and discussion for new Kagglers - Product Feedback: Tell us what you love, hate, or wish for - Questions & Answers: Technical advice from other data scientists - Datasets: Requests for and discussion of open data - Learn: Questions, answers, and requests related to Kaggle Learn courses - Competition Discussion Forums: No matter the competition you are participating in, you can count on plenty of active community members making posts to the forums. 
If you get stuck on a particular aspect of the problem, Discussions are a great place to ask questions. - Competition Notebooks: Similar to Discussions, Notebooks shared within a competition are an excellent source of Exploratory Data Analyses (EDAs) & basic starter models which can be forked and built upon for applied learning. - The Kaggle Noobs Slack channel: This Slack channel is a popular watering hole for general banter among Kaggle ML practitioners from Novice to Grandmaster. ## Techniques - Public, reproducible code examples in Notebooks are a great way to learn and put to practice new techniques. Search for techniques in Notebooks by tag using the search syntax |tag:classification|. Fork Notebooks to make a copy of the code to modify and experiment with. - The [No Free Hunch](http://blog.kaggle.com/) blog. No Free Hunch is a great way of keeping up with goings-on on Kaggle. Many past Competitions winners have been interviewed about and presented their winning models on No Free Hunch. Here are some examples of past winner’s interviews: - NOAA Right Whale Identification - Instacart Market Basket Analysis, Winner’s Interview: 2nd place, Kazuki Onodera - Two Sigma Financial Modeling Code Competition - Various tutorials have been published on No Free Hunch: - An Intuitive Introduction to Generative Adversarial Networks - Introduction To Neural Networks - A Kaggle Master Explains Gradient Boosting - A Kaggler’s Guide to Model Stacking in Practice - Marios Michailidis: How to become a Kaggle #1: An introduction to model stacking: In this Data Science Festival talk top Kaggler Marios Michailidis (Kasanova) explains model stacking, a key feature of winning competition models, in great detail. - Kaggle Grandmaster Panel: A panel Q&A from H2O World 2017 featuring some top Kagglers. - How to Win A Kaggle Competition - Learn From Top Kagglers: This Coursera course, put together by high-ranking Kagglers, going into great detail on the tools and techniques used by winning Competitions models.",competition How does Kaggle handle cheating?,"Cheating is not taken lightly on Kaggle. We monitor our compliance account [the formal channel for reporting cheaters, or appealing a removal for cheating](https://www.kaggle.com/compliance) during competitions. We also spend a considerable amount of time at the close of each competition to review suspicious activity and remove people who have violated the rules from the leaderboard. When we believe we have sufficient evidence, we take action through removal or possibly even an account ban. We also monitor and investigate moderation reports (plagiarism, voting rings, etc.) throughout the week, and take action as appropriate, which includes removing medals as well as full-out blocking accounts. If you believe you have evidence that suggests a team violated competition rules, please report it to the Competitions compliance account [here](https://www.kaggle.com/compliance) for a thorough investigation.",competition How can I efficiently utilize GPUs on Kaggle? ,"Kaggle provides free access to NVIDIA P100 (16GB) and 2 x T4 (16GB) GPUs. These GPUs are useful for training deep learning models, though they do not accelerate most other workflows (i.e. libraries like pandas and scikit-learn do not benefit from access to GPUs). You can use up to a quota limit per week of GPU. The quota resets weekly and is 30 hours or sometimes higher depending on demand and resources. Here are some tips and tricks to get the most of your GPU usage on Kaggle. 
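Before working through the tips below, it is worth a quick check that your code actually sees the accelerator at all; a minimal sketch using the two frameworks that benefit most (both ship in Kaggle's default environment):

```python
import tensorflow as tf
import torch

print('TensorFlow sees:', tf.config.list_physical_devices('GPU'))
print('PyTorch sees CUDA:', torch.cuda.is_available())
```

If these report an empty list and False, the GPU is not enabled for the session (or your session does not need one).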
In general, your most helpful levers will be: - Only turn on the GPU if you plan on using the GPU. GPUs are only helpful if you are using code that takes advantage of GPU-accelerated libraries (e.g. TensorFlow, PyTorch, etc). - Actively monitor and manage your GPU usage - Kaggle has tools for monitoring GPU usage in the settings menu of the Notebooks editor, at the top of the page at kaggle.com/notebooks, on your profile page, and in the session management window. - Avoid using batch sessions (the commit button) to save or checkpoint your progress. Batch sessions (commits) run all of the code from top to bottom. This is less efficient than simply downloading the .ipynb file from the Notebook editor. - Cancel unnecessary batch sessions - The same Notebook can have multiple concurrent batch sessions if you press the commit button prior to completing the first commit. If your latest code has been updated as compared to your previous code, it is likely better for you to cancel that first commit and leave only the 2nd commit running. - Stop interactive sessions prior to closing the window. Interactive sessions remain active until they reach the 60 minute idle timeout limit. If you stop the session prior to closing your window it can save you up to 60 minutes of compute. - You can use the Active Events window in the lower left hand corner of your screen to manage your active sessions including stopping unused interactive sessions. Learn more about Active Events [here](https://www.kaggle.com/product-feedback/193925). - Consider using the Kaggle-API to avoid interactive sessions entirely. With the Kaggle API you can push a new version of your notebook without ever opening up an interactive session in the Notebook editor. We hope help you get the most from our free GPU compute. Happy Kaggling!",gpu What is Tensor Processing Units (TPUs)?,"TPUs (TPU-v3) are now available on Kaggle, for free. TPUs are hardware accelerators specialized in deep learning tasks. They are supported in Tensorflow 2.1 both through the Keras high-level API and, at a lower level, in models using a custom training loop. You can use up to 20 hours per week of TPUs and up to 9h at a time in a single session. > If you'd like to jump straight into a sample, here it is: [Five flowers with Keras and Xception on TPU](https://www.kaggle.com/mgornergoogle/five-flowers-with-keras-and-xception-on-tpu).",tpu How to use TPU with Keras or TensorFlow?,"Once you have flipped the ""Accelerator"" switch in your notebook to ""TPU v3-8"", this is how to enable TPU training in Tensorflow Keras: ```python # detect and init the TPU tpu = tf.distribute.cluster_resolver.TPUClusterResolver() # instantiate a distribution strategy tf.tpu.experimental.initialize_tpu_system(tpu) tpu_strategy = tf.distribute.TPUStrategy(tpu) # instantiating the model in the strategy scope creates the model on the TPU with tpu_strategy.scope(): model = tf.keras.Sequential( … ) # define your model normally model.compile( … ) # train model normally model.fit(training_dataset, epochs=EPOCHS, steps_per_epoch=…) ``` TPUs are network-connected accelerators and you must first locate them on the network. This is what `TPUClusterResolver.connect()` does. You then instantiate a `TPUStrategy`. This object contains the necessary distributed training code that will work on TPUs with their 8 compute cores (see hardware section below <#tpuhardware>). Finally, you use the `TPUStrategy` by instantiating your model in the scope of the strategy. This creates the model on the TPU. 
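A quick way to confirm the TPU was actually found is to ask the strategy how many replicas it controls; a minimal sketch, assuming the detection code above has already run and tensorflow is imported as tf:

```python
# Assumes `tpu_strategy` was created as in the snippet above.
print('Number of replicas:', tpu_strategy.num_replicas_in_sync)  # 8 on a TPU v3-8
print('TPU devices:', tf.config.list_logical_devices('TPU'))
```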
Model size is constrained by the TPU RAM only, not by the amount of memory available on the VM running your Python code. Model creation and model training use the usual Keras APIs.",tpu What are some best practices for optimizing performance on TPUs?,"To go fast on a TPU, increase the batch size. The rule of thumb is to use batches of 128 elements per core (ex: batch size of 128*8=1024 for a TPU with 8 cores). At this size, the 128x128 hardware matrix multipliers of the TPU (see hardware section below <#tpuhardware>) are most likely to be kept busy. You start seeing interesting speedups from a batch size of 8 per core though. In the sample above, the batch size is scaled with the core count through this line of code: ``` BATCH_SIZE = 16 * tpu_strategy.num_replicas_in_sync ``` With a TPUStrategy running on a single TPU v3-8, the core count is 8. This is the hardware available on Kaggle. It could be more on larger configurations called TPU pods available on Google Cloud. ### Illustration of Batch Size and Learning Rate Scaling Rule of Thumb on TPU With larger batch sizes, TPUs will be crunching through the training data faster. This is only useful if the larger training batches produce more “training work” and get your model to the desired accuracy faster. That is why the rule of thumb also calls for increasing the learning rate with the batch size. You can start with a proportional increase but additional tuning may be necessary to find the optimal learning rate schedule for a given model and accelerator. Starting with Tensorflow 2.4, `model.compile()` accepts a new `steps_per_execution` parameter. This parameter instructs Keras to send multiple batches to the TPU at once. In addition to lowering communications overheads, this gives the XLA compiler the opportunity to optimize TPU hardware utilization across multiple batches. With this option, it is no longer necessary to push batch sizes to very high values to optimize TPU performance. As long as you use batch sizes of at least 8 per core (>=64 for a TPUv3-8) performance should be acceptable. Example: ```python model.compile( … , steps_per_execution=32) ``` ### `tf.data.Dataset` and TFRecords Because TPUs are very fast, many models ported to TPU end up with a data bottleneck. The TPU is sitting idle, waiting for data for the most part of each training epoch. TPUs read training data exclusively from GCS (Google Cloud Storage). And GCS can sustain a pretty large throughput if it is continuously streaming from multiple files in parallel. Following a couple of best practices will optimize the throughput: > For TPU training, organize your data in GCS in a reasonable number (10s to 100s) of reasonably large files (10s to 100s of MB). With too few files, GCS will not have enough streams to get max throughput. With too many files, time will be wasted accessing each individual file. Data for TPU training typically comes sharded across the appropriate number of larger files. The usual container format is TFRecords. You can load a dataset from TFRecords files by writing: ```python # On Kaggle you can also use KaggleDatasets().get_gcs_path() to obtain the GCS path of a Kaggle dataset filenames = tf.io.gfile.glob(""gs://flowers-public/tfrecords-jpeg-512x512/*.tfrec"") # list files on GCS dataset = tf.data.TFRecordDataset(filenames) dataset = dataset.map(...) # TFRecord decoding here... 
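# --- illustrative continuation, not part of the original snippet ---
# A typical input pipeline would also shuffle, batch and prefetch before model.fit(), e.g.:
#   dataset = dataset.shuffle(2048)
#   dataset = dataset.batch(BATCH_SIZE)                      # BATCH_SIZE as defined earlier
#   dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE)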
``` To enable parallel streaming from multiple TFRecord files, modify the code like this: ```python AUTO = tf.data.experimental.AUTOTUNE ignore_order = tf.data.Options() ignore_order.experimental_deterministic = False # On Kaggle you can also use KaggleDatasets().get_gcs_path() to obtain the GCS path of a Kaggle dataset filenames = tf.io.gfile.glob(""gs://flowers-public/tfrecords-jpeg-512x512/*.tfrec"") # list files on GCS dataset = tf.data.TFRecordDataset(filenames, num_parallel_reads=AUTO) dataset = dataset.with_options(ignore_order) dataset = dataset.map(...) # TFRecord decoding here... ``` There are two settings here: - `num_parallel_reads=AUTO` instructs the API to read from multiple files if available. It figures out how many automatically. - `experimental_deterministic = False` disables data order enforcement. We will be shuffling the data anyway so order is not important. With this setting the API can use any TFRecord as soon as it is streamed in. Some details have been omitted from these code snippets so check the sample for the full data pipeline code. In Keras and TensorFlow 2.1, it is also possible to send training data to TPUs as numpy arrays in memory. This works but is not the most efficient way, although for datasets that fit in memory, it can be OK.",tpu How to use TPUs with private datasets?,"TPUs work with both public Kaggle Datasets as well as private Kaggle Datasets. The only difference is that if you want to use a private Kaggle Dataset then you need to: 1. Enable “Google Cloud SDK” in the “Add-ons” menu of the notebook editor. 2. Initialize the TPU and then run the “Google Cloud SDK credentials” code snippet. 3. Take note of the Google Cloud Storage path that is returned. ```python # Step 1: Get the credential from the Cloud SDK from kaggle_secrets import UserSecretsClient user_secrets = UserSecretsClient() user_credential = user_secrets.get_gcloud_credential() # Step 2: Set the credentials user_secrets.set_tensorflow_credential(user_credential) # Step 3: Use a familiar call to get the GCS path of the dataset from kaggle_datasets import KaggleDatasets GCS_DS_PATH = KaggleDatasets().get_gcs_path() ```",tpu What are the main features of the TPU?,"At approximately 20 inches (50 cm), a TPU v3-8 board is a fairly sizeable piece of hardware. It sports 4 dual-core TPU chips for a total of 8 TPU cores. Each TPU core has a traditional vector processing part (VPU) as well as dedicated matrix multiplication hardware capable of processing 128x128 matrices. This is the part that specifically accelerates machine learning workloads. TPUs are equipped with 128GB of high-speed memory allowing larger batches, larger models, and also larger training inputs. In the sample above, you can try using 512x512 px input images, also provided in the dataset, and see the TPU v3-8 handle them easily.",tpu How to monitor TPU?,"TPU monitor When you are runnig a TPU workload on Kaggle, a performance monitor appears when you click on the TPU gauge. The MXU percentage indicates how efficiently the TPU compute hardware is utilized. Higher is better. The ""Idle Time"" percentage measures how often the TPU is sitting idle waiting for data. You should optimize you data pipeline to make this as low as possible. The measurements are refreshed approximately every 10 seconds and only appear when the TPU is running a computation.",tpu How to load and save model on TPU?,"When loading and saving TPU models from/to the local disk, the `experimental_io_device` option must be used. 
The technical explanation is at the end of this section. It can be omitted if writing to GCS because TPUs have direct access to GCS. This option does nothing on GPUs. ## Saving a TPU model locally ```python save_locally = tf.saved_model.SaveOptions(experimental_io_device='/job:localhost') model.save('./model', options=save_locally) # saving in Tensorflow's ""SavedModel"" format ``` ## Loading a TPU model from local disk ```python with strategy.scope(): load_locally = tf.saved_model.LoadOptions(experimental_io_device='/job:localhost') model = tf.keras.models.load_model('./model', options=load_locally) # loading in Tensorflow's ""SavedModel"" format ``` ## Writing checkpoints locally from a TPU model ```python save_locally = tf.saved_model.SaveOptions(experimental_io_device='/job:localhost') checkpoints_cb = tf.keras.callbacks.ModelCheckpoint('./checkpoints', options=save_locally) model.fit(…, callbacks=[checkpoints_cb]) ``` ## Loading a model from Tensorflow Hub to TPU directly ```python import tensorflow_hub as hub with strategy.scope(): load_locally = tf.saved_model.LoadOptions(experimental_io_device='/job:localhost') pretrained_model = hub.KerasLayer('https://tfhub.dev/tensorflow/efficientnet/b6/feature-vector/1', trainable=True, input_shape=[512,512,3], load_options=load_locally) ``` Example in this [EfficientNetB7 Notebook](https://www.kaggle.com/mgornergoogle/efficientnetb7-on-100-flowers#Model). ## `experimental_io_device` explained To understand what the `experimental_io_device='/job:localhost'` flag does, some background info is needed first. TPU users will remember that in order to train a model on TPU, you have to instantiate the model in a `TPUStrategy` scope. Like this: ```python # connect to a TPU and instantiate a distribution strategy tpu = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='local') tf.tpu.experimental.initialize_tpu_system(tpu) tpu_strategy = tf.distribute.TPUStrategy(tpu) # instantiate the model in the strategy scope with tpu_strategy.scope(): model = tf.keras.Sequential( … ) ``` This boilerplate code actually does 2 things: 1. The strategy scope instructs Tensorflow to instantiate all the variables of the model in the memory of the TPU. 2. The `TPUClusterResolver.connect()` call automatically enters the TPU device scope which instructs Tensorflow to run Tensorflow operations on the TPU. Now if you call `model.save('./model')` when you are connected to a TPU, Tensorflow will try to run the save operations on the TPU and since the TPU is a network-connected accelerator that has no access to your local disk, the operation will fail. Notice that saving to GCS will work though. The TPU does have access to GCS. If you want to save a TPU model to your local disk, you need to run the saving operation on your local machine and that is what the `experimental_io_device='/job:localhost'` flag does.",tpu How to use TPU in competitions?,"Due to technical limitations for certain kinds of code-only competitions we aren’t able to support notebook submissions that run on TPUs, made clear in the competition's rules. But that doesn’t mean you can’t use TPUs to train your models! A workaround to this restriction is to run your model training in a separate notebook that uses TPUs, and then to save the resulting model. You can then load that model into the notebook you use for your submission and use a GPU to run inference and generate your predictions. 
Here’s how that would work in practice: ## Step 1: Save the Model ```python # Save your model to disk using the .save() functionality. Here we save in .h5 format # This step will be replaced with an alternative call to save models in Tensorflow 2.3 model.save('model.h5') ``` ## Step 2: Put your model in a dataset You can easily create a dataset from the output of your notebook from the dataviewer. For more details, you can see our [Dataset Documentation](https://www.kaggle.com/docs/datasets#creating-a-dataset). ## Step 3: Load your model into inference Notebook ```python # You can now load your model and run inference using a GPU in this notebook. # Because this notebook only uses a GPU, you can submit it to competitions model = tf.keras.models.load_model('../input/yourDataset/model.h5') ``` ## More information and tutorials A hands-on TPU tutorial containing more information, best practices and samples is available here: [Keras and modern convnets, on TPUs](https://codelabs.developers.google.com/codelabs/keras-flowers-tpu/). You can also check out our TPU video tutorial, [Learn With Me: Getting Started With TPUs](https://youtu.be/1pdwRQ1DQfY), on our [YouTube channel](https://www.youtube.com/kaggle)! ## TPU playground competition We have prepared a dataset of 13,000 images of flowers for you to play with. You can give TPUs a try in this playground competition: [Flower Classification with TPUs](https://www.kaggle.com/c/flower-classification-with-tpus). For an easy way to begin, check out this tutorial notebook and starter project, a part of our Deep Learning course: * [Getting Started with Petals to the Metal](https://www.kaggle.com/ryanholbrook/create-your-first-submission) * [Starter Project: Create Your First Submission](https://www.kaggle.com/kernels/fork/10204702)",tpu How to use TPUs in PyTorch?,"Once you have flipped the ""Accelerator"" switch in your notebook to ""TPU v3-8"", this is how to enable TPU training in PyTorch: # Step 1: Install Torch-XLA (PyTorch with Accelerated Linear Algebra (XLA) support) ```shell !curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py !python pytorch-xla-env-setup.py --version nightly --apt-packages libomp5 libopenblas-dev ``` # Step 2: Run your PyTorch code TPUs (TPU v3-8) have 8 cores, and each core is itself an XLA device.You can run code on a single XLA device, but to take full advantage of the TPU you will want to run your code on all 8 cores simultaneously. For examples that demonstrate how to do this, you can refer to - [The Ultimate PyTorch TPU Tutorial](https://www.kaggle.com/tanlikesmath/the-ultimate-pytorch-tpu-tutorial-jigsaw-xlm-r), - [I Like Clean TPU Training Kernels and I Can Not Lie](https://www.kaggle.com/abhishek/i-like-clean-tpu-training-kernels-i-can-not-lie), - [Super Duper Fast PyTorch TPU Kernel](https://www.kaggle.com/abhishek/super-duper-fast-pytorch-tpu-kernel), - [XLM Roberta Large Pytorch TPU](https://www.kaggle.com/philippsinger/xlm-roberta-large-pytorch-pytorch-tpu?scriptVersionId=38462589). 
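Before moving to full 8-core training, a minimal single-core smoke test can confirm that the XLA installation above works; a hedged sketch (tensor sizes are arbitrary):

```python
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()                 # one TPU core exposed as a PyTorch device
x = torch.randn(8, 128, device=device)
w = torch.randn(128, 64, device=device)
print((x @ w).sum().item(), device)      # .item() forces the lazy XLA graph to execute
```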
You should also note the following when using TPUs with PyTorch: #1: Startup Script [https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py](https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py) #2: Distributed training function mp_fn ```python xmp.spawn(_mp_fn, nprocs=8, start_method='fork') ``` #3: Instantiate model outside of mp_fn and use MpModelWrapper ```python MX = JigsawModel() => MX = xmp.MpModelWrapper(JigsawModel()) ``` #4: Send model to TPU device ```python device = xm.xla_device() model = MX.to(device) ``` #5: Changes to training loop: send data to device ```python ids = ids.to(device, dtype=torch.long) token_type_ids = token_type_ids.to(device, dtype=torch.long) mask = mask.to(device, dtype=torch.long) targets = targets.to(device, dtype=torch.float) ``` #6: Printing messages ```python xm.master_print ``` #7: Loading data ```python train_dataset = … # user-defined, can be outside of mp_fn # in mp_fn: train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset, num_replicas=xm.xrt_world_size(),rank=xm.get_ordinal(), …) train_data_loader = torch.utils.data.DataLoader(train_dataset, sampler=train_sampler, …) ``` #8: Training on data ```python for epoch in range(EPOCHS): para_loader = pl.ParallelLoader(train_data_loader, [device]) train_fn(para_loader.per_device_loader(device), …) ``` #9: Results from TPU ```python xm.mesh_reduce ``` #10: Model save / restore (memory-optimized) ```python import torch_xla.utils.serialization as xser xser.save(model.state_dict(), f""model.bin"", master_only=True) model.load_state_dict(xser.load(f""model.bin"")) ``` #11: Model save / restore (PyTorch standard) ```python torch_xla.core.xla_model.save torch.load(...) ``` #12: Out of memory datasets: ```python Can be loaded from localhost Of loaded from GCS in TFRecord format, a TFRecords PyTorch loader exists ```",tpu What is Kaggle Models?,"[Kaggle Models](https://www.kaggle.com/models) provides a way to discover, use, and (soon) share public pre-trained models for machine learning. Kaggle Models is a repository of TensorFlow and PyTorch pre-trained models that are easy to use in Kaggle Competition notebooks. Like Datasets, Kaggle Models will also organize community activity which will enrich models' usefulness; every model page will contain discussions, public notebooks, and usage statistics like downloads and upvotes that make models more useful. Kaggle Models is a new product which the Kaggle team will continue to develop and improve based on what the community would like to see. If you'd like to make suggestions for improvements or new features or report bugs, we recommend you create a new topic on the [Product Feedback forum](https://www.kaggle.com/discussions/product-feedback/new).",model Where do Models come from?,"Currently, Kaggle Models come from curated sources. In the future, we will add publishing capabilities so anyone who wants to release a model can do so. In the meantime, if you'd like to suggest a new curated source, you can either post a request on the [Product Feedback forum](https://www.kaggle.com/discussions/product-feedback/new) or submit a response to this [Google Form](https://forms.gle/7LMF6f4wfGmoUTcm8) for our team to review. Alternatively, if you publish a model on TensorFlow Hub, it will be automatically synced to Kaggle Models.",model How to find Kaggle Models?,"You can find Kaggle Models by using the [Models landing page](https://www.kaggle.com/models). 
There are a number of filters and sorts plus free text search. For instance, you can search by: - Filtering to TensorFlow models - Filtering by the task tag you want (e.g., classification) - Filtering by model size - Searching ""BERT"" in the free text search - Sorting by number of upvotes - Etc. You may also want to peruse competitions to see what models are performing well or are otherwise popular for tasks relevant to your use case. Competitors commonly share which models they're using in public notebooks and in discussion write-ups. When you fork a notebook that has a model from Kaggle Models attached to it, your copy will also have the same model attached. Finally, you can also search for models from within the notebook editor. Use the ""Add Models"" component in the right-hand pane of the editor to search and attach models to your notebooks. This works similarly to Datasets.",model What is in the model detail page?,"When you click on a model you will be taken to the ""detail page"" for that model. For example, this is the detail page for a [BERT model](https://www.kaggle.com/models/google/bert). The model detail page contains an overview tab with a Model Card (metadata and information about how the model was trained, what its acceptable use cases are, any limitations, etc.), a framework and variation explorer, and a usage dashboard. There are tabs for notebooks and discussions. If a model is useful, you can upvote it. Beyond the overall metadata, a model detail page also organizes all variations and frameworks for a given model. For example: - **Variations**: The same model with different numbers of parameters, e.g., small, medium, and large. - **Frameworks**: The same model with different ML library compatibility, e.g., TensorFlow, PyTorch, etc. You can view and use the specific framework and variation that you want by selecting it in the file explorer on the overview page beneath the Model Card. From here, you can click ""New Notebook"" to attach it to a new notebook and start using the model.",model "How to use Kaggle Models? ","Currently, Kaggle Models are most useful within the context of Competitions, specifically for use within Notebooks. Start by either forking a notebook that has a model attached (you can view the attached models on the ""Input"" tab of any notebook), creating a new notebook on a model, or adding a model to a new notebook from the right-hand pane of the editor. You’ll be prompted to confirm your framework and model variation(s), then simply copy and paste the starter code to load the model.",model How to create Kaggle Models?,"Currently, Kaggle Models is a repository of model sources curated by the Kaggle team. In the future, anyone will be able to share a model to Kaggle Models for use in Competition notebooks and beyond. In the meantime, if you'd like to suggest a new curated source, you can either post a request on the [Product Feedback forum](https://www.kaggle.com/discussions/product-feedback/new) or submit a response to this [Google Form](https://forms.gle/7LMF6f4wfGmoUTcm8) for our team to review. Alternatively, if you publish a model on TensorFlow Hub, it will be synced to Kaggle Models as long as it uses an Apache 2.0, MIT, or CC0 license type.",model What is Kaggle Notebooks?,"A cloud computational environment that enables reproducible and collaborative analysis to explore and run machine learning code.",noteboook What are the different types of notebooks available on Kaggle?,"There are two different types of Notebooks on Kaggle.
## Scripts The first type is a script. Scripts are files that execute everything as code sequentially. To start a script, click on “Create Notebook” and select “Script”. This will open the Scripts editing interface. From here you may select what type of script you would like to execute. You may write scripts in R or in Python. You can also execute selected lines of code by highlighting the code in the editor interface and clicking the “Run” button or hitting shift-enter. Any results will be printed to the console. “[Deep Learning Support [.9663]](https://www.kaggle.com/alexanderkireev/deep-learning-support-9663)” from the [TalkingData AdTracking Fraud Detection Challenge](https://www.kaggle.com/c/talkingdata-adtracking-fraud-detection) is a great example of a Script-type. ### RMarkdown Scripts RMarkdown scripts are a special type of script that executes not just R code, but RMarkdown code. This is a combination of R code and Markdown editing syntax that is preferred by most R authors in our community. The RMarkdown editor is the same one used for basic R or Python scripts, except that it uses the special RMarkdown syntax. To start editing an RMarkdown script, click on “Create Notebook”, navigate to the “Scripts” pane, and click on that. Then, in the language dropdown, click on “RMarkdown”. “[Head Start for Data Science](https://www.kaggle.com/hiteshp/head-start-for-data-scientist)” is a great example of an RMarkdown Script-type. ## Notebooks The second type is Jupyter notebooks (usually just “notebooks”). Jupyter notebooks consist of a sequence of cells, where each cell is formatted in either Markdown (for writing text) or in a programming language of your choice (for writing code). To start a notebook, click on “Create Notebook”, and select “Notebook”. This will open the Notebooks editing interface. Notebooks may be written in either R or Python. “[Comprehensive data exploration with Python](https://www.kaggle.com/pmarcelino/comprehensive-data-exploration-with-python)” is a great example of a Python Jupyter Notebook-type. “[How to Become a Data Scientist](https://www.kaggle.com/jackcook/how-to-become-a-data-scientist)” is a great example of an R Jupyter Notebook-type.",noteboook How to find notebooks on Kaggle?,"In addition to being an interactive editing platform, Kaggle Notebooks lets you find and use notebooks and code that others in the community have shared publicly. Kagglers working with data across both the Datasets and Competitions platforms are constantly building cool things. Exploring and reading other Kagglers’ code is a great way to both learn new techniques and stay involved in the community. There’s no better place than Kaggle Notebooks to discover such a huge repository of public, open-sourced, and reproducible code for data science and machine learning. The latest and greatest from Notebooks is surfaced on Kaggle in several different places. ## Site Search You can use the site search in the top bar of the website while on any page to look for not only Notebooks but Datasets, Competitions, Users, and more across Kaggle. Start typing a search query to get quick results and hit ""Enter"" to see a full page of results that you can drill down into. From the full page search results, you can filter just to ""Notebooks"" and add even more filter criteria using the filter options on the left-hand side of the page. ## Homepage When you’re logged into your Kaggle account, the [Kaggle homepage](https://kaggle.com/) provides a live newsfeed of what people are doing on the platform.
While Discussion forum posts and new Datasets make up some of the contents of the homepage, most of it is dedicated to hot new Notebooks activity. By browsing down the page you can check out all the latest updates from your fellow Kagglers. You can tweak your newsfeed to your liking by following other Kagglers. To follow someone, go to their profile page and click on “Follow User”. Content posted and upvotes made by users you have followed will show up more prominently. The same is true of other users who choose to follow you. Post high-quality notebooks and datasets and you will soon find other users following along with what you are doing! ## Notebook Listing A more structured way of accessing Notebooks is the Notebook listing [Notebook Listing](https://www.kaggle.com/notebooks), accessible from the “Notebooks” tab in the main menu bar. The Notebook listing is sorted by [Hotness](https://www.kaggle.com/notebooks?sortBy=hotness&group=everyone&pageSize=20) by default. “Hotness” is what it sounds like: a way of measuring the interestingness of Notebooks on the platform. Notebooks which score highly in Hotness, and thus appear highly in this list, are usually either recently written Notebooks that are scoring highly in things like upvotes and views, or “all-time” greats that have been consistently popular on the platform for a long time. Other methods of sorting are by - [Most Votes](https://www.kaggle.com/code?sortBy=voteCount): Surfaces the most popular notebooks of all time - [Most Comments](https://www.kaggle.com/code?sortBy=commentCount): Returns the most discussed notebooks of all time - [Recently Created](https://www.kaggle.com/code?sortBy=dateCreated): A real-time stream of new Notebooks - [Recently Run](https://www.kaggle.com/code?sortBy=dateRun): A real-time stream of activity - [Relevance](https://www.kaggle.com/code?sortBy=relevance): Sorts the results based on their relevance to the query Other filtering options, available from the navigation bar, are Categories (Datasets or Competitions?), Outputs, Languages (R or Python?), and Types (Script or Notebook?). You can also use the Notebook listing to sort through your own Notebooks (“Your Work”), find Notebooks that others have shared with you (""Shared With You""), or to look at Notebooks you have previously upvoted (“Favorites”). Finally, a Notebooks-specific search bar is available here. This is often the fastest way to find a specific Notebook that you are looking for. ## Datasets and Competitions Data on Kaggle is available through either Datasets or our Competitions. Both prominently feature the best community-created Notebooks on the “Notebooks” tab. Browsing Notebooks on Datasets and Competitions provides a way to quickly get acquainted with a specific dataset. You can fork any existing public Notebook to make a copy of the code and start experimenting with changes. The Iris Species dataset [Iris Species Dataset](https://www.kaggle.com/uciml/iris) and the Titanic competition [Titanic Competition](https://www.kaggle.com/c/titanic/notebooks) are two classic examples of Datasets and Competitions, respectively, hosting great Notebooks on their content. ## Tags and Tag Pages Tags are the most advanced of the searching options available in the Notebook listing page. Tags are added by Notebook owners to indicate the topic of the Notebook, techniques you can use (e.g., “classification”), or the type of the data itself (e.g., “text data”). 
You can navigate to tag pages to browse more content sharing a tag either by clicking on a tag on a Notebook, or by searching by tag using the tag-specific search syntax: `tag:[TAG NAME]`. Searching by tags allows you to search for Notebooks by topical area or technique. For example, if you are interested in learning new techniques for tackling classification problems you might try a search with the tag “classification” (`tag:classification`); if you are interested in an analysis of police records maybe a search with “crime” (`tag:crime`) would do the trick. Alternatively, you can achieve the same thing by visiting the related tag pages. For example, the crime and classification tags live at [Crime Tag](https://www.kaggle.com/tags/crime) and [Classification Tag](https://www.kaggle.com/tags/classification), respectively. Tag pages include a section listing the most popular Notebooks with the given tag, making them a great way of searching for Notebooks by content.",noteboook How to use Notebook Editor on Kaggle?,"Kaggle Notebooks may be created and edited via the Notebook editor. On larger screens, the Notebook editor consists of three parts: - An editing window - A console - A settings window The Notebook editor allows you to write and execute both traditional Scripts (code-only files ideal for batch execution or RMarkdown scripts) and Notebooks (an interactive code and Markdown editor ideal for narrative analyses, visualizations, and sharing work). The main difference between Scripts and Notebooks is the editing pane and how you experience editing and executing code. ## Editing Whether you use Scripts or Notebooks might depend on your choice of language and what your use case is. R users tend to prefer Scripts, while Python users prefer Notebooks. For more on why that is, refer to the “[Types of Notebooks](https://www.kaggle.com/docs/notebooks#types-of-notebooks)” section. Scripts are also favored for making competition submissions where the code is the focus, whereas Notebooks are popular for sharing EDAs (exploratory data analysis), tutorials, and other share-worthy insights. Both editing interfaces are organized around the concept of “Versions”. This is a collection consisting of a Notebook version, the output it generates, and the associated metadata about the environment. In the Script editor, the code you write is executed all at once, whenever you generate a new version. For finer-grained control, it’s also possible to specifically execute only a single line or selection of lines of code. Notebooks are built on Jupyter notebooks. Notebooks consist of individual cells, each of which may be a Markdown (text) cell or a code cell. Code can be run (and the resulting variables saved) by running individual code cells, and cells can be added or deleted from the notebook at any time. ## Console The console tab provides an alternative interface to the same Python or R container running in the Notebook. Commands you input into the console will not change the content of your version. However, any variables you create in the console will persist throughout the session (unless you delete them). Additionally, any code that you execute in the editor will also execute in the console pane. ## Settings In the expanded editor, the settings pane takes up the right side of the screen. In the compact editor (where you hide the settings pane), it is folded into tabs above the Editor tab.
In either case, the settings pane contains the following tabs: a “Data” tab that provides a way of adding or removing data sources from the Notebook, and a “Settings” tab. The Settings tab lets you change the Language, select a Docker image, toggle Internet access (which is on by default), and choose an Accelerator between CPU (default), GPU, and TPU. Language is the programming language the Notebook is authored in. You can use it to switch between R and Python in the notebook flavor, and between R, RMarkdown, and Python in the script flavor. For more details on the differences, see the “[Types of Notebooks](https://www.kaggle.com/docs/notebooks#types-of-notebooks)” section. The Docker image section can be used to pin the R or Python environment used for the Notebook against a certain Docker container version. More information can be found in ""The Notebook Environment"" section.",noteboook How to add data sources to Kaggle Notebooks?,"One of the advantages of using Notebooks as your data science workbench is that you can easily add data sources from thousands of publicly available Datasets or even upload your own. You can also use output files from another Notebook as a data source. You can add multiple data sources to your Notebook’s environment, allowing you to join together interesting datasets. ## Datasets Kaggle Datasets provide a rich mix of interesting datasets for any kind of data science project. There are two ways of loading a Dataset in a Notebook. The first is to navigate to a chosen dataset’s landing page, then click on the “[New Notebook](https://www.kaggle.com/notebooks?modal=true)” button. This will launch a new Notebook session with the dataset in question spun up and ready to go. Alternatively, you may wish to add datasets after creating your Notebook. To do that, navigate to the “Data” pane in a Notebook editor and click the “Add Data” button. This will open a modal that lets you select Datasets to add to your Notebook. ## Competitions You can also add Competition data sources to your Notebook environment using the same steps as above. The main difference is that you need to accept the rules for any Competition data sources you add to your Notebook. Whether you start a new Notebook from the “Notebooks” tab of a Competition or add a Competition data source from an existing Notebook editor, you’ll be prompted to read and accept the rules first. You can mix Competitions and Datasets data sources in the same Notebook, but please be sure to abide by the rules of the specific Competition with respect to using external data sources. If you don’t, you risk consequences for rule-breaking in the Competition. ## Notebooks You will notice that there is a third option in the “Add Data” modal: Notebook Output Files. Up to 20 GB of output from a Notebook may be saved to disk in `/kaggle/working`. This data is saved automatically and you can then reuse that data in any future Notebook: just navigate to the “Data” pane in a Notebook editor, click on “Add Data”, click on the ""Notebook Output Files"" tab, find a Notebook of interest, and then click to add it to your current Notebook. By chaining Notebooks as data sources in this way, it’s possible to build pipelines and generate more and better content than you could in a single notebook alone. “[Minimal LSTM + NB-SVM baseline ensemble](https://www.kaggle.com/jhoward/minimal-lstm-nb-svm-baseline-ensemble/notebook)”, written by Jeremy Howard, is one example of a great Notebook using this feature.
Click on the “Data” tab to view the data sources he uses.",noteboook How to collaborate on Kaggle Notebooks?,"Notebook collaboration is a powerful feature. It allows multiple users to co-own and edit a Notebook. For example, you can work with Competition teammates to iterate on a model or collaborate with classmates on a data science project. ## Inviting Collaborators From your Notebook editor or viewer, public or private, you may navigate to the 'Share' or 'Sharing' button in the Notebook’s menu to expose, among other settings, the Collaborators options. There, use the search box to find and add other users as Notebook collaborators. If your Notebook is private, you may choose between giving Collaborators either viewing privileges (“Can view”) or editing privileges (“Can edit”). If your Notebook is public, Collaborators can only be added with editing privileges (“Can edit”), as anyone can view it already. When you add a collaborator, they will receive a notification via email. “[Creating, Reading & Writing Data](https://www.kaggle.com/residentmario/creating-reading-writing-data)”, a Notebook from the Advanced Pandas Kaggle Learn track, is one example of a great collaborative Notebook. ## Collaborating on Datasets Using Notebooks is a powerful way to work with your collaborators on Datasets, too. Datasets created on Kaggle also have privacy settings, and these settings are distinct from the sharing settings on your Notebook, meaning each can be shared with a different group of users. That is, your Notebook collaborators won’t automatically have the same access to any private Datasets as you unless they are explicitly invited to collaborate on the Dataset. Anyone has access to Datasets shared publicly. To learn more about how to use Datasets collaboratively, read more [here](https://www.kaggle.com/docs/datasets#collaborating-on-datasets).",noteboook What are the features of the environment of Kaggle Notebooks?,"Notebooks are more than just a code editor. They’re a versioned computational environment designed to make it easy to reproduce data science work. In the Notebooks IDE, you have access to an interactive session running in a Docker container with pre-installed packages, the ability to mount versioned data sources, customizable compute resources like GPUs, and more. ## Notebook Versions and Containers When you create a Notebook version using 'Save & Run All', you execute the Notebook from top to bottom in a separate session from your interactive session. Once it finishes, you will have generated a new Notebook version. A Notebook version is a snapshot of your work including your compiled code, log files, output files, data sources, and more. The latest Notebook version of your Notebook is what is shown to users in the Notebook viewer. Every Notebook version you create is associated with a specific Docker image version. Docker is a containerization technology which provides an isolated environment in which to do your work. Docker specifies the contents of this environment, including installed Python and R packages, using what is known as an image. By default for new notebooks, this will be the latest version of the default Python or R images that we maintain at Kaggle. The contents of this image are publicly available on GitHub. You may view them at [docker-rstats](https://github.com/Kaggle/docker-rstats) for the R container, or [docker-python](https://github.com/Kaggle/docker-python) for the Python container.
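As a quick illustration, a small cell like the sketch below can confirm what the image in your current session provides (the packages shown are just examples of what the default Python image ships with; exact versions depend on the image your Notebook is pinned to):

```python
# Minimal sketch: inspect what the current Kaggle Python image provides.
import sys
import numpy as np
import pandas as pd

print(sys.version)       # Python interpreter bundled in the image
print(np.__version__)    # pre-installed NumPy version
print(pd.__version__)    # pre-installed pandas version
```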
## Dockerfiles and Notebook Versions Even if you are using one of the default Kaggle containers, the number, names, and versions of the packages that you’re using are still a moving target as our team continually updates them to ensure the latest and greatest packages are available. We update the images about every two weeks, mainly to upgrade to the latest versions of the packages we provide but also occasionally to add or remove certain packages. You can subscribe to notifications when we release a new Docker image [here](https://www.kaggle.com/product-feedback/161327). It is also possible to pin a specific Docker image for use in a Notebook if there are multiple custom images available. This can be done by accessing the “Settings” tab in the Notebook editor. Next to ""Environment"", there is an option to select ""Preferences"". This opens a modal where you can choose between pinning to a specific image (the image version from when your notebook was created) or always using the latest image. You can read more about these options [here](https://admin.kaggle.com/product-feedback/150261). In order to ensure that your Notebooks remain reproducible, we publicly expose the Dockerfile defining the environment the Notebook version was created in. You may download the contents of that Dockerfile by visiting the “Execution Info” section on your Notebook and navigating to the “Container image” field. ## Modifying the Default Environment You can request a modification to the default environment by submitting a pull request or an issue to the R [container](https://github.com/Kaggle/docker-rstats) or Python [container](https://github.com/Kaggle/docker-python) on GitHub. Be sure to explain why you think a package should be added to the default environment. We welcome pull requests and engagement with our public images if users believe there are new packages that will be helpful and used by a significant majority of our users. More rarely, if you notice that something in our default environments broke, you may notify us of it using the same mechanism. Note that, even if approved, it can take several days for requested packages to be added to the live container image on the website. ## Modifying a Notebook-specific Environment It is also possible to modify the Docker container associated with the current Notebook. ### Using a standard package installer In the Notebook Editor, make sure ""Internet"" is enabled in the Settings pane (it will be by default if it's a new notebook). For Python, you can run arbitrary shell commands by prefixing them with `!` in a code cell. For instance, to install a new package using pip, run `!pip install my-new-package`. You can also upgrade or downgrade an existing package by running `!pip install my-existing-package==X.Y.Z`. To install packages from GitHub in R, load the devtools package by running `library(devtools)`. Then, you can run commands such as `install_github(""some_user/some_package"")` to install a new package from GitHub.",noteboook How to add GPU to Kaggle Notebook?,"You can add a single NVIDIA Tesla P100 or two NVIDIA T4 GPUs to your Notebook for free. GPU environments have lower CPU and main memory, but are a great way to achieve significant speed-ups for certain types of work like training neural networks on image data.
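Once the accelerator has been switched on (see the steps below), a quick sanity check like this sketch, using the pre-installed PyTorch, confirms that the session actually sees the GPU(s):

```python
# Sketch of a quick GPU sanity check using the pre-installed PyTorch.
import torch

print(torch.cuda.is_available())   # True inside a GPU session
print(torch.cuda.device_count())   # 1 for the P100 option, 2 for T4 x2
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```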
One of the major benefits to using Notebooks as opposed to a local machine or your own VM is that the Notebook environment is already pre-configured with GPU-ready software and packages, which can be time-consuming and frustrating to set up. Free GPU availability is limited: in busy times, you might be placed in a queue. To add a GPU, navigate to the “Settings” pane from the Notebook editor and click the “Accelerator” > GPU option. Your session will restart, which may take anywhere from a few moments to several minutes if you don’t need to wait in a queue to access a GPU-enabled machine. To learn more about getting the most out of using a GPU in Notebooks, check out this tutorial Notebook by Dan Becker [here](https://www.kaggle.com/dansbecker/running-kaggle-kernels-with-a-gpu).",noteboook How can I add TPU to a notebook on Kaggle?,"You can add a TPU v3-8 to your Notebook for free. TPUs are hardware accelerators specialized in deep learning tasks. They are supported in TensorFlow 2.1 both through the Keras high-level API and, at a lower level, in models using a custom training loop. Free TPU availability is limited: in busy times, you might be placed in a queue. To learn more about getting the most out of using a TPU in Notebooks, check out this in-depth guide [here](https://www.kaggle.com/docs/tpu). To add a TPU, navigate to the “Settings” pane from the Notebook editor and click the “Accelerator” > TPU v3-8 option. Your session will restart, which may take anywhere from a few moments to several minutes if you don’t need to wait in a queue to access a TPU-enabled machine.",noteboook How to use Kaggle Notebooks with Google Cloud Services (GCS)?,"Kaggle currently has integrations with the Google Cloud Storage, BigQuery, and AutoML products. To enable these integrations, click on the “Add-ons” menu in the notebook editor and select “Google Cloud Services”. Once on the “Google Cloud Services” page, you will need to attach your account to your notebook and select which of the integrations you want to enable. After enabling these integrations, you will be provided with a code snippet that can be copied and pasted into your notebook. > *Some of these services incur charges to attached GCP accounts. Please review pricing for each of the following products before you begin to use them in your notebook.* Each line of this code snippet corresponds to a different Google Cloud Services Integration where `PROJECT_ID` should be an existing Google Cloud Project. Per AutoML docs (linked below), AutoML currently requires that the location (`COMPUTE_REGION`) must be `us-central1` for your GCS Bucket. For more information on how to use these services, please refer to [Google Cloud Documentation](https://cloud.google.com/docs/) or any of the specific product documentation. ## BigQuery [**BQ Documentation**](https://cloud.google.com/bigquery/docs/), [**BQML Documentation**](https://cloud.google.com/bigquery-ml/docs/bigqueryml-intro) Google BigQuery is a fully managed, petabyte scale, low-cost analytics data warehouse. There is no management required for users—instead, users can focus solely on analyzing data through queries and BigQuery ML to find meaningful insights in a pay-as-you-go billing model.
Google BigQuery can be accessed using Kaggle’s free-tier account to query [public data](https://console.cloud.google.com/marketplace/browse?filter=solution-type:dataset&_ga=2.188761902.446093747.1583860775-118720642.1583860775) but requires a [billing-enabled](https://cloud.google.com/billing/docs/how-to/modify-project) GCP account to query any data that isn’t publicly released by BigQuery. You should carefully review the prices of BigQuery before trying the integration in Kaggle Notebooks, as it can be easy to incur charges.

```python
# Set your own project id here
PROJECT_ID = 'your-google-cloud-project'

from google.cloud import bigquery
bigquery_client = bigquery.Client(project=PROJECT_ID)
```

For a more in-depth walkthrough of using the integration, please refer to the following notebooks: - [BigQuery in Kaggle Notebooks](https://www.kaggle.com/jessicali9530/tutorial-how-to-use-bigquery-in-kaggle-kernels) - [BigQuery Machine Learning Tutorial](https://www.kaggle.com/rtatman/bigquery-machine-learning-tutorial) ## Google Cloud Storage (GCS) [**GCS Documentation**](https://cloud.google.com/storage/docs/) Google Cloud Storage allows for storage and retrieval of data at any time across the globe. Users are able to use the storage space for any type of data and only pay for used storage space (per GB per month). Google Cloud Storage is a paid service and requires a [billing-enabled](https://cloud.google.com/billing/docs/how-to/modify-project) GCP account. You should carefully review the prices of GCS before trying the integration in Kaggle Notebooks, as it can be easy to incur charges.

```python
# Set your own project id here
PROJECT_ID = 'your-google-cloud-project'

from google.cloud import storage
storage_client = storage.Client(project=PROJECT_ID)
```

For a more in-depth walkthrough of using the integration, please refer to the following notebooks: - [Moving Data to/from GCS](https://www.kaggle.com/paultimothymooney/how-to-move-data-from-kaggle-to-gcs-and-back) ## AutoML [**AutoML Documentation**](https://cloud.google.com/automl/docs/) Google AutoML is a suite of products that enables users to train custom machine learning models for tasks on structured data, vision, and language. It is currently in Beta, so you may encounter usability frictions or known issues. We welcome all feedback from the community. User feedback will help us improve documentation and be shared directly with the AutoML team to help improve the product. Google AutoML is a paid service and requires a [billing-enabled](https://cloud.google.com/billing/docs/how-to/modify-project) GCP account. You should carefully review the prices of AutoML before trying the integration in Kaggle Notebooks, as it can be easy to incur charges.
You can see the pricing for each of the offerings in beta [here](https://cloud.google.com/products/#product-launch-stages):

```python
# Set your own project id and compute region here
PROJECT_ID = 'your-google-cloud-project'
COMPUTE_REGION = 'us-central1'  # must be `us-central1` to use AutoML (see docs)

from google.cloud import automl_v1beta1 as automl
automl_client = automl.AutoMlClient()
project_location = automl_client.location_path(PROJECT_ID, COMPUTE_REGION)
```

For a more in-depth walkthrough of using the integration, please refer to the following notebooks: - [AutoML Tables Tutorial](https://www.kaggle.com/devvret/automl-tables-tutorial-notebook) ## Google Cloud AI Notebooks If you run into compute constraints while using notebooks on Kaggle, you can consider upgrading to Google Cloud AI Notebooks. These notebooks run under your project in Google Cloud and can be configured to use your choice of virtual machine, accelerators, and run without limits. To export your notebook to Google Cloud, you can go to the **File** menu and select ""Upgrade to Google Cloud AI Notebooks"" from within the Notebooks Editor. You can also upgrade a notebook from the Viewer by clicking on the three-dot menu on the top right. For a more detailed description of how to export your Kaggle Notebooks to Google Cloud AI Notebooks, check out the announcement post [here](https://www.kaggle.com/product-feedback/159602).",noteboook What are the technical specifications of Kaggle Notebooks?,"Kaggle Notebooks run in a remote computational environment. We provide the hardware—you need only worry about the code. At the time of writing, each Notebook editing session is provided with the following resources: - 12 hours execution time for CPU and GPU notebook sessions and 9 hours for TPU notebook sessions - 20 Gigabytes of auto-saved disk space (/kaggle/working) - Additional scratchpad disk space (outside /kaggle/working) that will not be saved outside of the current session ## CPU Specifications - 4 CPU cores - 30 Gigabytes of RAM ## P100 GPU Specifications - 1 Nvidia Tesla P100 GPU - 4 CPU cores - 29 Gigabytes of RAM ## T4 x2 GPU Specifications - 2 Nvidia Tesla T4 GPUs - 4 CPU cores - 29 Gigabytes of RAM ## TPU 1VM Specifications - 96 CPU cores - 330 Gigabytes of RAM **NOTE:** CPU Platforms (e.g., Intel Skylake, Broadwell, AMD) may be variable during regular notebook runs; however, submission runs (for code competitions or when submissions are rerun in bulk) are always run on Intel Skylake CPUs. ### Interactive Sessions and Saving Versions While editing a Notebook, you are provided with 20 minutes of idle time for your interactive session. If the code is not modified or executed in that time, the current interactive session will end. If this happens, you will need to click the Edit button again to continue editing. If you want to run a computation that takes longer, you can save a Version of your Notebook that executes from top to bottom by selecting the ""Save & Run All"" option in the ""Save Version"" menu (see below). Once you are satisfied with the contents of the Notebook, you can click ""Save Version"" to save your changes. From there, you will have two options for creating a new version: - **Quick Save**: skips the top-to-bottom notebook execution and just takes a snapshot of your notebook exactly as it’s displayed in the editor. This is a great option for taking a bunch of versions while you’re still actively experimenting. Quick Save is a brand new way of saving work on Kaggle.
- **Save & Run All**: creates a new session with a completely clean state and runs your notebook from top to bottom. This is perfect for major milestones or when you want to share your work, as it gives you (and anyone else who reads your notebook) the confidence that your notebook can be run reproducibly. In order to save successfully, the entire Notebook must execute within 12 hours (9 hours for TPU notebooks). Save & Run All is identical to the “Commit” behavior you may have used previously on Kaggle.",noteboook What are organization profiles on Kaggle?,"Anyone can create an organization profile on Kaggle. Organization profiles allow anyone in the community to find your organization's datasets, models, and competitions in one place.",organization How do organization profiles work?,"## What are organizations for? Organization profiles are a ""landing page"" for your organization's published competitions, models, and datasets. For example, they give you an easy way to share (and other users to find) all of the datasets and models that your team has published with a single link. ## What are organizations NOT for? Currently, organizations are not meant to be used as a tool for collaboration with a group of people. While all members of an organization can create competitions, datasets, and models as an organization, this does not give other members of the organization the ability to manage that content (edit, delete, update, or view private resources). Read more about organization permissions below. ## Who should create and use organization profiles? There are a number of groups for whom organization profiles can be helpful! For professors, an organization profile can make it easier to see and manage the community competitions that you host for your classes. For research labs, whether part of a university or industry corporation, organization profiles provide a way to organize the models and datasets your team has published in one place. For large companies, an organization profile will display all of the competitions you've hosted.",organization "How to create a new organization profile on Kaggle? ","## Creation Anyone can create an organization profile. To create one, click on the ""+Create"" button in the upper lefthand corner on any page on Kaggle. This will open up the creation flow. On this page you'll fill out the following information: - **Name**: The name of your organization - **Tagline**: A short description of your organization - **URL**: You should edit this to something that's short. All links to this organization page will start with this URL, e.g., any datasets or models it owns. - **Website**: A URL to your organization website - **Image**: A 400 x 400px image of your organization logo - **Moderation Details**: Information you share here won't appear on your organization profile page, but will be used by our team to review your organization for approval. You'll be able to change your organization Name, Tagline, Website, and Image among other things once you've clicked ""Create organization"". You will also be able to add a bio and invite members to your organization and more. Once you click ""Create organization"", your organization will be reviewed by Kaggle's moderation team for approval before it's made public. Continue to the next section ""Review"" to learn more about the next steps. ## Review While your organization is being reviewed by Kaggle's moderation team, it's in a ""pending"" state.
While your organization is in a pending state, you are able to invite members but you won't be able to start creating competitions, datasets, or models under your organization profile until it's approved. While your organization is in a pending state, the organization profile will not be publicly visible to non-members. At this point, Kaggle's moderation team will review your organization profile for approval. You will receive a notification when your organization profile's status changes. If you have questions about the review process or you would like to appeal a review, please see our [contact page](https://www.kaggle.com/contact#/other/issue). ## Approval Once your organization has been approved, you'll receive an email and/or site notification. You and other members of the organization can now create organization-owned datasets, models, or competitions including making them public. Anyone can also see your organization's profile page.",organization What permissions do members of an organization have?,"## Abilities of organization members Organization members can create datasets, models, and competitions under approved organization profiles. Again, organizations are not currently meant to be used as a tool for collaboration with a group of people. While all members of an organization can create competitions, datasets, and models as an organization, this does not give other members of the organization the ability to manage that content (edit, delete, update, or view private resources). If you want to share private datasets or models owned by an organization profile, you will need to use Collaboration features. Similarly, organization members are NOT able to see any unlaunched competitions unless their user is the creator of the competition. Members will not be able to add new members to an organization unless the organization owner shares the unique invitation link. ## Abilities of organization admins Organization admins have the same abilities and permissions as organization members. In addition, they can add and remove members, transfer ownership of the organization to another member, and edit information about the organization (logo, tagline, description, etc.).",organization How to create contents as an organization?,"## Competitions Anyone can host a community competition by clicking the ""+Create"" button in the upper lefthand corner of any page on Kaggle and selecting ""Competition."" In order to associate your competition with an organization profile that you are an admin or member of, simply choose your organization from the ""Creating As"" dropdown. When a competition is created under an organization profile, the competition will feature your organization's logo, and the competition will show up on the ""Competitions"" tab of your organization's profile page. When a competition is created under an organization profile, there are NO changes to who can see or manage your competition. That is, other members of the organization cannot see an unlaunched competition, and they cannot manage the settings of your competition when it is launched. ## Datasets and Models Anyone can publish datasets or models by clicking the ""+Create"" button in the upper lefthand corner of any page on Kaggle and selecting ""Dataset"" or ""Model"". In order to associate your dataset or model with an organization profile that you are an admin or member of, simply choose your organization from the ""Creating As"" dropdown. 
When a dataset or model is created under an organization profile, the dataset or model will feature your organization's logo, and it will show up on the ""Datasets"" or ""Models"" tab, respectively, of your organization's profile page. When a dataset or model is created under an organization profile, other members will not be able to see it while it's private. There are NO changes to who can see or manage your datasets or models created under an organization profile. That is, other members of the organization cannot edit, delete, or update the datasets or models unless they are separately added as edit collaborators on the ""Settings"" tab of the dataset or model.",organization How can one get started with installation and authentication for Kaggle's public API?,"The easiest way to interact with Kaggle’s public API is via our command-line tool (CLI) implemented in Python. This section covers installation of the kaggle package and authentication. ## Installation Ensure you have Python and the package manager pip installed. Run the following command to access the Kaggle API using the command line: `pip install kaggle` (You may need to do `pip install --user kaggle` on Mac/Linux. This is recommended if problems come up during the installation process.) Follow the authentication steps below and you’ll be able to use the `kaggle` CLI tool. If you run into a `kaggle: command not found` error, ensure that your Python binaries are on your PATH. You can see where kaggle is installed by doing `pip uninstall kaggle` and seeing where the binary is. For a local user install on Linux, the default location is `~/.local/bin`. On Windows, the default location is `$PYTHON_HOME/Scripts`. ## Authentication In order to use Kaggle’s public API [https://github.com/Kaggle/kaggle-api#api-credentials](https://github.com/Kaggle/kaggle-api#api-credentials), you must first authenticate using an API token. Go to the 'Account' tab of your user profile [https://www.kaggle.com/settings/account](https://www.kaggle.com/settings/account) and select 'Create New Token'. This will trigger the download of `kaggle.json`, a file containing your API credentials. If you are using the Kaggle CLI tool, the tool will look for this token at `~/.kaggle/kaggle.json` on Linux, OSX, and other UNIX-based operating systems, and at `C:\Users\<Windows-username>\.kaggle\kaggle.json` on Windows. If the token is not there, an error will be raised. Hence, once you’ve downloaded the token, you should move it from your Downloads folder to this folder. If you are using the Kaggle API directly, where you keep the token doesn’t matter, so long as you are able to provide your credentials at runtime.",api How can one interact with competitions using the Kaggle API or CLI?,"The Kaggle API and CLI tool provide easy ways to interact with Competitions on Kaggle. The commands available can make participating in competitions a seamless part of your model building workflow. If you haven’t installed the package needed to use the command line tool or generated an API token, check out the getting started steps first. Just like participating in a Competition normally through the user interface, you must read and accept the rules in order to download data or make submissions. You cannot accept Competition rules via the API. You must do this by visiting the Kaggle website and accepting the rules there.
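Assuming the rules have already been accepted on the website, an end-to-end session built from the commands listed below might look like the following sketch (the competition name, file name, and message are placeholders):

```bash
# Hypothetical workflow; replace titanic and submission.csv with your own competition and file.
kaggle competitions download -c titanic        # fetch the competition files (a zip archive)
unzip -o titanic.zip -d titanic                # extract the data files
# ...train a model locally and write predictions to submission.csv...
kaggle competitions submit -c titanic -f submission.csv -m ""first baseline""
kaggle competitions submissions -c titanic     # review how past submissions scored
```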
Some of the commands for interacting with Competitions via CLI include: - `kaggle competitions list`: list the currently active competitions - `kaggle competitions download -c [COMPETITION]`: download files associated with a competition - `kaggle competitions submit -c [COMPETITION] -f [FILE] -m [MESSAGE]`: make a competition submission View all available commands on the official documentation on GitHub [here](https://github.com/Kaggle/kaggle-api#competitions) and keep up-to-date with the latest features and bug fixes in the [changelog](https://github.com/Kaggle/kaggle-api/blob/master/CHANGELOG.md). To explore additional CLI arguments, remember that you can always append `-h` after any call to see the help menu for that command. # Submitting to a Competition Assuming that you have already accepted the terms of a Competition (this can only be done through the website, and not through the CLI), you may use the Kaggle CLI to submit predictions to the Competition and have them scored. To do so, run the command `kaggle competitions submit -c [COMPETITION NAME] -f [FILE PATH]`. You can list all previous submissions to a Competition you have entered using the command `kaggle competitions submissions -c [COMPETITION NAME]`. To explore some further CLI arguments, remember that you can always append `-h` after any call to see the help menu for that command.",api How can you interact with Datasets using Kaggle's CLI and API?,"The Kaggle API and CLI tool provide easy ways to interact with Datasets on Kaggle. The commands available can make searching for and downloading Kaggle Datasets a seamless part of your data science workflow. If you haven’t installed the Kaggle Python package needed to use the command line tool or generated an API token, check out the getting started steps first. Some of the commands for interacting with Datasets via CLI include: - `kaggle datasets list -s [KEYWORD]`: list datasets matching a search term - `kaggle datasets download -d [DATASET]`: download files associated with a dataset If you are creating or updating a dataset on Kaggle, you can also use the API to make maintenance convenient or even programmatic. View all available commands on the official documentation on GitHub [here](https://github.com/Kaggle/kaggle-api#datasets) and keep up-to-date with the latest features and bug fixes in the [changelog](https://github.com/Kaggle/kaggle-api/blob/master/CHANGELOG.md). To explore additional CLI arguments, remember that you can always append `-h` after any call to see the help menu for that command. Other than the Kaggle API, there is also a Kaggle connector on DataStudio! [Here](https://datastudio.google.com/datasources/create?connectorId=AKfycbz8WVuZI1FRHJM3g_ucqP-L7B9EIIPDsC9RofvZk1Xw-bD6p55SNjs7JudEsOYK1o2t), you can select Kaggle Datasets as a data source to import directly into DataStudio. Work in DataStudio to easily create beautiful and effective dashboards on Kaggle Datasets! # Creating and Maintaining Datasets The Kaggle API can be used to create new Datasets and Dataset versions on Kaggle from the comfort of the command-line. This can make sharing data and projects on Kaggle a simple part of your workflow. You can even use the API plus a tool like crontab to schedule programmatic updates of your Datasets to keep them well maintained (see the sketch below). If you haven’t installed the Kaggle Python package needed to use the command line tool or generated an API token, check out the getting started steps first.
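As a sketch of that crontab-based scheduling, once a dataset folder with its metadata file exists (see the steps below), an entry like the following could publish a refreshed version every night; the path, schedule, and message are placeholders, and the kaggle CLI must be on cron's PATH:

```bash
# Hypothetical crontab entry: push a new dataset version at 02:00 every day.
# /path/to/dataset is a placeholder folder containing the data files and dataset-metadata.json.
0 2 * * * kaggle datasets version -p /path/to/dataset -m ""Automated nightly refresh""
```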
## Create a New Dataset Here are the steps you can follow to create a new dataset on Kaggle: 1. Create a folder containing the files you want to upload 2. Run `kaggle datasets init -p /path/to/dataset` to generate a metadata file (documented [here](https://github.com/Kaggle/kaggle-api/wiki/Dataset-Metadata)) 3. Add your dataset’s metadata to the generated file, `dataset-metadata.json` 4. Run `kaggle datasets create -p /path/to/dataset` to create the dataset Your dataset will be private by default. You can also add a `-u` flag to make it public when you create it, or navigate to “Settings” > “Sharing” from your dataset’s page to make it public or share with collaborators. ## Create a New Dataset Version If you’d like to upload a new version of an existing dataset, follow these steps: 1. Run `kaggle datasets init -p /path/to/dataset` to generate a metadata file (documented [here](https://github.com/Kaggle/kaggle-api/wiki/Dataset-Metadata)) if you don’t already have one 2. Make sure the `id` field in `dataset-metadata.json` (or `datapackage.json`) points to your dataset 3. Run `kaggle datasets version -p /path/to/dataset -m ""Your message here""` These instructions are the basic commands required to get started with creating and updating Datasets on Kaggle. You can find out more details from the official documentation on GitHub: - Initializing metadata [here](https://github.com/Kaggle/kaggle-api#initialize-metadata-file-for-dataset-creation) - Create a Dataset [here](https://github.com/Kaggle/kaggle-api#create-a-new-dataset) - Update a Dataset [here](https://github.com/Kaggle/kaggle-api#create-a-new-dataset-version) ## Working with Dataset Metadata If you want a faster way to complete the required `dataset-metadata.json` file (for example, if you want to add column-level descriptions for many tabular data files), we recommend using Frictionless Data’s Data Package Creator [here](http://create.frictionlessdata.io/). Simply upload the `dataset-metadata.json` file that you’ve initialized for your dataset, fill out metadata in the user interface, and download the result. To explore some further CLI arguments, remember that you can always append `-h` after any call to see the help menu for that command.",api How to use Kaggle's API / CLI for interacting with Notebooks?,"The Kaggle API and CLI tool provide easy ways to interact with Notebooks on Kaggle. The commands available enable both searching for and downloading published Notebooks and their metadata as well as workflows for creating and running Notebooks using computational resources on Kaggle. If you haven’t installed the Kaggle Python package needed to use the command line tool or generated an API token, check out the getting started steps first. Some of the commands for interacting with Notebooks via CLI include: - `kaggle kernels list -s [KEYWORD]`: list Notebooks matching a search term - `kaggle kernels push -k [KERNEL] -p /path/to/folder`: create and run a Notebook on Kaggle - `kaggle kernels pull [KERNEL] -p /path/to/download -m`: download code files and metadata associated with a Notebook If you are creating a new Notebook or running a new version of an existing Notebook on Kaggle, you can also use the API to make this workflow convenient or even programmatic. View all available commands on the official documentation on GitHub [here](https://github.com/Kaggle/kaggle-api#kernels) and keep up-to-date with the latest features and bug fixes in the [changelog](https://github.com/Kaggle/kaggle-api/blob/master/CHANGELOG.md).
To explore additional CLI arguments, remember that you can always append `-h` after any call to see the help menu for that command. # Creating and Running a New Notebook The Kaggle API can be used to create new Notebooks and Notebook versions on Kaggle from the comfort of the command-line. This can make executing and sharing code on Kaggle a simple part of your workflow. If you haven’t installed the Kaggle Python package needed to use the command line tool or generated an API token, check out the getting started steps first. Here are the steps you can follow to create and run a new Notebook on Kaggle: 1. Create a local folder containing the code files you want to upload (e.g., your Python or R notebooks, scripts, or RMarkdown files) 2. Run `kaggle kernels init -p /path/to/folder` to generate a metadata file [here](https://github.com/Kaggle/kaggle-api/wiki/Kernel-Metadata) 3. Add your Notebook's metadata to the generated file, `kernel-metadata.json`; As you add your title and slug, please be aware that Notebook titles and slugs are linked to each other. A Notebook slug is always the title lowercased with dashes (-) replacing spaces and removing special characters. 4. Run `kaggle kernels push -p /path/to/folder` to create and run the Notebook on Kaggle Your Notebook will be private by default unless you set it to public in the metadata file. You can also navigate to ""Options"" > “Sharing” from your published Notebook's page to make it public or share with collaborators. # Creating and Running a New Notebook Version If you’d like to create and run a new version of an existing Notebook, follow these steps: 1. Run `kaggle kernels pull [KERNEL] -p /path/to/download -m` to download your Notebook's most recent code and metadata [files](https://github.com/Kaggle/kaggle-api/wiki/Kernel-Metadata) (if your local copies aren't current) 2. Make sure the `id` field in `kernel-metadata.json` points to your Notebook; you no longer need to include the `title` field which is optional for Notebook versions unless you want to rename your Notebook (make sure to update the `id` field in your next push AFTER the rename is complete) 3. Run `kaggle kernels push -p /path/to/folder` These instructions are the basic commands required to get started with creating, running, and updating Notebooks on Kaggle. You can find out more details from the official documentation on GitHub: - Initializing metadata [here](https://github.com/Kaggle/kaggle-api#initialize-metadata-file-for-a-kernel) - Push a Notebook [here](https://github.com/Kaggle/kaggle-api#push-a-kernel) - Pull a Notebook [here](https://github.com/Kaggle/kaggle-api#pull-a-kernel) - Retrieve a Notebook's output [here](https://github.com/Kaggle/kaggle-api#retrieve-a-kernels-output)",api Can I create a competition on Kaggle?,"Anybody can launch a machine learning competition using Kaggle's Community Competitions platform, including educators, researchers, companies, meetup groups, hackathon hosts, or inquisitive individuals!",competition-setup How do Kaggle competitions work?,"## Overview Every competition has two things: a) a clearly defined problem that participants need to solve using a machine learning model b) a dataset that’s used both for training and evaluating the effectiveness of these models. 
For example, in the [Store Sales – Time Series Forecasting](https://www.kaggle.com/competitions/store-sales-time-series-forecasting) competition, participants must accurately predict how many of each grocery item will sell using a dataset of past product and sales information from a grocery retailer. Once the competition starts, participants can submit their predictions. Kaggle will score them for accuracy, and the team will be placed on a ranked leaderboard. The team at the top of the leaderboard at the deadline wins! ## Datasets, Submissions & Leaderboards Every competition’s dataset is split into two smaller datasets. - One of these smaller datasets will be given to participants to train their models, typically named `train.csv`. - The other dataset will be mostly hidden from participants and used by Kaggle for testing and scoring, named `test.csv` and `solution.csv` (`test.csv` is the same as `solution.csv` except that `test.csv` contains the feature values and `solution.csv` contains the ground truth variable(s) – participants will never, ever see `solution.csv`). When a participant feels ready to make a submission to the competition, they will use `test.csv` to generate a prediction and upload a CSV file. Kaggle will automatically score the submission for accuracy using the hidden `solution.csv` file. Most competitions have a maximum number of submissions that a participant can make each day and a final deadline at which point the leaderboard will be frozen. It’s conceivable that a participant could use the mechanics of a Kaggle competition to overfit a solution - which would be great for winning a competition, but not valuable for a real-world application. To help prevent this, Kaggle has two leaderboards – the public and private leaderboard. The competition host splits the `solution.csv` dataset into two parts, using one part for the public leaderboard and another part for the private leaderboard. Participants generally will not know which samples are public vs. private. The private leaderboard is kept a secret until after the competition deadline and is used as the official leaderboard for determining the final ranking.",competition-setup How to create a competition on Kaggle?,"To create a new competition, click on the “Create new competition” button at the top of the Kaggle Community landing page. Then, enter a descriptive title, subtitle, and URL for your competition. Be as descriptive and to the point as possible. In our example above, the title “Store Sales - Time Series Forecasting” quickly outlines the type of data, the industry of the dataset, and the type of problem to be solved. If you want to create a competition with more privacy, you can limit your competition's visibility and restrict who can join on this page. ## Visibility: Competitions with their visibility set to public are viewable on Kaggle and appear in Kaggle search results. Competitions with visibility set to private are hidden and only accessible via invitation URLs from the host. ## Who Can Join: Competition access can be set to three levels: - **Anyone**: all Kagglers can join your competition. - **Only people with a link**: restricts access to those users provided with a special URL. - **Restricted email list**: only Kagglers with accounts that match the emails or email domains specified will be able to join. Note: if you select restricted email list, notebooks will be turned off.
This provides a way to ensure that any private data that you have in a competition is not accidentally leaked through shared notebooks. You can re-enable notebooks later if you choose. Review and accept our terms of service, then click “Create Competition”. Your competition listing is now in draft mode. You can take your time to prepare the details before making the competition public.",competition-setup How to prepare a dataset for a Kaggle Competition?,"## Overview You will typically need to prepare and split your chosen dataset into four CSV files with different purposes and formatting requirements: - `train.csv` will be given to participants to train their models. It includes the inputs and the ground truth. For example, in the grocery store competition, `train.csv` contains columns of product data and the solution columns – whether or not the product sold. Typically, this is roughly 70% of the original dataset. - `test.csv` is given to participants and includes the features of the test set so they can create a submission file with their predictions. - `solution.csv` is always hidden from participants and used by Kaggle’s platform to score submissions. The rows should correspond with those of `test.csv` and typically comprise roughly 30% of the original dataset. - `sample_submission.csv` is a placeholder CSV file with the correct formatting, which helps participants understand the expected submission format for the competition. It's up to you to determine how exactly you'd like to split your dataset into train and test files, but it's typically best practice to ensure both train and test have the same type of data represented. Also, most people go with a 70/30 or 75/25 train/test split, but it's problem and dataset dependent. **Note:** this guide provides instructions for tabular data. Other problem types like image data are possible using similar steps. ### Implement a unique ID column Before splitting the dataset, make sure that your dataset has an `Id` column with unique values. The `Id` column is how the scoring system knows which rows of a submission correspond to which rows of the solution. Make sure that the `Id` column is the very first column of your solution file. ### Prepare the `train.csv` file Take a large chunk of your dataset, typically 70%, and split it into its own dataset named `train.csv`. Be sure not to remove the ground truth column(s) because participants need that information to train their models. Save and set aside for upload later. For example:

```csv
train.csv
input_feature1,input_feature2,target_feature
100,52.12,1
192,203.2,1
64,-59.1,0
```
",competition-setup How to set up scoring / metric in a Kaggle Competition?,"Navigate to the Host tab > Evaluation Metric page in the right-side navigation to set up scoring. ## Designate your scoring metric Choose the scoring metric you’d like to use for your competition in the drop-down menu, or see below for how to write your own metric in Python. There are many ways to determine “how accurate” a submission may be. In the grocery store competition example, you may want to reward underestimates more than overestimates, or reward predictions exponentially more the closer they get to the ground truth. If you are unfamiliar with the types of common evaluation metrics used in machine learning, we’d encourage you to take a look at the details of common evaluation metrics to find the right fit. Kaggle provides two types of metrics: Python (tagged with an icon) and Legacy (no icon). There are a few key differences.
The source code for Legacy metrics is not publicly available and they typically have limited documentation. The setup process is also slightly different: Legacy metrics require manually mapping every column. However, Legacy metrics do offer speed advantages in some circumstances. When a metric is selected, your competition will be tied to the latest version of that metric. If a newer version is later published, you must manually update your competition to use it. ## Upload the `solution.csv` file Click on the upload icon to upload your `solution.csv` file. If you've chosen a Python metric, check that your solution file's format matches that expected by the metric's documentation, or just continue to testing a submission to see if it matches. If you've chosen a Legacy metric, then after uploading the `solution.csv` file, the column headers will auto-populate the Solution Mapping table below. Mapping allows our metric code to understand which columns to use for calculations. Choose the correct “Expected Column” values. Note, some evaluation metrics let you score multiple columns simultaneously. ## Upload the `sample_submission.csv` file and map the verification Click on the upload icon to upload your `sample_submission.csv` file. If you've chosen a Legacy metric, then after uploading you'll again need to complete the same process of column mapping for the submission format. ## Upload data for participants Click on the Data tab and “Upload first version” button on the bottom of your screen to upload all data that participants can access – `test.csv`, `train.csv` files, and `sample_submission.csv` file. Note: you will have additional data files if creating an image/video/etc. competition. Kaggle will process your data and create a versioned dataset, which will also be made accessible via Kaggle notebooks.",competition-setup How to creating a new scoring metric for Kaggle competition?,"You can implement a new metric in a Python notebook at this [link](http://www.kaggle.com/code/metrics/new) or from the Host > Evaluation Metric tab on a competition. Metric notebooks can be published and shared, but currently only Kaggle staff can add metrics to the public metric listing. If you think your metric is a good candidate for general use, please make the notebook public and post in the [competition hosting forum](https://www.kaggle.com/discussions/competition-hosting). Before your metric executes, Kaggle automatically reads the solution and submission file into Pandas dataframes, aligns the solution and submission rows based on a provided id column, and calls a `score()` function. Your metric code needs to define this `score()` function and it must return a single float. Almost all solution files are split into a `Public` and `Private` set by way of a `Usage` column in the file. The `score()` function is called separately for each of these respective sets. Your `score()` function must satisfy the following constraints: - Accept the arguments `solution: pd.DataFrame, submission: pd.DataFrame, row_id_column_name: str`, in that order. You can add any other keyword arguments that you need after those three. Any additional keyword arguments are configured on a per-competition basis on the Evaluation Metric page. - All arguments and the return value of `score` must have type annotations. - Default argument values are encouraged but not required. - `score()` must return a finite float. - `score()` must have a docstring. 
The docstring will be shown to competition hosts on the evaluation tab after they have selected a metric. We encourage you to include at least the same sections covered in our [example metric's docstring](https://www.kaggle.com/metric/example-metric-code): a general description of the metric, explanations of each of the `score` arguments, references for the metric math, and examples of valid use. - In order to prevent data leaks from the solution file, errors must specify who will see the details. Only errors raised as `ParticipantVisibleError` will be visible to all participants. - Error messages will be truncated to 280 characters. - The scoring runtime is limited to 30 minutes total for the `Public` and `Private` splits combined. - Metric notebooks do not have internet access and cannot use accelerators, so your `score()` function must not rely on these notebook features. Once your code is ready, you will also need to define some metadata in the `Metric` section of the notebook sidebar. You must save this metadata separately from the rest of the notebook. - **Name**: your metric will use the metric notebook's name. Save the metadata to update the name. - **Description**: a short (less than 255 characters) description of the metric. - **Category**: the main use of the metric, such as clustering or regression. - **Leaderboard sort order**: toggle this to indicate if a higher score is better or worse. - **Pass complete submission**: Advanced use only. You almost certainly only want to use this if your submission can have a different number of rows than the solution file. When enabled, your metric will receive the entire submission file for both the public and private scoring rounds. Your metric will need to manage matching the solution and submission rows using the `row_id_column_name`. You will need to use the dedicated `Save` button in the Metric section of the notebook sidebar for this metadata, in addition to the `Save & Validate` button used to save the notebook's source code. When you save your metric, your notebook will first be committed like any other notebook, followed by a series of metric-specific validation checks. This validation step will also re-run any unit test functions and doctests that are discoverable with Pytest. We strongly encourage you to include test cases, but they are not mandatory. If the validation step fails, your notebook code will still save, but no new metric version will be created. We recommend reviewing this [example metric](https://www.kaggle.com/metric/example-metric-code) or [metric template](https://www.kaggle.com/code/metric/metric-template/) before you begin coding.",competition-setup How to test your newly created competition?,"## Sandbox Testing Once you set up the solution and submission files you can test submissions in the submission sandbox. You will need at least one sample submission that successfully generates a score in order to launch your competition. Verify that the scoring is working as intended (e.g. a random submission should have a random score, a perfect submission should have a perfect score, etc.). You may have to experiment to understand what is and is not allowed in submission formats, but the system should provide clear error messages in the event something is wrong with a file. ## Benchmarking a Solution (Optional) To create a benchmark score for your participants to meet or exceed, check the box next to the submission you’d like to use as a benchmark. 
You’ll then see that score listed as a benchmark on the leaderboard.",competition-setup How to finalize your settings and descriptions for a Kaggle competition?,"First navigate to the Host tab and complete your configuration in the Basic Details, Images, and Evaluation Metric pages. Then click through the Overview, Data, and Rules tabs and make sure all text descriptions are polished and ready for participants. You can also go to the Launch Checklist page, which shows your remaining steps. ## Score Decimals to Display The ""Score Decimals to Display"" setting on the Basic Details page controls how many decimal places are shown in the user interface. We always use full-precision scores for calculations and ranking comparisons, but it can be useful to truncate the displayed scores to make them look cleaner or to prevent leaderboard probing. For example, if participants can see full-precision scores, they could make small changes to their submission and examine the score difference to infer the ground truth of the public test set, or reverse engineer the split between public and private leaderboards.",competition-setup How to launch a competition and invite participants on Kaggle?,"Go to ""Host > Launch Checklist"" and confirm that all the boxes are checked green. Once they are, you’re good to go! Buttons allowing you to launch the competition now or schedule launch in the future will appear – choose according to your needs. You’ll know your competition is live when it says “Competition is active.” You can invite participants to your competition by sharing the URL at the bottom of the Launch Checklist or Basic Details. This link respects the access settings you specified when creating the competition. If you selected anyone can join, this link will be the competition URL. If you selected only people with a link, anyone with this URL can participate in the competition, so make sure you share the link with the right audience. If you’d like a select group to participate, send the URL via email. If you’d like broad participation, use social media or encourage participants to invite their friends. If you selected restricted email access, the link will only work if the Kaggler's email address appears on the list of restricted emails you specified.",competition-setup What are the commonly or frequently asked questions (FAQ) about creating a competition on Kaggle?,"### Where can I get a dataset for my competition? We recommend that you source your own, since it’s typically best to use data to which the participants do not have access (to minimize the temptation to cheat). But, if you don’t mind it being fully accessible by participants (e.g. for a purely educational competition), consider browsing Kaggle’s Datasets platform. It hosts thousands of public datasets and has rich search and filter tools to help you find something that fits your needs. Each dataset should include a data use license, which will indicate if you can use it for your competition. ### I’m receiving [an error]. How can I resolve it? Start by reading through this setup guide. If you still can’t resolve the issue, try asking other Community Competition hosts in the Kaggle forums. ### I want to run the same competition again. Do I need to start from scratch? For now, you are not able to clone a past competition. You’ll need to start setup from the beginning. ### Who can see my competition? It depends on the privacy setting that you chose. Kaggle has two privacy settings – Public and Limited.
Public means that your competition will be listed and discoverable on kaggle.com. Limited means that only people with the provided URL can view and join the competition. ### Where can I find the invitation link? If you selected Public, you can share your competition from your browser tab – anyone can see the competition. If your competition is set to Limited privacy, visit your competition > Host > Privacy > URL for Sharing. ### How do I contact support? Unfortunately, we aren’t able to provide hands-on support for setting up or troubleshooting your competition. But, if you are experiencing an issue that you believe is affecting the entire platform, please contact us. We also encourage connecting with other Community Competition hosts on Kaggle’s forum. ### Can I offer a prize for a Community Competition? Unfortunately, a cash prize cannot be offered without additional paperwork with Kaggle. If you’d like to run a competition with a cash prize, please reach out to our Kaggle Competitions Team, who can walk you through the necessary steps. ## During Your Competition ### Can I invalidate or delete a participant’s submissions? Yes, go to your competition and navigate to: ""Host > All Submissions"". There you can hide specific submissions. ### Can I upload a new solution file and rescore the competition? You can upload a new solution file, but you cannot rescore a competition on your own. Please upload a new solution file and contact support; an administrator can rescore your competition. Competitors’ new submissions will be scored against the new solution file. ### I would like to download my participants’ email addresses so I can email them for a new competition. How do I do this? Due to privacy regulations, you cannot currently download the email addresses of participants. ### I want to give participants more time to compete. How do I change my competition deadline? If the competition has already ended, you should set up a new competition, as participants will have seen the private leaderboard. If the competition is still active, you can change the deadline by going to: ""Your competition > Host > Settings > Deadline"".",competition-setup What file formats does Kaggle Datasets support?,"Kaggle supports a variety of dataset publication formats, but we strongly encourage dataset publishers to share their data in an accessible, non-proprietary format if possible. Not only are open, accessible data formats better supported on the platform, they are also easier to work with for more people regardless of their tools. This page describes the file formats that we recommend using when sharing data on Kaggle Datasets. Plus, learn why and how to make less well-supported file types as accessible as possible to the data science community. ## Supported File Types ### CSVs The simplest and best-supported file type available on Kaggle is the “Comma-Separated Values” file, or CSV, for tabular data. CSVs uploaded to Kaggle should have a header row consisting of human-readable field names. A CSV representation of a shopping list with a header row, for example, looks like this:
```
id,type,quantity
0,bananas,12
1,apples,7
```
CSVs are the most common of the file formats available on Kaggle and are the best choice for tabular data. On the Data tab of a dataset, a preview of the file’s contents is visible in the data explorer. This makes it significantly easier to understand the contents of a dataset, as it eliminates the need to open the data in a Notebook or download it locally.
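If you are preparing a tabular file for upload yourself, a small pandas sketch along these lines (the file and column names are just illustrative) writes a CSV with exactly this kind of human-readable header row:
```python
import pandas as pd

# Illustrative shopping-list table matching the example above
shopping = pd.DataFrame({
    'id': [0, 1],
    'type': ['bananas', 'apples'],
    'quantity': [12, 7],
})

# index=False keeps the file to just the header row and the data,
# with no extra unnamed index column
shopping.to_csv('shopping_list.csv', index=False)
```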
CSV files will also have associated column descriptions and column metadata. The column descriptions allow you to assign descriptions to individual columns of the dataset, making it easier for users to understand what each column means. Column metrics, meanwhile, present high-level metrics about individual columns in a graphic format. [The Complete Pokemon Dataset](https://www.kaggle.com/rounakbanik/pokemon) is an example of a great CSV-type Dataset. ### JSON While CSV is the most common file format for “flat” data, JSON is the most common file format for “tree-like” data that potentially has multiple layers, like the branches on a tree: ```json [{""id"": 0, ""type"": ""bananas"", ""quantity"": 12}, {""id"": 1, ""type"": ""apples"", ""quantity"": 7}] ``` For JSON files, the Data tab preview will present an interactive tree with the nodes in the JSON file attached. You can click on individual keys to open and collapse sections of the tree, exploring the structure of the dataset as you go along. JSON files do not support column descriptions or metrics. You can filter the Datasets listing by File Type to show all datasets containing JSON files [here](https://www.kaggle.com/datasets?sortBy=hottest&group=public&page=1&pageSize=20&size=all&filetype=json&license=all). ### SQLite Kaggle supports database files using the lightweight SQLite format. SQLite databases consist of multiple tables, each of which contains data in tabular format. These tables support large datasets better than CSV files do, but are otherwise similar in practice. The Data tab represents each table in a database separately. Like CSV files, SQLite tables will be fully populated by “Column Metadata” and “Column Metrics” sections. [European Soccer Database](https://www.kaggle.com/hugomathien/soccer) is an example of a great SQLite-type Dataset. ### Archives Although not technically a file format per se, Kaggle also has first-class support for files compressed using the ZIP file format as well as other common archive formats like 7z. Compressed files take up less space on disk than uncompressed ones, making them significantly faster to upload to Kaggle and allowing you to upload datasets that would otherwise exceed the Dataset size limitations. Archives are uncompressed on our side so that their contents are accessible in Notebooks without requiring users to unzip them. Archives do not currently populate previews for individual file contents, but you can still browse the contents by file name. As a result, we recommend that you only upload your dataset as an archive if the dataset is large enough, is made up of many smaller files, or is organized into subfolders. For instance, ZIPs and other archive formats are a great choice for making image datasets available on Kaggle. [Chest X-Ray Images (Pneumonia)](https://www.kaggle.com/paultimothymooney/chest-xray-pneumonia) is an example of a dataset made of archived images. ### BigQuery Kaggle also supports special BigQuery Datasets. BigQuery is a “big data” SQL store invented by Google. Many massive public datasets, like all the code in GitHub and the complete history of the Bitcoin blockchain, are available publicly through the Google BigQuery Public Datasets initiative. Some of these are in turn also available as Kaggle Datasets! BigQuery Datasets are special in many ways. Because they are multi-terabyte datasets hosted on Google’s servers they cannot be uploaded or downloaded. 
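Instead, they are queried in place. A rough sketch of what this can look like with the Google BigQuery Python library (assuming a Kaggle Notebook where the BigQuery integration and its credentials are available; the table below belongs to the public USA Names data mentioned further on):
```python
from google.cloud import bigquery

# Assumes credentials are supplied by the Notebook environment
client = bigquery.Client()

QUERY = '''
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_current`
    GROUP BY name
    ORDER BY total DESC
    LIMIT 10
'''

# The query runs on Google's servers; only the small result set is pulled locally
top_names = client.query(QUERY).to_dataframe()
print(top_names)
```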
Within Notebooks, rather than loading files from disk, you interact with the dataset by writing SQL queries using either the Google BigQuery Python library (as in the sketch above) or Kaggle’s bq_helper library. And, due to the large size of the datasets involved, there is a quota of 5 TB of data scanned per user per 30 days. Some resources for understanding how to use BigQuery: - [SQL Scavenger Hunt Handbook](https://www.kaggle.com/rtatman/sql-scavenger-hunt-handbook/) - [Getting Started with Big Query](https://www.kaggle.com/sohier/getting-started-with-big-query) - [Beyond Queries: Exploring the BigQuery API](https://www.kaggle.com/sohier/beyond-queries-exploring-the-bigquery-api) [USA Names Data](https://www.kaggle.com/datagov/usa-names) is an example of a BigQuery-type Dataset. ### Other File Formats The file formats listed in the section above are the ones best supported and most common on the Kaggle platform. This doesn’t mean that other types of files can’t be uploaded; any file you can think of can be uploaded. Other formats are just less well-supported: they may not have previews or any of the other data explorer components available. They will also likely be less familiar to Kaggle users, and hence, less accessible. If you can convert your file into one of the formats above (the simpler the better), we highly recommend doing so. For example, Excel spreadsheets are a proprietary format that should be uploaded as CSV files instead. Your users will thank you! There are nevertheless use cases for alternative data formats. We do encourage uploads in specialty data formats like NPZ, image file formats like PNG, and complex hierarchical data formats like HDF5. But, when doing so, we suggest also uploading a Notebook discussing what and where the files are, how to work with them, and demonstrating how to get started with the dataset. Reproducible code samples can go a long way towards making your data files accessible to the data science world!",dataset How to search for datasets on Kaggle?,"Datasets is not just a simple data repository. Each dataset is a community where you can discuss data, discover public code and techniques, and create your own projects in Notebooks. You can find interesting datasets of all shapes and sizes if you take the time to look around! The latest and greatest from Datasets is surfaced on Kaggle in several different places. ## Newsfeed When you’re logged into your Kaggle account, the [Kaggle homepage](https://kaggle.com/) provides a live newsfeed of what people are doing on the platform. New Datasets uploaded by people you follow and hot Datasets with lots of activity will show up here. By browsing down the page you can check out all the latest updates from your fellow Kagglers. You can tweak your news feed to your liking by following other Kagglers. To follow someone, go to their profile page and click on “Follow User”. Content posted and upvotes made by users you have followed will show up more prominently. The same is true of other users who choose to follow you. Post high-quality content and you will soon find other users following along with what you are doing!
## Datasets Listing A more structured way of accessing datasets is accessible from the “Datasets” tab in the main menu bar. Datasets are grouped by different categories: ""Trending Datasets"", ""Popular Datasets"", ""Recently Viewed Datasets"" and a few other rotating categories. At the bottom of this page, you can click on the ""Explore all public datasets"" button to get a list view of all datasets. The list is sorted by “Hotness” by default. “Hotness” is what it sounds like: a way of measuring the interestingness and recency of datasets on the platform. Datasets which score highly in Hotness, and thus appear highly in this list, are usually either recently released Datasets that have been marked Reviewed and are scoring highly in engagement, or “all-time” greats that have been consistently popular on the platform for a long time. Other methods of sorting are by Most Votes, New, Updated and Usability. Other filtering options, available from the navigation bar, are Sizes (Small, Medium, or Large), File types (CSV, SQLite, JSON, BigQuery), Licenses (Creative Commons, GPL, Other Database, Other), and Tags (described in the next section). You can also use the listing to view your own Datasets (“Your Datasets”), or to look at datasets you have previously bookmarked (""Bookmarks""). Finally, a Datasets-specific search bar is available here. This is often the fastest way to find a specific dataset that you are looking for. ## Tags and Tag Pages Tags are the most advanced of the searching options available in the Datasets listing page. Tags are added by dataset owners to indicate the topic of the Dataset, techniques you can use (e.g., “classification”), or the type of the data itself (e.g., “text data”). You can navigate to tag pages to browse more content sharing a tag either by clicking on a tag on a Dataset, or by clicking on the “Tags” dropdown in the site header. Searching by tags allow you to search for Datasets by topical area. For example, if you are interested in animal shelter data you might try a search with the tag “animals”; if you are interested in police records a search with “crime” would do the trick. Tag pages include a section listing the most popular pages with the given tag, making them a great way of searching for datasets by content.",dataset How can I create a Dataset on Kaggle?,"It’s easy to create a dataset on Kaggle and doing so is a great way to start a data science portfolio, share reproducible research, or work with collaborators on a project for work or school. You have the option to create private datasets to work solo or with invited collaborators or publish a dataset publicly to Kaggle for anyone to view, download, and analyze. ## Navigating the Dataset Interface To publish a private or public dataset, start by navigating to the [Datasets listing](https://www.kaggle.com/datasets). There you will find a New Dataset button. Click on it to open the New Dataset modal. The required “bare minimum” fields for uploading a dataset to Kaggle in descending order are: - The *Title* is the name of the Dataset – e.g. what will appear in the listing when searching or browsing. - The *URL* is the link the Dataset will live at. The slug will first auto-populate and mimic your Title. However, you can hover over the slug to change it right away. - Finally, you may upload data from one of four sources: - *Your local machine* - upload files/folders via drag and drop or by selecting them in your file browser. 
To speed up file/folder uploads, try uploading them as a ZIP archive; the contents will be unzipped on our side to make them accessible in Notebooks. - *Remote Files* - enter list of public URL(s) which identify files to be imported into dataset - *Github Repository* - enter URL to github repository whose files will be imported into dataset - *Notebook Outputs* - use inbuilt search to explore publicly available files produced from Kaggle’s large repository of public Notebooks To make your dataset more useful for your collaborators and the community it is recommended you update the following settings: - The Sharing menu controls the Dataset’s visibility. Datasets may be Private (visible only to you and your collaborators, and to Kaggle for purposes consistent with the Kaggle Privacy Policy) or Public (visible to everyone). The default setting is Private. - The Licence is the license the dataset is released under (relevant for public datasets). If the license you need doesn’t appear in the dropdown, select the “Other (specified in description)” option and be sure to provide information on the license when writing the dataset description (in the next step). Below is a list of common licenses. ### Common Licenses #### Creative Commons - CC0: Public Domain - CC BY-NC-SA 4.0 - CC BY-SA 4.0 - CC BY-SA 3.0 - CC BY 4.0 (Attribution 4.0 International) - CC BY-NC 4.0 (Attribution-NonCommercial 4.0 International) - CC BY 3.0 (Attribution 3.0 Unported) - CC BY 3.0 IGO (Attribution 3.0 IGO) - CC BY-NC-SA 3.0 IGO (Attribution-NonCommercial-ShareAlike 3.0 IGO) - CC BY-ND 4.0 (Attribution-NoDerivatives 4.0 International) - CC BY-NC-ND 4.0 (Attribution-NonCommercial-NoDerivatives 4.0 International) #### GPL - GPL 2 - LGPL 3.0 (GNU Lesser General Public License 3.0) - AGPL 3.0 (GNU Affero General Public License 3.0) - FDL 1.3 (GNU Free Documentation License 1.3) #### Open Data Commons - Database: Open Database, Contents: Database Contents - Database: Open Database, Contents: © Original Authors - PDDL (ODC Public Domain Dedication and Licence) - ODC-BY 1.0 (ODC Attribution License) #### Community Data License - Community Data License Agreement - Permissive - Version 1.0 - Community Data License Agreement - Sharing - Version 1.0 #### Special - World Bank Dataset Terms of Use - Reddit API Terms - U.S. Government Works - EU ODP Legal Notice - Owner allows you to specify the dataset Owner if you belong to any Organizations. You may assign ownership to yourself or to any Organizations you are a member of (see the section “Creating and using organizations” to learn more about this feature). Once you have provided the required information alongside your data source, click on “Create Dataset” and your dataset will start processing. Once the dataset is finished processing, you will be taken to your new dataset’s home page. Note that if your dataset is very large (multiple gigabytes in size), processing may take a while, up to several minutes. Feel free to navigate away from the browser window whilst processing is inflight as it will continue in the background. Your datasets has now been created! However, for truly great Datasets, the work doesn’t stop there. Once you have specified the required fields there are a few other things you should do in order to maximize your dataset’s usefulness to the community or your collaborators: - Upload a cover image. We recommend using [unsplash.com](http://unsplash.com/) for shareable, high resolution images. - Add a subtitle to the dataset. 
This is a short bit of text explaining in slightly more detail what is in it. This subtitle will appear alongside the title in the search listings. - Add tags. Tags help users find datasets on topics they are interested in by making them easier to find. - Add a description. The description should explain what the dataset is about in long-form text. A great description is extremely useful to Kaggle community members looking to get started with your data. - Publish a public Notebook. Use Notebooks to show community members or your collaborators how to get started with the data. This can be something simple like an exploratory data analysis or a more complex project reproducing research using the data. A few examples of well-formatted datasets are “[CS:GO Competitive Matchmaking Data](https://www.kaggle.com/skihikingkevin/csgo-matchmaking-damage)”, “[Yelp Dataset](https://www.kaggle.com/yelp-dataset/yelp-dataset)”, “[1.6 million UK traffic accidents](https://www.kaggle.com/daveianhickey/2000-16-traffic-flow-england-scotland-wales)”, and “[Fashion MNIST](https://www.kaggle.com/zalando-research/fashionmnist)”. ## Creating Datasets from Various Connectors As outlined above, in addition to uploading files from your local machine, you can also create Datasets from various data sources including GitHub, remote URLs (any public file hosted on the web), and Notebook output files. These are each icons that can be found in the Dataset Upload Modal sidebar. ### GitHub and Remote File Datasets Datasets created from a GitHub repository or hosted (remote) files are downloaded directly from the remote server to Kaggle’s cloud storage and, therefore, will consume none of your local network’s bandwidth. This makes the remote files connector a convenient solution for creating datasets from large files. When a dataset is created from a github repository or hosted file, the publisher is able to set up automatic interval updates from the dataset’s Settings tab. Here’s an example [stock market dataset](https://www.kaggle.com/timoboz/stock-data-dow-jones) that updates daily. Don’t want to wait for a refresh? No problem! Click the Update button within the ""..."" dropdown in the dataset menu header to sync your dataset immediately. ### Notebook Output File Datasets Creating a dataset from a Notebook’s output files will let you create reproducible data pipelines. To create a dataset from a Notebook’s output files, click on the icon in the uploader and search for your Notebook. Alternatively, you can click “Create Dataset” from the Output tab on your rendered Notebook. Then, select the files you want to use in your dataset. ### Limitations It's worth noting that for user experience and technical simplicity, a dataset can be created and versioned from exclusively one data source. That is, data sources currently can not be mixed and matched in any given dataset (for example, a dataset created from a GitHub repository can't also include files uploaded from your local machine). If you would like to use various different data sources in a Notebook you can create multiple datasets and add them both to said Notebook. The usual technical specifications for dataset creation apply to connectors too. See the [Technical Specifications](https://www.kaggle.com/docs/datasets#technical-specifications) section for more information. ## Updating Dataset Using JSON Config For advanced users, you may find it easier to update key parameters of your dataset by specifying the details as JSON configuration. 
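For example, a hypothetical JSON Config body might look like the sketch below; the field names mirror the Public API schema discussed next, and every value is a placeholder:
```json
{
  "title": "My Example Dataset",
  "subtitle": "A short placeholder subtitle",
  "isPrivate": false,
  "keywords": ["placeholder", "example"],
  "licenses": [{"name": "CC0-1.0"}],
  "collaborators": [{"username": "some-user", "role": "read"}]
}
```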
To edit this configuration, navigate to your dataset and click Settings, followed by “JSON Config” in the menu of options on the left. You can update any of the settings you would normally edit through the datasets user interface, such as title, collaborators, licenses, keywords and more. For a reference to the schema you can use for updating dataset settings, you can look at our documentation for the relevant actions within the Public API. Please note that there are some subtle differences between the Public API schema and the schema supported in the JSON Config settings UI. They are as follows: - *id* is omitted as it cannot be changed after dataset creation - *resources* is omitted as you cannot change the uploaded files using this UI - *isPrivate* is an added boolean option that allows users to change the privacy of their datasets (note: public datasets cannot be made private) - *collaborators* is an added array of objects with the shape `{ "username": string; "role": "read" | "write" }` that can be used to specify dataset collaborators",dataset How to collaborate on Kaggle Datasets?,"Dataset collaboration is a powerful feature. It allows multiple users to co-own and co-maintain a private or publicly shared dataset. For example, you can invite collaborators to view and edit a private dataset to work together on preparing it before changing its visibility to public. When uploading a Dataset you may choose either yourself or any Organization you are a part of as the Owner of that Dataset. If you select yourself, that Dataset will be created with yourself as the Owner. If you select an Organization, that Organization will be the Owner of the dataset, and every other user in the Organization (including yourself) will be added as a Collaborator with editing privileges (if you are unfamiliar with Organizations, you may also want to read the section “Creating and using organizations”). This means that Organizations are an easy way to manage access to datasets or groups of datasets. ## Inviting Collaborators Alternatively, you may manage Collaborators directly. To do so, go to any dataset you own and navigate to Settings > Sharing. There, use the search box to find and add other users as Dataset collaborators. If your Dataset is private, you may choose between giving Collaborators either viewing privileges (“Can view”) or editing privileges (“Can edit”). If your Dataset is public, Collaborators can only be added with editing privileges (“Can edit”), as anyone can view it already. When you add a collaborator, they will receive a notification via email. [Data Science for Good: Kiva Crowdfunding](https://www.kaggle.com/kiva/data-science-for-good-kiva-crowdfunding) is a great example of a collaborative Dataset. ## Using Notebooks with Dataset Collaborators Using Notebooks, Kaggle’s interactive code editing and execution environment, is a powerful way to work with your collaborators on a Dataset. You might want to work with collaborators to write public Notebooks that help familiarize other users with your dataset. Or you may want to keep all of your code private among your collaborators as you work on privately shared projects together. Notebooks you create are private by default, and their sharing settings are distinct from the sharing settings on your Dataset. That is, your Dataset collaborators won’t automatically see your private Notebooks.
Here’s what that means and how you can productively use sharing settings on Datasets and Notebooks together: - You can make public Notebooks on a private Dataset which will allow anyone to view your Notebook, but not the underlying private data source. - If you want to add view or edit collaborators to a private Notebook (whether the dataset is private or public), you can do so by adding users via Options > Sharing on the Notebook.",dataset What are resources available for getting started with a Data Project on Kaggle?,"There are many resources available online to help you get started working on your open data project. ## Using Datasets - [Getting Started on Kaggle video tutorials](https://www.youtube.com/playlist?list=PLqFaTIg4myu8gbDh6oBl7XRYNBlthpDEW): Just started on Kaggle? Not sure what is where and why? Here are our very own Kaggle team tutorials to orient you quickly on navigating the Kaggle platform and creating your own datasets and Notebooks. - [A Guide to Open Data Publishing](http://blog.kaggle.com/2016/10/21/a-guide-to-open-data-publishing-analytics/): This article includes the key ingredients to an open data project. - [Web scraping data in Python](http://blog.kaggle.com/2017/01/31/scraping-for-craft-beers-a-dataset-creation-tutorial/): A tutorial showing you how to scrape data with BeautifulSoup. It goes over the same code used to create the Craft Beers dataset published on Kaggle. - [Making Kaggle the Home of Open Data](http://blog.kaggle.com/2016/08/17/making-kaggle-the-home-of-open-data/): Ben’s post shares instructions for publishing your open data project on Kaggle and how you can explore others’ datasets. - [Creating an Organization](https://www.kaggle.com/organizations/new): If you’re publishing data from an organization, you can create an organization profile first. Then you just select the organization profile from the dropdown near your avatar when publishing. - [Open Data Spotlights](http://blog.kaggle.com/tag/open-data-spotlight/): This series highlights some of the best open data projects on Kaggle. - Have requests or want to discuss data collection, cleaning, or other aspects of open data projects? Post away in the Datasets Discussion forum on Kaggle. ## Using Notebooks - [Getting Started on Kaggle video tutorials](https://www.youtube.com/playlist?list=PLqFaTIg4myu8gbDh6oBl7XRYNBlthpDEW): Just started on Kaggle? Not sure what is where and why? Here are our very own Kaggle team tutorials to orient you quickly on navigating the Kaggle platform and creating your own datasets and Notebooks. - [Kaggle Learn](https://www.kaggle.com/learn/overview) is a great place to start getting hands on with data science and machine learning techniques using Notebooks. - [Does open data make you happy? An introduction to Kaggle Notebooks](https://medium.com/@meganrisdal/does-open-data-make-you-happy-an-introduction-to-kaggle-kernels-d8cce437d5ff): Learn how to use Notebooks to explore any combination of datasets published on Kaggle. - [Seventeen Ways to Map Data in Notebooks](http://blog.kaggle.com/2016/11/30/seventeen-ways-to-map-data-in-kaggle-kernels/): A collection of mini-tutorials by Kaggle users for Python and R users. ## Analysis - [How to Get Started with Data Science in Containers](http://blog.kaggle.com/2016/02/05/how-to-get-started-with-data-science-in-containers/): One of our data scientists, Jamie Hall, explains how and why Docker containers are at the heart of Notebooks – reproducible analysis. 
- [Approaching (Almost) Any Machine Learning Problem by Kaggle Grandmaster Abhishek Thakur](http://blog.kaggle.com/2016/07/21/approaching-almost-any-machine-learning-problem-abhishek-thakur/): Exactly what it says – a great tutorial. ## Other - [Kaggle Datasets Twitter](https://twitter.com/KaggleDatasets): This account highlights newly featured datasets plus open data news. - [Collecting & Using Open Data](http://mlwave.com/how-to-produce-and-use-datasets-lessons-learned/): A blog post by Kaggler MLWave recommended by Triskelion.",dataset What are the technical specifications of Kaggle Datasets?,"Kaggle Datasets allows you to publish and share datasets privately or publicly. We provide resources for storing and processing datasets, but there are certain technical specifications: * 100GB per dataset limit * 100GB maximum total across your private datasets (if you exceed this, either make your datasets public or delete unused datasets) * A maximum of 50 top-level files (if you have more, use a directory structure and upload an archive) When you upload a dataset we apply certain processing steps to make the dataset more usable: * A complete archive is created so the dataset can be easily downloaded later * Any archives (e.g., ZIP files) that you upload are uncompressed so that the files are easily accessible in Notebooks (directory structure is preserved) * Data types for tabular data files are automatically detected (e.g., geospatial types) * Column-level metrics are calculated for tabular data and are viewable in the data explorer on the dataset's ""Data"" tab When publishing datasets, you might also want to consider the technical specifications of [Notebooks](https://www.kaggle.com/docs/notebooks#technical-specifications) if you intend to use (or encourage other Kaggle users to use) Notebooks to analyze the data.",dataset