--- title: Platform home description: Access the full DataRobot UI documentation, including feature descriptions for the UI and API, data preparation, tutorials, and a glossary. --- # DataRobot UI documentation The **UI docs** tab describes workflows and reference material for the UI version of the DataRobot AI Platform, regardless of deployment type. Resource | Description -------- | ----------- ![](images/nc-tile-release.png) | [**Get Started**](get-started/index): A quick introduction to analyzing data, creating models, and writing code with DataRobot. ![](images/nc-tile-workbench.png) | [**Workbench**](workbench/index): Workbench, an organizational hierarchy that creates a Use Case as a top-level concept and supports experimentation and sharing. ![](images/nc-tile-data.png) | [**Data**](data/index): Data management (import, transform, analyze, store) and the DataRobot Data Prep tool. ![](images/nc-tile-modeling.png) | [**Modeling**](modeling/index): Building, understanding, and analyzing models; time series modeling; business operations tools; a modeling reference. ![](images/nc-tile-preds.png) | [**Predictions**](predictions/index.html): The prediction API and associated reference; Scoring Code guide; batch scoring methods; UI-based prediction methods. ![](images/nc-tile-mlops.png) | [**MLOps**](mlops/index): The pillars of the centralized hub for managing models in production&mdash;deployment, monitoring, management, and governance. ![](images/nc-tile-code.png) | [**Notebooks**](dr-notebooks/index): **Public Preview**: Create interactive, computational environments that host code execution and rich media for various use cases and workflows. ![](images/nc-tile-blueprint.png) | [**No-Code AI Apps**](app-builder/index): Use a no-code interface to configure AI-powered applications and enable core DataRobot services (making predictions, optimizing target features, simulating scenarios).
index
--- title: ELI5 description: Explain it Like I'm 5 provides a list with brief, easily digestible answers. Answers link to more complete documentation. --- # ELI5 {: #eli5 } Explain it like I'm 5 (ELI5) contains complex DataRobot and data science concepts, broken down into brief, digestible answers. Many topics include a link to the full documentation where you can learn more. ??? ELI5 "What is MLOps?" Machine learning operations (MLOps) itself is a derivative of DevOps; the thought being that there is an entire “Ops” (operations) industry that exists for normal software, and that such an industry needed to emerge for ML (machine learning) as well. Technology (including DataRobot AutoML) has made it easy for people to build predictive models, but to get value out of models, you have to deploy, monitor, and maintain them. Very few people know how to do this&mdash;even fewer than the number who know how to build a good model in the first place. This is where DataRobot comes in. DataRobot offers a product that performs the "deploy, monitor, and maintain" component of ML (MLOps) in addition to the modeling (AutoML), which automates core tasks with built-in best practices to achieve better cost, performance, scalability, trust, accuracy, and more. _Who can benefit from MLOps?_ MLOps can help AutoML users who have problems operating models, as well as organizations that do not want AutoML but do want a system to operationalize their existing models. Key pieces of MLOps include the following: * The **Model Management** piece, in which DataRobot provides model monitoring and tracks performance statistics. * The **Custom Models** piece makes it applicable to the 99.9% of existing models that weren’t created in DataRobot. * The **Tracking Agents** piece makes it applicable even to models that are never brought into DataRobot&mdash;this makes it much easier to start monitoring existing models (no need to shift production pipelines). [Learn more about MLOps](mlops/index). ??? ELI5 "What are stacked predictions?" DataRobot produces predictions for training data rows by making "stacked predictions," which just means that for each row of data that is predicted on, DataRobot is careful to use a model that was trained with data that does not include the given row. An analogy: You're a teacher teaching five different math students and want to be sure that your teaching material does a good job of teaching math concepts. So, you take one hundred math problems and divide them up into five sets of question-answer pairs. You give each student a different collection of four sets to use as study material. The remaining fifth set of math problems you use as the exam for that student. When you present your findings to the other teachers, you don't want to present the students' answers on the study material as evidence of learning&mdash;the students already had the answers available and could have just copied them without understanding the concepts. Instead, you show how each student performed on their exam, where they didn't have the answers given to them. In this analogy, the students are the _models_, the question-answer pairs are the _rows of data_, and the different sets of question-answer pairs are the different _cross-validation partitions_. Your presentation to the other teachers is all the charts DataRobot makes to understand model performance (Lift Charts, ROC curve, etc.). The students' answers on their exams are the stacked predictions.
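To see the same idea in code, here is a minimal sketch (a scikit-learn illustration, not DataRobot's implementation) that produces out-of-fold ("stacked") predictions with `cross_val_predict`, so that every row is scored by a model that never saw that row during training:

```python
# Minimal sketch of out-of-fold ("stacked") predictions using scikit-learn.
# This illustrates the concept only; it is not DataRobot's implementation.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# cv=5 splits the rows into five partitions; each row's prediction comes from
# the model trained on the other four partitions, so no model scores a row
# it was trained on.
stacked_preds = cross_val_predict(
    LogisticRegression(max_iter=1000), X, y, cv=5, method="predict_proba"
)[:, 1]

print(stacked_preds[:5])  # out-of-fold probabilities for the first five rows
```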
Learn more about stacked predictions [here](data-partitioning#what-are-stacked-predictions). ??? ELI5 "What's a rating table, why are generalized additive models (GAM) good for insurance, and what's the relation between them?" A **rating table** is a ready-made set of rules you can apply to insurance policy pricing, like, "if driving experience and number of accidents are in this range, set this price." A **GAM model** is interpretable by an actuary because it models things like, "if you have this feature, add $100; if you have this, add another $50." The way GAMs are learned allows you to automatically learn the ranges for the rating tables. Learn more about [rating tables](rating-table). ??? ELI5 "Loss reserve modeling vs. loss cost modeling" You just got paid and have $1000 in the bank, but in 10 days your $800 mortgage payment is due. If you spend your $1000, you won't be able to pay your mortgage, so you put aside $800 as a reserve to pay the future bill. **Example: Insurance** Loss reserving is estimating the ultimate costs of policies that you've already sold (regardless of what price you charged). If you sold 1000 policies this year, at the end of the year let's say you see that there have been 50 claims reported and only $40,000 has been paid. You estimate that when you look back 50 or 100 years from now, you'll have paid out a total of $95k, so you set aside an additional $55k of "loss reserve". Loss reserves are by far the biggest liability on an insurer's balance sheet. A multi-billion dollar insurer will have hundreds of millions if not billions of dollars worth of reserves on their balance sheet. Those reserves are very much dependent on predictions. ??? ELI5 "Algorithm vs. model" The following is an example model for sandwiches: a sandwich is a savory filling (such as pastrami, a portobello mushroom, or a sausage) and optional extras (lettuce, cheese, mayo, etc.) surrounded by a carbohydrate (bread). This model allows you to describe foods simply (you can classify all foods as "sandwich" or "not sandwich"), and allows you to predict whether a new set of ingredients will make a sandwich. An algorithm for making a sandwich would consist of a set of instructions: 1. Slice two pieces of bread from a loaf. 2. Spread chunky peanut butter on one side of one slice of bread. 3. Spread raspberry jam on one side of the other slice. 4. Place one slice of bread on top of the other so that the sides with the peanut butter and jam are facing each other. ??? ELI5 "API vs. SDK" **API:** "This is how you talk to me." **SDK:** "These are the tools to help you talk to me." **API:** "Talk into this tube." **SDK:** "Here's a loudspeaker and a specialized tool that holds the tube in the right place for you." **Example** DataRobot's REST API is an API, but the Python and R packages are part of DataRobot's SDK because they provide an easier way to interact with the API. **API:** Bolts and nuts **SDK:** Screwdrivers and wrenches Learn more about [DataRobot's APIs and SDKs](api/index). ??? ELI5 "What does monotonic mean?" **Examples** === "Comic books" Let's say you collect comic books. You expect that the more money you spend, the more value your collection has (a **monotonically** increasing relationship between value and money spent). However, there could be other factors that affect this relationship&mdash;for example, a comic book tears and your collection is worth less even though you spent more money.
You don't want your model to learn that spending more money decreases value because it's really decreasing from a comic book tearing or some other factor it doesn't consider. So, you force it to learn the **monotonic relationship**. === "Insurance" Let's say you're an insurance company, and you give a discount to people who install a speed monitor in their car. You want to give a bigger discount to people who are safer drivers, based on their speed. However, your model discovers a small population of people who drive incredibly fast (e.g., 150 MPH or more) but are also really safe drivers, so it decides to give a discount to these customers too. Then other customers discover that if they can hit 150 MPH in their cars each month, they get a big insurance discount, and then you go bankrupt. **Monotonicity** is a way for you to say to the model: "as the top speed of the car goes up, insurance prices must always go up too." Learn more about [monotonics](monotonic). ??? ELI5 "What is a ridge regressor?" If you have a group of friends in a room talking about which team is going to win a game, you want to hear multiple opinions and not have one friend dominate the conversation. So if they keep talking and talking, you give them a 'shush' and then keep 'shushing' them louder the more they talk. Similarly, the ridge regressor penalizes any one variable that dominates the model and spreads the signal across more variables. There are two kinds of penalized regression&mdash;one kind of penalty makes the model keep all the features but spend less on the unimportant features and more on the important ones. This is **Ridge**. The other kind of penalty makes the model leave some unimportant variables completely out of the model. This is called **Lasso**. ??? ELI5 "Anomaly detection vs. other machine learning problems DataRobot can solve" **Anomaly detection is an unsupervised learning problem**. This means that it does not use a target and does not have labels, as opposed to supervised learning, which is the type of learning many DataRobot problems fall into. In supervised learning there is a "correct" answer and models predict that answer as closely as possible by training on the features. There are a number of anomaly detection techniques, but no matter what way you do it, there is no real "right answer" to whether something is an anomaly or not&mdash;it's just trying to group common rows together and find a heuristic way to tell you "hey wait a minute, this new data doesn't look like the old data, maybe you should check it out." **Supervised** = I know what I’m looking for. **Unsupervised** = Show me something interesting. **Example: Network access and credit card transactions** In some anomaly detection use cases there are millions of transactions that require a manual process of assigning labels. This is impossible for humans to do when you have thousands of transactions per day, so you end up with large amounts of unlabeled data. Anomaly detection is used to try to pick up the abnormal transactions or network access. A ranked list can then be passed on to a human to manually investigate, saving them time. === "Supervised" A parent feeds their toddler; the toddler throws the food on the floor and Mom gets mad. The next day, the same thing happens. The next day, Mom feeds the kid, the kid eats the food, and Mom's happy.
The kid is particularly aware of Mom's reaction, and that ends up driving their learning (or supervising their learning), i.e., they learn the association between their action and Mom's reaction&mdash;that's supervised. === "Unsupervised" Mom feeds the kid; the kid separates their food into two piles: cold and hot. Another day, the kid separates the peas, carrots, and corn. They're finding some structure in the food, but there isn't an outcome (like Mom's reaction) guiding their observations. Learn more about [machine learning problems DataRobot can help solve](unsupervised/index). ??? ELI5 "What are tuning parameters and hyperparameters?" Tuning parameters and hyperparameters are like knobs and dials you can adjust to make a model perform differently. DataRobot automates this process to make a model fit data better. **Examples** === "Playing the guitar" Say you are playing a song on an electric guitar. The chord progression is the model, but you and your friend play it with different effects on the guitar&mdash;your friend might tune their amplifier with some rock distortion and you might increase the bass. Depending on that, the same song will sound different. That's hyperparameter tuning. === "Tuning a car" Some cars, like a Honda Civic, have very little tuning you can do to them. Other cars, like a race car, have a lot of tuning you can do. Depending on the racetrack, you might change the way your car is tuned. ??? ELI5 "What insights can a user get from Hotspots visualization?" Hotspots can give you feature engineering ideas for subsequent DataRobot projects. Since they act as simple IF statements, they are easy to add to see if your models get better results. They can also help you find clusters in data where variables go together, so you can see how they interact. If Hotspots could talk to the user: "The model does some heavy math behind the scenes, but let's try to boil it down to some if-then-else rules you can memorize or implement in a simple piece of code without losing much accuracy. Some rules look promising, some don't, so take a look at them and see if they make sense based on your domain expertise." **Example** If a strongly-colored, large rule were something like "Age > 65 & discharge_type = 'discharged to home'", you might conclude that there is a high diabetes readmission rate for people over 65 who are discharged to home. Then, you might consider new business ideas that treat the affected population to prevent readmission, although this approach is completely non-scientific. Learn more about [Hotspots visualizations](general-modeling-faq#model-insghts). ??? ELI5 "What is target leakage?" Target leakage is like that scene in Mean Girls where the girl can predict when it's going to rain if it's already raining. One of the features used to build your model is actually derived from the target, or closely related. **Example** You and a friend are trying to predict who’s going to win the Super Bowl (the Rams or the Patriots). You both start collecting information about past Super Bowls. Then your friend goes, “Hey wait! I know, we can just grab the newspaper headline from the day after the Super Bowl! Every past Super Bowl, you could just read the next day’s newspaper to know exactly who won.” So you start collecting all newspapers from past Super Bowls, and become really good at predicting previous Super Bowl winners.
Then, you get to the upcoming Super Bowl and try to predict who’s going to win; however, something is wrong: “Where is the newspaper that tells us who wins?” You were using target leakage that helped you predict the past winners with high accuracy, but that method wasn't useful for predicting future behavior. **The newspaper headline was an example of target leakage, because the target information was “leaking” into the past**. **Interesting links:** * [AI Simplified: What is Target Leakage in Data Science?](https://youtu.be/y8qaI5mpJeA){ target=_blank } * [Karen Smith's Weather Report](https://youtu.be/MG_LL9m7cl4){ target=_blank } Learn more about [Target Leakage](data-quality#target-leakage). ??? ELI5 "Scoring data vs. scoring a model" You want to compete in a cooking competition, so you practice different recipes at home. You start with your ingredients (training data), then you try out different recipes on your friend to optimize each of your recipes (training your models). After that, you try out the recipes on some external guests who you trust and are somewhat unbiased (validation), ultimately choosing the recipe that you will try in the competition. This is the model that you will be using for scoring. Now you go to the competition where they give you a bunch of ingredients&mdash;this is your scoring data (new data that you haven't seen). You want to run these through your recipe and produce a dish for the judges&mdash;that is making predictions or scoring using the model. You could have tried many recipes with the same ingredients&mdash;so the same scoring data can be used to generate predictions from different models. ??? ELI5 "Bias vs. variance" You're going to a wine tasting party and are thinking about inviting one of two friends: * **Friend 1:** Enjoys all kinds of wine, but may not actually show up (low bias/high variance). * **Friend 2:** Only enjoys bad gas station wine, but you can always count on them to show up to things (high bias/low variance). Best case scenario: You find someone who isn’t picky about wine and is reliable (low bias/low variance). However, this is hard to come by, so you may just try to incentivize Friend 1 to show up or convince Friend 2 to try other wines (hyperparameter tuning). You avoid friends who only drink gas station wine and are unreliable about showing up to things (high bias/high variance). ??? ELI5 "Structured vs. unstructured datasets" Structured data is neat and organized&mdash;you can upload it right into DataRobot. Structured data is CSV files or nicely organized Excel files with one table. Unstructured data is messy and unorganized&mdash;you have to add some structure to it before you can upload it to DataRobot. Unstructured data is a bunch of tables in various PDF files. **Example** Let’s imagine that you have a task to predict categories for new Wikipedia pages (Art, History, Politics, Science, Cats, Dogs, Famous Person, etc.). All the needed information is right in front of you&mdash;just go to wikipedia.com and you can find all categories for each page. But the way the information is structured right now is not suitable for predicting categories for new pages. It would be hard to extract some knowledge from this data.
On the other hand, if you query Wikipedia’s databases and form a file with columns corresponding to different features of articles (title, content, age, previous views, number of edits, number of editors) and rows corresponding to different articles, this would be a structured dataset&mdash;one that is more suitable for extracting some hidden value using machine learning methods. Note that text is not ALWAYS unstructured. For example, say you have 1000 short stories, some of which you liked, and some of which you didn't. As 1000 separate files, this is an unstructured problem. But if you put them all together in one CSV, the problem becomes structured, and now DataRobot can solve it. ??? ELI5 "Log scale vs. linear scale" In _log scale_ the values keep multiplying by a fixed factor (1, 10, 100, 1000, 10000). In _linear scale_ the values keep adding up by a fixed amount (1, 2, 3, 4, 5). **Examples** === "Richter scale" Going up one point on the Richter scale is a magnitude increase of about 30x. So 7 is 30 times higher than 6, and 8 is 30 times higher than 7, so 8 is 900 (30x30) times higher than 6. === "Music theory" The octave numbers increase linearly, but the sound frequencies increase exponentially. So note A 3rd octave = 220 Hz, note A 4th octave = 440 Hz, note A 5th octave = 880 Hz, note A 6th octave = 1760 Hz. **Interesting facts:** - In economics and finance, log scale is used because it's much easier to translate to a % change. - It's possible that a reason log scale exists is because many events in nature are governed by exponential laws rather than linear, but linear is easier to understand and visualize. - If you have large linear numbers and they make your graph look bad, then you can log the numbers to shrink them and make your graph look prettier. ??? ELI5 "What is reinforcement learning?" **Examples** === "Restaurant reviews" Let's say you want to find the best restaurant in town. To do this, you have to go to one, try the food, and decide if you like it or not. Now, every time you go to a new restaurant, you will need to figure out if it is better than all the other restaurants you've already been to, but you aren't sure about your judgement, because maybe a dish you had was good/bad compared to others offered in that restaurant. Reinforcement learning is the targeted approach you can take to still be able to find the best restaurant for you, by choosing the right number of restaurants to visit or choosing to revisit one to try a different dish. It narrows down your uncertainty about a particular restaurant, trading off the potential quality of unvisited restaurants. === "Dog training" Reinforcement learning is like training a dog&mdash;for every action your model takes, you either say "good dog" or "bad dog". Over time, by trial and error, the model learns the behavior so as to maximize the reward. Your job is to provide the environment to respond to the agent's (dog's) actions with numeric rewards. Reinforcement learning algorithms operate in this environment and learn a policy. **Interesting Facts:** - Reinforcement learning works better if you can generate an unlimited amount of training data, like with Doom/Atari, AlphaGo games, and so on. You need to emulate the training environment so the model can learn its mechanics by trying different approaches a _gazillion_ times. - A good reinforcement learning framework is OpenAI Gym. In it you set some goal for your model, put it in some environment, and keep it training until it learns something.
- Tasks that humans normally consider "easy" are actually some of the hardest problems to solve. It's part of why robotics is currently behind machine learning. It is significantly harder to learn how to stand up or walk or move smoothly than it is to perform a supervised multiclass prediction with 25 million rows and 200 features. ??? ELI5 "What is target encoding?" Machine learning models don't understand categorical data, so you need to turn the categories into numbers to be able to do math with them. Some example methods for turning categories into numbers include: - _One-hot encoding_ is a way to turn categories into numbers by encoding categories as very wide matrices of 0s and 1s. This works well for linear models. - _Target encoding_ is a different way to turn a categorical feature into a number by replacing each category with the mean of the target for that category. This method gives a very narrow matrix, as the result is only one column (vs. one column per category with a one-hot encoding). Although more complicated, you can also try to avoid overfitting while using target encoding&mdash;DataRobot's version of this is called _credibility encoding_. ??? ELI5 "What is credibility weighting?" Credibility weighting is a way of accounting for the certainty of outcomes for data with categorical labels (e.g., what model of vehicle you drive). **Examples:** === "Vehicle models" For popular vehicle category types (e.g., the Ford F-Series was the top-selling vehicle in the US in 2018), there will be many people in your data, and you will be more certain that the historical outcome is reliable. For unpopular category types (e.g., the Smart Fortwo was ranked one of the rarest vehicle models in the US in 2017), you may only have one or two people in your data, and you will not be certain that the historical outcome is a reliable guide to the future. Therefore, you will use broader population statistics to guide your decisions. === "Flipping a coin" You know that when you toss a coin, you can't predict with any certainty whether it is going to be heads or tails, but if you toss a coin 1000 times, you are going to be more certain about how many times you see heads (close to 500 times if you're doing it correctly). ??? ELI5 "What is an ROC Curve?" The ROC curve is a measure of how well a model can classify data, and it's also a good off-the-shelf method of comparing two models. You typically have several different models to choose from, so you need a way to compare them. If you can find a model that has a very good ROC curve, meaning the model classifies with close to 100% True Positive and 0% False Positive, then that model is probably your best model. **Example: Alien sighting** Imagine you want to receive an answer to the question "Do aliens exist?" Your best plan to get an answer to this question is to interview a particular stranger by asking "Did you see anything strange last night?" If they say "Yes", you conclude aliens exist. If they say "No", you conclude aliens don't exist. What's nice is that you have a friend in the army who has access to radar technology so they can determine whether an alien did or did not show up. However, you won't see your friend until next week, so for now conducting the interview experiment is your best option. Now, you have to decide which strangers to interview. You inevitably have to balance whether you should conclude aliens exist now, or just wait for your army friend. You get about 100 people together and you will conduct this experiment with each of them tomorrow.
The ROC curve is a way to decide which stranger you should interview, because some people are blind, some drink alcohol, some are shy, etc. It represents a ranking of each person and how good they are, and at the end you pick the "best" person, and if the "best" person is good enough, you go with the experiment. The ROC curve's y-axis is True Positives, and the x-axis is False Positives. You can imagine people that drink a lot of wine are ranked on the top right of the curve. They think anything is an alien, so they have a 100% True Positive ranking. They will identify an alien if one exists, but they also have a 100% False Positive ranking&mdash;if you say everything is an alien, you're flat out wrong when there really aren't aliens. People ranked on the lower left don't believe in aliens, so nothing is an alien because aliens never existed. They have a 0% False Positive ranking and a 0% True Positive ranking. Again, nothing is an alien, so they will never identify if aliens exist. What you want is a person with a 100% True Positive ranking and 0% False Positive ranking&mdash;they correctly identify aliens when they exist, but only when they exist. That's a person who is close to the top left of the ROC chart. So your procedure is: take 100 people, and rank them on this space of True Positives vs. False Positives. Learn more about [ROC Curve](roc-curve-tab/index). ??? ELI5 "What is overfitting?" You tell Goodreads that you like a bunch of Agatha Christie books, and you want to know if you'd like other murder mysteries. It says “no,” because those other books weren't written by Agatha Christie. Overfitting is like a bad student who only remembers book facts but does not draw conclusions from them. Any life situation that wasn't specifically mentioned in the book will leave them helpless. But they'll do well on an exam based purely on book facts (that's why you shouldn't score on training data). ??? ELI5 "Dedicated Prediction Server (DPS) vs. Portable Prediction Server (PPS)" === "The Simple Explanation" * **Dedicated Prediction Server (DPS)**: A service built into the DataRobot platform, allowing you to easily host and access your models. This type of prediction server provides the easiest path to MLOps monitoring since the platform is handling scoring directly. * **Portable Prediction Server (PPS)**: A containerized service running outside of DataRobot, serving models exported from DataRobot. This type of prediction server allows more flexibility in terms of where you host your models, while still allowing monitoring when you configure DataRobot's [MLOps agents](mlops-agent/index). This can be helpful in cases where data segregation or network performance are barriers to more traditional scoring with a DPS. The PPS might be a good option if you're considering using Scoring Code but would benefit from the simplicity of the prediction API, or if you have a requirement to collect Prediction Explanations. === "The Garage Metaphor" * **Dedicated Prediction Server (DPS)**: You have a garage attached to your house, allowing you to open the door to check in on your car whenever you want. * **Portable Prediction Server (PPS)**: You have a garage but it's down the street from your house. You keep it down the street because you want more space to work and for your car collection to be safe from damage when your teenage driver tries to park. However, if you want to regularly check in on your collection, you must install cameras. ??? ELI5 "What is deep learning?"
Imagine your grandma Dot forgot her chicken matzo ball soup recipe. You want to try to replicate it, so you get your family together and make them chicken matzo ball soup. It’s not even close to what grandma Dot used to make, but you give it to everyone. Your cousin says “too much salt,” your mom says, “maybe she used more egg in the batter,” and your uncle says, “the carrots are too soft.” So you make another one, and they give you more feedback, and you keep making chicken matzo ball soup over and over until everyone agrees that it tastes like grandma Dot's. That’s how a neural network trains&mdash;something called backpropagation, where the errors are passed back through the network, and you make small changes to try to get closer to the right answers. ??? ELI5 "What is federated machine learning?" The idea is that once a central model is built, you can retrain the model to use in different edge devices. **Example** === "McDonald's menu items" McDonald's establishes its menu (the central model) and gives the different franchises flexibility to use it. Then, McDonald's locations in India use that recipe and tweak it to include the McPaneer Tikka burger. To do that tweaking, the Indian McDonald's did not need to reach out to the central McDonald's&mdash;they can update those decisions locally. It's advantageous because your models will be updated faster without having to send the data to some central place all the time. The model can use the device's local data (e.g., smartphone usage data on your smartphone) without having to store it in central training data storage, which can also be good for privacy. === "Phone usage" One example that Google gives is the smart keyboard on your phone. There is a shared model that gets updated based on your phone usage. All that computing is happening on your phone without having to store your usage data in a central cloud. ??? ELI5 "What are offsets?" Let's say you are a 5-year-old who understands linear models. With linear regression, you find the betas that minimize the error, but you may already know some of the betas in advance. So you give the model the value of those betas and ask it to go find the values of the other betas that minimize error. When you give a model an effect that’s known ahead of time, you're giving the model an offset. ??? ELI5 "What is F1 score?" Let's say you have a medical test (ML model) that determines if a person has a disease. Like many tests, this test is not perfect and can make mistakes (calling a healthy person unhealthy or vice versa). We might care the most about maximizing the % of truly sick people among those our model calls sick (_precision_), or we might care about maximizing the % of truly sick people in our population that we detect (_recall_). Unfortunately, tuning towards one metric often makes the other metric worse, especially if the target is imbalanced. Imagine you have 1% of sick people on the planet and your model calls everyone on the planet (100%) sick. Now it has a perfect recall score but a horrible precision score. On the opposite side, you might make the model so conservative that it calls only one person in a billion sick but gets it right. That way it has perfect precision but terrible recall. F1 score is a metric that considers precision and recall at the same time so that you can achieve a balance between the two. How do you consider precision and recall at the same time?
Well, you could just take an average of the two (arithmetic mean), but because precision and recall are ratios with different denominators, an arithmetic mean doesn't work that well in this case and a harmonic mean is better. That's exactly what an F1 score is&mdash;a harmonic mean between precision and recall. **Interesting Facts:** Explanations of [Harmonic Mean](https://www.quora.com/How-do-you-explain-the-arithmetic-mean-and-harmonic-mean-to-kids){ target=_blank }. ??? ELI5 "What is SVM?" Let's say you build houses for good borrowers and bad borrowers on different sides of the street so that the road between them is as wide as possible. When a new person moves to this street, you can see which side of the road they're on to determine if they're a good borrower or not. SVM learns how to draw this "road" between positive and negative examples. SVMs are also called “maximum margin” classifiers. You define a road by the center line and the curbs on either side, and then try to find the widest possible road. The curbs on the sides of the road are the “support vectors”. Closely related term: _Kernel Trick_. In the original design, SVM could only learn roads that are straight lines; however, kernels are a math trick that allows them to learn curve-shaped roads. Kernels project the points into a higher dimensional space where they are still separated by a linear "road," but in the original space, they are no longer a straight line. The ingenious part about kernels, compared to manually creating polynomial features in logistic regression, is that you don't have to compute those higher-dimensional coordinates beforehand, because the kernel is always applied to a pair of points and only needs to return a dot product, not the coordinates. This makes it very computationally efficient. **Reference:** Help me understand Support Vector Machines on [Stack Exchange](https://stats.stackexchange.com/a/3954){ target=_blank }. ??? ELI5 "What is an end-to-end ML platform?" Think of it as baking a loaf of bread. If you take ready-made bread mix and follow the recipe, but someone else eats it, that's not end-to-end. If you harvest your own wheat, mill it into flour, make your loaf from scratch (flour, yeast, water, etc.), try out several different recipes, take the best loaf, eat some of it yourself, and then watch to see whether it becomes moldy&mdash;that's end to end. ??? ELI5 "What are lift charts?" **Example** === "Rock classification" You have 100 rocks. Your friend guesses the measurement of each rock while you actually measure each one. Next, you put them in order from smallest to largest (according to your friend's guesses, not by how big they actually are). You divide them into groups of 10 and take the average size of each group. Then, you compare what your friend guessed with what you measured. This allows you to determine how good your friend is at guessing the size of rocks. === "Customer churn" Let's say you build a model for customer churn, and you want to send out campaigns to 10% of your customers. If you use a model to target the 10% with a higher probability of churn, then you have a better chance of targeting clients that might churn vs. not using a model and just sending your campaigns randomly. A cumulative lift chart shows this more clearly. Learn more about [Lift Charts](lift-chart). ??? ELI5 "What is transfer learning?" **Short version:** When you teach someone how to distinguish dogs from cats, the skills that go into that can be useful when distinguishing foxes and wolves.
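In code, the idea looks roughly like the following minimal Keras sketch (an illustration only, not a DataRobot workflow): a network pretrained on a large dataset supplies the "experience," and only a small new head is trained on your data. The `new_task_images`/`new_task_labels` names are placeholders for your own dataset; the tennis analogy below makes the same point without code.

```python
# Minimal transfer-learning sketch (illustrative only, not a DataRobot feature).
import tensorflow as tf

# Start from a network pretrained on ImageNet, dropping its original classifier.
base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights="imagenet"
)
base.trainable = False  # freeze the pretrained "experience"

# Add a small, trainable head for the new task (here, 2 classes).
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(new_task_images, new_task_labels, epochs=5)  # placeholders: your data
```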
**Example** You are a 5-year-old whose parents decided you need to learn tennis, while you are wondering who "Tennis" is. === "Scenario 1" Every day your parents push you out the door and say, “go learn tennis and if you come back without learning anything today, there is no food for you.” Worried that you'll starve, you start looking for "Tennis." It takes a few days for you to figure out that tennis is a game and where tennis is played. It takes you a few more days to understand how to hold the racquet and how to hit the ball. Finally, by the time you figure out the complete game, you are already 6 years old. === "Scenario 2" Your parents take you to the best tennis club in town and find Roger Federer to coach you. He can immediately start working with you&mdash;teaching you all about tennis and making you tennis-ready in just a week. Because your coach also happens to have a lot of experience playing tennis, you are able to take advantage of all his tips, and within a few months, you are already one of the best players in town. Scenario 1 is similar to how a regular machine learning algorithm starts learning. With the fear of being punished, it starts looking for a way to learn what is being taught and slowly starts learning stuff from scratch. On the other hand, with transfer learning the same ML algorithm has much better guidance and a better starting point: it uses a model that was already trained on similar data as its initialization point, so it can learn the new data much faster and sometimes with better accuracy. ??? ELI5 "What are summarized categorical features?" Let's say you go to the store to shop for food. You walk around the store and put items of different types into your cart, one at a time. Then, someone calls you on the phone and asks you what you have in your cart, so you respond with something like "6 cans of soup, 2 boxes of Cheerios, 1 jar of peanut butter, 7 jars of pickles, 82 grapes..." Learn more about [Summarized Categorical Features](histogram#summarized-categorical-features). ??? ELI5 "Particle swarm vs. GridSearch" GridSearch takes a fixed amount of time but may not find a good result. Particle swarm takes an unpredictable, potentially unlimited amount of time, but can find better results. **Examples** === "Particle swarm" You’ve successfully shown up to Black Friday at Best Buy with 3 of your friends and walkie talkies. However, you forgot to look at the ads in the paper for sales. Not to worry, you decide the way you're going to find the best deal in the Big Blue Box is to spread out around the store and walk around for 1 minute to find the best deal, and then call your friends and tell them what you found. The friend with the best deal is now an anchor, the other friends start moving in that direction, and you repeat this process every minute until the friends are all in the same spot (2 hours later), looking at the same deal, feeling accomplished and smart. === "GridSearch" You’ve successfully shown up to Black Friday at Best Buy with 3 of your friends. However, you forgot to look at the ads in the paper for sales and you also forgot the walkie talkies. Not to worry, you decide the way you're going to find the best deal in the Big Blue Box is to spread out around the store in a 2 x 2 grid and grab the best deal in the area, then meet your friends at the checkout counter and see who has the best deal.
You meet at the checkout counter (5 minutes later), feeling that you didn’t do all you could, but happy that you get to go home, eat leftover pumpkin pie, and watch college football. ??? ELI5 "What's GridSearch and why is it important?" Let’s say that you’re baking cookies and you want them to taste as good as they possibly can. To keep it simple, let’s say you use exactly two ingredients: flour and sugar (realistically, you need more ingredients but just go with it for now). How much flour do you add? How much sugar do you add? Maybe you look up recipes online, but they’re all telling you different things. There’s no magical, perfect amount of flour and sugar that you can just look up online. So, what do you decide to do? You decide to try a bunch of different values for flour and sugar and just taste-test each batch to see what tastes best. - You might decide to try having 1 cup, 2 cups, and 3 cups of sugar. - You might also decide to try having 3 cups, 4 cups, and 5 cups of flour. In order to see which of these recipes is the best, you’d have to test each possible combination of sugar and flour. So, that means: - Batch A: 1 cup of sugar & 3 cups of flour - Batch B: 1 cup of sugar & 4 cups of flour - Batch C: 1 cup of sugar & 5 cups of flour - Batch D: 2 cups of sugar & 3 cups of flour - Batch E: 2 cups of sugar & 4 cups of flour - Batch F: 2 cups of sugar & 5 cups of flour - Batch G: 3 cups of sugar & 3 cups of flour - Batch H: 3 cups of sugar & 4 cups of flour - Batch I: 3 cups of sugar & 5 cups of flour If you want, you can draw this out, kind of like you’re playing the game tic-tac-toe. <table> <tr> <th></th> <th scope="col">1 cup of sugar</th> <th scope="col">2 cups of sugar</th> <th scope="col">3 cups of sugar</th> </tr> <tr> <th scope="row">3 cups of flour</th> <td>1 cup of sugar & 3 cups of flour</td> <td>2 cups of sugar & 3 cups of flour</td> <td>3 cups of sugar & 3 cups of flour</td> </tr> <tr> <th scope="row">4 cups of flour</th> <td>1 cup of sugar & 4 cups of flour</td> <td>2 cups of sugar & 4 cups of flour</td> <td>3 cups of sugar & 4 cups of flour</td> </tr> <tr> <th scope="row">5 cups of flour</th> <td>1 cup of sugar & 5 cups of flour</td> <td>2 cups of sugar & 5 cups of flour</td> <td>3 cups of sugar & 5 cups of flour</td> </tr> </table> Notice how this looks like a grid. You are _searching_ this _grid_ for the best combination of sugar and flour. _The only way for you to get the best-tasting cookies is to bake cookies with all of these combinations, taste-test each batch, and decide which batch is best._ If you skipped some of the combinations, then it’s possible you’ll miss the best-tasting cookies. Now, what happens when you’re in the real world and you have more than two ingredients? For example, you also have to decide how many eggs to include. Well, your “grid” now becomes a 3-dimensional grid. If you decide between 2 eggs and 3 eggs, then you need to try all nine combinations of sugar and flour for 2 eggs, and you need to try all nine combinations of sugar and flour for 3 eggs. The more ingredients you include, the more combinations you'll have. Also, the more values of each ingredient (e.g., 3 cups, 4 cups, 5 cups) you include, the more combinations you have to test. **Applied to Machine Learning:** When you build models, you have lots of choices to make. Some of these choices are called hyperparameters.
For example, if you build a random forest, you need to choose things like: - How many decision trees do you want to include in your random forest? - How deep can each individual decision tree grow? - At least how many samples must be in the final “node” of each decision tree? The way we test this is just like how you taste-tested all of those different batches of cookies: 1. You pick which hyperparameters you want to search over (all three are listed above). 2. You pick what values of each hyperparameter you want to search. 3. You then fit a model separately for each combination of hyperparameter values. 4. Now it’s time to taste-test: you measure each model’s performance (using some metric like accuracy or root mean squared error). 5. You pick the set of hyperparameters that had the best-performing model. (Just like your recipe would be the one that gave you the best-tasting cookies.) Just like with ingredients, the number of hyperparameters and number of levels you search are important. - Trying 2 hyperparameters (ingredients) of 3 levels apiece → 3 * 3 = 9 combinations of models (cookies) to test. - Trying 2 hyperparameters (ingredients) of 3 levels apiece and a third hyperparameter with two levels (when we added the eggs) → 3 * 3 * 2 = 18 combinations of models (cookies) to test. The formula for that is: you take the number of levels of each hyperparameter you want to test and multiply them together. So, if you try 5 hyperparameters, each with 4 different levels, then you’re building 4 * 4 * 4 * 4 * 4 = 4^5 = 1,024 models. Building models can be time-consuming, so if you try too many hyperparameters and too many levels of each hyperparameter, you might get a really high-performing model but it might take a really, really, really long time to get. DataRobot automatically GridSearches for the best hyperparameters for its models. It is not an exhaustive search where it searches every possible combination of hyperparameters. That’s because this would take a very, very long time and might be impossible. **In one line, but technical:** GridSearch is a commonly-used technique in machine learning that is used to find the best set of hyperparameters for a model. **Bonus note:** You might also hear RandomizedSearch, which is an alternative to GridSearch. Rather than setting up a grid to check, you might specify a range of each hyperparameter (e.g., somewhere between 1 and 3 cups of sugar, somewhere between 3 and 5 cups of flour) and a computer will randomly generate, say, 5 combinations of sugar/flour. It might be like: - Batch A: 1.2 cups of sugar & 3.5 cups of flour. - Batch B: 1.7 cups of sugar & 3.1 cups of flour. - Batch C: 2.4 cups of sugar & 4.1 cups of flour. - Batch D: 2.9 cups of sugar & 3.9 cups of flour. - Batch E: 2.6 cups of sugar & 4.8 cups of flour. ??? ELI5 "Keras vs. TensorFlow" In DataRobot, “TensorFlow” really means “TensorFlow 0.7” and “Keras” really means “TensorFlow 1.x”. In the past, TensorFlow had many interfaces, most of which were lower level than Keras, and Keras supported multiple backends (e.g., Theano and TensorFlow). However, TensorFlow consolidated these interfaces and Keras now only supports running code with TensorFlow, so as of TensorFlow 2.x, Keras and TensorFlow are effectively one and the same. Because of this history, upgrading from an older TensorFlow to a new TensorFlow is easier to understand than switching from TensorFlow to Keras. **Example:** Keras vs. TensorFlow is like an automatic coffee machine vs. grinding and brewing coffee manually.
There are many ways to make coffee, meaning TensorFlow is not the only technology that can be used by Keras. Keras offers "buttons" (an interface) that are powered by a specific "brewing technology" (TensorFlow, CNTK, Theano, or something else, known as the Keras backend). Earlier, DataRobot used the lower-level technology, TensorFlow, directly. But just like grinding and brewing coffee manually, this took a lot more effort and increased the maintenance burden, so DataRobot switched to a higher-level technology, Keras, which provides many nice things under the hood&mdash;for example, delivering more advanced blueprints in the product more quickly, which would have taken a lot of effort to implement manually in TensorFlow. ??? ELI5 "How does a CPU differ from a GPU (in terms of training ML models)?" Think about a CPU (central processing unit) as a 4-lane highway with trucks delivering the computation, and a GPU (graphics processing unit) as a 100-lane highway with little shopping carts. GPUs are great at parallelism, but only for less complex tasks. Deep learning specifically benefits from that since it's mainly batches of matrix multiplication, and these can be parallelized very easily. So, training a neural network on a GPU can be 10x faster than on a CPU. But not all model types get that benefit. Here's another one: Let’s say there is a very large library, and the goal is to count all of the books. The librarian is knowledgeable about where books are, how they’re organized, how the library works, etc. The librarian is perfectly capable of counting the books on their own and they’ll probably be very good and organized about it. But what if there is a big team of people who could count the books with the librarian&mdash;not library experts, just people who can count accurately? * If you have 3 people who count books, that speeds up your counting. * If you have 10 people who count books, your counting gets even faster. * If you have 100 people who count books...that’s awesome! *A CPU is like a librarian.* Just like you need a librarian running a library, you need a CPU. A CPU can basically do any jobs that you need done. Just like a librarian could count all of the books on their own, a CPU can do math things like building machine learning models. *A GPU is like a team of people counting books.* Just like counting books is something that can be done by many people without specific library expertise, a GPU makes it much easier to take a job, split it among many different units, and do math things like building machine learning models. For more details on this analogy, see [Robot-to-Robot](rr-gpu-v-cpu). ??? ELI5 "What is meant by single-tenant and multi-tenant SaaS?" *Single-tenant*: You rent an apartment. When you're not using it, neither is anybody else. You can leave your stuff there without being concerned that others will mess with it. *Multi-tenant*: You stay in a hotel room. *Multi-tenant:* Imagine a library with many individual, locked rooms, where every reader has a designated room for their personal collection, but the core library collection at the center of the space is shared, allowing everyone to access those resources. For the most part, you have plenty of privacy and control over your personal collection, but there's only one copy of each book at the center of the building, so it's possible for someone to rent out the entire collection on a particular topic, leaving others to wait their turn.
*Single-tenant:* Imagine a library network of many individual branches, where each individual library branch carries a complete collection while still providing private rooms. Readers don't need to share the central collection of their branch with others, but the branches are maintained by the central library committee, ensuring that the contents of each library branch are regularly updated for all readers. *On-prem:* Some readers don't want to use our library space and instead want to make a copy to use in their own home. These folks make a copy of the library and resources and take them home, and then maintain them on their own schedule with their own personal resources. This gives them even more privacy and control over their content, but they lose the convenience of automated updates, new books, and library management.
eli5
--- title: Learn more description: Get started in DataRobot with descriptions of common terms and concepts, as well as how-tos. --- # Learn more {: #learn-more } This page provides access to learning resources that help you get started in DataRobot, including simplified explanations of concepts, how-tos and end-to-end walkthroughs, and descriptions of terms in the application. Topic | Describes... ---------- | ----------- [Glossary](glossary/index) | Read descriptions for terms used throughout DataRobot. [How-tos](how-to/index) | Step-by-step instructions to perform tasks within the DataRobot application, as well as with partners, cloud providers, and third-party vendors. [ELI5](eli5) | Read simplified descriptions of common DataRobot concepts. [Robot-to-Robot](robot-to-robot/index) | See the data science topics that DataRobot employees talk about in Slack. [Business accelerators](biz-accelerators/index) | End-to-end walkthroughs, based on best practices and patterns, that address common business problems.
index
--- title: MLOps description: DataRobot machine learning operations (MLOps) provides a central hub for you to deploy, monitor, manage, and govern your models in production. --- # MLOps {: #mlops } DataRobot MLOps provides a central hub to deploy, monitor, manage, and govern all your models in production. You can deploy models to the production environment of your choice and continuously monitor the health and accuracy of your models, among other metrics. The following sections describe: Topic | Describes... ----- | ------ [Deployment](deployment/index) | How to bring models to production by following the workflows provided for all kinds of starting artifacts. [Deployment settings](deployment-settings/index) | How to use the settings tabs for individual MLOps features to add or update deployment functionality. [Lifecycle management](manage-mlops/index) | Maintaining model health to minimize inaccurate data, poor performance, or unexpected results from models in production. [Performance monitoring](monitor/index) | Tracking the performance of models to identify potential issues, such as service errors or model accuracy decay, as soon as possible. [Governance](governance/index) | Enacting workflow requirements to ensure quality and comply with regulatory obligations. [MLOps FAQ](mlops-faq) | A list of frequently asked MLOps questions with brief answers linking to the relevant documentation.
index
--- title: MLOps FAQ dataset_name: N/A description: Provides a list, with brief answers, of frequently asked MLOps deployment and monitoring questions. Answers link to complete documentation. domain: mlops expiration_date: 10-10-2024 owner: nick.aylward@datarobot.com url: docs.datarobot.com/docs/mlops/mlops-faq.html --- # MLOps FAQ {: #mlops-faq } ??? faq "What are the supported model types for deployments?" DataRobot MLOps supports three types of models for deployment: * [DataRobot models](model-data) built with AutoML and deployed directly to the inventory * [Custom inference models](custom-inf-model) assembled in the Custom Model Workshop * External models [registered as model packages](reg-create#register-external-model-packages) and monitored by the [MLOps agent](mlops-agent/index). ??? faq "How do I make predictions on a deployed model?" To make predictions with a deployment, navigate to the **Predictions** tab. From there, you can use the [predictions interface](batch-pred) to drag and drop prediction data and return prediction results. For supported models, you can [download and configure Scoring Code](sc-download-deployment) from a deployment. External models can score datasets in batches on a remote environment [with the Portable Prediction Server](portable-batch-predictions). For a code-centric experience, use the provided [Python Scoring Code](code-py), which contains the commands and identifiers needed to submit a CSV or JSON file for scoring with the [Prediction API](dr-predapi); a simplified sketch of that workflow appears at the end of this FAQ. ??? faq "What is a prediction environment?" Models that run on your own infrastructure (outside of DataRobot) may be run in different environments and can have differing deployment permissions and approval processes. For example, while any user may have permission to deploy a model to a test environment, deployment to production may require a strict approval workflow and only be permitted by those authorized to do so. [Prediction environments](pred-env) support this deployment governance by grouping deployment environments and supporting grouped deployment permissions and approval workflows. They indicate the platform used in your external infrastructure (AWS, Azure, Snowflake, etc.) and the model formats it supports. ??? faq "How do I enable accuracy monitoring?" To activate the [**Accuracy**](deploy-accuracy) tab for deployments, you must first select an association ID&mdash;a [foreign key](https://www.tutorialspoint.com/Foreign-Key-in-RDBMS) that links predictions with future results (referred to as actuals or outcome data). In the **Settings** > **Data** tab for a deployment, the **Inference** section has a field for the column name containing the association IDs. Enter the column name here, and then, after making predictions, [add actuals](accuracy-settings#add-actuals) to the deployment to generate accuracy statistics. ??? faq "What is data drift? How is this different from model drift?" Data Drift refers to changes in the distribution of prediction data versus training data. Data Drift alerts indicate that the data you are making predictions on looks different from the data the model used for training. DataRobot uses PSI or ["Population Stability Index"](https://www.listendata.com/2015/05/population-stability-index.html){ target=_blank } to measure this. Models themselves cannot drift; once they are fit, they are static. Sometimes the term "Model Drift" is used to refer to drift in the predictions, which simply indicates that the average predicted value is changing over time.
faq "What do the green, yellow, and red status icons mean in the deployment inventory (on the **Deployments** tab)?" The [**Service Health**](service-health), [**Data Drift**](data-drift), and [**Accuracy**](deploy-accuracy) summaries in the deployment inventory provide an at-a-glance indication of health and accuracy for all deployed models. To view this more detailed information for an individual model, click on the model in the inventory list. For more information about interpreting the color indicators, [reference the documentation](deploy-inventory). ??? faq "What data formats does the Prediction API support for scoring?" Prediction data needs to be provided in a CSV or JSON file. For more information, reference the documentation for the [DataRobot Prediction API](dr-predapi). ??? faq "How do I use a different model in a deployment?" To replace a model, use the [**Replace model**](deploy-replace) functionality found in the **Actions** menu for a deployment. Note that DataRobot issues a warning if the replacement model differs from the current model in either of these ways: <ul><li>Feature names do not match. </li><li>Feature names match but have different data types in the replacement model.</li></ul> ??? faq "What is humility monitoring?" Humility monitoring, available from a deployment's [**Humility** tab](humble), allows you to [configure rules](humility-settings) that allow models to be capable of recognizing, in real-time, when they make uncertain predictions or receive data they have not seen before. Unlike data drift, model humility does not deal with broad statistical properties over time&mdash;it is instead triggered for individual predictions, allowing you to set desired behaviors with rules that depend on different triggers. Humility rules help to identify and handle data integrity issues during monitoring and to better identify the root cause of unstable predictions. ??? faq "What is the Portable Prediction Server and how do I use it?" The [Portable Prediction Server (PPS)](portable-pps) is a DataRobot execution environment for DataRobot model packages (`.mlpkg` files) distributed as a self-contained Docker image. PPS can be run disconnected from main installation environments. Once started, the image serves HTTP API via the `:8080` port. In order to use it, you create an external deployment, [create an external prediction environment](pred-env) for your infrastructure, download the [PPS Docker image](portable-pps#obtain-the-pps-docker-image), and download the [model package](portable-pps#download-the-model-package). This configuration allows you to run PPS outside of DataRobot but still have access to insights and statistics from your deployment in the application. ??? faq "The **Challengers** tab is grayed out. Why can't I add a challenger?" In order to add a challenger model to compare against your deployed model, you must be an MLOps user and you must enable the **Challengers** tab. To do so, select **Settings > Data** in your deployment. Under **Data Drift** in the right pane, toggle **Enable prediction rows storage** and click **Save change(s)**. This setting is required for you to compare challenger models to the champion model. When you select a deployment, you can now select the **Challengers** tab.
mlops-faq
--- title: Platform description: This section includes information and links for managing user settings; authentication and SSO; the administrator's guide; sharing and permissions; user documentation for companion tools; and more. --- # Platform {: #platform } The platform section provides materials for users and administrators to manage their DataRobot accounts. !!! note DataRobot performs service maintenance regularly. Although most maintenance will occur unnoticed, some may cause a temporary impact. Status page announcements provide information on service outages, scheduled maintenance, and historical uptime. You can view and subscribe to notifications from the [DataRobot status page](https://status.datarobot.com/){ target=_blank }. Topic | Describes... ----- | ------ [Account management](account-mgmt/index) | View information to help manage your DataRobot account. [Authentication](authentication/index) | Learn about authentication in DataRobot, including SSO, 2FA, and stored credentials. [Administrator's guide](admin-guide/index) | For administrators, get help in managing the DataRobot application. [Data and sharing](data-sharing/index) | Learn about sharing, permissions, and data file size requirements. [Companion tools](companion-tools/index) | Access user documentation for Algorithmia and Paxata Data Prep. ## Browser compatibility {: #browser-compatibility } {% include 'includes/browser-compatibility.md' %}
index
With the **Comments** link, you can add comments to&mdash;even host a discussion around&mdash;any item in the catalog that you have access to. Comment functionality is available in the **AI Catalog** (illustrated below), and also as a model tab from the Leaderboard and in use case tracking. With comments you can: * Tag other users in a comment; DataRobot will then send them an email notification. * Edit or delete any comment you have added (you cannot edit or delete other users' comments). ![](images/catalog-26.png)
comm-add
??? note "Dataset requirements for time series batch predictions" To ensure DataRobot can process your time series data, configure the dataset to meet the following requirements: * Sort prediction rows by their timestamps, with the earliest row first. * For multiseries, sort prediction rows by series ID and then by timestamp. * There is *no limit* on the number of series DataRobot supports. The only limit is the job timeout, as mentioned in [Limits](batch-prediction-api/index#limits). For dataset examples, see the [requirements for the scoring dataset](batch-pred-ts#requirements-for-the-scoring-dataset).
batch-pred-ts-scoring-data-requirements
!!! note "DataRobot fully supports the latest version of Google Chrome" Other browsers such as Edge, Firefox, and Safari are not fully supported. As a result, certain features may not work as expected. DataRobot recommends using Chrome for the best experience. Ad block browser extensions may cause display or performance issues in the DataRobot web application.
browser-compatibility
The **Clustering** tab sets the number of clusters that DataRobot will find during Autopilot. The default number of clusters is based on the number of series in the dataset. To set the number, add or remove values from the entry box and select the value from the dropdown:

![](images/cluster-adv-opt-1.png)

Note that when using Manual mode, you are prompted to set the number of clusters when building models from the Repository.
ts-cluster-adv-opt-include
There are several options available in the **Actions** menu, which can be accessed for each model package in the **Model Packages** tab of the **Model Registry**: ![](images/reg-action-1.png) The available options depend on a variety of criteria, including user permissions and the data available to your model package: ![](images/reg-action-2.png) Option | Description -------|------------ Deploy | Select **+ Deploy** to [create a deployment](deploy-model#deploy-from-the-model-registry) from a model package. For external models, you can [create an external deployment](deploy-external-model#deploy-an-external-model-package). Share | The sharing capability allows [appropriate user roles](roles-permissions#reg-roles) to grant permissions on a model package. To share a model package, select the **Share** (![](images/icon-share.png)) action. <br> You can only share up to your own access level (a consumer cannot grant an editor role, for example) and you cannot downgrade the access of a collaborator with a higher access level than your own. Permanently Archive | If you have the appropriate [permissions](roles-permissions#reg-roles), you can select **Permanently Archive** ![](images/icon-delete.png) to archive a model package, which also removes it from the **Model Packages** list.
manage-model-packages
??? faq "How does DataRobot track drift?" For data drift, DataRobot tracks: * **Target drift**: DataRobot stores statistics about predictions to monitor how the distribution and values of the target change over time. As a baseline for comparing target distributions, DataRobot uses the distribution of predictions on the holdout. * **Feature drift**: DataRobot stores statistics about predictions to monitor how distributions and values of features change over time. As a baseline for comparing distributions of features: * For training datasets larger than 500 MB, DataRobot uses the distribution of a random sample of the training data. * For training datasets smaller than 500 MB, DataRobot uses the distribution of 100% of the training data.
how-dr-tracks-drift-include
## Deep dive: Imbalanced targets {: #deep-dive-imbalanced-targets }

In AML and Transaction Monitoring, the SAR rate is usually very low (1%–5%, depending on the detection scenarios); sometimes it can be even lower than 1% in extremely unproductive scenarios. In machine learning, such a problem is called _class imbalance_. The question becomes, how can you mitigate the risk of class imbalance and let the machine learn as much as possible from the limited known-suspicious activities?

DataRobot offers different techniques to handle class imbalance problems. Some techniques:

* Evaluate the model with <a target="_blank" rel="noopener noreferrer" href="https://docs.datarobot.com/en/docs/modeling/reference/model-detail/opt-metric.html#optimization-metrics"><b>different metrics</b></a>. For binary classification (the false positive reduction model here, for example), LogLoss is used as the default metric to rank models on the Leaderboard. Since the rule-based system is often unproductive, which leads to a very low SAR rate, it’s reasonable to take a look at a different metric, such as the SAR rate in the top 5% of alerts in the prioritization list. The objective of the model is to assign a higher prioritization score to a high-risk alert, so it’s ideal to have a higher rate of SAR in the top tier of the prioritization score. In the example shown in the image below, the SAR rate in the top 5% of the prioritization score is more than 70% (the original SAR rate is less than 10%), which indicates that the model is very effective at ranking alerts based on SAR risk (see the sketch after this list).

* DataRobot also provides flexibility for modelers when tuning hyperparameters, which can also help with the class imbalance problem. In the example below, the Random Forest Classifier is tuned by enabling `balance_bootstrap` (a random sample with an equal number of SAR and non-SAR alerts in each decision tree in the forest); you can see that the validation score of the new ‘Balanced Random Forest Classifier’ model is slightly better than that of the parent model.

    ![](images/aml-change-metric.png)

* You can also use <a target="_blank" rel="noopener noreferrer" href="https://docs.datarobot.com/en/docs/modeling/build-models/adv-opt/smart-ds.html#smart-downsampling"><b>Smart Downsampling</b></a> (from the Advanced Options tab) to intentionally downsample the majority class (i.e., non-SAR alerts) in order to build faster models with similar accuracy.
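The sketch below illustrates the alternative metric described in the first bullet: comparing the overall SAR rate with the SAR rate in the top 5% of alerts ranked by the model's prioritization score. The file and column names are hypothetical.

``` python
# Illustrative sketch: SAR rate in the top 5% of alerts by model score.
import pandas as pd

scored = pd.read_csv("scored_alerts.csv")  # hypothetical file with columns: SAR (0/1), score

overall_rate = scored["SAR"].mean()
top_n = max(1, int(len(scored) * 0.05))                  # top 5% of alerts by model score
top_rate = scored.nlargest(top_n, "score")["SAR"].mean()

print(f"Overall SAR rate: {overall_rate:.1%}")
print(f"Top 5% SAR rate:  {top_rate:.1%}")  # a much higher rate means the ranking is effective
```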
aml-4-include
* Frozen thresholds are not supported.
* Blenders that contain monotonic models do not display the MONO label on the Leaderboard for OTV projects.
* When previewing predictions over time, the interval only displays for models that haven’t been retrained (for example, it won’t show up for models with the **Recommended for Deployment** badge).
* If you configure long backtest durations, DataRobot will still build models, but will not run backtests in cases where there is not enough data. In these cases, the backtest score will not be available on the Leaderboard.
* Time zones on date partition columns are ignored. Datasets with multiple time zones may cause issues; the workaround is to convert to a single time zone outside of DataRobot. There is also no support for daylight saving time.
* Dates before 1900 are not supported. If necessary, shift your data forward in time.
* Leap seconds are not currently supported.
dt-consider
| | Element | Description |
|---|---|---|
| ![](images/icon-1.png) | Filter by predicted or actual | Narrows the display based on the predicted and actual class values. See [Filters](#filters) for details. |
| ![](images/icon-2.png) | Show color overlay | Sets whether to display the activation map in either black and white or full color. See [Color overlay](#color-overlay) for details. |
| ![](images/icon-3.png) | Activation scale | Shows the extent to which a region is influencing the prediction. See [Activation scale](#activation-scale) for details. |

See the [reference material](vai-ref#ref-map) for detailed information about Visual AI.

### Filters {: #filters }

Filters allow you to narrow the display based on the predicted and the actual class values. The initial display shows the full sample (i.e., both filters are set to *all*). You can instead set the display to filter by specific classes, limiting the results shown. Some examples:

| "Predicted" filter | "Actual" filter | Display results |
|--------------------|-------------------|--------------------|
| All | All | All (up to 100) samples from the validation set |
| Tomato Leaf Mold | All | All samples in which the predicted class was Tomato Leaf Mold |
| Tomato Leaf Mold | Tomato Leaf Mold | All samples in which both the predicted and actual class were Tomato Leaf Mold |
| Tomato Leaf Mold | Potato Blight | Any sample in which DataRobot predicted Tomato Leaf Mold but the actual class was Potato Blight |

Hover over an image to see the reported predicted and actual classes for the image:

![](images/vai-20.png)

### Color overlay {: #color-overlay }

DataRobot provides two different views of the activation maps&mdash;black and white (which shows some transparency of original image colors) and full color. Select the option that provides the clearest contrast. For example, for black and white datasets, the alternative color overlay may make activation areas more obvious (instead of using a black-to-transparent scale). Toggle **Show color overlay** to compare.

![](images/vai-18.png)

### Activation scale {: #activation-scale }

The high-to-low activation scale indicates how much a region in an image is influencing the prediction. Areas that are higher on the scale have a higher predictive influence&mdash;the model used something that was there (or not there, but should have been) to make the prediction. Some examples might include the presence or absence of yellow discoloration on a leaf, a shadow under a leaf, or an edge of a leaf that curls in a certain way. Another way to think of the scale is that it reflects how much the model "is excited by" a particular region of the image. It’s a kind of prediction explanation&mdash;why did the model predict what it did? The map shows that the reason is that the algorithm saw _x_ in this region, which activated the filters sensitive to visual information like _x_.
activation-map-include
Consider the following when working with segmented modeling deployments:

* Time series segmented modeling deployments do not support data drift monitoring.
* Automatic retraining for segmented deployments that use clustering models is disabled; retraining must be done manually.
* Retraining can be triggered by accuracy drift in a Combined Model; however, DataRobot does not support monitoring accuracy for individual segments or retraining individual segments.
* Combined Model deployments can include standard model challengers.
deploy-combined-model-include
The **Histogram** chart is the default display for numeric features. It "buckets" numeric feature values into equal-sized ranges to show the frequency distribution of the variable: value ranges are plotted on the X-axis and row counts on the Y-axis. The height of each bar represents the number of rows with values in that range.

??? note "Histogram display variations"
    The display differs depending on whether the [data quality](data-quality#interpret-the-histogram-tab) issue "Outliers" was found.

    Without data quality issues:

    ![](images/histogram.png)

    With data quality issues:

    ![](images/histogram-outlier.png)

Initially, the display shows the bucketed data:

![](images/dq-8.png)

Select the **Show outliers** checkbox to calculate and display outliers:

![](images/dq-10.png)

The traditional box plot above the chart (shown in gold) highlights the middle quartiles of the data to help you determine whether the distribution is skewed. To determine whisker length, DataRobot uses [Ueda's algorithm](https://jsdajournal.springeropen.com/articles/10.1186/s40488-015-0031-y){ target=_blank } to identify the outlier points&mdash;the whiskers depict the full range of the lowest and highest data points in the dataset excluding those outliers. This is useful for helping to determine whether a distribution is skewed and/or whether the dataset contains a problematic number of outliers. Note the change in the X-axis scale and the compression of the box plot to allow for outlier display.

Because there tend to be fewer rows recording an outlier value (it's what makes them outliers), the blue bar may not display. Hover on that column to display a tooltip with the actual row count.

After EDA2 completes, the histogram also displays an [average target value](histogram#average-target-values) overlay.
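For intuition about the bucketing described above, the following sketch reproduces the idea outside DataRobot with pandas; the dataset and column names (`loan_amount`, `is_bad`) are hypothetical.

``` python
# Illustrative sketch of histogram bucketing: equal-width value ranges and row counts.
import pandas as pd

df = pd.read_csv("training_data.csv")  # hypothetical dataset

# 10 equal-width ranges; each count is the bar height (rows per range).
counts = df["loan_amount"].value_counts(bins=10).sort_index()
print(counts)

# The post-EDA2 overlay idea: average target value per bucket.
avg_target = df.groupby(pd.cut(df["loan_amount"], bins=10), observed=False)["is_bad"].mean()
print(avg_target)
```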
histogram-include
## Business problem {: #business-problem } A key pillar of any AML compliance program is to monitor transactions for suspicious activity. The scope of transactions is broad, including deposits, withdrawals, fund transfers, purchases, merchant credits, and payments. Typically, monitoring starts with a rules-based system that scans customer transactions for red flags consistent with money laundering. When a transaction matches a predetermined rule, an alert is generated and the case is referred to the bank’s internal investigation team for manual review. If the investigators conclude the behavior is indicative of money laundering, then the bank will file a Suspicious Activity Report (SAR) with FinCEN. Unfortunately, the standard transaction monitoring system described above has costly drawbacks. In particular, the rate of false-positives (cases incorrectly flagged as suspicious) generated by this rules-based system can reach 90% or more. Since the system is rules-based and rigid, it cannot dynamically learn the complex interactions and behaviors behind money laundering. The prevalence of false-positives makes investigators less efficient as they have to manually weed out cases that the rules-based system incorrectly marked as suspicious. Compliance teams at financial institutions can have hundreds or even thousands of investigators, and the current systems prevent investigators from becoming more effective and efficient in their investigations. The cost of reviewing an alert ranges between `$30~$70`. For a bank that receives 100,000 alerts a year, this is a substantial sum; on average, penalties imposed for proven money laundering amount to `$145` million per case. A reduction in false positives could result in savings between `$600,000~$4.2` million per year. ## Solution value {: #solution-value } This use case builds a model that dynamically learns patterns in complex data and reduces false positive alerts. Financial crime compliance teams can then prioritize the alerts that legitimately require manual review and dedicate more resources to those cases most likely to be suspicious. By learning from historical data to uncover patterns related to money laundering, AI also helps identify which customer data and transaction activities are indicative of a high risk for potential money laundering. The primary issues and corresponding opportunities that this use case addresses include: Issue | Opportunity :- | :- Potential regulatory fine | Mitigate the risk of missing suspicious activities due to lack of competency with alert investigations. Use alert scores to more effectively assign alerts&mdash;high risk alerts to more experienced investigators, low risk alerts to more junior team members. Investigation productivity | Increase investigators' productivity by making the review process more effective and efficient, and by providing a more holistic view when assessing cases. Specifically: * **Strategy/challenge**: Help investigators focus their attention on cases that have the highest risk of money laundering while minimizing the time they spend reviewing false-positive cases. For banks with large volumes of daily transactions, improvements in the effectiveness and efficiency of their investigations ultimately results in fewer cases of money laundering that go unnoticed. This allows banks to enhance their regulatory compliance and reduce the volume of financial crime present within their network. 
* **Business driver**: Improve the efficiency of AML transaction monitoring and lower operational costs. With its ability to dynamically learn patterns in complex data, AI significantly improves accuracy in predicting which cases will result in a SAR filing. AI models for anti-money laundering can be deployed into the review process to score and rank all new cases. * **Model solution**: Assign a suspicious activity score to each AML alert, improving the efficiency of an AML compliance program. Any case that exceeds a predetermined threshold of risk is sent to the investigators for manual review. Meanwhile, any case that falls below the threshold can be automatically discarded or sent to a lighter review. Once AI models are deployed into production, they can be continuously retrained on new data to capture any novel behaviors of money laundering. This data will come from the feedback of investigators. Specifically, the model will use rules that trigger an alert whenever a customer requests a refund of any amount since small refund requests could be the money launderer’s way of testing the refund mechanism or trying to establish refund requests as a normal pattern for their account. The following table summarizes aspects of this use case. Topic | Description :- | :- **Use case type** | Anti-money laundering (false positive reduction) **Target audience** | Data Scientist, Financial Crime Compliance Team **Desired outcomes**| <ul><li>Identify which customer data and transaction activity are indicative of a high risk for potential money laundering.</li><li>Detect anomalous changes in behavior or nascent money laundering patterns before they spread.</li><li>Reduce the false positive rate for the cases selected for manual review.</li></ul> **Metrics/KPIs** | <ul><li>Annual alert volume</li><li>Cost per alert</li><li>False positive reduction rate</li></ul> **Sample dataset** | https://s3.amazonaws.com/datarobot-use-case-datasets/DR_Demo_AML_Alert_train.csv ### Problem framing {: #problem-framing } The target variable for this use case is **whether or not the alert resulted in a SAR** after manual review by investigators, making this a binary classification problem. The unit of analysis is an individual alert&mdash;the model will be built on the alert level&mdash;and each alert will receive a score ranging from 0 to 1. The score indicates the probability of being a SAR. The goal of applying a model to this use case is to lower the false positive rate, which means resources are not spent reviewing cases that are eventually determined not to be suspicious after an investigation. In this use case, the False Positive Rate of the rules engine on the validation sample (1600 records) is: The number of `SAR=0` divided by the total number of records = `1436/1600` = `90%`. ### ROI estimation {: #roi-estimation } ROI can be calculated as follows: `Avoided potential regulatory fine + Annual alert volume * false positive reduction rate * cost per alert` A high-level measurement of the ROI equation involves two parts. 1. The total amount of `avoided potential regulatory fines` will vary depending on the nature of the bank and must be estimated on a case-by-case basis. 2. The second part of the equation is where AI can have a tangible impact on improving investigation productivity and reducing operational costs. Consider this example: * A bank generates 100,000 AML alerts every year. * DataRobot achieves a 70% false positive reduction rate without losing any historical suspicious activities. 
* The average cost per alert is `$30~$70`. Result: The annual ROI of implementing the solution will be `100,000 * 70% * ($30~$70) = $2.1MM~$4.9MM`. ## Working with data {: #working-with-data } The linked synthetic dataset illustrates a credit card company’s AML compliance program. Specifically, the model detects the following money-laundering scenarios: - The customer spends on the card but overpays their credit card bill and seeks a cash refund for the difference. - The customer receives credits from a merchant without offsetting transactions and either spends the money or requests a cash refund from the bank. The unit of analysis in this dataset is an individual alert, meaning a rule-based engine is in place to produce an alert to detect potentially suspicious activity consistent with the above scenarios. ### Data preparation {: #data-preparation } Consider the following when working with data: * **Define the scope of analysis**: Collect alerts from a specific analytical window to start with; it’s recommended that you use 12–18 months of alerts for model building. * **Define the target**: Depending on the investigation processes, the target definition could be flexible. In this walkthrough, alerts are classified as `Level1`, `Level2`, `Level3`, and `Level3-confirmed`. These labels indicate at which level of the investigation the alert was closed (i.e., confirmed as a SAR). To create a binary target, treat `Level3-confirmed` as SAR (denoted by 1) and the remaining levels as non-SAR alerts (denoted by 0). * **Consolidate information from multiple data sources**: Below is a sample entity-relationship diagram indicating the relationship between the data tables used for this use case. ![](images/aml-entity-rel.png) Some features are static information&mdash;for example, `kyc_risk_score` and `state of residence`&mdash;these can be fetched directly from the reference tables. For transaction behavior and payment history, the information will be derived from a specific time window prior to the alert generation date. This case uses 90 days as the time window to obtain the dynamic customer behavior, such as `nbrPurchases90d`, `avgTxnSize90d`, or `totalSpend90d`. Below is an example of one row in the training data after it is merged and aggregated (it is broken into multiple lines for easier visualization). ![](images/aml-training-row.png) ### Features and sample data {: #features-and-sample-data } The features in the sample dataset consist of KYC (Know-Your-Customer) information, demographic information, transactional behavior, and free-form text information from notes taken by customer service representatives. To apply this use case in your organization, your dataset should contain, at a minimum, the following features: - Alert ID - Binary classification target (`SAR/no-SAR`, `1/0`, `True/False`, etc.) 
- Date/time of the alert - "Know Your Customer" score used at the time of account opening - Account tenure, in months - Total merchant credit in the last 90 days - Number of refund requests by the customer in the last 90 days - Total refund amount in the last 90 days Other helpful features to include are: - Annual income - Credit bureau score - Number of credit inquiries in the past year - Number of logins to the bank website in the last 90 days - Indicator that the customer owns a home - Maximum revolving line of credit - Number of purchases in the last 90 days - Total spend in the last 90 days - Number of payments in the last 90 days - Number of cash-like payments (e.g., money orders) in last 90 days - Total payment amount in last 90 days - Number of distinct merchants purchased from in the last 90 days - Customer Service Representative notes and codes based on conversations with customer (cumulative) The table below shows a sample feature list: Feature name | Data type | Description | Data source | Example ------------ | --------- | ----------- | ----------- | ------- ALERT | Binary | Alert Indicator | tbl_alert | 1 SAR | Binary(Target) | SAR Indicator (Binary Target) | tbl_alert | 0 kycRiskScore | Numeric | Account relationship (Know Your Customer) score used at time of account opening | tbl_customer | 2 income | Numeric | Annual income | tbl_customer | 32600 tenureMonths | Numeric | Account tenure in months | tbl_customer | 13 creditScore | Numeric | Credit bureau score | tbl_customer | 780 state | Categorical | Account billing address state | tbl_account | VT nbrPurchases90d | Numeric | Number of purchases in last 90 days | tbl_transaction | 4 avgTxnSize90d | Numeric | Average transaction size in last 90 days | tbl_transaction | 28.61 totalSpend90d | Numeric | Total spend in last 90 days | tbl_transaction | 114.44 csrNotes | Text | Customer Service Representative notes and codes based on conversations with customer (cumulative) | tbl_customer_misc | call back password call back card password replace atm call back nbrDistinctMerch90d | Numeric | Number of distinct merchants purchased at in last 90 days | tbl_transaction | 1 nbrMerchCredits90d | Numeric | Number of credits from merchants in last 90 days | tbl_transaction | 0 nbrMerchCredits-RndDollarAmt90d | Numeric | Number of credits from merchants in round dollar amounts in last 90 days | tbl_transaction | 0 totalMerchCred90d | Numeric | Total merchant credit amount in last 90 days | tbl_transaction | 0 nbrMerchCredits-WoOffsettingPurch | Numeric | Number of merchant credits without an offsetting purchase in last 90 days | tbl_transaction | 0 nbrPayments90d | Numeric | Number of payments in last 90 days | tbl_transaction | 3 totalPaymentAmt90d | Numeric | Total payment amount in last 90 days | tbl_account_bill | 114.44 overpaymentAmt90d | Numeric | Total amount overpaid in last 90 days | tbl_account_bill | 0 overpaymentInd90d | Numeric | Indicator that account was overpaid in last 90 days | tbl_account_bill | 0 nbrCustReqRefunds90d | Numeric | Number refund requests by the customer in last 90 days | tbl_transaction | 1 indCustReqRefund90d | Binary | Indicator that customer requested a refund in last 90 days | tbl_transaction | 1 totalRefundsToCust90d | Numeric | Total refund amount in last 90 days | tbl_transaction | 56.01 nbrPaymentsCashLike90d | Numeric | Number of cash like payments (e.g., money orders) in last 90 days | tbl_transaction | 0 maxRevolveLine | Numeric | Maximum revolving line of credit | tbl_account | 14000 
indOwnsHome | Numeric | Indicator that the customer owns a home | tbl_transaction | 1
nbrInquiries1y | Numeric | Number of credit inquiries in the past year | tbl_transaction | 0
nbrCollections3y | Numeric | Number of collections in the past 3 years | tbl_collection | 0
nbrWebLogins90d | Numeric | Number of logins to the bank website in the last 90 days | tbl_account_login | 7
nbrPointRed90d | Numeric | Number of loyalty point redemptions in the last 90 days | tbl_transaction | 2
PEP | Binary | Politically Exposed Person indicator | tbl_customer | 0
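The 90-day behavioral features above (for example, `nbrPurchases90d`, `avgTxnSize90d`, and `totalSpend90d`) are aggregations over the window preceding each alert, as described in the data preparation notes. The sketch below shows one way such features might be derived with pandas; the table layouts and column names are hypothetical, so adapt the joins and filters to your own schema.

``` python
# Hedged sketch: derive 90-day behavioral features for each alert from a transaction table.
import pandas as pd

alerts = pd.read_csv("tbl_alert.csv", parse_dates=["alert_date"])    # one row per alert (hypothetical)
txns = pd.read_csv("tbl_transaction.csv", parse_dates=["txn_date"])  # one row per transaction (hypothetical)

def behavior_90d(alert: pd.Series) -> pd.Series:
    """Aggregate a customer's transactions in the 90 days before the alert generation date."""
    window = txns[
        (txns["customer_id"] == alert["customer_id"])
        & (txns["txn_date"] < alert["alert_date"])
        & (txns["txn_date"] >= alert["alert_date"] - pd.Timedelta(days=90))
    ]
    return pd.Series({
        "nbrPurchases90d": len(window),
        "avgTxnSize90d": window["amount"].mean() if len(window) else 0.0,
        "totalSpend90d": window["amount"].sum(),
    })

training = alerts.join(alerts.apply(behavior_90d, axis=1))
training.to_csv("aml_training.csv", index=False)
```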
aml-1-include
Data integrity and quality are cornerstones for creating highly accurate predictive models. These sections describe the tools and visualizations DataRobot provides to ensure that your project doesn't suffer the "garbage in, garbage out" outcome.
data-description
## Business problem {: #business-problem }

A "readmission" event is when a patient is readmitted into the hospital within 30 days of being discharged. Readmissions are not only a reflection of uncoordinated healthcare systems that fail to sufficiently understand patients and their conditions, but they are also a tremendous financial strain on both healthcare providers and payers. In 2011, the United States Government estimated there were approximately 3.3 million cases of 30-day, all-cause hospital readmissions, costing healthcare organizations a total of $41.3 billion.

The foremost challenge in mitigating readmissions is accurately anticipating patient risk from the point of initial admission up until discharge. Although a readmission is caused by a multitude of factors, including a patient's medical history, admission diagnosis, and social determinants, the existing methods (i.e., LACE and HOSPITAL scores) used to assess a patient's likelihood of readmission do not effectively consider the variety of factors involved. By only including limited considerations, these methods result in suboptimal health evaluations and outcomes.

## Solution value {: #solution-value }

AI provides clinicians and care managers with the information they need to nurture strong, lasting connections with their patients. It helps reduce readmission rates by predicting which patients are at risk and allowing clinicians to prescribe intervention strategies before, and after, the patient is discharged.

AI models can ingest significant amounts of data and learn complex patterns behind why certain patients are likely to be readmitted. Model interpretability features offer personalized explanations for predictions, giving clinicians insight into the top risk drivers for each patient at any given time. By taking the form of an artificial clinician and augmenting the care they provide, along with other actions clinicians already take, AI enables them to conduct intelligent interventions to improve patient health. Using the information they learn, clinicians can decrease the likelihood of patient readmission by carefully walking through their discharge paperwork in person, scheduling additional outpatient appointments (to give them more confidence about their health), and providing additional interventions that help reduce readmissions.

### Problem framing {: #problem-framing }

One way to frame the problem is to determine how to measure ROI for the use case. Consider:

**Current cost of readmissions**: Current readmissions annual rate x Annual hospital inpatient discharge volumes x Average cost of a hospital readmission

**New cost of readmissions**: New readmissions annual rate x Annual hospital inpatient discharge volumes x Average cost of a hospital readmission

**ROI**: Current cost of readmissions - New cost of readmissions

As a result, the top-down calculation for value estimates is:

**ROI**: Current costs of readmissions x improvement in readmissions rate

For example, at a US national level, the top-down cost of readmissions per healthcare provider is `$41.3 billion / 6,210 US providers = ~$6.7 million`.

For illustrative purposes, this tutorial uses a sample dataset provided by a [medical journal](https://www.hindawi.com/journals/bmri/2014/781670/#supplementary-materials){ target=_blank } that studied readmissions across 70,000 inpatients with diabetes.
The researchers of the study collected this data from the Health Facts database provided by Cerner Corporation, which is a collection of clinical records across providers in the United States. Health Facts allows organizations that use Cerner's electronic health system to voluntarily make their data available for research purposes. All the data was cleansed of PII in compliance with HIPAA.

### Features and sample data {: #features-and-sample-data }

The features for this use case represent key factors for predicting readmissions. They encompass each patient's background, diagnosis, and medical history, which will help DataRobot find relevant patterns across the patient's medical profile to assess their re-hospitalization risk.

In addition to the features listed below, incorporate any additional data that your organization collects that might be relevant to readmission. (DataRobot is able to differentiate important from unimportant features if your selection would not improve modeling.)

Relevant features are generally stored across proprietary data sources available in your EMR system (for example, Epic or Cerner) and include:

* Patient data
* Diagnosis data
* Admissions data
* Prescription data

Other external data sources may also supply relevant data such as:

* Seasonal data
* Demographic data
* Social determinants data

Each record in the data represents a unique patient visit.

#### Target {: #target }

The target variable:

* `Readmitted`

This feature represents whether or not a patient was readmitted to the hospital within 30 days of discharge, using values such as `True/False`, `1/0`, etc. This choice of target makes this a binary classification problem.

### Sample feature list {: #sample-feature-list }

**Feature Name** | **Data Type** | **Description** | **Data Source** | **Example**
--- | --- | --- | --- | ---
**Readmitted** | **Binary (Target)** | Whether or not the patient was readmitted within 30 days | Admissions Data | False |
Age | Numeric | Patient age group | Patient Data | 50-60 |
Weight | Categorical | Patient weight group | Patient Data | 50-75 |
Gender | Categorical | Patient gender | Patient Data | Female |
Race | Categorical | Patient race | Patient Data | Caucasian |
Admissions Type | Categorical | Patient state during admission (Elective, Urgent, Emergency, etc.) | Admissions Data | Elective |
Discharge Disposition | Categorical | Patient discharge condition (Home, home with health services, etc.) | Admissions Data | Discharged to home |
Admission Source | Categorical | Patient source of admissions (Physician Referral, Emergency Room, Transfer, etc.) | Admissions Data | Physician Referral |
Days in Hospital | Numeric | Length of stay in hospital | Admissions Data | 1 |
Payer Code | Categorical | Unique code of patient's payer | Admissions Data | CP |
Medical Specialty | Categorical | Medical specialty that patient is being admitted into | Admissions Data | Surgery-Neuro |
Lab Procedures | Numeric | Total lab procedures in the past | Admissions Data | 35 |
Procedures | Numeric | Total procedures in the past | Admissions Data | 4 |
Outpatient Visits | Numeric | Total outpatient visits in the past | Admissions Data | 0 |
ER Visits | Numeric | Total emergency room visits in the past | Admissions Data | 0 |
Inpatient Visits | Numeric | Total inpatient visits in the past | Admissions Data | 0 |
Diagnosis | Numeric | Total diagnoses | Diagnosis Data | 9 |
ICD10 Diagnosis Code(s) | Categorical | Patient's ICD10 diagnosis on their condition; could be more than one (additional columns) | Diagnosis Data | M4802 |
ICD10 Diagnosis Description(s) | Categorical | Description of patient's diagnosis; could be more than one (additional columns) | Diagnosis Data | Spinal stenosis, cervical region |
Medications | Numeric | Total number of medications prescribed to the patient | Prescription Data | 21 |
Prescribed Medication(s) | Binary | Whether or not the patient is prescribed a medication; could be more than one (additional columns) | Prescription Data | Metformin – No |

### Data preparation {: #data-preparation }

The original raw data consisted of 74 million unique visits that include 18 million unique patients across 3 million providers. This data originally contained both inpatient and outpatient visits, as it included medical records from both integrated health systems and standalone providers. While the original data schema consisted of 41 tables with 117 features, the final dataset was filtered on relevant patients and features based on the use case.

The patients included were limited to those with:

* Inpatient encounters
* Existing diabetic conditions
* 1–14 days of inpatient stay
* Lab tests performed during the inpatient stay (or not)
* Medications prescribed during the inpatient stay (or not)

All other features were excluded due to lack of relevance and/or poor data integrity.

Reference the [DataRobot documentation](data/index) to see details on how to connect DataRobot to your data source, perform feature engineering, follow best-practice data science techniques, and more.

## Modeling and insights {: #modeling-and-insights }

DataRobot automates many parts of the modeling pipeline, including processing and partitioning the dataset, as described [here](model-data). This use case skips the modeling section and moves straight to model interpretation. Reference the [DataRobot documentation](gs-dr-fundamentals) to see how to use DataRobot from start to finish and how to understand the data science methodologies embedded in its automation.

This use case creates one unified model that predicts the likelihood of readmission for patients with diabetic conditions.

### Feature Impact {: #feature-impact }

By taking a look at the [**Feature Impact**](feature-impact) chart, you can see that a patient's number of past inpatient visits, discharge disposition, and the medical specialty of their diagnosis are the top three most impactful features that contribute to whether a patient will readmit.
![](images/readmit-1.png)

### Feature Effects/Partial Dependence {: #partial-dependence }

In assessing the [partial dependence](feature-effects#partial-dependence-calculations) plots to further evaluate the marginal impact top features have on the predicted outcome, you can see that as a patient's number of past inpatient visits increases from 0 to 2, their likelihood to readmit subsequently jumps from 37% to 53%. As the number of visits exceeds 4, the likelihood increases to roughly 59%.

![](images/readmit-2.png)

### Prediction Explanations {: #prediction-explanations }

DataRobot's [**Prediction Explanations**](pred-explain/index) provide a more granular view for interpreting model results&mdash;key drivers for each prediction generated. These explanations show why a given patient was predicted to readmit or not, based on the top predictive features.

![](images/readmit-3.png)

### Post-processing {: #post-processing }

For the prediction results to be intuitive for clinicians to consume, instead of displaying them as a probability or binary value, they can be post-processed into different labels based on where they fall under predefined prediction thresholds. For instance, patients can be labeled as high risk, medium risk, and low risk depending on their risk of readmission.

## Predict and deploy {: #predict-and-deploy }

After selecting the model that best learns patterns in your data to predict readmissions, you can deploy it into your desired decision environment. *Decision environments* are the ways in which the predictions generated by the model will be consumed by the appropriate organizational [stakeholders](#decision-stakeholders), and how these stakeholders will make decisions using the predictions to impact the overall process. This is a critical piece of implementing the use case, as it ensures that predictions are used in the real world for reducing hospital readmissions and generating clinical improvements.

At its core, DataRobot empowers clinicians and care managers with the information they need to nurture strong and lasting connections with the people they care about most: their patients. While there are use cases where decisions can be automated in a data pipeline, a readmissions model is geared to *augment* the decisions of your clinicians. It acts as an intelligent machine that, combined with the expertise of your clinicians, will help improve patients' medical outcomes.

### Decision stakeholders {: #decision-stakeholders }

The following table lists potential decision stakeholders:

Stakeholder | Description | Examples
----------- | ----------- | --------
Decision executors | Clinical stakeholders who will consume decisions on a daily basis to identify patients who are likely to readmit and understand the steps they can take to intervene. | Nurses, physicians, care managers
Decision managers | Executive stakeholders who will monitor and manage the program to analyze the performance of the provider's readmission improvement programs. | Chief medical officer, chief nursing officer, chief population health officer
Decision authors | Technical stakeholders who will set up the decision flow. | Clinical operations analyst, business intelligence analyst, data scientists

### Decision process {: #decision-process }

You can set thresholds to determine whether a prediction constitutes a foreseen readmission or not. Assign clear action items for each level of threshold so that clinicians can prescribe the necessary intervention strategies.
![](images/readmit-5.png) **Low risk:** Send an automated email or text that includes discharge paperwork, warning symptoms, and outpatient alternatives. **Medium risk:** Send multiple automated emails or texts that include discharge paperwork, warning symptoms, and outpatient alternatives, with multiple reminders. Follow up with the patient 10 days post-discharge through email to gauge their condition. **High risk:** Clinician briefs patient on their discharge paperwork in person. Send automated emails or texts that include discharge paperwork, warning symptoms, and outpatient alternatives, with multiple reminders. Follow up with the patient on a weekly basis post discharge through telephone or email to gauge their condition. ### Model deployment {: #model-deployment } DataRobot provides clinicians with complete transparency on the top risk-drivers for every patient at any given time, enabling them to conduct intelligent interventions both before and after the patient is discharged. Reference the [DataRobot documentation](mlops/index) for an overview of model deployment. #### No-Code AI Apps {: #no-code-ai-apps } Consider building a custom application where stakeholders can interact with the predictions and record the outcomes of the investigation. Once the model is deployed, predictions can be consumed for use in the [decision process](#decision-process). For example, this [No-Code AI App](app-builder/index) is an easily shareable, AI-powered application using a no-code interface: ![](images/biz-readmit-1.png) Click **Add new row** to enter patient data: ![](images/biz-readmit-2.png) #### Other business systems {: #other-business-systems } Predictions can also be integrated into other systems that are embedded in the provider’s day-to-day business workflow. Results can be integrated into the provider’s EMR system or BI dashboards. For the former, clinicians can easily see predictions as an additional column in the data they already view on a daily basis to monitor their assigned patients. They will be given transparent interpretability of the predictions to understand why the model predicts the patient to readmit or not. Some common integrations: * Display results through an Electronic Medical Record system (i.e., Epic) * Display results through a business intelligence tool (i.e., Tableau, Power BI) The following shows an example of how to integrate predictions with Microsoft Power BI to create a dashboard that can be accessed by clinicians to support decisions on which patients they should address to prevent readmissions. The dashboard below displays the probability of readmission for each patient on the floor. It shows the patient’s likelihood to readmit and top factors on why the model made the prediction. Nurses and physicians can consume a dashboard similar to this one to understand which patients are likely to readmit and why, allowing them to implement a prevention strategy tailored to each patient’s unique needs. ![](images/readmit-4.png) ### Model monitoring {: #model-monitoring } Common decision operators&mdash;IT, system operations, and data scientists&mdash;would likely implement this use case as follows: **Prediction Cadence**: Batch predictions generated on a daily basis. **Model Retraining Cadence**: Models retrained once data drift reaches an assigned threshold; otherwise, retrain the models at the beginning of every new operating quarter. 
Use DataRobot's [performance monitoring capabilities](monitor/index)&mdash;especially service health, data drift, and accuracy&mdash;to produce and distribute regular reports to stakeholders.

### Implementation considerations {: #implementation-considerations }

The following highlights some potential implementation risks, all of which are addressable once acknowledged:

Issue | Description
----------- | -----------
Access | Failure to make prediction results easy and convenient for clinicians to access (for example, if they have to open a separate web browser in addition to the EHR they are already used to, or face information overload).
Understandability | Failure to make predictions intuitive for clinicians to understand.
Interpretability | Failure to help clinicians interpret the predictions and why the model thought a certain way.
Prescriptive | Failure to provide clinicians with prescriptive strategies to act on high-risk cases.

### Trusted AI {: #trusted-ai }

In addition to traditional risk analysis, the following elements of AI Trust may require attention in this use case.

**Target leakage:** Target leakage describes information that should not be available at the time of prediction being used to train the model. That is, particular features may leak information about the eventual outcome and artificially inflate the performance of the model in training. This use case required the aggregation of data across 41 different tables and a wide timeframe, making it vulnerable to potential target leakage. In the design of this model and the preparation of data, it is pivotal to identify the point of prediction (discharge from the hospital) and ensure that no data from after that point is included. DataRobot additionally supports robust [target leakage detection](data-quality#target-leakage) in the second round of exploratory data analysis and the selection of the Informative Features feature list during Autopilot.

**Bias & Fairness:** This use case leverages features that may be categorized as protected or may be sensitive (age, gender, race). It may be advisable to assess the equivalency of the error rates across these protected groups. For example, compare whether patients of different races have equivalent false negative and false positive rates. The risk is that the system predicts with less accuracy for a certain protected group, failing to identify those patients as at risk of readmission. Mitigation techniques may be explored at various stages of the modeling process, if it is determined necessary. DataRobot's [bias and fairness resources](b-and-f/index) help identify bias before (or after) models are deployed.
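As a minimal sketch of the post-processing and decision process described earlier in this use case, the snippet below maps predicted readmission probabilities to low/medium/high risk tiers. The threshold values and column names are hypothetical and should be agreed upon with clinical stakeholders.

``` python
# Hedged sketch: post-process predicted probabilities into risk tiers.
import pandas as pd

LOW_MAX, MEDIUM_MAX = 0.30, 0.60  # hypothetical cut points

def risk_tier(probability: float) -> str:
    """Map a readmission probability to a clinician-facing risk label."""
    if probability <= LOW_MAX:
        return "Low risk"
    if probability <= MEDIUM_MAX:
        return "Medium risk"
    return "High risk"

predictions = pd.read_csv("readmission_predictions.csv")  # expects a 'readmit_probability' column (hypothetical)
predictions["risk_tier"] = predictions["readmit_probability"].apply(risk_tier)
print(predictions["risk_tier"].value_counts())
```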
hospital-readmit-include
## DRUM on Windows with WSL2 {: #drum-on-windows-with-wsl2 }

DRUM can be run on Windows 10 or 11 with WSL2 (Windows Subsystem for Linux), a native extension that is supported by the latest versions of Windows and allows you to easily install and run a Linux OS on a Windows machine. With WSL, you can develop custom tasks and custom models locally in an IDE on Windows, and then immediately test and run them on the same machine using DRUM via the Linux command line.

!!! tip
    You can use this [YouTube video](https://www.youtube.com/watch?v=wWFI2Gxtq-8){ target=_blank } for instructions on installing WSL on Windows 11 and updating Ubuntu.

The following phases are required to complete the Windows DRUM installation:

1. [Enable WSL](#enable-linux-wsl)
2. [Install `pyenv`](#install-pyenv)
3. [Install DRUM](#install-drum-on-windows)
4. [Install Docker Desktop](#install-docker-desktop)

### Enable Linux (WSL) {: #enable-linux-wsl }

1. From **Control Panel > Turn Windows features on or off**, check the option **Windows Subsystem for Linux**. After making changes, you will be prompted to restart.

    ![](images/cml-drum-1.png)

2. Open the [Microsoft Store](https://aka.ms/wslstore){ target=_blank } and click to get Ubuntu.

    ![](images/cml-drum-2.png)

3. Install Ubuntu and launch it from the start prompt. Provide a Unix username and password to complete installation. You can use any credentials, but be sure to record them as they will be required in the future.

    ![](images/cml-drum-5.png)

    You can access Ubuntu at any time from the Windows start menu. Access files on the C drive under **/mnt/c/**.

    ![](images/cml-drum-6.png)

### Install pyenv {: #install-pyenv }

Because Ubuntu in WSL comes without Python or virtual environments installed, you must install `pyenv`, a Python version management program used on macOS and Linux. (Learn about managing multiple Python environments [here](https://codeburst.io/how-to-install-and-manage-multiple-python-versions-in-wsl2-1131c4e50a58){ target=_blank }.)

In the Ubuntu terminal, run the following _commands_ (you can ignore comments) row by row:

``` sh
cd $HOME
sudo apt update --yes
sudo apt upgrade --yes
sudo apt-get install --yes git
git clone https://github.com/pyenv/pyenv.git ~/.pyenv

#add pyenv to bashrc
echo '# Pyenv environment variables' >> ~/.bashrc
echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.bashrc
echo 'export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.bashrc
echo '# Pyenv initialization' >> ~/.bashrc
echo 'if command -v pyenv 1>/dev/null 2>&1; then' >> ~/.bashrc
echo ' eval "$(pyenv init -)"' >> ~/.bashrc
echo 'fi' >> ~/.bashrc

#restart shell
exec $SHELL

#install pyenv dependencies (copy as a single line)
sudo apt-get install --yes libssl-dev zlib1g-dev libbz2-dev libreadline-dev libsqlite3-dev llvm libncurses5-dev libncursesw5-dev xz-utils tk-dev libgdbm-dev lzma lzma-dev tcl-dev libxml2-dev libxmlsec1-dev libffi-dev liblzma-dev wget curl make build-essential python-openssl

#install python 3.7 (it can take a while)
pyenv install 3.7.10
```

### Install DRUM on Windows {: #install-drum-on-windows }

To install DRUM, first you set up a Python environment where DRUM will run, and then you install DRUM in that environment.

1. Create and activate a `pyenv` environment:

    ``` sh
    cd $HOME
    pyenv local 3.7.10
    .pyenv/shims/python3.7 -m venv DR-custom-tasks-pyenv
    source DR-custom-tasks-pyenv/bin/activate
    ```

2. Install DRUM and its dependencies into that environment:

    ``` sh
    pip install datarobot-drum
    exec $SHELL
    ```

3. Download container environments, where DRUM will run, from GitHub:

    `git clone https://github.com/datarobot/datarobot-user-models`

### Install Docker Desktop {: #install-docker-desktop }

While you can run DRUM directly in the `pyenv` environment, it is preferable to run it in a Docker container. This recommended procedure ensures that your tasks run in the same environment both locally and inside DataRobot, and it simplifies installation.

1. Download and install [Docker Desktop](https://www.docker.com/products/docker-desktop){ target=_blank }, following the default installation steps.

2. Set Ubuntu to use WSL2 by opening Windows PowerShell and running:

    ``` sh
    wsl.exe --set-version Ubuntu 2
    wsl --set-default-version 2
    ```

    ![](images/cml-drum-3.png)

    !!! note
        You may need to download and install an [update](https://wslstorestorage.blob.core.windows.net/wslblob/wsl_update_x64.msi){ target=_blank }. Follow the instructions in PowerShell until you see the **Conversion complete** message.

3. Enable access to Docker Desktop from Ubuntu:

    1. From the Windows taskbar, open Docker Dashboard, then access **Settings** (the gear icon).
    2. Under **Resources > WSL integration > Enable integration with additional distros**, toggle on Ubuntu.
    3. Apply changes and restart.

    ![](images/cml-drum-4.png)
drum-for-windows
![](images/batch-3.png) | | Element | Description | |--|---------|-------------| | ![](images/icon-1.png) | Include input features | Writes input features to the prediction results file alongside predictions. To add specific features, enable the **Include input features** toggle, select **Specific features**, and type feature names to filter for and then select features. To include every feature from the dataset, select **All features**. You can only append a feature (column) present in the original dataset, although the feature does not have to have been part of the feature list used to build the model. Derived features are *not* included. | | ![](images/icon-2.png) | Include Prediction Explanations | Adds columns for [Prediction Explanations](pred-explain/index) to your prediction output.<ul><li>**Number of explanations**: Enter the maximum number of explanations you want to request from the deployed model. You can request **100** explanations per prediction request.</li><li>**Low prediction threshold**: Enable and define this threshold to provide prediction explanations for _any_ values _below_ the set threshold value.</li><li>**High prediction threshold**: Enable and define this threshold to provide prediction explanations for _any_ values _above_ the set threshold value.</li><li>**Number of ngram explanations**: Enable and define the maximum number of text [ngram](glossary/index#n-gram) explanations to return per row of the dataset. The default (and recommended) setting is **all** (no limit).</li></ul> If you can't enable Prediction Explanations, see [Why can't I enable Prediction Explanations?](#include-prediction-explanations). | | ![](images/icon-3.png) | Include prediction outlier warning | Includes warnings for [outlier prediction values](humility-settings#prediction-warnings) (only available for regression model deployments).| | ![](images/icon-4.png) | Track data drift, accuracy, and fairness for predictions | Tracks [data drift](data-drift), [accuracy](deploy-accuracy), and [fairness](mlops-fairness) (if enabled for the deployment). | | ![](images/icon-5.png) | Chunk size | Adjusts the chunk size selection strategy. By default, DataRobot automatically calculates the chunk size; only modify this setting if advised by your DataRobot representative. For more information, see [What is chunk size?](#what-is-chunk-size) | | ![](images/icon-6.png) | Concurrent prediction requests | Limits the number of concurrent prediction requests. By default, prediction jobs utilize all available prediction server cores. To reserve bandwidth for real-time predictions, set a cap for the maximum number of concurrent prediction requests. | | ![](images/icon-7.png) | Include prediction status | Adds a column containing the status of the prediction. | | ![](images/icon-8.png) | Use default prediction instance | Lets you change the [prediction instance](pred-env#prediction-environments). Turn the toggle off to select a prediction instance. | ??? faq "Why can't I enable Prediction Explanations?" If you can't <span id="include-prediction-explanations">**Include Prediction Explanations**</span>, it is likely because: * The model's validation partition doesn't contain the required number of rows. * For a Combined Model, at least one segment champion validation partition doesn't contain the required number of rows. To enable Prediction Explanations, manually replace retrained champions before creating a model package or deployment. ??? faq "What is chunk size?" 
The batch prediction process <span id="what-is-chunk-size">chunks</span> your data into smaller pieces and scores those pieces one by one, allowing DataRobot to score large batches. The **Chunk size** setting determines the strategy DataRobot uses to chunk your data. DataRobot recommends the default setting of **Auto** chunking, as it performs the best overall; however, other options are available: * **Fixed**: DataRobot identifies an initial, effective chunk size and continues to use it for the rest of the model scoring process. * **Dynamic**: DataRobot increases the chunk size while model scoring speed is acceptable and decreases the chunk size if the scoring speed falls. * **Custom**: A data scientist sets the chunk size, and DataRobot continues to use it for the rest of the model scoring process.
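As a purely conceptual sketch (not DataRobot's implementation), the **Dynamic** strategy can be pictured as a loop that grows the chunk while scoring stays fast and shrinks it when scoring slows down. The function name, timing target, and size limits below are hypothetical.

```python
import time

def score_in_chunks(rows, score_chunk, initial_size=1_000,
                    target_seconds=5.0, min_size=100, max_size=100_000):
    """Grow the chunk while scoring stays fast; back off when it slows down."""
    size, start = initial_size, 0
    while start < len(rows):
        chunk = rows[start:start + size]
        started = time.monotonic()
        score_chunk(chunk)
        elapsed = time.monotonic() - started
        start += len(chunk)
        if elapsed < target_seconds:
            size = min(size * 2, max_size)   # scoring was fast: try a bigger chunk
        else:
            size = max(size // 2, min_size)  # scoring slowed down: use a smaller chunk

# Toy usage: "score" 10,000 rows with a stand-in scoring function.
score_in_chunks(list(range(10_000)), score_chunk=lambda chunk: None)
```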
prediction-options-include
Log in to GitHub before accessing these GitHub resources.
github-sign-in-plural
## About final models {: #about-final-models }

The original ("final") model is trained without holdout data and therefore does not include the most recent data. Instead, it represents the first backtest. This is so that predictions match the insights, coefficients, and other data displayed in the tabs that help evaluate models. (You can verify this by checking the **Final model** representation in the **New Training Period** dialog to view the data your model will use.) If you want to use more recent data, retrain the model using [start and end dates](#start-end).

!!! note
    Be careful retraining on all of your data. In time series projects, it is very common for historical data to have a negative impact on current predictions, and there are good reasons not to retrain a model for deployment on 100% of the data. Think through how the training window can impact your deployments and ask yourself:

    * "Is all of my data actually relevant to my recent predictions?"
    * "Are there historical changes or events in my data that may negatively affect how current predictions are made and that are no longer relevant?"
    * "Is anything outside my Backtest 1 training window size _actually_ relevant?"

## Retrain before deployment {: #retrain-before-deployment }

Once you have selected a model and unlocked holdout, you may want to retrain the model (with hyperparameters frozen) to ensure predictive accuracy. Because the original model is trained without the holdout data, it does not include the most recent data. You can verify this by checking the **Final model** representation in the **New Training Period** dialog to view the data your model will use.

To retrain the model, do the following:

1. On the Leaderboard, click the plus sign (**+**) to open the **New Training Period** dialog and change the training period.
2. View the final model and determine whether your model is trained on the most up-to-date data.
3. Enable a **Frozen** run by clicking the slider.
4. Select **Start/End Date** and enter the dates for the retraining, including the dates of the holdout data. Remember to use the "+1" method (enter the date immediately after the final date you want included); see the short example at the end of this section.

### Model retraining {: #model-retraining }

Retraining a model on the most recent data* results in the model not having [out-of-sample predictions](data-partitioning#what-are-stacked-predictions), which many of the Leaderboard insights rely on. That is, the child (recommended and rebuilt) model trained with the most recent data has no additional samples with which to score the retrained model. Because insights are a key component of both understanding DataRobot's recommendation and facilitating model performance analysis, DataRobot links insights from the parent (original) model to the child (frozen) model.

![](images/otp-child-link.png)

\* This situation is also possible when a model is trained into holdout ("slim-run" models also have no [stacked predictions](data-partitioning#what-are-stacked-predictions)).

The insights affected are:

* ROC Curve
* Lift Chart
* Confusion Matrix
* Stability
* Forecast Accuracy
* Series Insights
* Accuracy Over Time
* Feature Effects
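To make the "+1" convention concrete, here is a minimal sketch using hypothetical dates (not values from any particular project): because end dates are exclusive, the date you enter is one time step after the last date you want included.

```python
from datetime import date, timedelta

# Hypothetical example of the "+1" method for a daily time step:
# the last row you want included in retraining is dated 2016-06-30.
last_date_to_include = date(2016, 6, 30)     # e.g., the end of the holdout data
training_end_date = last_date_to_include + timedelta(days=1)

print(training_end_date.isoformat())          # 2016-07-01
```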
date-time-include-5
## Troubleshooting {: #troubleshooting } Problem | Solution | Instructions ---------- | ----------- | --------------- When attempting to execute an operation in DataRobot, the firewall requests that you clear the IP address each time. | Add all whitelisted IPs for DataRobot. | See [Source IP addresses for whitelisting](data-conn#source-ip-addresses-for-whitelisting). If you've already added the whitelisted IPs, check the existing IPs for completeness.
data-conn-trouble
DataRobot detects the date and/or time format (<a target="_blank" href="https://docs.python.org/2/library/datetime#strftime-and-strptime-behavior">standard GLIBC strings</a>) for the selected feature. Verify that it is correct. If the format displayed does not accurately represent the date column(s) of your dataset, modify the original dataset to match the detected format and re-upload it. ![](images/otp-detect.png) Configure the backtesting partitions. You can set them from the dropdowns (applies global settings) or by clicking the [bars in the visualization](#change-backtest-partitions) (applies individual settings). Individual settings override global settings. Once you modify settings for an individual backtest, any changes to the global settings are not applied to the edited backtest. ![](images/otp-backtesting.png) ??? info "Date/date range representation" DataRobot uses <em>date points</em> to represent dates and date ranges within the data, applying the following principles: * All date points adhere to ISO 8601, UTC (e.g., 2016-05-12T12:15:02+00:00), an internationally accepted way to represent dates and times, with some small variation in the duration format. Specifically, there is no support for ISO weeks (e.g., P5W). * Models are trained on data between two ISO dates. DataRobot displays these dates as a date range, but inclusion decisions and all key boundaries are expressed as date points. When you specify a date, DataRobot includes start dates and excludes end dates. * Once changes are made to formats using the date partitioning column, DataRobot converts all charts, selectors, etc. to this format for the project. ## Set backtest partitions globally {: #set-backtest-partitions-globally } The following table describes global settings: | | Selection | Description | |---|---|---| | ![](images/icon-1.png) | [Number of backtests](#set-the-number-of-backtests) | Configures the number of backtests for your project, the time-aware equivalent of cross-validation (but based on time periods or durations instead of random rows). | | ![](images/icon-2.png) | [Validation length](#set-the-validation-length) | Configures the size of the testing data partition. | | ![](images/icon-3.png) | [Gap length](#set-the-gap-length) | Configures spaces in time, representing gaps between model training and model deployment.| | ![](images/icon-4.png) | [Sampling method](#set-rows-or-duration) | Sets whether to use duration or rows as the basis for partitioning, and whether to use random or latest data.| See the table above for a description of the backtesting section's display elements. !!! note When changing partition year/month/day settings, note that the month and year values rebalance to fit the larger class (for example, 24 months becomes two years) when possible. However, because DataRobot cannot account for leap years or days in a month as it relates to your data, it cannot convert days into the larger container. ### Set the number of backtests {: #set-the-number-of-backtests } You can change the number of [backtests](#understanding-backtests), if desired. The default number of backtests is dependent on the project parameters, but you can configure up to 20. Before setting the number of backtests, use the histogram to validate that the training and validation sets of each fold will have sufficient data to train a model. Requirements are: * For OTV, backtests require at least 20 rows in each validation and holdout fold and at least 100 rows in each training fold. 
If you set a number of backtests that results in any of the partitions not meeting those criteria, DataRobot only runs the number of backtests that do meet the minimums (and marks the display with an asterisk).
* For time series, backtests require at least 4 rows in validation and holdout and at least 20 rows in the training fold. If you set a number of backtests that results in any of the partitions not meeting those criteria, the project could fail. See the [time series partitioning reference](ts-customization) for more information.

![](images/otp-backtest-set.png)

By default, DataRobot creates a holdout fold for training models in your project. [In some cases](ts-date-time#partition-without-holdout), however, you may want to create a project without a holdout set. To do so, uncheck the **Add Holdout fold** box. If you disable the holdout fold, the holdout score column does not appear on the Leaderboard (and you have no option to unlock holdout). Any tabs that provide an option to switch between Validation and Holdout will not show the Holdout option.

!!! note
    If you build a project with a single backtest, the Leaderboard does not display a backtest column.

### Set the validation length {: #set-the-validation-length }

To modify the duration, perhaps because of a warning message, click the dropdown arrow in the **Validation length** box and enter duration specifics. Validation length can also be set by [clicking the bars](#change-backtest-partitions) in the visualization. Note the change that modifications make to the testing representation:

![](images/otp-val-length-1.png)

### Set the gap length {: #set-the-gap-length }

Optionally, set the [gap](#understanding-gaps) length from the **Gap Length** dropdown. The gap is initially set to zero, meaning DataRobot does not process a gap in testing. When set, DataRobot excludes the data that falls in the gap from use in training or evaluation of the model. Gap length can also be set by [clicking the bars](#change-backtest-partitions) in the visualization.

![](images/otp-gap-length.png)

### Set rows or duration {: #set-rows-or-duration }

By default, DataRobot ensures that each backtest has the same _duration_, either the default or the values set from the dropdown(s) or via the [bars in the visualization](#change-backtest-partitions). If you want each backtest to use the same number of _rows_ instead of the same length of time, use the **Equal rows per backtest** toggle:

![](images/otp-force-rows.png)

Time series projects also have an option to set rows or duration for the training data, used as the basis for feature engineering, in the [training window format](ts-customization#duration-and-row-count) section.

Once you have selected the mechanism/mode for assigning data to backtests, select the sampling method, either **Random** or **Latest**, to choose how rows are assigned from the dataset. Setting the sampling method is particularly useful if a dataset is not distributed equally over time. For example, if data is skewed toward the most recent date, the results of using 50% of random rows versus 50% of the latest rows will be quite different. By selecting the data more precisely, you have more control over the data that DataRobot trains on.

## Change backtest partitions {: #change-backtest-partitions }

If you don't modify any settings, DataRobot distributes rows to backtests equally. However, you can customize an individual backtest's gap, training, validation, and holdout data by clicking the corresponding bar or the pencil icon (![](images/icon-pencil.png)) in the visualization.
Note that:

* You can only set holdout in the Holdout backtest ("backtest 0"); you cannot change the training data size in that backtest.
* If, during the initial partitioning detection, the backtest configuration of the ordering (date/time) feature, series ID, or target results in insufficient rows to cover both validation and holdout, DataRobot automatically disables holdout. If other partitioning settings are changed (validation or gap duration, start/end dates, etc.), holdout is not affected unless manually disabled.
* When **Equal rows per backtest** is checked (which sets the partitions to row-based assignment), only the Training End date is applicable.
* When **Equal rows per backtest** is checked, the dates displayed are informative only (that is, they are approximate) and they include padding that is set by the feature derivation and forecast point windows.

### Edit individual backtests {: #edit-individual-backtests }

Regardless of whether you are setting training, gaps, validation, or holdout, elements of the editing screens function the same. Hover over a data element to display a tooltip that reports specific duration information:

![](images/otp-hover.png)

Click a section (1) to open the tool for modifying the start and/or end dates; click in the box (2) to open the calendar picker.

![](images/otp-reset-partition.png)

Triangle markers indicate the corresponding boundaries. The larger blue triangle (![](images/icon-blue-triangle.png)) marks the active boundary&mdash;the boundary that will be modified if you apply a new date in the calendar picker. The smaller orange triangle (![](images/icon-orange-triangle.png)) identifies the other boundary points that can be changed but are not currently selected. The current duration for training, validation, and gap (if configured) is reported under the date entry box:

![](images/otp-report.png)

Once you have made changes to a data element, DataRobot adds an **EDITED** label to the backtest.

![](images/otp-edit.png)

There is no way to remove the **EDITED** label from a backtest, even if you manually reset the durations back to the original settings. If you want to be able to apply global duration settings across all backtests, [copy the project](manage-projects#project-actions-menu) and restart.

### Modify training and validation {: #modify-training-and-validation }

To modify the duration of the training or validation data for an individual backtest:

1. Click in the backtest to open the calendar picker tool.
2. Click the triangle for the element you want to modify&mdash;options are training start (default), training end/validation start, or validation end.
3. Modify dates as required.

### Modify gaps {: #modify-gaps }

A gap is a period between the end of the training set and the start of the validation set, resulting in data being intentionally ignored during model training. You can set the [gap](#gaps) length globally or for an individual backtest. To set a gap, add time between training end and validation start. You can do this by ending training sooner, starting validation later, or both.

1. Click the triangle at the end of the training period.
2. Click the **Add Gap** link.

    ![](images/otp-add-gap.png)

    DataRobot adds an additional triangle marker. Although they appear next to each other, both the selected (blue) and inactive (orange) triangles represent the same date. They are slightly spaced to make them selectable.

3. Optionally, set the **Training End Date** using the calendar picker.
The date you set will be the beginning of the gap period (training end = gap start). 4. Click the orange **Validation Start Date** marker; the marker changes to blue, indicating that it's selected. 5. Optionally, set the Validation Start Date (validation start = gap end). The gap is represented by a yellow band; hover over the band to view the duration. ### Modify the holdout duration {: #modify-the-holdout-duration } To modify the holdout length, click in the red (holdout area) of backtest 0, the holdout partition. Click the displayed date in the **Holdout Start Date** to open the calendar picker and set a new date. If you modify the holdout partition and the new size results in potential problems, DataRobot displays a warning icon next to the Holdout fold. Click the warning icon (![](images/icon-warning.png)) to expand the dropdown and reset the duration/date fields. ![](images/otp-holdout-warn.png) ### Lock the duration {: #lock-the-duration } You may want to make backtest <em>date</em> changes without modifying the duration of the selected element. You can lock duration for training, for validation, or for the combined period. To lock duration, click the triangle at one end of the period. Next, hold the **Shift** key and select the triangle at the other end of the locked duration. DataRobot opens calendar pickers for each element: ![](images/otp-duration-locked.png) Change the date in either entry. Notice that the other date updates to mirror the duration change you made. ## Interpret the display {: #interpret-the-display } The date/time partitioning display represents the training and validation data partitions as well as their respective sizes/durations. Use the visualization to ensure that your models are validating on the area of interest. The chart shows, for each backtest, the specific time period of values for the training, validation, and if applicable, holdout and gap data. Specifically, you can observe, for each backtest, whether the model will be representing an interesting or relevant time period. Will the scores represent a time period you care about? Is there enough data in the backtest to make the score valuable? ![](images/otp-rep.png) The following table describes elements of the display: | Element | Description | |--------------|---------------| | Observations | The [binned](lift-chart#lift-chart-binning) distribution of values (i.e., frequency), before downsampling, across the dataset. This is the same information as displayed in the feature’s histogram. | | Available Training Data | The blue color bar indicates the training data available for a given fold. That is, all available data minus the validation or holdout data. | | Primary Training Data | The dashed outline indicates the maximum amount of data you can train on to get scores from all backtest folds. You can later choose any time window for training, but depending on what you select, you may not then get all backtest scores. (This could happen, for example, if you train on data greater than the primary training window.) If you train on data less than or equal to the Primary Training Data value, DataRobot completes all backtest scores. If you train on data greater than this value, DataRobot runs fewer tests and marks the backtest score with an asterisk (\*). This value is dependent on (changed by) the number of configured backtests. | | Gap | A gap between the end of the training set and the start of the validation set, resulting in the data being intentionally ignored during model training. 
| | Validation | A set of data indicated by a green bar that is not used for training (because DataRobot selects a different section at each backtest). It is similar to traditional [validation](partitioning), except that it is time based. The validation set starts immediately at the end of the primary training data (or the end of the gap). | | Holdout (only if **Add Holdout fold** is checked) | The reserved (never seen) portion of data used as a final test of model quality once the model has been trained and validated. When using date/time partitioning, [holdout](data-partitioning) is a duration or row-based portion of the training data instead of a random subset. By default, the holdout data size is the same as the validation data size and always contains the latest data. (Holdout size is user-configurable, however.) | | Backtest*x* | Time- or row-based folds used for training models. The Holdout backtest is known as "backtest 0" and labeled as Holdout in the visualization. For small datasets and for the highest-scoring model from Autopilot, DataRobot runs all backtests. For larger datasets, the first backtest listed is the one DataRobot uses for model building. Its score is reported in the Validation column of the Leaderboard. Subsequent backtests are not run until manually initiated on the Leaderboard. | Additionally, the display includes **Target Over Time** and **Observations** histograms. Use these displays to visualize the span of times where models are compared, measured, and assessed&mdash;to identify "regions of interest." For example, the displays help to determine the density of data over time, whether there are gaps in the data, etc. ![](images/otp-graphs.png) In the displays, the green represents the selection of data that DataRobot is validating the model on. The "All Backtest" score is the average of this region. The gradation marks each backtest and its potential overlap with training data. Study the **Target Over Time** graph to find interesting regions where there is some data fluctuation. It may be interesting to compare models over these regions. Use the **Observations** chart to determine whether, roughly speaking, the amount of data in a particular backtest is suitable. Finally, you can click the red, locked holdout section to see where in the data the holdout scores are being measured and whether it is a consistent representation of your dataset.
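As a purely conceptual illustration (not DataRobot's implementation), the sketch below shows how duration-based backtest validation windows relate to one another: each window walks back from the latest timestamp, with an optional gap before it, and start dates are inclusive while end dates are exclusive, mirroring the date-point convention above. The function name and example dates are hypothetical.

```python
from datetime import datetime, timedelta

def backtest_windows(latest, n_backtests, validation_days, gap_days=0):
    """Return (train_end, validation_start, validation_end) per backtest,
    walking back from the latest timestamp."""
    windows = []
    validation_end = latest
    for _ in range(n_backtests):
        validation_start = validation_end - timedelta(days=validation_days)
        train_end = validation_start - timedelta(days=gap_days)
        windows.append((train_end, validation_start, validation_end))
        validation_end = validation_start  # the next backtest shifts earlier in time
    return windows

# Hypothetical example: 3 backtests, 30-day validation windows, 7-day gap.
for train_end, val_start, val_end in backtest_windows(
        datetime(2016, 5, 12), n_backtests=3, validation_days=30, gap_days=7):
    print(f"train < {train_end:%Y-%m-%d} | gap | validate [{val_start:%Y-%m-%d}, {val_end:%Y-%m-%d})")
```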
date-time-include-1
| | Element | Description | |---|---|---| | ![](images/icon-1.png) | Selected word | Displays details about the selected word. (The term *word* here equates to an [*n-gram*](glossary/index#ngram), which can be a sequence of words.) <br><br>Mouse over a word to select it. Words that appear more frequently display in a larger font size in the **Word Cloud**, and those that appear less frequently display in smaller font sizes.| | ![](images/icon-2.png) | Coefficient | Displays the [coefficient](coefficients#coefficientpreprocessing-information-with-text-variables) value specific to the word.| | ![](images/icon-3.png) | Color spectrum | Displays a legend for the color spectrum and values for words, from blue to red, with blue indicating a negative effect and red indicating a positive effect. | | ![](images/icon-4.png) | Appears in # rows| Specifies the number of rows the word appears in. | | ![](images/icon-5.png) | Filter stop words | Removes stop words (commonly used terms that can be excluded from searches) from the display. | | ![](images/icon-6.png) | Export | Allows you to [export](export-results) the **Word Cloud**. | | ![](images/icon-7.png) | Zoom controls | Enlarges or reduces the image displayed on the canvas. Alternatively, double-click on the image. To move areas of the display into focus, click and drag. | | ![](images/icon-8.png) | Select class | For multiclass projects, selects the class to investigate using the **Word Cloud**. | ??? info "Word Cloud availability" You can access **Word Cloud** from either the **Insights** page or the Leaderboard. Operationally, each version of the model behaves the same&mdash;use the Leaderboard tab to view a **Word Cloud** while investigating an individual model and the **Insights** page to access, and compare, each **Word Cloud** for a project. Additionally, they are available for multimodal datasets (i.e., datasets that mix images, text, categorical, etc.)&mdash;a **Word Cloud** is displayed for all text from the data. The **Word Cloud** visualization is supported in the following model types and blueprints: * Binary classification: * All variants of ElasticNet Classifier (linear family models) with the exception of TinyBERT ElasticNet classifier and FastText ElasticNet classifier * LightGBM on ElasticNet Predictions * Text fit on Residuals * Extended support for multimodal datasets (with single Auto-Tuned N-gram) * Multiclass: * Stochastic Gradient Descent with at least 1 text column with the exception of TinyBERT SGD classifier and FastText SGD classifier * Regression: * Ridge Regressor * ElasticNet Regressor * Lasso Regressor * Single Auto-Tuned Multi-Modal * LightGBM on ElasticNet Predictions * Text fit on Residuals * Keras !!! note The **Word Cloud** for a model is based on the data used to train that model, not on the entire dataset. For example, a model trained on a 32% sample size will result in a **Word Cloud** that reflects those same 32% of rows. See [Text-based insights](analyze-insights#text-based-insights) for a description of how DataRobot handles single-character words.
word-cloud-include
??? info "Category Cloud availability" The **Category Cloud** insight is available on the **Models > Insights** tab and on the **Data** tab. On the **Insights** page, you can compare word clouds for a project's categorically-based models. From the **Data** page you can more easily compare clouds across features. Note that the **Category Cloud** is not created when using a multiclass target. Keys are displayed in a color spectrum from blue to red, with blue indicating a negative effect and red indicating a positive effect. Keys that appear more frequently are displayed in a larger font size, and those that appear less frequently are displayed in smaller font sizes. Check the **Filter stop words** box to remove stopwords (commonly used terms that can be excluded from searches) from the display. Removing these words can improve interpretability if the words are not informative to the Auto-Tuned Summarized Categorical Model. Mouse over a key to display the coefficient value specific to that key and to read its full name (displayed with the information to the left of the cloud). Note that the names of keys are truncated to 20 characters when displayed in the cloud and limited to 100 characters otherwise.
category-cloud-include
## Predict and deploy {: #predict-and-deploy } Once you identify the model that best learns patterns in your data to predict SARs, you can deploy it into your desired decision environment. *Decision environments* are the ways in which the predictions generated by the model will be consumed by the appropriate organizational [stakeholders](#decision-stakeholders), and how these stakeholders will make decisions using the predictions to impact the overall process. This is a critical step for implementing the use case, as it ensures that predictions are used in the real world to reduce false positives and improve efficiency in the investigation process. The following applications of the alert-prioritization score from the false positive reduction model both automate and augment the existing rule-based transaction monitoring system. * If the FCC (Financial Crime Compliance) team is comfortable with removing the low-risk alerts (very low prioritization score) from the scope of investigation, then the binary threshold selected during the model-building stage will be used as the cutoff to remove those no-risk alerts. The investigation team will only investigate alerts above the cutoff, which will still capture all the SARs based on what was learned from the historical data. * Often regulatory agencies will consider auto-closure or auto-removal as an aggressive treatment for production alerts. If auto-closing is not the ideal way to use the model output, the alert prioritization score can still be used to triage alerts into different investigation processes, improving the operational efficiency. ### Decision stakeholders {: #decision-stakeholders } The following table lists potential decision stakeholders: Stakeholder | Description ----------- | ----------- Decision Executors | Financial Crime Compliance Team Decision Managers |Chief Compliance Officer Decision Authors | Data scientists or business analysts ### Decision process {: #decision-process } Currently, the review process consists of a deep-dive analysis by investigators. The data related to the case is made available for review so that the investigators can develop a 360° view of the customer, including their profile, demographic, and transaction history. Additional data from third-party data providers and web crawling can supplement this information to complete the picture. For transactions that do not get auto-closed or auto-removed, the model can help the compliance team create a more effective and efficient review process by triaging their reviews. The predictions and their explanations also give investigators a more holistic view when assessing cases. **Risk-based Alert Triage:** Based on the prioritization score, the investigation team can take different investigation strategies. * For no-risk or low-risk alerts&mdash;alerts can be reviewed on a quarterly basis, instead of monthly. The frequently alerted entities without any SAR risk will be reviewed once every three months, which will significantly reduce the time of investigation. * For high-risk alerts with higher prioritization scores&mdash;investigations can fast-forward to the final stage in the alert escalation path. This will significantly reduce the effort spent on level 1 and level 2 investigations. * For medium-risk alerts&mdash;the standard investigation process can still be applied. 
**Smart Alert Assignment:** For an alert investigation team that is geographically dispersed, the alert prioritization score can be used to assign alerts to different teams more effectively. High-risk alerts can be assigned to the team with the most experienced investigators, while low-risk alerts are assigned to a less-experienced team. This mitigates the risk of missing suspicious activities due to a lack of experience during alert investigations.

For both approaches, the definition of high/medium/low risk could be either a set of hard thresholds (for example, High: score>=0.5, Medium: 0.5>score>=0.3, Low: score<0.3) or based on the percentile of the alert scores on a monthly basis (for example, High: above the 80th percentile, Medium: between the 50th and 80th percentiles, Low: below the 50th percentile), as illustrated in the sketch below.

### Model deployment {: #model-deployment }

The predictions generated from DataRobot can be integrated with an alert management system that notifies the investigation team of high-risk transactions.

![](images/aml-biz-13.png)

### Model monitoring {: #model-monitoring }

DataRobot continuously monitors the model deployed on the dedicated prediction server. With DataRobot [MLOps](mlops/index), the modeling team can monitor and manage the alert prioritization model by tracking the distribution drift of the input features as well as performance degradation over time.

![](images/aml-biz-14.png)

### Implementation considerations {: #implementation-considerations }

When operationalizing this use case, consider the following, which may impact outcomes and require model re-evaluation:

* Changes in the transactional behavior of money launderers.
* Novel information introduced in transaction and customer records that the machine learning models have not seen.
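To make the hard-threshold variant concrete, here is a minimal sketch of post-processing alert scores into triage tiers. It uses the example cutoffs above (0.5 and 0.3); the function name is hypothetical, and the cutoffs should be replaced by your own policy (or by monthly percentiles).

```python
def risk_tier(score, high=0.5, medium=0.3):
    """Map an alert-prioritization score to a triage tier using the example
    hard thresholds above; replace with your own cutoffs or monthly percentiles."""
    if score >= high:
        return "high"    # fast-forward to the final stage of the escalation path
    if score >= medium:
        return "medium"  # standard investigation process
    return "low"         # reviewed on a quarterly basis

alert_scores = [0.82, 0.41, 0.07]
print([risk_tier(score) for score in alert_scores])  # ['high', 'medium', 'low']
```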
aml-3-include
Log in to GitHub before clicking this link.
github-sign-in
??? note "Time series blueprints with Scoring Code support" <span id="ts-sc-blueprint-support">The following blueprints typically support Scoring Code:</span> * AUTOARIMA with Fixed Error Terms * ElasticNet Regressor (L2 / Gamma Deviance) using Linearly Decaying Weights with Forecast Distance Modeling * ElasticNet Regressor (L2 / Gamma Deviance) with Forecast Distance Modeling * ElasticNet Regressor (L2 / Poisson Deviance) using Linearly Decaying Weights with Forecast Distance Modeling * ElasticNet Regressor (L2 / Poisson Deviance) with Forecast Distance Modeling * Eureqa Generalized Additive Model (250 Generations) * Eureqa Generalized Additive Model (250 Generations) (Gamma Loss) * Eureqa Generalized Additive Model (250 Generations) (Poisson Loss) * Eureqa Regressor (Quick Search: 250 Generations) * eXtreme Gradient Boosted Trees Regressor * eXtreme Gradient Boosted Trees Regressor (Gamma Loss) * eXtreme Gradient Boosted Trees Regressor (Poisson Loss) * eXtreme Gradient Boosted Trees Regressor with Early Stopping * eXtreme Gradient Boosted Trees Regressor with Early Stopping (Fast Feature Binning) * eXtreme Gradient Boosted Trees Regressor with Early Stopping (Gamma Loss) * eXtreme Gradient Boosted Trees Regressor with Early Stopping (learning rate =0.06) (Fast Feature Binning) * eXtreme Gradient Boosting on ElasticNet Predictions * eXtreme Gradient Boosting on ElasticNet Predictions (Poisson Loss) * Light Gradient Boosting on ElasticNet Predictions * Light Gradient Boosting on ElasticNet Predictions (Gamma Loss) * Light Gradient Boosting on ElasticNet Predictions (Poisson Loss) * Performance Clustered Elastic Net Regressor with Forecast Distance Modeling * Performance Clustered eXtreme Gradient Boosting on Elastic Net Predictions * RandomForest Regressor * Ridge Regressor using Linearly Decaying Weights with Forecast Distance Modeling * Ridge Regressor with Forecast Distance Modeling * Vector Autoregressive Model (VAR) with Fixed Error Terms * IsolationForest Anomaly Detection with Calibration (time series) * Anomaly Detection with Supervised Learning (XGB) and Calibration (time series) While the blueprints listed above support Scoring Code, there are situations when Scoring Code is unavailable: * Scoring Code might not be available for some models generated using [Feature Discovery](fd-time). * Consistency issues can occur for non day-level calendars when the event is not in the dataset; therefore, Scoring Code is unavailable. * Consistency issues can occur when inferring the forecast point in situations with a non-zero [blind history](glossary/index#blind-history); however, Scoring Code is still available in this scenario. * Scoring Code might not be available for some models that use text tokenization involving the MeCab tokenizer. * Differences in rolling sum computation can cause consistency issues in projects with a weight feature and models trained on feature lists with `weighted std` or `weighted mean`. ??? note "Time series Scoring Code capabilities" The following capabilities are currently supported for time series Scoring Code: * [Time series parameters](sc-time-series#time-series-parameters-for-cli-scoring) for scoring at the command line. 
* [Segmented modeling](sc-time-series#scoring-code-for-segmented-modeling-projects) * [Prediction intervals](sc-time-series#prediction-intervals-in-scoring-code) * [Calendars](ts-adv-opt#calendar-files) (high resolution) * [Cross-series](ts-adv-opt#enable-cross-series-feature-generation) * [Zero inflated](ts-feature-lists#zero-inflated-models) / naïve binary * [Nowcasting](nowcasting) (historical range predictions) * ["Blind history" gaps](glossary/index#blind-history) * [Weighted features](ts-adv-opt#apply-weights) The following time series capabilities are not supported for Scoring Code: * Row-based / irregular data * Nowcasting (single forecast point) * Intramonth seasonality * Time series blenders * Autoexpansion * EWMA (Exponentially Weighted Moving Average)
scoring-code-consider-ts
No-Code AI Apps let you build and configure AI-powered applications using a no-code interface, enabling core DataRobot services without having to build models and evaluate their performance in DataRobot. Applications are easily shared and do not require users to have full DataRobot licenses to use them, making them a practical way to broaden your organization's ability to use DataRobot functionality.
no-code-app-intro
The **Over time** chart helps you identify trends and potential gaps in your data by displaying, for both the original modeling data and the derived data, how a feature changes over the primary date/time feature. It is available for all time-aware projects (OTV, single series, and multiseries). For time series, it is available for each user-configured forecast distance. Using the page's tools, you can focus on specific time periods. Display options for OTV and single-series projects differ from those for multiseries projects.

Note that to view the **Over time** chart, you must first compute chart data. Once computed:

1. Set the chart's granularity. The resolution options are auto-detected by DataRobot. All project types allow you to set a resolution (this option is under **Additional settings** for multiseries projects).

    ![](images/fot-resolution.png)

2. Toggle the histogram display on and off to see a visualization of the bins DataRobot is using for [EDA1](eda-explained#eda1).

3. Use the date range slider below the chart to highlight a specific region of the time plot. For smaller datasets, you can drag the sliders to a selected portion. Larger datasets use block pagination.

    ![](images/fot-slide.png)

4. For multiseries projects, you can set both the forecast distance and an individual series (or average across series) to plot:

    ![](images/fot-resolution-multi.png)

For time series projects, the **Data** page also provides a [Feature Lineage](#feature-lineage-tab) chart to help understand the creation process for derived features.

## Partition without holdout {: #partition-without-holdout }

Sometimes, you may want to create a project without a holdout set, for example, if you have limited data points. Date/time partitioning projects have a minimum data ingest size of 140 rows. If **Add Holdout fold** is not checked, the minimum ingest becomes 120 rows.

By default, DataRobot creates a holdout fold. When you toggle the switch off, the red holdout fold disappears from the representation (only the backtests and validation folds are displayed) and backtests recompute and shift to the right. Other configuration functionality remains the same&mdash;you can still modify the validation length and gap length, as well as the number of backtests. On the Leaderboard, after the project builds, you see validation and backtest scores, but no holdout score or **Unlock Holdout** option.

The following list describes other differences when you do not create a holdout fold:

* Both the [**Lift Chart**](lift-chart#change-the-display) and [**ROC Curve**](pred-dist-graph#data-selection) can only be built using the validation set as their **Data Source**.
* The [**Model Info**](model-info) tab shows no holdout backtest or holdout-related warnings.
* You can only compute predictions for **All data** and the **Validation** set from the [**Predict**](predict#why-use-training-data-for-predictions) tab.
* The [**Learning Curves**](learn-curve) graph does not plot any models trained into Validation or Holdout.
* [**Model Comparison**](model-compare) uses results only from validation and backtesting.
date-time-include-4
### Business problem {: #business-problem }

Because, on average, it takes roughly 20 days to process an auto insurance claim (which often frustrates policyholders), insurance companies look for ways to increase the efficiency of their claims workflows. Increasing the number of claim handlers is expensive, so companies have increasingly relied on automation to accelerate the process of paying or denying claims. Automation can increase Straight-Through Processing (STP) by more than 20%, resulting in faster claims processing and improved customer satisfaction.

However, as insurance companies increase the speed at which they process claims, they also increase their risk of exposure to fraudulent claims. Unfortunately, most of the systems widely used to prevent fraudulent claims from being processed either require high amounts of manual labor or rely on static rules.

## Solution value {: #solution-value }

While Business Rule Management Systems (BRMS) will always be required&mdash;they implement mandatory rules related to compliance&mdash;you can supplement these systems by improving the accuracy of predicting which incoming claims are fraudulent. Using historical cases of fraud and their associated features, AI can apply learnings to new claims to assess whether they share characteristics of the learned fraudulent patterns.

Unlike BRMS, which are static and have hard-coded rules, AI generates a probabilistic prediction and provides transparency on the unique drivers of fraud for each suspicious claim. This allows investigators not only to route and triage claims by their likelihood of fraud, but also to accelerate the review process, as they know which vectors of a claim they should evaluate. The probabilistic predictions also allow investigators to set thresholds that automatically approve or reject claims.

### Problem framing {: #problem-framing }

Work with [stakeholders](#decision-stakeholders) to identify and prioritize the decisions for which automation will offer the greatest business value. In this example, stakeholders agreed that achieving over 20% STP in claims payment was a critical success factor and that minimizing fraud was a top priority. Working with subject matter experts, the team developed a shared understanding of STP in claims payment and built decision logic for claims processing:

Step | Best practice
---- | -------------
Determine which decisions to automate. | Automate simple claims and send the more complex claims to a human claims processor.
Determine which decisions will be based on business rules and which will be based on machine learning. | Manage decisions that rely on compliance and business strategy with rules. Use machine learning for decisions that rely on experience, including whether a claim is fraudulent and how much the payment will be.

Once the decision logic is in good shape, it is time to build business rules and machine learning models. Clarifying the decision logic reveals the true data needs, which helps decision owners see exactly what data and analytics drive decisions.

### ROI estimation {: #roi-estimation }

One way to frame the problem is to determine how to measure ROI. Consider that an STP use case typically involves multiple AI models. For example, fraud detection, claims severity prediction, and litigation likelihood prediction are common use cases for models that can augment business rules and human judgment.
Insurers implementing fraud detection models have reduced payments to fraud by 15% to 25% annually, saving $1 million to $3 million. To measure ROI (a quick arithmetic check of these figures appears at the end of this section):

1. Identify the number of fraudulent claims that models detected but manual processing failed to identify (false negatives).

2. Calculate the monetary amount that would have been paid on these fraudulent claims if machine learning had not flagged them as fraud.

    `100 fraudulent claims * $20,000 each on average = $2 million per year`

3. Identify fraudulent claims that manual investigation detected but machine learning failed to detect.

4. Calculate the monetary amount that would have been paid without manual investigation.

    `40 fraudulent claims * $5,000 each on average = $0.2 million per year`

The difference between these two numbers is the ROI:

`$2 million – $0.2 million = $1.8 million per year`

## Working with data {: #working-with-data }

For illustrative purposes, this guide uses a simulated dataset that resembles insurance company data. The dataset consists of 10,746 rows and 45 columns.

![](images/fraud-claim-12.png)

### Features and sample data {: #features-and-sample-data }

The target variable for this use case is whether or not a submitted claim is fraudulent. It is a binary classification problem. In this dataset, 1,746 of 10,746 claims (16%) are fraudulent.

The target variable:

* `FRAUD`

### Data preparation {: #data-preparation }

Below are examples of 44 features that can be used to train a model to identify fraud. They consist of historical data on customer policy details, claims data (including a free-text description), and internal business rules from national databases. These features help DataRobot extract relevant patterns to detect fraudulent claims.

Beyond the features listed below, it might help to incorporate any additional data your organization collects that could be relevant to detecting fraudulent claims. For example, DataRobot is able to process image data as a feature together with numeric, categorical, and text features. Images of vehicles after an accident may be useful to detect fraud and help predict severity.

Data from the claim table, policy table, customer table, and vehicle table are merged with customer ID as a key. Only data known before or at the time of the claim creation is used, except for the target variable. Each record in the dataset is a claim.
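A quick back-of-the-envelope check of the ROI estimation above, using the illustrative figures from this guide (not benchmarks):

```python
# Illustrative figures from the ROI estimation above (not benchmarks).
saved_by_model = 100 * 20_000      # fraud the model caught that manual review missed
lost_without_manual = 40 * 5_000   # fraud manual review caught that the model missed

annual_roi = saved_by_model - lost_without_manual
print(f"${annual_roi:,} per year")  # $1,800,000 per year
```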
### Sample feature list {: #sample-feature-list }

Feature name | Data type | Description | Data source | Example
------------ | --------- | ----------- | ----------- | -------
ID | Numeric | Claim ID | Claim | 156843
FRAUD | Numeric | Target | Claim | 0
DATE | Date | Date of Policy | Policy | 31/01/2013
POLICY_LENGTH | Categorical | Length of Policy | Policy | 12 month
LOCALITY | Categorical | Customer’s locality | Customer | OX29
REGION | Categorical | Customer’s region | Customer | OX
GENDER | Numeric | Customer’s gender | Customer | 1
CLAIM\_POLICY\_DIFF\_A | Numeric | Internal | Policy | 0
CLAIM\_POLICY\_DIFF\_B | Numeric | Internal | Policy | 0
CLAIM\_POLICY\_DIFF\_C | Numeric | Internal | Policy | 1
CLAIM\_POLICY\_DIFF\_D | Numeric | Internal | Policy | 0
CLAIM\_POLICY\_DIFF\_E | Numeric | Internal | Policy | 0
POLICY\_CLAIM\_DAY\_DIFF | Numeric | Number of days since policy taken | Policy, Claim | 94
DISTINCT\_PARTIES\_ON\_CLAIM | Numeric | Number of people on claim | Claim | 4
CLM\_AFTER\_RNWL | Numeric | Renewal History | Policy | 0
NOTIF\_AFT\_RENEWAL | Numeric | Renewal History | Policy | 0
CLM\_DURING\_CAX | Numeric | Cancellation claim | Policy | 0
COMPLAINT | Numeric | Customer complaint | Policy | 0
CLM\_before\_PAYMENT | Numeric | Claim before premium paid | Policy, Claim | 0
PROP\_before\_CLM | Numeric | Claim History | Claim | 0
NCD\_REC\_before\_CLM | Numeric | Claim History | Claim | 1
NOTIF\_DELAY | Numeric | Delay in notification | Claim | 0
ACCIDENT\_NIGHT | Numeric | Night time accident | Claim | 0
NUM\_PI\_CLAIM | Numeric | Number of personal injury claims | Claim | 0
NEW\_VEHICLE\_BEFORE\_CLAIM | Numeric | Vehicle History | Vehicle, Claim | 0
PERSONAL_INJURY_INDICATOR | Numeric | Personal Injury flag | Claim | 0
CLAIM\_TYPE\_ACCIDENT | Numeric | Claim details | Claim | 1
CLAIM\_TYPE\_FIRE | Numeric | Claim details | Claim | 0
CLAIM\_TYPE\_MOTOR\_THEFT | Numeric | Claim details | Claim | 0
CLAIM\_TYPE\_OTHER | Numeric | Claim details | Claim | 0
CLAIM\_TYPE\_WINDSCREEN | Numeric | Claim details | Claim | 0
LOCAL\_TEL\_MATCH | Numeric | Internal Rule Matching | Claim | 0
LOCAL\_M\_CLM\_ADD\_MATCH | Numeric | Internal Rule Matching | Claim | 0
LOCAL\_M\_CLM\_PERS\_MATCH | Numeric | Internal Rule Matching | Claim | 0
LOCAL\_NON\_CLM\_ADD\_MATCH | Numeric | Internal Rule Matching | Claim | 0
LOCAL\_NON\_CLM\_PERS\_MATCH | Numeric | Internal Rule Matching | Claim | 0
federal\_TEL\_MATCH | Numeric | Internal Rule Matching | Claim | 0
federal\_CLM\_ADD\_MATCH | Numeric | Internal Rule Matching | Claim | 0
federal\_CLM\_PERS\_MATCH | Numeric | Internal Rule Matching | Claim | 0
federal\_NON\_CLM\_ADD\_MATCH | Numeric | Internal Rule Matching | Claim | 0
federal\_NON\_CLM\_PERS\_MATCH | Numeric | Internal Rule Matching | Claim | 0
SCR\_LOCAL\_RULE\_COUNT | Numeric | Internal Rule Matching | Claim | 0
SCR\_NAT\_RULE\_COUNT | Numeric | Internal Rule Matching | Claim | 0
RULE\_MATCHES | Numeric | Internal Rule Matching | Claim | 0
CLAIM_DESCRIPTION | Text | Customer Claim Text | Claim | this via others themselves inc become within ours slow parking lot fast vehicle roundabout mall not indicating car caravan neck emergency

## Modeling and insights {: #modeling-and-insights }

DataRobot automates many parts of the modeling pipeline, including processing and partitioning the dataset, as described [here](model-data). That activity is not repeated here; instead, the following sections describe model interpretation.
Reference the [DataRobot documentation](gs-dr-fundamentals) to see how to use DataRobot from start to finish and how to understand the data science methodologies embedded in its automation.

### Feature Impact {: #feature-impact }

[**Feature Impact**](feature-impact) reveals that the number of past personal injury claims (`NUM_PI_CLAIM`) and internal rule matches (`LOCAL_M_CLM_PERS_MATCH`, `RULE_MATCHES`, `SCR_LOCAL_RULE_COUNT`) are among the most influential features in detecting fraudulent claims.

### Feature Effects/partial dependence {: #partial-dependence }

![](images/fraud-claim-3.png)

The [partial dependence plot](feature-effects#partial-dependence-calculations) in **Feature Effects** shows that the larger the number of personal injury claims (`NUM_PI_CLAIM`), the higher the likelihood of fraud. As expected, when a claim matches internal red flag rules, its likelihood of being fraudulent increases greatly. Interestingly, `GENDER` and `CLAIM_TYPE_MOTOR_THEFT` (car theft) are also strong features.

### Word Cloud {: #word-cloud }

![](images/fraud-claim-4.png)

The current data includes `CLAIM_DESCRIPTION` as text. A [**Word Cloud**](word-cloud) reveals that customers who use the term "roundabout," for example, are more likely to be committing fraud than those who use the term "emergency." (The size of a word indicates how many rows include the word; a deeper red indicates a stronger association with claims scored as fraudulent. Blue words are terms associated with claims scored as non-fraudulent.)

### Prediction Explanations {: #prediction-explanations }

![](images/fraud-claim-5.png)

[**Prediction Explanations**](pred-explain/index) provide up to 10 reasons for each prediction score. Explanations sent directly to Special Investigation Unit (SIU) agents and claim handlers provide useful information to check during investigation. For example, DataRobot not only predicts that Claim ID 8296 has a 98.5% chance of being fraudulent, but it also explains that this high score is due to a specific internal rule match (`LOCAL_M_CLM_PERS_MATCH`, `RULE_MATCHES`) and the policyholder’s six previous personal injury claims (`NUM_PI_CLAIM`). When claim advisors need to deny a claim, they can provide the reasons why by consulting Prediction Explanations.

### Evaluate accuracy {: #evaluate-accuracy }

There are several visualizations that help to evaluate accuracy.

#### Leaderboard {: #leaderboard }

Modeling results show that the ENET Blender is the most accurate model, with 0.93 AUC on cross-validation. This is an ensemble of eight single models. The high accuracy indicates that the model has learned signals to distinguish fraudulent from non-fraudulent claims. Keep in mind, however, that blenders take longer to score compared to single models and so may not be ideal for real-time scoring.

The Leaderboard shows that the modeling accuracy is stable across Validation, Cross Validation, and Holdout. Thus, you can expect to see similar results when you deploy the selected model.

![](images/fraud-claim-6.png)

#### Lift Chart {: #lift-chart }

The steep increase in the average target value on the right side of the [**Lift Chart**](lift-chart) reveals that, when the model predicts that a claim has a high probability of being fraudulent (blue line), the claim tends to actually be fraudulent (orange line).
![](images/fraud-claim-7.png)

#### Confusion matrix {: #confusion-matrix }

The [confusion matrix](confusion-matrix) shows:

* Of 2,149 claims in the holdout partition, the model predicted 372 claims as fraudulent and 1,777 claims as legitimate.
* Of the 372 claims predicted as fraud, 275 were actually fraudulent (true positives), and 97 were not (false positives).
* Of the 1,777 claims predicted as non-fraud, 1,703 were actually not fraudulent (true negatives) and 74 were fraudulent (false negatives).

![](images/fraud-claim-8.png)

Analysts can examine this table to determine if the model is accurate enough for business implementation.

### Post-processing {: #post-processing }

To convert model predictions into decisions, you determine the best thresholds to classify whether a claim is fraudulent.

#### ROC Curve {: #roc-curve }

Set the [**ROC Curve**](roc-curve-tab-use) threshold depending on how you want to use model predictions and on business constraints. Some examples:

If... | Then...
----- | -------
...the main use of the fraud detection model is to automate payment | ...minimize the false negatives (the number of fraudulent claims mistakenly predicted as not fraudulent) by adjusting the threshold used to classify prediction scores as fraud or not.
...the main use is to automate the transfer of suspicious claims to the SIU | ...minimize false positives (the number of non-fraudulent claims mistakenly predicted as fraudulent).
...you want to minimize the false negatives, but you do not want false positives to go over 100 claims because of the limited resources of SIU agents | ...lower the threshold just to the point where the number of false positives becomes 100.

![](images/fraud-claim-9.png)

#### Payoff matrix {: #payoff-matrix}

From the **Profit Curve** tab, use the [**Payoff Matrix**](profit-curve) to set thresholds based on simulated profit. For example:

Payoff value | Description
------------ | -----------
True positive = $20,000 | Average payment associated with a fraudulent claim.
False positive = -$20,000 | Assumes that a false positive means a human investigator will not be able to spend time detecting a real fraudulent claim.
True negative = $100 | Leads to auto-pay of the claim and saves costs by eliminating manual claim processing.
False negative = -$20,000 | Cost of missing a fraudulent claim.

DataRobot then automatically calculates the threshold that maximizes profit (a sketch of this threshold search appears at the end of this section). You can also measure DataRobot ROI by creating the same payoff matrix for your existing business process and subtracting the max profit of the existing process from that calculated by DataRobot.

![](images/fraud-claim-10.png)

Once the threshold is set, model predictions are converted into fraud or non-fraud according to the threshold. These classification results are integrated into the BRMS and become one of the many factors that determine the final decision.

## Predict and deploy {: #predict-and-deploy }

After selecting the model that best learns patterns to predict fraud, you can deploy it into your desired decision environment. Decision environments are the ways in which the predictions generated by the model will be consumed by the appropriate organizational stakeholders, and how these stakeholders will make decisions using the predictions to impact the overall process.
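As a minimal, hypothetical sketch of the payoff-matrix idea above (not DataRobot's implementation): try candidate thresholds and keep the one with the highest total payoff. The function name, payoff values, and toy labels/scores are illustrative only.

```python
import numpy as np

def best_threshold(y_true, y_score,
                   tp=20_000, fp=-20_000, tn=100, fn=-20_000):
    """Search candidate thresholds and return the one with the highest total payoff."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    best = (None, -np.inf)
    for threshold in np.unique(y_score):
        pred = y_score >= threshold
        payoff = (tp * np.sum(pred & (y_true == 1))    # caught fraud
                  + fp * np.sum(pred & (y_true == 0))  # wasted investigations
                  + tn * np.sum(~pred & (y_true == 0)) # auto-paid legitimate claims
                  + fn * np.sum(~pred & (y_true == 1)))  # missed fraud
        if payoff > best[1]:
            best = (threshold, payoff)
    return best

# Toy labels and scores, for illustration only.
threshold, payoff = best_threshold([0, 1, 0, 1, 0], [0.1, 0.9, 0.4, 0.6, 0.2])
print(threshold, payoff)  # 0.6  40300
```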
### Decision stakeholders {: #decision-stakeholders }

The following table lists potential decision stakeholders:

Stakeholder | Description
----------- | -----------
Decision executors | The decision logic assigns claims that require manual investigation to claim handlers (executors) and SIU agents based on claim complexity. They investigate the claims, referring to insights provided by DataRobot, and decide whether to pay or deny. Each week, they report a summary of the claims received and their decisions to decision authors.
Decision managers | Managers monitor the KPI dashboard, which visualizes the results of following the decision logic. For example, they track the number of fraudulent claims identified and missed. They can discuss with decision authors how to improve the decision logic each week.
Decision authors | Senior managers in the claims department examine the performance of the decision logic by receiving input from decision executors and decision managers. For example, decision executors indicate whether or not the claims flagged as fraudulent that they receive are reasonable, and decision managers indicate whether or not the rate of fraud is as expected. Based on these inputs, decision authors update the decision logic each week.

### Decision process {: #decision-process }

This use case blends augmentation and automation for decisions. Instead of claim handlers manually investigating every claim, business rules and machine learning will identify simple claims that should be automatically paid and problematic claims that should be automatically denied.

Fraud likelihood scores are sent to the BRMS through the API and post-processed into high, medium, and low risk, based on set thresholds, and arrive at one of the following final decisions:

Action | Degree of risk
------ | -------------
SIU | High
Assign to claim handlers | Medium
Auto pay | Low
Auto deny | Low

Routing to claims handlers includes an intelligent triage, in which claims handlers receive fewer claims, and only those that are better tailored to their skills and experience. For example, more complex claims can be identified and sent to more experienced claims handlers. SIU agents and claim handlers decide whether to pay or deny the claims after investigation.

### Model deployment {: #model-deployment }

Predictions are deployed through the API and sent to the BRMS.

![](images/fraud-claim-11.png)

### Model monitoring {: #model-monitoring }

Using DataRobot [MLOps](mlops/index), you can monitor, maintain, and update models within a single platform. Each week, decision authors monitor the fraud detection model and retrain it if [data drift](data-drift) reaches a certain threshold. In addition, along with investigators, decision authors can regularly review the model's decisions to ensure that data are available for future retraining of the fraud detection model. Based on that review, the decision authors can also update the decision logic. For example, they might add a repair shop to the red flags list or improve the thresholds used to convert fraud scores into high, medium, or low risk. DataRobot provides tools for managing and monitoring deployments, including accuracy and data drift.

### Implementation considerations {: #implementation-considerations }

Business goals should determine decision logic, not data. The project begins with business users building decision logic to improve business processes. Once the decision logic is ready, true data needs become clear.
Integrating business rules and machine learning into production systems can be problematic. Business rules and machine learning models need to be updated frequently. Externalizing the rules engine and machine learning allows decision authors to make frequent improvements to the decision logic; when both are embedded directly in production systems, updating the decision logic becomes difficult because it requires changes to those systems.

Trying to automate all decisions will not work. It is important to decide which decisions to automate and which decisions to assign to humans. For example, business rules and machine learning cannot identify fraud 100% of the time; human involvement is still necessary for more complex claim cases.
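The post-processing described in the decision process above, converting fraud likelihood scores into high, medium, and low risk and then into an action, can be sketched as follows. The risk cutoffs and the auto-deny rule here are hypothetical placeholders that decision authors would tune, not values from this use case.

```python
# Hypothetical post-processing of fraud likelihood scores into risk bands and
# BRMS actions, mirroring the decision table above. Cutoffs are placeholders.
HIGH_RISK = 0.80    # assumed cutoff for SIU referral
MEDIUM_RISK = 0.40  # assumed cutoff for manual claim handling

def route_claim(fraud_score: float, flagged_by_rules: bool = False) -> str:
    """Map a model score (0-1) to a final action."""
    if fraud_score >= HIGH_RISK:
        return "SIU"
    if fraud_score >= MEDIUM_RISK:
        return "Assign to claim handlers"
    # Low-risk claims are auto-paid unless other business rules flag them for denial.
    return "Auto deny" if flagged_by_rules else "Auto pay"

for score in (0.92, 0.55, 0.10):
    print(score, "->", route_claim(score))
```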
fraud-claims-include
The execution environment limit allows you to control how many custom model environments a user can add to the [Custom Model Workshop](custom-model-workshop/index). In addition, the execution environment _version_ limit allows you to control how many versions a user can add to _each_ of those environments. These limits can be:

1. **Directly applied to the user**: Set in a user's permissions. Overrides the limits set in the group and organization permissions.
2. **Inherited from a user group**: Set in the permissions of the group a user belongs to. Overrides the limits set in organization permissions.
3. **Inherited from an organization**: Set in the permissions of the organization a user belongs to.

If the environment or environment version limits are defined for an organization or a group, the users within that organization or group inherit the defined limits. However, a more specific definition of those limits at a lower level takes precedence. For example, an organization may have the environment limit set to 5, a group to 4, and the user to 3; in this scenario, the effective limit for the individual user is 3. For more information on adding custom model execution environments, see the [Custom model environments documentation](custom-environments).

To manage the execution environment limits in the platform settings:
ex-env-limits
## Feature considerations {: #feature-considerations }

Consider the following when working with Scoring Code:

* Using Scoring Code in production requires additional development efforts to implement model management and model monitoring, which the DataRobot API provides out of the box.
* Exportable Java Scoring Code requires extra RAM during model building. As a result, to use this feature, you should keep your training dataset under 8GB. Projects larger than 8GB may fail due to memory issues. If you get an out-of-memory error, decrease the sample size and try again. The memory requirement _does not apply during model scoring_. During scoring, the only limitation on the dataset is the RAM of the machine on which the Scoring Code is run.

### Model support {: #model-support }

* Scoring Code is available for models containing only _supported_ built-in tasks. It is not available for [custom models](custom-inf-model) or models containing one or more [custom tasks](cml-custom-tasks).
* Scoring Code is not supported in multilabel projects.
* Keras models do not support Scoring Code by default; however, support can be enabled by having an administrator activate the Enable Scoring Code Support for Keras Models feature flag. If enabled, note that these models are not compatible with Scoring Code for Android and Snowflake.

Additional instances in which Scoring Code generation is not available include:

* Naive Bayes
* Text tokenization involving the MeCab tokenizer
* Visual AI and Location AI

### Time series support {: #time-series-support }

* The following time series capabilities are not supported for Scoring Code:
    * Row-based / irregular data
    * Nowcasting (single forecast point)
    * Intramonth seasonality
    * Time series blenders
    * Autoexpansion
    * EWMA (Exponentially Weighted Moving Average)
* Scoring Code is not supported in time series binary classification projects.
* Scoring Code is not typically supported in time series anomaly detection models; however, it is supported for IsolationForest and some XGBoost-based anomaly detection model blueprints. For a list of supported time series blueprints, see the [Time series blueprints with Scoring Code support](#ts-sc-blueprint-support) note.

{% include 'includes/scoring-code-consider-ts.md' %}

### Prediction Explanations support {: #prediction-explanations-support }

Consider the following when working with Prediction Explanations for Scoring Code:

* To download Prediction Explanations with Scoring Code, you _must_ select **Include Prediction Explanations** during [Leaderboard download](sc-download-leaderboard#leaderboard-download) or [Deployment download](sc-download-deployment#deployment-download). This option is _not_ available for [Legacy download](sc-download-legacy).
* Scoring Code _doesn't_ support Prediction Explanations for time series models.
* Scoring Code _only_ supports [XEMP-based](xemp-pe) prediction explanations. [SHAP-based](shap-pe) prediction explanations aren't supported.
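For models where Scoring Code is available, the export can also be retrieved programmatically. The sketch below uses the DataRobot Python client; the `download_scoring_code` method and its `source_code` parameter should be verified against your client version's documentation, and the token and project ID are placeholders.

```python
# Sketch: exporting Scoring Code for a supported model with the DataRobot Python
# client. Method and parameter names should be confirmed against your client
# version; project ID and token are placeholders.
import datarobot as dr

dr.Client(token="YOUR_API_TOKEN", endpoint="https://app.datarobot.com/api/v2")

project = dr.Project.get("YOUR_PROJECT_ID")   # hypothetical project ID
model = project.get_models()[0]               # e.g., the top Leaderboard model

# Download the compiled JAR for scoring outside DataRobot; pass source_code=True
# to request the human-readable source instead, where supported.
model.download_scoring_code("model.jar", source_code=False)
```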
scoring-code-consider
!!! note Some [DataRobot University](https://university.datarobot.com){ target=_blank } courses require subscriptions.
dru-subscription
The [metric values](#metrics-explained) on the ROC curve display might not always match those shown on the Leaderboard. For ROC curve metrics, DataRobot keeps up to 120 of the calculated thresholds that best represent the distribution, so some minute detail can be lost. For example, if you select **Maximize MCC** as the [display threshold](threshold#set-the-display-threshold), DataRobot calculates the maximum MCC among the preserved thresholds. This value is usually very close to, but may not exactly match, the Leaderboard metric value.
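The effect of evaluating a metric on a reduced set of thresholds can be illustrated with a small, generic example (synthetic data, not DataRobot's internal threshold selection): the maximum MCC found on roughly 120 representative thresholds is typically very close to, but not always identical to, the maximum over every candidate threshold.

```python
# Generic illustration: maximizing MCC over every unique score vs. over ~120
# representative thresholds. Data is synthetic; DataRobot's internals differ.
import numpy as np
from sklearn.metrics import matthews_corrcoef

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=2000)
y_score = np.clip(rng.normal(0.35 + 0.25 * y_true, 0.2), 0, 1)

def best_mcc(thresholds):
    return max(
        matthews_corrcoef(y_true, (y_score >= t).astype(int)) for t in thresholds
    )

exact = best_mcc(np.unique(y_score))                              # every candidate
approx = best_mcc(np.quantile(y_score, np.linspace(0, 1, 120)))   # 120 representatives

print(f"exact max MCC: {exact:.4f}")
print(f"grid max MCC:  {approx:.4f}")  # usually very close, not always identical
```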
max-metrics-roc
## Time-aware models on the Leaderboard {: #time-aware-models-on-the-leaderboard }

Once you click **Start**, DataRobot begins the model-building process and returns results to the Leaderboard.

!!! note
    Model parameter selection has not been customized for date/time-partitioned projects. Though automatic parameter selection yields good results in most cases, [**Advanced Tuning**](adv-tuning) may significantly improve performance for some projects that use the Date/Time partitioning feature.

While most elements of the Leaderboard are the same, DataRobot's calculation and assignment of [recommended models](model-rec-process) differs. Also, the **Sample Size** function is different for date/time-partitioned models. Instead of reporting the percentage of the dataset used to build a particular model, under **Feature List & Sample Size**, the default display lists the sampling method (random/latest) and one of the following:

* The start/end date (either manually added or automatically assigned for the recommended model): ![](images/otp-start-end-lb.png)
* The duration used to build the model: ![](images/otp-sample-size.png)
* The number of rows: ![](images/otp-rows-lb.png)
* The **Project Settings** label, indicating a custom backtest configuration: ![](images/otp-ps-lb.png)

You can filter the Leaderboard display on the time window sample percent, sampling method, and feature list using the dropdown available from **Feature List & Sample Size**. Use this to, for example, easily select models in a single Autopilot stage. (These training settings are also available programmatically; see the sketch at the end of this section.)

![](images/otp-lb-filter.png)

Autopilot does not optimize the amount of data used to build models when using Date/Time partitioning. Different length training windows may yield better performance by including more data (for longer model-training periods) or by focusing on recent data (for shorter training periods). You may improve model performance by adding models based on shorter or longer training periods. You can customize the training period with the **Add a Model** option on the Leaderboard.

Another partitioning-dependent difference is the origination of the Validation score. With date partitioning, DataRobot initially builds a model using only the first backtest (the partition displayed just below the holdout test) and reports the score on the Leaderboard. When calculating the holdout score (if enabled) for row count or duration models, DataRobot trains on the first backtest, freezes the parameters, and then trains the holdout model. In this way, models have the same relationship (i.e., the duration from the end of backtest 1 training to the start of backtest validation is equivalent to the duration from the end of holdout training data to the start of holdout). Note, however, that backtesting scores depend on the [sampling method](#set-rows-or-duration) selected. DataRobot only scores all backtests automatically for a limited number of models (you must manually run the others). The automatically run backtests are based on the sampling method:

* With *random*, DataRobot always backtests the best blueprints on the maximum available sample size. For example, if `BP0 on P1Y @ 50%` has the best score, and BP0 has been trained on `P1Y@25%`, `P1Y@50%`, and `P1Y` (the 100% model), DataRobot will score all backtests for BP0 trained on P1Y.
* With *latest*, DataRobot preserves the exact training settings of the best model for backtesting. In the case above, it would score all backtests for `BP0 on P1Y @ 50%`.
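The training-window details shown under **Feature List & Sample Size** can also be read with the DataRobot Python client. The following is a minimal sketch; `get_datetime_models()` and the `training_*` attributes reflect the client's datetime-model interface but should be verified against your client version, and the credentials and project ID are placeholders.

```python
# Sketch: listing date/time-partitioned models with their training windows.
# Attribute names should be confirmed against your DataRobot Python client docs.
import datarobot as dr

dr.Client(token="YOUR_API_TOKEN", endpoint="https://app.datarobot.com/api/v2")

project = dr.Project.get("YOUR_PROJECT_ID")   # hypothetical project ID
for model in project.get_datetime_models():
    # Depending on how the model was trained, duration, row count, or
    # start/end dates will be populated.
    print(
        model.model_type,
        model.training_duration,      # e.g., "P1Y" for duration-trained models
        model.training_row_count,     # set for row-count-trained models
        model.training_start_date,    # set for start/end-date (frozen) models
        model.training_end_date,
    )
```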
Note that when the model used to score the validation set was trained on less data than the training size displayed on the Leaderboard, the score displays an asterisk. This happens when the training size is equal to the full size minus the holdout.

Just like [cross-validation](data-partitioning), you must initiate a separate build for the other configured backtests (if you initially set the number of backtests to greater than 1). Click a model’s **Run** link from the Leaderboard, or use **Run All Backtests for Selected Models** from the Leaderboard menu. (You can use this option to run backtests for single or multiple models at one time.)

![](images/otp-run.png)

The resulting score displayed in the **All Backtests** column represents an average score for all backtests. See the description of [**Model Info**](model-info) for more information on backtest scoring.

![](images/otp-run-value.png)

### Change the training period {: #change-the-training-period }

!!! note
    Consider [retraining your model on the most recent data](otv#retrain-before-deployment) before final deployment.

You can change the training range and sampling rate and then rerun a particular model for date-partitioned builds. Note that you cannot change the duration of the validation partition once models have been built; that setting is only available from the **Advanced options** link before building has started.

Click the plus sign (**+**) to open the **New Training Period** dialog:

![](images/otp-open-training.png)

The **New Training Period** box has multiple selectors, described in the table below:

![](images/otp-new-training.png)

| | Selection | Description |
|---|---|---|
| ![](images/icon-1.png) | Frozen run toggle | [Freeze the run](frozen-run). |
| ![](images/icon-2.png) | Training mode | Rerun the model using a different training period. Before setting this value, see [the details](ts-customization#duration-and-row-count) of row count vs. duration and how they apply to different folds. |
| ![](images/icon-3.png) | Snap to | "Snap to" predefined points to facilitate entering values and avoid manual scrolling or calculation. |
| ![](images/icon-4.png) | [Enable time window sampling](#time-window-sampling) | Train on a subset of data within a time window for a duration or [start/end](#setting-the-start-and-end-dates) training mode. Check to enable and specify a percentage. |
| ![](images/icon-5.png) | [Sampling method](#set-rows-or-duration) | Select the sampling method used to assign rows from the dataset. |
| ![](images/icon-6.png) | Summary graphic | View a summary of the observations and testing partitions used to build the model. |
| ![](images/icon-7.png) | Final Model | View an image that changes as you adjust the dates, reflecting the data to be used in the model you will make predictions with (see the [note](#about-final-models) below). |

Once you have set a new value, click **Run with new training period**. DataRobot builds the new model and displays it on the Leaderboard.

#### Setting the duration {: #setting-the-duration }

To change the training period a model uses, select the **Duration** tab in the dialog and set a new length. Duration is measured from the beginning of validation working back in time (to the left). With the Duration option, you can also enable [time window sampling](#time-window-sampling). DataRobot returns an error for any period of time outside of the observation range.
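Both Leaderboard actions described above, running all backtests for a model and rerunning it with a new training period, have client-side counterparts. The sketch below is an assumption-laden outline using the DataRobot Python client; the `score_backtests` and `train_datetime` calls and their arguments should be checked against your client version before use, and all IDs and the duration value are placeholders.

```python
# Sketch (verify method names and signatures against the Python client docs):
# score all backtests for a datetime model, then retrain its blueprint with a
# different training duration.
import datarobot as dr

dr.Client(token="YOUR_API_TOKEN", endpoint="https://app.datarobot.com/api/v2")

project = dr.Project.get("YOUR_PROJECT_ID")       # hypothetical project ID
model = project.get_datetime_models()[0]

# Roughly equivalent to "Run All Backtests" for a single model.
backtest_job = model.score_backtests()
backtest_job.wait_for_completion()

# Roughly equivalent to "Run with new training period": same blueprint,
# trained on a longer duration (ISO 8601 duration string assumed).
retrain_job = project.train_datetime(
    model.blueprint_id,
    training_duration="P2Y",
)
new_model = retrain_job.get_result_when_complete()
print(new_model.id)
```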
Also, the units available depend on the time format (for example, if the format is `%d-%m-%Y`, you won't have hours, minutes, and seconds).

![](images/otp-duration.png)

#### Setting the row count {: #setting-the-row-count }

The row count used to build a model is reported on the Leaderboard as the Sample Size. To vary this size, click the **Row Count** tab in the dialog and enter a new value.

![](images/otp-row-count.png)

#### Setting the start and end dates {: #setting-the-start-and-end-dates }

If you enable [Frozen run](frozen-run) by clicking the toggle, DataRobot reuses the parameter settings it established in the original model run on the newly specified sample. Enabling Frozen run unlocks a third training criterion, Start/End Date. Use this selection to manually specify which data DataRobot uses to build the model. With this setting, after unlocking holdout, you can train a model into the holdout data. (The Duration and Row Count selectors do not allow training into holdout.) Note that if holdout is locked and your date range overlaps it, model building will fail. With the start and end dates option, you can also enable [time window sampling](#time-window-sampling).

![](images/otp-start-end.png)

When setting start and end dates, note the following:

* DataRobot does not run backtests because some of the data may have been used to build the model.
* The end date is excluded when extracting data. In other words, if you want data through December 31, 2015, you must set the end date to January 1, 2016.
* If the validation partition (set via Advanced options before the initial model build) occurs after the training data, DataRobot displays a validation score on the Leaderboard. Otherwise, the Leaderboard displays N/A.
* Similarly, if any of the holdout data is used to build the model, the Leaderboard displays N/A for the Holdout score.
* Date/time partitioning does not support dates before 1900.

Click **Start/End Date** to open a clickable calendar for setting the dates. The dates displayed on opening are those used for the existing model. As you adjust the dates, check the **Final model** graphic to view the data your model will use.

![](images/otp-final-model.png)

### Time window sampling {: #time-window-sampling }

If you do not want to use all data within a time window for a date/time-partitioned project, you can train on a subset of data within a time window specification. To do so, check the **Enable Time Window sampling** box and specify a percentage. DataRobot takes a uniform sample over the time range using that percentage of the data. This feature helps with larger datasets that may need the full time window to capture seasonality effects, but could otherwise face runtime or memory limitations. (A generic sketch of this kind of sampling follows below.)

![](images/otp-time-sample.png)
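The following generic sketch (synthetic data, not DataRobot internals) shows the idea behind time window sampling: the training window still spans the full time range, so seasonal patterns remain represented, but only a uniform fraction of the rows inside that window is used.

```python
# Generic illustration of time window sampling: keep the full window, train on a
# uniform fraction of its rows. Data and percentages are synthetic placeholders.
import numpy as np
import pandas as pd

df = pd.DataFrame({"date": pd.date_range("2014-01-01", "2015-12-31", freq="D")})
df["sales"] = np.random.default_rng(0).normal(100, 10, len(df))

# Training window: all of 2015 (end date exclusive, as described above).
window = df[(df["date"] >= "2015-01-01") & (df["date"] < "2016-01-01")]

sample_pct = 0.30
sampled = window.sample(frac=sample_pct, random_state=0).sort_values("date")

print(len(window), "rows in window ->", len(sampled), "rows after 30% sampling")
print("sampled range:", sampled["date"].min().date(), "to", sampled["date"].max().date())
```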
## View summary information {: #view-summary-information }

Once models are built, use the [**Model Info**](model-info) tab for the model overview, backtest summary, and resource usage information.

![](images/otp-model-info.png)

Some notes:

* Hover over the folds to display rows, dates, and duration, as they may differ from the values shown on the Leaderboard. The values displayed are the actual values DataRobot used to train the model. For example, suppose you request a [Start/End Date](#setting-the-start-and-end-dates) model from 6/1/2015 to 6/30/2015 but there is only data in your dataset from 6/7/2015 to 6/14/2015; the hover display then indicates the actual dates, 6/7/2015 through 6/15/2015, for start and end dates, with a duration of eight days.
* The **Model Overview** is a summary of row counts from the validation fold (the first fold under the holdout fold).
* If you created duration-based testing, the validation summary may show differences in the number of rows. This is because the number of rows of data available for a given time period can vary.
* A message of **Not Yet Computed** for a backtest indicates that no data was available for the validation fold (for example, because of gaps in the dataset). In this case, because not all backtests were completed, DataRobot displays an asterisk on the backtest score.
* The “reps” listed at the bottom correspond to the backtests above and are ordered in the sequence in which they finished running.
date-time-include-3