---
title: Platform home
description: Access the full DataRobot UI documentation, including feature descriptions for the UI and API, data preparation, tutorials, and a glossary.
---
# DataRobot UI documentation
The **UI docs** tab describes workflow and reference material for the UI version of the DataRobot AI Platform, regardless of deployment type.
Resource | Description
-------- | -----------
[**Get Started**](get-started/index) | A quick introduction to analyzing data, creating models, and writing code with DataRobot.
[**Workbench**](workbench/index) | An organizational hierarchy that uses the Use Case as its top-level concept and supports experimentation and sharing.
[**Data**](data/index) | Data management (import, transform, analyze, store) and the DataRobot Data Prep tool.
[**Modeling**](modeling/index) | Building, understanding, and analyzing models; time series modeling; business operations tools; a modeling reference.
[**Predictions**](predictions/index.html) | The prediction API and associated reference; Scoring Code guide; batch scoring methods; UI-based prediction methods.
[**MLOps**](mlops/index) | The pillars of the centralized hub for managing models in production: deployment, monitoring, management, and governance.
[**Notebooks**](dr-notebooks/index) | **Public Preview**: Create interactive, computational environments that host code execution and rich media for various use cases and workflows.
[**No-Code AI Apps**](app-builder/index) | Use a no-code interface to configure AI-powered applications and enable core DataRobot services (making predictions, optimizing target features, simulating scenarios).
---
title: ELI5
description: Explain it Like I'm 5 provides a list of brief, easily digestible answers. Answers link to more complete documentation.
---
# ELI5 {: #eli5 }
Explain it like I'm 5 (ELI5) contains complex DataRobot and data science concepts, broken down into brief, digestible answers. Many topics include a link to the full documentation where you can learn more.
??? ELI5 "What is MLOps?"
Machine learning operations (MLOps) is a derivative of DevOps; the thought being that an entire “Ops” (operations) industry exists for normal software, and that such an industry needed to emerge for ML (machine learning) as well. Technology (including DataRobot AutoML) has made it easy for people to build predictive models, but to get value out of models, you have to deploy, monitor, and maintain them. Very few people know how to do this; fewer, even, than know how to build a good model in the first place.
This is where DataRobot comes in. DataRobot offers a product that performs the "deploy, monitor, and maintain" component of ML (MLOps) in addition to the modeling (AutoML), automating core tasks with built-in best practices to achieve better cost, performance, scalability, trust, accuracy, and more.
_Who can benefit from MLOps?_ MLOps can help AutoML users who have problems operating models, as well as organizations that do not want AutoML but do want a system to operationalize their existing models.
Key pieces of MLOps include the following:
* The **Model Management** piece in which DataRobot provides model monitoring and tracks performance statistics.
* The **Custom Models** piece makes it applicable to the 99.9% of existing models that weren’t created in DataRobot.
* The **Tracking Agents** piece makes it applicable even to models that are never brought into DataRobot—this makes it much easier to start monitoring existing models (no need to shift production pipelines).
[Learn more about MLOps](mlops/index).
??? ELI5 "What are stacked predictions?"
DataRobot produces predictions for training data rows by making "stacked predictions," which just means that for each row of data that is predicted on, DataRobot is careful to use a model that was trained with data that does not include the given row.
An analogy:
You're a teacher teaching five different math students and want to be sure that your teaching material does a good job of teaching math concepts.
So, you take one hundred math problems and divide them up into five sets of question-answer pairs. You give each student a different collection of four sets to use as study material. The remaining fifth set of math problems you use as the exam for that student.
When you present your findings to the other teachers, you don't want to present the student's answers on the study material as evidence of learning—the students already had the answers available and could have just copied them without understanding the concepts. Instead you show how each student performed on their exam, where they didn't have the answers given to them.
In this analogy, the students are the _models_, the question-answer pairs are the _rows of data_, and the different sets of question-answer pairs are the different _cross-validation partitions_. Your presentation to the other teachers is all the charts DataRobot makes to understand model performance (Lift Charts, ROC curves, etc.). The students' answers on their exams are the stacked predictions. Learn more about stacked predictions [here](data-partitioning#what-are-stacked-predictions).
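For the code-minded, here is a minimal scikit-learn sketch of the same out-of-fold idea. The data and model are made up for illustration; this is not DataRobot's internal implementation.
```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

# Illustrative data: 1000 rows, binary target.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# cross_val_predict returns a prediction for every training row, where each
# row is scored by a model whose training fold did not include that row --
# the same idea as stacked predictions.
stacked_preds = cross_val_predict(
    RandomForestClassifier(random_state=0), X, y, cv=5, method="predict_proba"
)[:, 1]

print(stacked_preds[:5])  # out-of-fold probabilities for the first five rows
```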
??? ELI5 "What's a rating table, why are generalized additive models (GAM) good for insurance, and what's the relation between them?"
A **rating table** is a ready-made set of rules you can apply to insurance policy pricing, like, "if driving experience and number of accidents are in this range, set this price."
A **GAM model** is interpretable by an actuary because it models things like, "if you have this feature, add $100; if you have this, add another $50."
The way a GAM is fit allows you to automatically learn the ranges used in rating tables.
Learn more about [rating tables](rating-table).
??? ELI5 "Loss reserve modeling vs. loss cost modeling"
You just got paid and have $1000 in the bank, but in 10 days your $800 mortgage payment is due. If you spend your $1000, you won't be able to pay your mortgage, so you put aside $800 as a reserve to pay the future bill.
**Example: Insurance**
Loss reserving is estimating the ultimate costs of policies that you've already sold (regardless of what price you charged). Say you sold 1000 policies this year, and at the end of the year you see that 50 claims have been reported and only $40,000 has been paid. You estimate that when you look back from 50 or 100 years in the future, you'll have paid out a total of $95k, so you set aside an additional $55k as a "loss reserve". Loss reserves are by far the biggest liability on an insurer's balance sheet. A multi-billion dollar insurer will have hundreds of millions, if not billions, of dollars worth of reserves on their balance sheet. Those reserves are very much dependent on predictions.
??? ELI5 "Algorithm vs. model"
The following is an example model for sandwiches: a sandwich is a savory filling (such as pastrami, a portobello mushroom, or a sausage) and optional extras (lettuce, cheese, mayo, etc.) surrounded by a carbohydrate (bread). This model allows you to describe foods simply (you can classify all foods as "sandwich" or "not sandwich") and to predict whether a new set of ingredients will make a sandwich.
An algorithm for making a sandwich would consist of a set of instructions:
1. Slice two pieces of bread from a loaf.
2. Spread chunky peanut butter on one side of one slice of bread.
3. Spread raspberry jam on one side of the other slice.
4. Place one slice of bread on top of the other so that the sides with the peanut butter and jam are facing each other.
??? ELI5 "API vs. SDK"
**API:** "This is how you talk to me."
**SDK:** "These are the tools to help you talk to me."
**API:** "Talk into this tube."
**SDK:** "Here's a loudspeaker and a specialized tool that holds the tube in the right place for you."
**Example**
DataRobot's REST API is an API but the Python and R packages are a part of DataRobot's SDK because they provide an easier way to interact with the API.
**API:** Bolts and nuts
**SDK:** Screwdrivers and wrenches
Learn more about [DataRobot's APIs and SDKs](api/index).
??? ELI5 "What does monotonic mean?"
**Examples**
=== "Comic books"
Let's say you collect comic books. You expect that the more money you spend, the more value your collection has (a **monotonically** increasing relationship between value and money spent). However, other factors can affect this relationship; for example, a comic book tears and your collection is worth less even though you spent more money. You don't want your model to learn that spending more money decreases value when the value is really decreasing because of a torn comic book or some other factor the model doesn't consider. So, you force it to learn the **monotonic relationship**.
=== "Insurance"
Let's say you're an insurance company, and you give a discount to people who install a speed monitor in their car. You want to give a bigger discount to people who are safer drivers, based on their speed. However, your model discovers a small population of people who drive incredibly fast (e.g., 150 MPH or more), that are also really safe drivers, so it decides to give a discount to these customers too. Then other customers discover that if they can hit 150 MPH in their cars each month, they get a big insurance discount, and then you go bankrupt. **Monotonicity** is a way for you to say to the model: "as top speed of the car goes up, insurance prices must always go up too."
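Outside DataRobot, the same idea can be expressed with a monotonic constraint in scikit-learn. The sketch below uses invented data where price must not decrease as top speed increases; it illustrates the concept rather than DataRobot's implementation.
```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor

rng = np.random.default_rng(0)
# Made-up data: columns are [top_speed, car_age]; target is insurance price.
X = np.column_stack([rng.uniform(80, 160, 500), rng.uniform(0, 15, 500)])
y = 2.0 * X[:, 0] - 5.0 * X[:, 1] + rng.normal(0, 10, 500)

# monotonic_cst=[1, 0]: price must be non-decreasing in top_speed,
# while car_age is left unconstrained.
model = HistGradientBoostingRegressor(monotonic_cst=[1, 0], random_state=0)
model.fit(X, y)
```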
Learn more about [monotonic constraints](monotonic).
??? ELI5 "What is ridge regressor?"
If you have a group of friends in a room talking about which team is going to win a game, you want to hear multiple opinions and not have one friend dominate the conversation. So if they keep talking and talking, you give them a 'shush' and then keep 'shushing' them louder the more they talk. Similarly, the ridge regressor penalizes one variable from dominating the model and spreads the signal to more variables.
There are two kinds of penalized regression—one kind of penalty makes the model keep all the features but spend less on the unimportant features and more on the important ones. This is **Ridge**. The other kind of penalty makes the model leave some unimportant variable completely out of the model. This is called **Lasso**.
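A quick scikit-learn illustration of the difference, on made-up data where only the first feature really matters:
```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 3 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(scale=0.5, size=200)

ridge = Ridge(alpha=1.0).fit(X, y)   # shrinks coefficients, keeps all features
lasso = Lasso(alpha=0.1).fit(X, y)   # drives unimportant coefficients to exactly zero

print("Ridge:", np.round(ridge.coef_, 2))
print("Lasso:", np.round(lasso.coef_, 2))
```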
??? ELI5 "Anomaly detection vs. other machine learning problems DataRobot can solve"
**Anomaly detection is an unsupervised learning problem**. This means that it does not use a target and does not have labels, as opposed to supervised learning which is the type of learning many DataRobot problems fall into. In supervised learning there is a "correct" answer and models predict that answer as close as possible by training on the features. There are a number of anomaly detection techniques, but no matter what way you do it, there is no real "right answer" to whether something is an anomaly or not—it's just trying to group common rows together and find a heuristic way to tell you "hey wait a minute, this new data doesn't look like the old data, maybe you should check it out."
**Supervised** = I know what I’m looking for.
**Unsupervised** = Show me something interesting.
**Example: Network access and credit card transactions**
In some anomaly detection use cases, there are millions of transactions that would each need a label assigned manually. This is impossible for humans to do when there are thousands of transactions per day, so organizations end up with large amounts of unlabeled data. Anomaly detection is used to try to pick out the abnormal transactions or network accesses. A ranked list can then be passed on to a human to investigate manually, saving them time.
=== "Supervised"
A parent feeds their toddler; the toddler throws the food on the floor and Mom gets mad. The next day, the same thing happens. The next day, Mom feeds the kid, the kid eats the food, and Mom's happy. The kid is particularly aware of Mom's reaction, and that ends up driving their learning (or supervising their learning), i.e., they learn the association between their action and mom's reaction—that's supervised.
=== "Unsupervised"
Mom feeds the kid; the kid separates his food into two piles: cold and hot. Another day, the kid separates the peas, carrots, and corn. They're finding some structure in the food, but there isn't an outcome (like Mom's reaction) guiding their observations.
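If you prefer code, here is a small unsupervised sketch using scikit-learn's Isolation Forest on invented transaction data; note there is no target column anywhere.
```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(1000, 2))   # typical transactions
odd = rng.normal(6, 1, size=(10, 2))        # a handful of unusual ones
X = np.vstack([normal, odd])

# No labels: the model only learns what "usual" rows look like.
detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
scores = detector.decision_function(X)      # lower score = more anomalous

# Rank rows for a human to review, most suspicious first.
review_order = np.argsort(scores)[:10]
print(review_order)
```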
Learn more about [machine learning problems DataRobot can help solve](unsupervised/index).
??? ELI5 "What are tuning parameters and hyperparameters?"
Tuning parameters and hyperparameters are like knobs and dials you can adjust to make a model perform differently. DataRobot automates this process to make a model fit data better.
**Examples**
=== "Playing the guitar"
Say you are playing a song on an electric guitar. The chord progression is the model, but you and your friend play it with different effects on the guitar—your friend might tune their amplifier with some rock distortion and you might increase the bass. Depending on that, the same song will sound different. That's hyperparameter tuning.
=== "Tuning a car"
Some cars, like a Honda Civic, have very little tuning you can do to them. Other cars, like a race car, have a lot of tuning you can do. Depending on the racetrack, you might change the way your car is tuned.
??? ELI5 "What insights can a user get from Hotspots visualization?"
Hotspots can give you feature engineering ideas for subsequent DataRobot projects. Since they act as simple IF statements, they are easy to add to see if your models get better results. They can also help you find clusters in data where variables go together, so you can see how they interact.
If Hotspots could talk to the user: "The model does some heavy math behind the scenes, but let's try to boil it down to some if-then-else rules you can memorize or implement in a simple piece of code without losing much accuracy. Some rules look promising, some don't, so take a look at them and see if they make sense based on your domain expertise."
**Example**
If a strongly-colored, large rule was something like "Age > 65 & discharge_type = 'discharged to home'" you might conclude that there is a high diabetes readmission rate for people over 65 who are discharged to home. Then, you might consider new business ideas that treat the affected population to prevent readmission, although this approach is completely non-scientific.
Learn more about [Hotspots visualizations](general-modeling-faq#model-insghts).
??? ELI5 "What is target leakage?"
Target leakage is like that scene in Mean Girls where the girl can predict when it's going to rain if it's already raining.
One of the features used to build your model is actually derived from the target, or is closely related to it.
**Example**
You and a friend are trying to predict who’s going to win the Super Bowl (the Rams, or the Patriots).
You both start collecting information about past Super Bowls. Then your friend goes, “Hey wait! I know, we can just grab the newspaper headline from the day after the Super Bowl! Every past Super Bowl, you could just read the next day’s newspaper to know exactly who won.”
So you start collecting all newspapers from past Super Bowls, and become really good at predicting previous Super Bowl winners.
Then, you get to the upcoming Super Bowl and try to predict who’s going to win, however, something is wrong: “where is the newspaper that tells us who wins?”
You were using target leakage that helped you predict the past winners with high accuracy, but that method wasn't useful for predicting future behavior. **The newspaper headline was an example of target leakage, because the target information was “leaking” into the past**.
**Interesting links:**
* [AI Simplified: What is Target Leakage in Data Science?](https://youtu.be/y8qaI5mpJeA){ target=_blank }
* [Karen Smith's Weather Report](https://youtu.be/MG_LL9m7cl4){ target=_blank }
Learn more about [Target Leakage](data-quality#target-leakage).
??? ELI5 "Scoring data vs. scoring a model"
You want to compete in a cooking competition, so you practice different recipes at home. You start with your ingredients (training data), then you try out different recipes on your friend to optimize each of them (training your models). After that, you try out the recipes on some external guests who you trust and who are somewhat unbiased (validation), ultimately choosing the recipe that you will use in the competition. This is the model that you will be using for scoring.
Now you go to the competition where they give you a bunch of ingredients—this is your scoring data (new data that you haven't seen). You want to run these through your recipe and produce a dish for the judges—that is making predictions or scoring using the model.
You could have tried many recipes with the same ingredients—so the same scoring data can be used to generate predictions from different models.
??? ELI5 "Bias vs. variance"
You're going to a wine tasting party and are thinking about inviting one of two friends:
* **Friend 1:** Enjoys all kinds of wine, but may not actually show up (low bias/high variance).
* **Friend 2:** Only enjoys bad gas station wine, but you can always count on them to show up to things (high bias/low variance).
Best case scenario: You find someone who isn’t picky about wine and is reliable (low bias/low variance).
However, this is hard to come by, so you may just try to incentivize Friend 1 to show up or convince Friend 2 to try other wines (hyperparameter tuning).
You avoid friends who only drink gas station wine and are unreliable about showing up to things (high bias/high variance).
??? ELI5 "Structured vs. unstructured datasets"
Structured data is neat and organized—you can upload it right into DataRobot. Structured data is CSV files or nicely organized Excel files with one table.
Unstructured data is messy and unorganized—you have to add some structure to it before you can upload it to DataRobot.
Unstructured data is a bunch of tables in various PDF files.
**Example**
Let’s imagine that you have a task to predict categories for new Wikipedia pages (Art, History, Politics, Science, Cats, Dogs, Famous Person, etc.).
All the needed information is right in front of you—just go to Wikipedia and you can find the categories for each page. But the way the information is structured right now is not suitable for predicting categories for new pages; it would be hard to extract knowledge from this data. On the other hand, if you query Wikipedia's databases and form a file with columns corresponding to different features of articles (title, content, age, previous views, number of edits, number of editors) and rows corresponding to different articles, this would be a structured dataset, and it is much more suitable for extracting hidden value using machine learning methods.
Note that text is not ALWAYS unstructured. For example, you have 1000 short stories, some of which you liked, and some of which you didn't. As 1000 separate files, this is an unstructured problem. But if you put them all together in one CSV, the problem becomes structured, and now DataRobot can solve it.
??? ELI5 "Log scale vs. linear scale"
In _log scale_ the values keep multiplying by a fixed factor (1, 10, 100, 1000, 10000). In _linear scale_ the values keep adding up by a fixed amount (1, 2, 3, 4, 5).
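A one-liner makes the difference concrete (equal steps on a log scale correspond to multiplying by 10):
```python
import numpy as np

values = np.array([1, 10, 100, 1000, 10000])
print(np.log10(values))  # [0. 1. 2. 3. 4.] -- equal steps on a log scale
```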
**Examples**
=== "Richter scale"
Going up one point on the Richter scale is a magnitude increase of about 30x. So 7 is about 30 times stronger than 6, and 8 is about 30 times stronger than 7, which makes 8 roughly 900 (30 x 30) times stronger than 6.
=== "Music theory"
The octave numbers increase linearly, but the sound frequencies increase exponentially. So note A 3rd octave = 220 Hz, note A 4th octave = 440 Hz, note A 5th octave = 880 Hz, note A 6th octave = 1760 Hz.
**Interesting facts:**
- In economics and finance, log scale is used because it's much easier to translate to a % change.
- It's possible that a reason log scale exists is because many events in nature are governed by exponential laws rather than linear, but linear is easier to understand and visualize.
- If you have large linear numbers and they make your graph look bad, then you can log the numbers to shrink them and make your graph look prettier.
??? ELI5 "What is reinforcement learning?"
**Examples**
=== "Restaurant reviews"
Let's say you want to find the best restaurant in town. To do this, you have to go to one, try the food, and decide if you like it or not. Every time you go to a new restaurant, you need to figure out whether it is better than all the restaurants you've already been to, but you can't be sure of your judgement: maybe the one dish you tried was unusually good (or bad) compared to the rest of that restaurant's menu. Reinforcement learning is the targeted approach that still lets you find the best restaurant for you, by choosing the right number of restaurants to visit, or choosing to revisit one to try a different dish. It narrows down your uncertainty about a particular restaurant while trading that off against the potential quality of unvisited restaurants.
=== "Dog training"
Reinforcement learning is like training a dog—for every action your model takes, you either say "good dog" or "bad dog". Over time, by trial and error, the model learns the behavior so as to maximize the reward. Your job is to provide the environment to respond to the agent's (dog's) actions with numeric rewards. Reinforcement learning algorithms operate in this environment and learn a policy.
**Interesting Facts:**
- Reinforcement learning works better if you can generate an unlimited amount of training data, like with Doom/Atari, AlphaGo games, and so on. You need to emulate the training environment so the model can learn its mechanics by trying different approaches a _gazillion_ times.
- A good reinforcement learning framework is OpenAI Gym. In it you set some goal for your model, put it in some environment, and keep it training until it learns something.
- Tasks that humans normally consider "easy" are actually some of the hardest problems to solve. It's part of why robotics is currently behind machine learning. It is significantly harder to learn how to stand up or walk or move smoothly than it is to perform a supervised multiclass prediction with 25 million rows and 200 features.
??? ELI5 "What is target encoding?"
Machine learning models don't understand categorical data, so you need to turn the categories into numbers to be able to do math with them. Two common encoding methods are:
- _One-hot encoding_ is a way to turn categories into numbers by encoding categories as very wide matrices of 0s and 1s. This works well for linear models.
- _Target encoding_ is a different way to turn a categorical into a number by replacing each category with the mean of the target for that category. This method gives a very narrow matrix, as the result is only one column (vs. one column per category with a one-hot encoding).
Target encoding is prone to overfitting, so more sophisticated variants add extra steps to avoid that; DataRobot's version of this is called _credibility encoding_.
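Here is a rough pandas sketch of both encodings on a toy dataset; the naive target encoding shown here skips the anti-overfitting machinery that credibility encoding adds.
```python
import pandas as pd

df = pd.DataFrame({
    "city": ["Boston", "Boston", "Denver", "Denver", "Austin"],
    "churned": [1, 0, 0, 0, 1],
})

# One-hot encoding: one 0/1 column per category (wide).
one_hot = pd.get_dummies(df["city"])
print(one_hot)

# Naive target encoding: replace each category with the target mean (narrow).
target_means = df.groupby("city")["churned"].mean()
df["city_encoded"] = df["city"].map(target_means)
print(df)
```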
??? ELI5 "What is credibility weighting?"
Credibility weighting is a way of accounting for the certainty of outcomes for data with categorical labels (e.g. what model vehicle you drive).
**Examples:**
=== "Vehicle models"
For popular vehicle types (e.g., the Ford F-Series, the top-selling vehicle in the US in 2018), there will be many people in your data, and you can be more certain that the historical outcome is reliable. For rare vehicle types (e.g., the Smart Fortwo, ranked one of the rarest vehicle models in the US in 2017), you may only have one or two people in your data, and you cannot be certain that the historical outcome is a reliable guide to the future. In that case, you use broader population statistics to guide your decisions.
=== "Flipping a coin"
You know that when you toss a coin, you can't predict with any certainty whether it is going to be heads or tails, but if you toss a coin 1000 times, you are going to be more certain about how many times you see heads (close to 500 times if you're doing it correctly).
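One common way to express this blending in code is a shrinkage formula that weights each category's own mean by how much data it has. This is an illustrative sketch on made-up data, not DataRobot's exact credibility calculation.
```python
import pandas as pd

df = pd.DataFrame({
    "vehicle": ["F-150"] * 500 + ["Smart Fortwo"] * 2,
    "claim":   [0.10] * 500 + [1.0, 0.0],
})

global_mean = df["claim"].mean()
stats = df.groupby("vehicle")["claim"].agg(["mean", "count"])

k = 20  # smoothing strength: how many rows a category needs before we trust it
credibility = stats["count"] / (stats["count"] + k)
stats["credible_estimate"] = (
    credibility * stats["mean"] + (1 - credibility) * global_mean
)
# The popular category keeps (roughly) its own mean; the rare one is pulled
# toward the overall population mean.
print(stats)
```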
??? ELI5 "What is ROC Curve?"
The ROC Curve is a measure of how well a good model can classify data, and it's also a good off-the-shelf method of comparing two models. You typically have several different models to choose from, so you need a way to compare them. If you can find a model that has a very good ROC curve, meaning the model classifies with close to 100% True Positive and 0% False Positive, then that model is probably your best model.
**Example: Alien sighting**
Imagine you want to receive an answer to the question "Do aliens exist?" Your best plan to get an answer to this question is to interview a particular stranger by asking "Did you see anything strange last night?" If they say "Yes", you conclude aliens exist. If they say "No", you conclude aliens don't exist. What's nice is that you have a friend in the army who has access to radar technology, so they can determine whether an alien did or did not show up. However, you won't see your friend until next week, so for now conducting the interview experiment is your best option.
Now, you have to decide which strangers to interview. You inevitably have to balance whether you should conclude aliens exist now, or just wait for your army friend. You get about 100 people together and you will conduct this experiment with each of them tomorrow. The ROC curve is a way to decide which stranger you should interview, because people vary: some have poor eyesight, some drink alcohol, some are shy, and so on. It represents a ranking of how good each person is; at the end you pick the "best" person, and if that person is good enough, you go ahead with the experiment.
The ROC curve's y-axis is True Positives, and the x-axis is False Positives. You can imagine people that drink a lot of wine are ranked on the top right of the curve. They think anything is an alien, so they have a 100% True Positive ranking. They will identify an alien if one exists, but they also have 100% False Positive ranking—if you say everything is an alien, you're flat out wrong when there really aren't aliens. People ranked on the lower left don't believe in aliens, so nothing is an alien because aliens never existed. They have a 0% False Positive ranking, and 0% True Positive ranking. Again, nothing is an alien, so they will never identify if aliens exist.
What you want is a person with a 100% True Positive ranking and 0% False Positive ranking—they correctly identify aliens when they exist, but only when they exist. That's a person that is close to the top-left of the ROC Chart. So your procedure is, take 100 people, and rank them on this space of True Positives vs. False Positives.
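In code, the ranking-by-threshold idea looks like this scikit-learn sketch on synthetic data:
```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
probs = model.predict_proba(X_test)[:, 1]

# Each point on the curve is one threshold: its false positive rate (x)
# and true positive rate (y). AUC summarizes the whole curve in one number.
fpr, tpr, thresholds = roc_curve(y_test, probs)
print("AUC:", roc_auc_score(y_test, probs))
```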
Learn more about [ROC Curve](roc-curve-tab/index).
??? ELI5 "What is overfitting?"
You tell Goodreads that you like a bunch of Agatha Christie books, and you want to know if you'd like other murder mysteries. It says “no,” because those other books weren't written by Agatha Christie.
Overfitting is like a bad student who only remembers book facts but does not draw conclusions from them. Any life situation that wasn't specifically mentioned in the book will leave them helpless. But they'll do well on an exam based purely on book facts (that's why you shouldn't score on training data).
??? ELI5 "Dedicated Prediction Server (DPS) vs. Portable Prediction Server (PPS)"
=== "The Simple Explanation"
* **Dedicated Prediction Server (DPS)**: A service built into the DataRobot platform, allowing you to easily host and access your models. This type of prediction server provides the easiest path to MLOps monitoring since the platform is handling scoring directly.
* **Portable Prediction Server (PPS)**: A containerized service running outside of DataRobot, serving models exported from DataRobot. This type of prediction server allows more flexibility in terms of where you host your models, while still allowing monitoring when you configure DataRobot's [MLOps agents](mlops-agent/index). This can be helpful in cases where data segregation or network performance are barriers to more traditional scoring with a DPS. The PPS might be a good option if you're considering using scoring code but would benefit from the simplicity of the prediction API or if you have a requirement to collect Prediction Explanations.
=== "The Garage Metaphor"
* **Dedicated Prediction Server (DPS)**: You have a garage attached to your house, allowing you to open the door to check in on your car whenever you want.
* **Portable Prediction Server (PPS)**: You have a garage but it's down the street from your house. You keep it down the street because you want more space to work and for your car collection to be safe from damage when your teenage driver tries to park. However, if you want to regularly check in on your collection, you must install cameras.
??? ELI5 "What is deep learning?"
Imagine your grandma Dot forgot her chicken matzo ball soup recipe. You want to try to replicate it, so you get your family together and make them chicken matzo ball soup.
It’s not even close to what grandma Dot used to make, but you give it to everyone. Your cousin says “too much salt,” your mom says, “maybe she used more egg in the batter,” and your uncle says, “the carrots are too soft.” So you make another one, and they give you more feedback, and you keep making chicken matzo ball soup over and over until everyone agrees that it tastes like grandma Dot's.
That’s how a neural network trains—something called backpropagation, where the errors are passed back through the network, and you make small changes to try to get closer to the right answers.
??? ELI5 "What is federated machine learning?"
The idea is that once a central model is built, it can be retrained and adapted locally for use on different edge devices.
**Example**
=== "McDonald's menu items"
McDonald's sets out its menu (the central model) and gives individual franchises the flexibility to adapt it. McDonald's locations in India take that recipe and tweak it to include the McPaneer Tikka burger. To do that tweaking, the Indian locations did not need to reach out to McDonald's headquarters; they can make those decisions locally. It's advantageous because your models can be updated faster without having to send the data to some central place all the time. The model can use the device's local data (e.g., smartphone usage data on your smartphone) without having to store it in central training data storage, which can also be good for privacy.
=== "Phone usage"
One example that Google gives is the smart keyboard on your phone. There is a shared model that gets updated based on your phone usage. All that computing is happening on your phone without having to store your usage data in a central cloud.
??? ELI5 "What are offsets?"
Let's say you are a 5-year-old who understands linear models. With linear regression, you find the betas that minimize the error, but you may already know some of the betas in advance. So you give the model the values of those betas and ask it to find the values of the other betas that minimize the error. When you give a model an effect that's known ahead of time, you're giving the model an offset.
??? ELI5 "What is F1 score?"
Let's say you have a medical test (ML model) that determines if a person has a disease. Like many tests, this test is not perfect and can make mistakes (call a healthy person unhealthy or otherwise).
We might care the most about maximizing the % of truly sick people among those our model calls sick (_precision_), or we might care about maximizing the % of detection of truly sick people in our population (_recall_).
Unfortunately, tuning towards one metric often makes the other metric worse, especially if the target is imbalanced. Imagine you have 1% of sick people on the planet and your model calls everyone on the planet (100%) sick. Now it has a perfect recall score but a horrible precision score. On the opposite side, you might make the model so conservative that it calls only one person in a billion sick but gets it right. That way it has perfect precision but terrible recall.
F1 score is a metric that considers precision and recall at the same time so that you could achieve balance between the two.
How do you consider precision and recall at the same time? Well, you could just take an average of the two (arithmetic mean), but because precision and recall are ratios with different denominators, arithmetic mean doesn't work that well in this case and a harmonic mean is better. That's exactly what an F1 score is—a harmonic mean between precision and recall.
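As a tiny worked example (pure Python, no libraries):
```python
def f1_score(precision, recall):
    # Harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)

# "Call everyone sick": perfect recall, terrible precision.
print(f1_score(precision=0.01, recall=1.0))   # ~0.02
# A more balanced model.
print(f1_score(precision=0.80, recall=0.75))  # ~0.77
```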
**Interesting Facts:**
Explanations of [Harmonic Mean](https://www.quora.com/How-do-you-explain-the-arithmetic-mean-and-harmonic-mean-to-kids){ target=_blank }.
??? ELI5 "What is SVM?"
Let's say you build houses for good borrowers and bad borrowers on different sides of the street so that the road between them is as wide as possible. When a new person moves to this street, you can see which side of the road they're on to determine if they're a good borrower or not. SVM learns how to draw this "road" between positive and negative examples.
SVMs are also called “maximum margin” classifiers. You define a road by the center line and the curbs on either side, and then try to find the widest possible road. The curbs on the sides of the road are the “support vectors”.
Closely related term: _Kernel Trick_.
In the original design, SVMs could only learn roads that are straight lines; however, kernels are a math trick that lets them learn curve-shaped roads. Kernels project the points into a higher-dimensional space where they are still separated by a linear "road," even though in the original space the boundary is no longer a straight line.
The ingenious part about kernels, compared to manually creating polynomial features in logistic regression, is that you don't have to compute those higher-dimensional coordinates beforehand: the kernel is always applied to a pair of points and only needs to return a dot product, not the coordinates. This makes it very computationally efficient.
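A small scikit-learn sketch of the "curved road": on concentric-circle data, a linear SVM fails while an RBF-kernel SVM separates the classes.
```python
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Two classes that cannot be separated by a straight line in 2D.
X, y = make_circles(n_samples=300, factor=0.3, noise=0.05, random_state=0)

linear_svm = SVC(kernel="linear").fit(X, y)
rbf_svm = SVC(kernel="rbf").fit(X, y)       # kernel trick: a "curved road"

print("linear accuracy:", linear_svm.score(X, y))  # poor
print("rbf accuracy:", rbf_svm.score(X, y))        # near perfect
```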
**Reference:**
Help me understand Support Vector Machines on [Stack Exchange](https://stats.stackexchange.com/a/3954){ target=_blank }.
??? ELI5 "What is an end-to-end ML platform?"
Think of it as baking a loaf of bread. If you take ready-made bread mix and follow the recipe, but someone else eats it, that's not end-to-end. If you harvest your own wheat, mill it into flour, make your loaf from scratch (flour, yeast, water, etc.), try out several different recipes, take the best loaf, eat some of it yourself, and then watch to see if it doesn't become moldy—that's end to end.
??? ELI5 "What are lift charts?"
**Example**
=== "Rock classification"
You have 100 rocks. Your friend guesses the measurement of each rock while you actually measure each one. Next, you put them in order from smallest to largest (according to your friend's guesses, not by how big they actually are). You divide them into groups of 10 and take the average size of each group. Then, you compare what your friend guessed with what you measured. This allows you to determine how good your friend is at guessing the size of rocks.
=== "Customer churn"
Let's say you build a model for customer churn, and you want to send out campaigns to 10% of your customers. If you use a model to target the 10% with the highest probability of churn, you have a better chance of reaching clients who might churn than if you skipped the model and just sent your campaigns randomly. The cumulative lift chart shows this more clearly, as in the sketch below.
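For reference, here is a rough pandas sketch of how a lift chart is assembled from predictions and actuals (synthetic data; DataRobot builds this chart for you automatically):
```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({"predicted": rng.uniform(0, 1, 1000)})
# Made-up actuals that loosely follow the predictions.
df["actual"] = rng.binomial(1, df["predicted"])

# Sort by prediction, cut into 10 equal bins, compare averages per bin.
df["bin"] = pd.qcut(df["predicted"].rank(method="first"), 10, labels=False)
lift = df.groupby("bin")[["predicted", "actual"]].mean()
print(lift)  # predicted vs. actual rate in each decile, lowest to highest
```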
Learn more about [Lift Charts](lift-chart).
??? ELI5 "What is transfer learning?"
**Short version:** When you teach someone how to distinguish dogs from cats, the skills that go into that can be useful when distinguishing foxes and wolves.
**Example**
You are a 5-year old whose parents decided you need to learn tennis, while you are wondering who "Tennis" is.
=== "Scenario 1"
Every day your parents push you out the door and say, “go learn tennis and if you come back without learning anything today, there is no food for you.”
Worried that you'll starve, you start looking for "tennis." It takes a few days to figure out that tennis is a game and where it is played. It takes a few more days to understand how to hold the racquet and how to hit the ball. By the time you figure out the complete game, you are already 6 years old.
=== "Scenario 2"
Your parents take you to the best tennis club in town and find Roger Federer to coach you. He immediately starts working with you, teaching you all about tennis, and makes you tennis-ready in just a week. Because he also has a lot of experience playing tennis, you can take advantage of all his tips, and within a few months you are already one of the best players in town.
Scenario 1 is similar to how a regular machine learning algorithm starts learning: with the fear of being punished, it looks for a way to learn what is being taught and slowly learns everything from scratch. With transfer learning, on the other hand, the same algorithm has much better guidance and a better starting point: it uses a model that was already trained on similar data as its initialization, so it can learn the new data faster and sometimes with better accuracy.
??? ELI5 "What are summarized categorical features?"
Let's say you go to the store to shop for food. You walk around the store and put items of different types into your cart, one at a time. Then, someone calls you on the phone and asks what you have in your cart, so you respond with something like "6 cans of soup, 2 boxes of Cheerios, 1 jar of peanut butter, 7 jars of pickles, 82 grapes..."
Learn more about [Summarized Categorical Features](histogram#summarized-categorical-features).
??? ELI5 "Particle swarm vs. GridSearch"
GridSearch takes a fixed amount of time but may not find a good result.
Particle swarm takes an unpredictable, potentially unlimited amount of time, but can find better results.
**Examples**
=== "Particle swarm"
You’ve successfully shown up to Black Friday at Best Buy with 3 of your friends and walkie talkies. However, you forgot to look at the ads in the paper for sales. Not to worry, you decided the way you were going to find the best deal in the Big Blue Box is to spread out around the store and walk around for 1 minute to find the best deal and then call your friends and tell them what you found. The friend with the best deal is now an anchor and the other friends start moving in that direction and repeat this process every minute until the friends are all in the same spot (2 hours later), looking at the same deal, feeling accomplished and smart.
=== "GridSearch"
You’ve successfully shown up to Black Friday at Best Buy with 3 of your friends. However, you forgot to look at the ads in the paper for sales and you also forgot the walkie talkies. Not to worry, you decided the way you were going to find the best deal in the Big Blue Box is to spread out around the store in a 2 x 2 grid and grab the best deal in the area then meet your friends at the checkout counter and see who has the best deal. You meet at the checkout counter (5 minutes later), feeling that you didn’t do all you could, but happy that you get to go home, eat leftover pumpkin pie and watch college football.
??? ELI5 "What's GridSearch and why is it important?"
Let’s say that you’re baking cookies and you want them to taste as good as they possibly can. To keep it simple, let’s say you use exactly two ingredients: flour and sugar (realistically, you need more ingredients but just go with it for now).
How much flour do you add? How much sugar do you add? Maybe you look up recipes online, but they’re all telling you different things. There’s not some magical, perfect amount of flour you need and sugar you need that you can just look up online.
So, what do you decide to do? You decide to try a bunch of different values for flour and sugar and just taste-test each batch to see what tastes best.
- You might decide to try having 1 cup, 2 cups, and 3 cups of sugar.
- You might also decide to try having 3 cups, 4 cups, and 5 cups of flour.
In order to see which of these recipes is the best, you’d have to test each possible combination of sugar and of flour. So, that means:
- Batch A: 1 cup of sugar & 3 cups of flour
- Batch B: 1 cup of sugar & 4 cups of flour
- Batch C: 1 cup of sugar & 5 cups of flour
- Batch D: 2 cups of sugar & 3 cups of flour
- Batch E: 2 cups of sugar & 4 cups of flour
- Batch F: 2 cups of sugar & 5 cups of flour
- Batch G: 3 cups of sugar & 3 cups of flour
- Batch H: 3 cups of sugar & 4 cups of flour
- Batch I: 3 cups of sugar & 5 cups of flour
If you want, you can draw this out, kind of like you’re playing the game tic-tac-toe.
<table>
<tr>
<th></th>
<th scope="col">1 cup of sugar</th>
<th scope="col">2 cups of sugar</th>
<th scope="col">3 cups of sugar</th>
</tr>
<tr>
<th scope="row">3 cups of flour</th>
<td>1 cup of sugar & 3 cups of flour</td>
<td>2 cups of sugar & 3 cups of flour</td>
<td>3 cups of sugar & 3 cups of flour</td>
</tr>
<tr>
<th scope="row">4 cups of flour</th>
<td>1 cup of sugar & 4 cups of flour</td>
<td>2 cups of sugar & 4 cups of flour</td>
<td>3 cups of sugar & 4 cups of flour</td>
</tr>
<tr>
<th scope="row">5 cups of flour</th>
<td>1 cup of sugar & 5 cups of flour</td>
<td>2 cups of sugar & 5 cups of flour</td>
<td>3 cups of sugar & 5 cups of flour</td>
</tr>
</table>
Notice how this looks like a grid. You are _searching_ this _grid_ for the best combination of sugar and flour. _The only way for you to get the best-tasting cookies is to bake cookies with all of these combinations, taste test each batch, and decide which batch is best._ If you skipped some of the combinations, then it’s possible you’ll miss the best-tasting cookies.
Now, what happens when you’re in the real world and you have more than two ingredients? For example, you also have to decide how many eggs to include. Well, your “grid” now becomes a 3-dimensional grid. If you decide between 2 eggs and 3 eggs, then you need to try all nine combinations of sugar and flour for 2 eggs, and you need to try all nine combinations of sugar and flour for 3 eggs.
The more ingredients you include, the more combinations you'll have. Also, the more values of ingredients (e.g. 3 cups, 4 cups, 5 cups) you include, the more combinations you have to choose.
**Applied to Machine Learning:** When you build models, you have lots of choices to make. Some of these choices are called hyperparameters. For example, if you build a random forest, you need to choose things like:
- How many decision trees do you want to include in your random forest?
- How deep can each individual decision tree grow?
- At least how many samples must be in the final “node” of each decision tree?
The way we test this is just like how you taste-tested all of those different batches of cookies:
1. You pick which hyperparameters you want to search over (all three are listed above).
2. You pick what values of each hyperparameter you want to search.
3. You then fit a model separately for each combination of hyperparameter values.
4. Now it’s time to taste test: you measure each model’s performance (using some metric like accuracy or root mean squared error).
5. You pick the set of hyperparameters that had the best-performing model. (Just like your recipe would be the one that gave you the best-tasting cookies.)
Just like with ingredients, the number of hyperparameters and number of levels you search are important.
- Trying 2 hyperparameters (ingredients) of 3 levels apiece → 3 * 3 = 9 combinations of models (cookies) to test.
- Trying 2 hyperparameters (ingredients) of 3 levels apiece and a third hyperparameter with two levels (when we added the eggs) → 3 * 3 * 2 = 18 combinations of models (cookies) to test.
The formula for that is: you take the number of levels of each hyperparameter you want to test and multiply it. So, if you try 5 hyperparameters, each with 4 different levels, then you’re building 4 * 4 * 4 * 4 * 4 = 4^5 = 1,024 models.
Building models can be time-consuming, so if you try too many hyperparameters and too many levels of each hyperparameter, you might get a really high-performing model but it might take a really, really, really long time to get.
DataRobot automatically GridSearches for the best hyperparameters for its models. It is not an exhaustive search where it searches every possible combination of hyperparameters. That’s because this would take a very, very long time and might be impossible.
**In one line, but technical:** GridSearch is a commonly-used technique in machine learning that is used to find the best set of hyperparameters for a model.
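As a concrete sketch, here is how the cookie-style grid looks with scikit-learn's `GridSearchCV` on the random forest hyperparameters listed above (synthetic data for illustration; DataRobot runs its own, non-exhaustive search for you):
```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=1000, random_state=0)

# 3 x 3 x 2 = 18 combinations, each evaluated with 5-fold cross-validation.
param_grid = {
    "n_estimators": [50, 100, 200],    # how many trees
    "max_depth": [3, 6, None],         # how deep each tree can grow
    "min_samples_leaf": [1, 5],        # minimum samples in a final node
}
search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```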
**Bonus note:** You might also hear RandomizedSearch, which is an alternative to GridSearch. Rather than setting up a grid to check, you might specify a range of each hyperparameter (e.g. somewhere between 1 and 3 cups of sugar, somewhere between 3 and 5 cups of flour) and a computer will randomly generate, say, 5 combinations of sugar/flour. It might be like:
- Batch A: 1.2 cups of sugar & 3.5 cups of flour.
- Batch B: 1.7 cups of sugar & 3.1 cups of flour.
- Batch C: 2.4 cups of sugar & 4.1 cups of flour.
- Batch D: 2.9 cups of sugar & 3.9 cups of flour.
- Batch E: 2.6 cups of sugar & 4.8 cups of flour.
??? ELI5 "Keras vs. TensorFlow"
In DataRobot, “TensorFlow" really means “TensorFlow 0.7” and “Keras” really means “TensorFlow 1.x".
In the past, TensorFlow had many interfaces, most of which were lower level than Keras, and Keras supported multiple backends (e.g., Theano and TensorFlow). However, TensorFlow consolidated these interfaces and Keras now only supports running code with TensorFlow, so as of Tensorflow 2.x, Keras and TensorFlow are effectively one and the same.
Because of this history, it's more accurate to think of the change as an upgrade from an older TensorFlow to a newer TensorFlow than as a switch from TensorFlow to Keras.
**Example:**
Keras vs. Tensorflow is like an automatic coffee machine vs. grinding and brewing coffee manually.
There are many ways to make coffee, meaning TensorFlow is not the only technology that can be used by Keras. Keras offers "buttons" (an interface) powered by a specific "brewing technology" (TensorFlow, CNTK, Theano, or something else, known as the Keras backend).
Earlier, DataRobot used the lower-level technology, TensorFlow, directly. But just like grinding and brewing coffee manually, this takes a lot more effort and increases the maintenance burden, so DataRobot switched to the higher-level technology, Keras, which provides many nice things under the hood, for example, delivering more advanced blueprints in the product more quickly, which would have taken a lot of effort to implement manually in TensorFlow.
??? ELI5 "How does CPU differ from GPU (in term of training ML models)?"
Think about CPU (central processing unit) as a 4-lane highway with trucks delivering the computation, and GPUs (graphics processing unit) as a 100-lane highway with little shopping carts. GPUs are great at parallelism, but only for less complex tasks. Deep learning specifically benefits from that since it's mainly batches of matrix multiplication, and these can be parallelized very easily. So, training a neural network in a GPU can be 10x faster than on a CPU. But not all model types get that benefit.
Here's another one:
Let’s say there is a very large library, and the goal is to count all of the books. The librarian is knowledgeable about where books are, how they’re organized, how the library works, etc. The librarian is perfectly capable of counting the books on their own and they’ll probably be very good and organized about it.
But what if there is a big team of people who could count the books with the librarian—not library experts, just people who can count accurately.
* If you have 3 people who count books, that speeds up your counting.
* If you have 10 people who count books, your counting gets even faster.
* If you have 100 people who count books...that’s awesome!
*A CPU is like a librarian.* Just like you need a librarian running a library, you need a CPU. A CPU can basically do any jobs that you need done. Just like a librarian could count all of the books on their own, a CPU can do math things like building machine learning models.
*A GPU is like a team of people counting books.* Just like counting books is something that can be done by many people without specific library expertise, a GPU makes it much easier to take a job, split it among many different units, and do math things like building machine learning models.
For more details on this analogy, see [Robot-to-Robot](rr-gpu-v-cpu).
??? ELI5 "What is meant by single-tenant and multi-tenant SaaS?"
*Single-tenant*: You rent an apartment. When you're not using it, neither is anybody else. You can leave your stuff there without being concerned that others will mess with it.
*Multi-tenant*: You stay in a hotel room.
*Multi-tenant:* Imagine a library with many individual, locked rooms, where every reader has a designated room for their personal collection, but the core library collection at the center of the space is shared, allowing everyone to access those resources. For the most part, you have plenty of privacy and control over your personal collection, but there's only one copy of each book at the center of the building, so it's possible for someone to check out the entire collection on a particular topic, leaving others to wait their turn.
*Single-tenant:* Imagine a library network of many individual branches, where each individual library branch carries a complete collection while still providing private rooms. Readers don't need to share the central collection of their branch with others, but the branches are maintained by the central library committee, ensuring that the contents of each library branch is regularly updated for all readers.
*On-prem:* Some readers don't want to use our library space and instead want to make a copy to use in their own home. These folks make a copy of the library and resources and take them home, and then maintain them on their own schedule with their own personal resources. This gives them even more privacy and control over their content, but they lose the convenience of automated updates, new books, and library management.
---
title: Learn more
description: Get started in DataRobot with descriptions of common terms and concepts, as well as how-tos.
---
# Learn more {: #learn-more }
This page provides access to learning resources that help you get started in DataRobot, including simplified explanations of concepts, how-tos and end-to-end walkthroughs, and descriptions of terms in the application.
Topic | Describes...
---------- | -----------
[Glossary](glossary/index) | Read descriptions for terms used throughout DataRobot.
[How-tos](how-to/index) | Step-by-step instructions for performing tasks within the DataRobot application, as well as with partners, cloud providers, and third-party vendors.
[ELI5](eli5) | Read simplified descriptions of common DataRobot concepts.
[Robot-to-Robot](robot-to-robot/index) | See the data science topics that DataRobot employees talk about in Slack.
[Business accelerators](biz-accelerators/index) | End-to-end walkthroughs, based on best practices and patterns, that address common business problems.
---
title: MLOps
description: DataRobot machine learning operations (MLOps) provides a central hub for you to deploy, monitor, manage, and govern your models in production.
---
# MLOps {: #mlops }
DataRobot MLOps provides a central hub to deploy, monitor, manage, and govern all your models in production. You can deploy models to the production environment of your choice and continuously monitor the health and accuracy of your models, among other metrics.
The following sections describe:
Topic | Describes...
----- | ------
[Deployment](deployment/index) | How to bring models to production by following the workflows provided for all kinds of starting artifacts.
[Deployment settings](deployment-settings/index) | How to use the settings tabs for individual MLOps features to add or update deployment functionality.
[Lifecycle management](manage-mlops/index) | Maintaining model health to minimize inaccurate data, poor performance, or unexpected results from models in production.
[Performance monitoring](monitor/index) | Tracking the performance of models to identify potential issues, such as service errors or model accuracy decay, as soon as possible.
[Governance](governance/index) | Enacting workflow requirements to ensure quality and comply with regulatory obligations.
[MLOps FAQ](mlops-faq) | A list of frequently asked MLOps questions with brief answers linking to the relevant documentation.
---
title: MLOps FAQ
dataset_name: N/A
description: Provides a list, with brief answers, of frequently asked MLOps deployment and monitoring questions. Answers link to complete documentation.
domain: mlops
expiration_date: 10-10-2024
owner: nick.aylward@datarobot.com
url: docs.datarobot.com/docs/mlops/mlops-faq.html
---
# MLOps FAQ {: #mlops-faq }
??? faq "What are the supported model types for deployments?"
DataRobot MLOps supports three types of model for deployment:
* [DataRobot models](model-data) built with AutoML and deployed directly to the inventory
* [Custom inference models](custom-inf-model) assembled in the Custom Model Workshop
* External models [registered as model packages](reg-create#register-external-model-packages) and monitored by the [MLOps agent](mlops-agent/index).
??? faq "How do I make predictions on a deployed model?"
To make predictions with a deployment, navigate to the **Predictions** tab. From there, you can use the [predictions interface](batch-pred) to drag and drop prediction data and return prediction results. Supported models can [download and configure Scoring Code](sc-download-deployment) from a deployment. External models can score datasets in batches on a remote environment [with the Portable Prediction Server](portable-batch-predictions). For a code-centric experience, use the provided [Python Scoring Code](code-py), which contains the commands and identifiers needed to submit a CSV or JSON file for scoring with the [Prediction API](dr-predapi).
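As a rough illustration only, a Prediction API call from Python typically looks like the sketch below. The exact URL, headers, and keys should always be copied from the Python Scoring Code snippet in your deployment's **Predictions** tab; every value in capitals here is a placeholder, and the endpoint path is an assumption based on typical deployments.
```python
import requests

# Placeholders -- copy the real host, deployment ID, API token, and (for
# managed cloud) DataRobot key from the deployment's Predictions tab.
API_URL = "https://example.datarobot.com/predApi/v1.0/deployments/{deployment_id}/predictions"
API_KEY = "YOUR_API_TOKEN"
DATAROBOT_KEY = "YOUR_DATAROBOT_KEY"
DEPLOYMENT_ID = "YOUR_DEPLOYMENT_ID"

with open("scoring_data.csv", "rb") as f:
    response = requests.post(
        API_URL.format(deployment_id=DEPLOYMENT_ID),
        data=f,
        headers={
            "Content-Type": "text/plain; charset=UTF-8",
            "Authorization": f"Bearer {API_KEY}",
            "DataRobot-Key": DATAROBOT_KEY,
        },
    )
print(response.json())
```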
??? faq "What is a prediction environment?"
Models that run on your own infrastructure (outside of DataRobot) may be run in different environments and can have differing deployment permissions and approval processes. For example, while any user may have permission to deploy a model to a test environment, deployment to production may require a strict approval workflow and only be permitted by those authorized to do so. [Prediction environments](pred-env) support this deployment governance by grouping deployment environments and supporting grouped deployment permissions and approval workflows. They indicate the platform used in your external infrastructure (AWS, Azure, Snowflake, etc.) and the model formats it supports.
??? faq "How do I enable accuracy monitoring?"
To activate the [**Accuracy**](deploy-accuracy) tab for deployments, you must first select an association ID, a [foreign key](https://www.tutorialspoint.com/Foreign-Key-in-RDBMS) that links predictions with future results (referred to as actuals or outcome data). In the **Settings** > **Data** tab for a deployment, the **Inference** section has a field for the name of the column containing the association IDs. Enter the column name there, and then, after making predictions, [add actuals](accuracy-settings#add-actuals) to the deployment to generate accuracy statistics.
??? faq "What is data drift? How is this different from model drift?"
Data Drift refers to changes in the distribution of prediction data versus training data. Data Drift alerts indicate that the data you are making predictions on looks different from the data the model used for training. DataRobot uses PSI or ["Population Stability Index"](https://www.listendata.com/2015/05/population-stability-index.html){ target=_blank } to measure this. Models themselves cannot drift; once they are fit, they are static. Sometimes the term "Model Drift" is used to refer to drift in the predictions, which simply indicates that the average predicted value is changing over time.
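For intuition, here is the standard PSI calculation as a short Python sketch on synthetic data (illustrative only; DataRobot computes drift for you):
```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between training and scoring distributions."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid division by zero in empty bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return np.sum((a_pct - e_pct) * np.log(a_pct / e_pct))

rng = np.random.default_rng(0)
training = rng.normal(0, 1, 10_000)
scoring = rng.normal(0.5, 1, 10_000)   # the incoming data has shifted
print(round(psi(training, scoring), 3))
```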
??? faq "What do the green, yellow, and red status icons mean in the deployment inventory (on the **Deployments** tab)?"
The [**Service Health**](service-health), [**Data Drift**](data-drift), and [**Accuracy**](deploy-accuracy) summaries in the deployment inventory provide an at-a-glance indication of health and accuracy for all deployed models. To view this more detailed information for an individual model, click on the model in the inventory list. For more information about interpreting the color indicators, [reference the documentation](deploy-inventory).
??? faq "What data formats does the Prediction API support for scoring?"
Prediction data needs to be provided in a CSV or JSON file. For more information, reference the documentation for the [DataRobot Prediction API](dr-predapi).
??? faq "How do I use a different model in a deployment?"
To replace a model, use the [**Replace model**](deploy-replace) functionality found in the **Actions** menu for a deployment. Note that DataRobot issues a warning if the replacement model differs from the current model in either of these ways: <ul><li>Feature names do not match. </li><li>Feature names match but have different data types in the replacement model.</li></ul>
??? faq "What is humility monitoring?"
    Humility monitoring, available from a deployment's [**Humility** tab](humble), allows you to [configure rules](humility-settings) that enable models to recognize, in real time, when they make uncertain predictions or receive data they have not seen before. Unlike data drift, model humility does not deal with broad statistical properties over time—it is instead triggered for individual predictions, allowing you to set desired behaviors with rules that depend on different triggers. Humility rules help to identify and handle data integrity issues during monitoring and to better identify the root cause of unstable predictions.
??? faq "What is the Portable Prediction Server and how do I use it?"
    The [Portable Prediction Server (PPS)](portable-pps) is a DataRobot execution environment for DataRobot model packages (`.mlpkg` files), distributed as a self-contained Docker image. PPS can run disconnected from the main installation environment. Once started, the image serves an HTTP API on port `8080`. To use it, you create an external deployment, [create an external prediction environment](pred-env) for your infrastructure, download the [PPS Docker image](portable-pps#obtain-the-pps-docker-image), and download the [model package](portable-pps#download-the-model-package). This configuration allows you to run PPS outside of DataRobot while still having access to insights and statistics from your deployment in the application.
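    As a rough illustration, the request below sends a CSV to a PPS container running locally. The `/predictions` route and the local port mapping are assumptions based on a single-model PPS configuration; confirm the exact endpoints for your image in the PPS documentation.

    ``` python
    # Rough sketch of scoring against a locally running PPS container.
    # The route and port mapping are assumptions -- verify them for your setup.
    import requests

    with open("scoring_data.csv", "rb") as f:
        response = requests.post(
            "http://localhost:8080/predictions",   # assumed single-model route
            headers={"Content-Type": "text/csv"},
            data=f,
        )

    response.raise_for_status()
    print(response.json())
    ```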
??? faq "The **Challengers** tab is grayed out. Why can't I add a challenger?"
    To add a challenger model to compare against your deployed model, you must be an MLOps user and you must enable the **Challengers** tab. To do so, select **Settings > Data** in your deployment. Under **Data Drift** in the right pane, toggle **Enable prediction rows storage** and click **Save change(s)**. This setting is required to compare challenger models to the champion model. When you select a deployment, you can now select the **Challengers** tab.
| mlops-faq |
---
title: Platform
description: This section includes information and links for managing user settings; authentication and SSO; the administrator's guide; sharing and permissions; user documentation for companion tools; and more.
---
# Platform {: #platform }
The platform section provides materials for users and administrators to manage their DataRobot accounts.
!!! note
DataRobot performs service maintenance regularly. Although most maintenance will occur unnoticed, some may cause a temporary impact. Status page announcements provide information on service outages, scheduled maintenance, and historical uptime. You can view and subscribe to notifications from the [DataRobot status page](https://status.datarobot.com/){ target=_blank }.
Topic | Describes...
----- | ------
[Account management](account-mgmt/index) | View information to help manage your DataRobot account.
[Authentication](authentication/index) | Learn about authentication in DataRobot, including SSO, 2FA, and stored credentials.
[Administrator's guide](admin-guide/index) | For administrators, get help in managing the DataRobot application.
[Data and sharing](data-sharing/index) | Learn about sharing, permissions, and data file size requirements.
[Companion tools](companion-tools/index) | Access user documentation for Algorithmia and Paxata Data Prep.
## Browser compatibility {: #browser-compatibility }
{% include 'includes/browser-compatibility.md' %}
| index |
With the **Comments** link, you can add comments to—even host a discussion around—any item in the catalog that you have access to. Comment functionality is available in the **AI Catalog** (illustrated below), and also as a model tab from the Leaderboard and in use case tracking. With comments you can:
* Tag other users in a comment; DataRobot will then send them an email notification.
* Edit or delete any comment you have added (you cannot edit or delete other users' comments).

| comm-add |
??? note "Dataset requirements for time series batch predictions"
To ensure DataRobot can process your time series data, configure the dataset to meet the following requirements:
* Sort prediction rows by their timestamps, with the earliest row first.
* For multiseries, sort prediction rows by series ID and then by timestamp.
* There is *no limit* on the number of series DataRobot supports. The only limit is the job timeout, as mentioned in [Limits](batch-prediction-api/index#limits).
For dataset examples, see the [requirements for the scoring dataset](batch-pred-ts#requirements-for-the-scoring-dataset). | batch-pred-ts-scoring-data-requirements |
!!! note "DataRobot fully supports the latest version of Google Chrome"
Other browsers such as Edge, Firefox, and Safari are not fully supported. As a result, certain features may not work as expected. DataRobot recommends using Chrome for the best experience. Ad block browser extensions may cause display or performance issues in the DataRobot web application.
| browser-compatibility |
The **Clustering** tab sets the number of clusters that DataRobot will find during Autopilot. The default number of clusters is based on the number of series in the dataset.
To set the number, add or remove values from the entry box and select the value from the dropdown:

Note that when using Manual mode, you are prompted to set the number of clusters when building models from the Repository.
| ts-cluster-adv-opt-include |
There are several options available in the **Actions** menu, which can be accessed for each model package in the **Model Packages** tab of the **Model Registry**:

The available options depend on a variety of criteria, including user permissions and the data available to your model package:

Option | Description
-------|------------
Deploy | Select **+ Deploy** to [create a deployment](deploy-model#deploy-from-the-model-registry) from a model package. For external models, you can [create an external deployment](deploy-external-model#deploy-an-external-model-package).
Share | The sharing capability allows [appropriate user roles](roles-permissions#reg-roles) to grant permissions on a model package. To share a model package, select the **Share** () action. <br> You can only share up to your own access level (a consumer cannot grant an editor role, for example) and you cannot downgrade the access of a collaborator with a higher access level than your own.
Permanently Archive | If you have the appropriate [permissions](roles-permissions#reg-roles), you can select **Permanently Archive**  to archive a model package, which also removes it from the **Model Packages** list. | manage-model-packages |
??? faq "How does DataRobot track drift?"
For data drift, DataRobot tracks:
* **Target drift**: DataRobot stores statistics about predictions to monitor how the distribution and values of the target change over time. As a baseline for comparing target distributions, DataRobot uses the distribution of predictions on the holdout.
* **Feature drift**: DataRobot stores statistics about predictions to monitor how distributions and values of features change over time. As a baseline for comparing distributions of features:
* For training datasets larger than 500 MB, DataRobot uses the distribution of a random sample of the training data.
* For training datasets smaller than 500 MB, DataRobot uses the distribution of 100% of the training data. | how-dr-tracks-drift-include |
## Deep dive: Imbalanced targets {: #deep-dive-imbalanced-targets }
In AML and Transaction Monitoring, the SAR rate is usually very low (1%–5%, depending on the detection scenarios); sometimes it could be even lower than 1% in extremely unproductive scenarios. In machine learning, such a problem is called _class imbalance_. The question becomes, how can you mitigate the risk of class imbalance and let the machine learn as much as possible from the limited known-suspicious activities?
DataRobot offers different techniques to handle class imbalance problems. Some techniques:
* Evaluate the model with <a target="_blank" rel="noopener noreferrer" href="https://docs.datarobot.com/en/docs/modeling/reference/model-detail/opt-metric.html#optimization-metrics"><b>different metrics</b></a>. For binary classification (for example, the false positive reduction model here), LogLoss is the default metric used to rank models on the Leaderboard. Because the rule-based system is often unproductive, which leads to a very low SAR rate, it's reasonable to look at a different metric, such as the SAR rate in the top 5% of alerts in the prioritization list. The objective of the model is to assign higher prioritization scores to high-risk alerts, so ideally the SAR rate is higher in the top tier of prioritization scores. In the example shown in the image below, the SAR rate in the top 5% of the prioritization score is more than 70% (the original SAR rate is less than 10%), which indicates that the model is very effective at ranking alerts based on SAR risk. (A rough calculation of this metric is sketched after this list.)
* DataRobot also provides flexibility for modelers when tuning hyperparameters, which can also help with class imbalance. In the example below, the Random Forest Classifier is tuned by enabling `balance_bootstrap` (a random sample with an equal number of SAR and non-SAR alerts in each decision tree in the forest); you can see that the validation score of the new ‘Balanced Random Forest Classifier’ model is slightly better than that of the parent model.

* You can also use <a target="_blank" rel="noopener noreferrer" href="https://docs.datarobot.com/en/docs/modeling/build-models/adv-opt/smart-ds.html#smart-downsampling"><b>Smart Downsampling</b></a> (from the Advanced Options tab) to intentionally downsample the majority class (i.e., non-SAR alerts) in order to build faster models with similar accuracy.
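As a rough illustration of the "SAR rate in the top 5% of alerts" metric mentioned in the first bullet above, the sketch below ranks scored alerts and measures the SAR rate among the highest-priority slice. The DataFrame and its column names are hypothetical.

``` python
# Rough illustration of the "SAR rate in the top 5% of alerts" metric.
# Assumes a pandas DataFrame with a binary SAR column and a model score
# column; both column names are hypothetical.
import pandas as pd

def sar_rate_top_pct(df, score_col="prediction", target_col="SAR", pct=0.05):
    top_n = max(1, int(len(df) * pct))
    top = df.nlargest(top_n, score_col)   # highest-priority alerts
    return top[target_col].mean()         # share of true SARs among them

scored = pd.DataFrame({
    "SAR": [1, 0, 0, 1, 0, 0, 0, 1, 0, 0] * 20,
    "prediction": [0.9, 0.2, 0.4, 0.8, 0.1, 0.3, 0.2, 0.7, 0.5, 0.1] * 20,
})
print(f"SAR rate in top 5% of alerts: {sar_rate_top_pct(scored):.0%}")
```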
| aml-4-include |
* Frozen thresholds are not supported.
* Blenders that contain monotonic models do not display the MONO label on the Leaderboard for OTV projects.
* When previewing predictions over time, the interval only displays for models that haven’t been retrained (for example, it won’t show up for models with the **Recommended for Deployment** badge).
* If you configure long backtest durations, DataRobot will still build models, but will not run backtests in cases where there is not enough data. In these cases, the backtest score will not be available on the Leaderboard.
* Timezones on date partition columns are ignored. Datasets with multiple time zones may cause issues; the workaround is to convert to a single time zone outside of DataRobot. There is also no support for daylight saving time.
* Dates before 1900 are not supported. If necessary, shift your data forward in time.
* Leap seconds are not currently supported.
| dt-consider |
| | Element | Description |
|---|---|---|
|  | Filter by predicted or actual | Narrows the display based on the predicted and actual class values. See [Filters](#filters) for details.|
|  | Show color overlay | Sets whether to display the activation map in either black and white or full color. See [Color overlay](#color-overlay) for details. |
|  | Activation scale | Shows the extent to which a region is influencing the prediction. See [Activation scale](#activation-scale) for details. |
See the [reference material](vai-ref#ref-map) for detailed information about Visual AI.
### Filters {: #filters }
Filters allow you to narrow the display based on the predicted and the actual class values. The initial display shows the full sample (i.e., both filters are set to *all*). You can instead filter the display by specific classes. Some examples:
| "Predicted" filter | "Actual" filter | Display results |
|--------------------|-------------------|--------------------|
| All | All | All (up to 100) samples from the validation set |
| Tomato Leaf Mold | All | All samples in which the predicted class was Tomato Leaf Mold |
| Tomato Leaf Mold | Tomato Leaf Mold | All samples in which both the predicted and actual class were Tomato Leaf Mold |
| Tomato Leaf Mold | Potato Blight | Any sample in which DataRobot predicted Tomato Leaf Mold but the actual class was Potato Blight |
Hover over an image to see the reported predicted and actual classes for the image:

### Color overlay {: #color-overlay }
DataRobot provides two different views of the activation maps—black and white (which shows some transparency of original image colors) and full color. Select the option that provides the clearest contrast. For example, for black and white datasets, the alternative color overlay may make activation areas more obvious (instead of using a black-to-transparent scale). Toggle **Show color overlay** to compare.

### Activation scale {: #activation-scale }
The high-to-low activation scale indicates how much of a region in an image is influencing the prediction. Areas that are higher on the scale have a higher predictive influence—the model used something that was there (or not there, but should have been) to make the prediction. Some examples might include the presence or absence of yellow discoloration on a leaf, a shadow under a leaf, or an edge of a leaf that curls in a certain way.
Another way to think of scale is that it reflects how much the model "is excited by" a particular region of the image. It’s a kind of prediction explanation—why did the model predict what it did? The map shows that the reason is because the algorithm saw _x_ in this region, which activated the filters sensitive to visual information like _x_. | activation-map-include |
Consider the following when working with segmented modeling deployments:
* Time series segmented modeling deployments do not support data drift monitoring.
* Automatic retraining for segmented deployments that use clustering models is disabled; retraining must be done manually.
* Retraining can be triggered by accuracy drift in a Combined Model; however, it doesn't support monitoring accuracy in individual segments or retraining individual segments.
* Combined model deployments can include standard model challengers. | deploy-combined-model-include |
The **Histogram** chart is the default display for numeric features. It "buckets" numeric feature values into equal-sized ranges to show the frequency distribution of the variable—the target observation (left Y-axis) plotted against the bucketed feature values (X-axis). The height of each bar represents the number of rows with values in that range.
??? note "Histogram display variations"
The display differs depending on whether the [data quality](data-quality#interpret-the-histogram-tab) issue "Outliers" was found.
Without data quality issues:

With data quality issues:

Initially, the display shows the bucketed data:

Select the **Show outliers** checkbox to calculate and display outliers:

The traditional box plot above the chart (shown in gold) highlights the middle quartiles for the data to help you determine whether the distribution is skewed. To determine whisker length, DataRobot uses [Ueda's algorithm](https://jsdajournal.springeropen.com/articles/10.1186/s40488-015-0031-y){ target=_blank } to identify the outlier points—the whiskers depict the full range for the lowest and highest data points in the dataset excluding those outliers. This is useful for helping to determine whether a distribution is skewed and/or whether the dataset contains a problematic number of outliers.
Note the change in the X-axis scale and compression of the box plot to allow for outlier display. Because there tend to be fewer rows recording an outlier value (it's what makes them outliers), the blue bar may not display. Hover on that column to display a tooltip with the actual row count.
After EDA2 completes, the histogram also displays an [average target value](histogram#average-target-values) overlay.
| histogram-include |
## Business problem {: #business-problem }
A key pillar of any AML compliance program is to monitor transactions for suspicious activity. The scope of transactions is broad, including deposits, withdrawals, fund transfers, purchases, merchant credits, and payments. Typically, monitoring starts with a rules-based system that scans customer transactions for red flags consistent with money laundering. When a transaction matches a predetermined rule, an alert is generated and the case is referred to the bank’s internal investigation team for manual review. If the investigators conclude the behavior is indicative of money laundering, then the bank will file a Suspicious Activity Report (SAR) with FinCEN.
Unfortunately, the standard transaction monitoring system described above has costly drawbacks. In particular, the rate of false-positives (cases incorrectly flagged as suspicious) generated by this rules-based system can reach 90% or more. Since the system is rules-based and rigid, it cannot dynamically learn the complex interactions and behaviors behind money laundering. The prevalence of false-positives makes investigators less efficient as they have to manually weed out cases that the rules-based system incorrectly marked as suspicious.
Compliance teams at financial institutions can have hundreds or even thousands of investigators, and the current systems prevent investigators from becoming more effective and efficient in their investigations. The cost of reviewing an alert ranges between `$30~$70`. For a bank that receives 100,000 alerts a year, this is a substantial sum; on average, penalties imposed for proven money laundering amount to `$145` million per case. A reduction in false positives could result in savings between `$600,000~$4.2` million per year.
## Solution value {: #solution-value }
This use case builds a model that dynamically learns patterns in complex data and reduces false positive alerts. Financial crime compliance teams can then prioritize the alerts that legitimately require manual review and dedicate more resources to those cases most likely to be suspicious. By learning from historical data to uncover patterns related to money laundering, AI also helps identify which customer data and transaction activities are indicative of a high risk for potential money laundering.
The primary issues and corresponding opportunities that this use case addresses include:
Issue | Opportunity
:- | :-
Potential regulatory fine | Mitigate the risk of missing suspicious activities due to lack of competency with alert investigations. Use alert scores to more effectively assign alerts—high risk alerts to more experienced investigators, low risk alerts to more junior team members.
Investigation productivity | Increase investigators' productivity by making the review process more effective and efficient, and by providing a more holistic view when assessing cases.
Specifically:
* **Strategy/challenge**: Help investigators focus their attention on cases that have the highest risk of money laundering while minimizing the time they spend reviewing false-positive cases.
For banks with large volumes of daily transactions, improvements in the effectiveness and efficiency of their investigations ultimately results in fewer cases of money laundering that go unnoticed. This allows banks to enhance their regulatory compliance and reduce the volume of financial crime present within their network.
* **Business driver**: Improve the efficiency of AML transaction monitoring and lower operational costs.
With its ability to dynamically learn patterns in complex data, AI significantly improves accuracy in predicting which cases will result in a SAR filing. AI models for anti-money laundering can be deployed into the review process to score and rank all new cases.
* **Model solution**: Assign a suspicious activity score to each AML alert, improving the efficiency of an AML compliance program.
Any case that exceeds a predetermined threshold of risk is sent to the investigators for manual review. Meanwhile, any case that falls below the threshold can be automatically discarded or sent to a lighter review. Once AI models are deployed into production, they can be continuously retrained on new data to capture any novel behaviors of money laundering. This data will come from the feedback of investigators.
Specifically, the model will use rules that trigger an alert whenever a customer requests a refund of any amount, since small refund requests could be the money launderer’s way of testing the refund mechanism or trying to establish refund requests as a normal pattern for their account.
The following table summarizes aspects of this use case.
Topic | Description
:- | :-
**Use case type** | Anti-money laundering (false positive reduction)
**Target audience** | Data Scientist, Financial Crime Compliance Team
**Desired outcomes**| <ul><li>Identify which customer data and transaction activity are indicative of a high risk for potential money laundering.</li><li>Detect anomalous changes in behavior or nascent money laundering patterns before they spread.</li><li>Reduce the false positive rate for the cases selected for manual review.</li></ul>
**Metrics/KPIs** | <ul><li>Annual alert volume</li><li>Cost per alert</li><li>False positive reduction rate</li></ul>
**Sample dataset** | https://s3.amazonaws.com/datarobot-use-case-datasets/DR_Demo_AML_Alert_train.csv
### Problem framing {: #problem-framing }
The target variable for this use case is **whether or not the alert resulted in a SAR** after manual review by investigators, making this a binary classification problem. The unit of analysis is an individual alert—the model will be built on the alert level—and each alert will receive a score ranging from 0 to 1. The score indicates the probability of being a SAR.
The goal of applying a model to this use case is to lower the false positive rate, which means resources are not spent reviewing cases that are eventually determined not to be suspicious after an investigation.
In this use case, the False Positive Rate of the rules engine on the validation sample (1600 records) is:
The number of `SAR=0` divided by the total number of records = `1436/1600` = `90%`.
### ROI estimation {: #roi-estimation }
ROI can be calculated as follows:
`Avoided potential regulatory fine + Annual alert volume * false positive reduction rate * cost per alert`
A high-level measurement of the ROI equation involves two parts.
1. The total amount of `avoided potential regulatory fines` will vary depending on the nature of the bank and must be estimated on a case-by-case basis.
2. The second part of the equation is where AI can have a tangible impact on improving investigation productivity and reducing operational costs. Consider this example:
* A bank generates 100,000 AML alerts every year.
* DataRobot achieves a 70% false positive reduction rate without losing any historical suspicious activities.
* The average cost per alert is `$30~$70`.
Result: The annual ROI of implementing the solution will be `100,000 * 70% * ($30~$70) = $2.1MM~$4.9MM`.
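The productivity portion of this ROI equation can be sketched directly in Python, using the example figures above (avoided regulatory fines are excluded because they must be estimated case by case).

``` python
# Sketch of the productivity portion of the ROI equation, using the example
# figures above. Avoided regulatory fines are excluded (case-by-case).
annual_alert_volume = 100_000
false_positive_reduction_rate = 0.70
cost_per_alert_range = (30, 70)       # USD

roi_low, roi_high = (
    annual_alert_volume * false_positive_reduction_rate * cost
    for cost in cost_per_alert_range
)
print(f"Annual ROI: ${roi_low/1e6:.1f}MM - ${roi_high/1e6:.1f}MM")  # $2.1MM - $4.9MM
```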
## Working with data {: #working-with-data }
The linked synthetic dataset illustrates a credit card company’s AML compliance program. Specifically, the model detects the following money-laundering scenarios:
- The customer spends on the card but overpays their credit card bill and seeks a cash refund for the difference.
- The customer receives credits from a merchant without offsetting transactions and either spends the money or requests a cash refund from the bank.
The unit of analysis in this dataset is an individual alert, meaning a rule-based engine is in place to produce an alert to detect potentially suspicious activity consistent with the above scenarios.
### Data preparation {: #data-preparation }
Consider the following when working with data:
* **Define the scope of analysis**: Collect alerts from a specific analytical window to start with; it’s recommended that you use 12–18 months of alerts for model building.
* **Define the target**: Depending on the investigation processes, the target definition could be flexible. In this walkthrough, alerts are classified as `Level1`, `Level2`, `Level3`, and `Level3-confirmed`. These labels indicate at which level of the investigation the alert was closed (i.e., confirmed as a SAR). To create a binary target, treat `Level3-confirmed` as SAR (denoted by 1) and the remaining levels as non-SAR alerts (denoted by 0); a short sketch of this mapping appears after the example row below.
* **Consolidate information from multiple data sources**: Below is a sample entity-relationship diagram indicating the relationship between the data tables used for this use case.

Some features are static information—for example, `kyc_risk_score` and `state of residence`—these can be fetched directly from the reference tables.
For transaction behavior and payment history, the information will be derived from a specific time window prior to the alert generation date. This case uses 90 days as the time window to obtain the dynamic customer behavior, such as `nbrPurchases90d`, `avgTxnSize90d`, or `totalSpend90d`.
Below is an example of one row in the training data after it is merged and aggregated (it is broken into multiple lines for easier visualization).

### Features and sample data {: #features-and-sample-data }
The features in the sample dataset consist of KYC (Know-Your-Customer) information, demographic information, transactional behavior, and free-form text information from notes taken by customer service representatives. To apply this use case in your organization, your dataset should contain, at a minimum, the following features:
- Alert ID
- Binary classification target (`SAR/no-SAR`, `1/0`, `True/False`, etc.)
- Date/time of the alert
- "Know Your Customer" score used at the time of account opening
- Account tenure, in months
- Total merchant credit in the last 90 days
- Number of refund requests by the customer in the last 90 days
- Total refund amount in the last 90 days
Other helpful features to include are:
- Annual income
- Credit bureau score
- Number of credit inquiries in the past year
- Number of logins to the bank website in the last 90 days
- Indicator that the customer owns a home
- Maximum revolving line of credit
- Number of purchases in the last 90 days
- Total spend in the last 90 days
- Number of payments in the last 90 days
- Number of cash-like payments (e.g., money orders) in last 90 days
- Total payment amount in last 90 days
- Number of distinct merchants purchased from in the last 90 days
- Customer Service Representative notes and codes based on conversations with customer (cumulative)
The table below shows a sample feature list:
Feature name | Data type | Description | Data source | Example
------------ | --------- | ----------- | ----------- | -------
ALERT | Binary | Alert Indicator | tbl_alert | 1
SAR | Binary(Target) | SAR Indicator (Binary Target) | tbl_alert | 0
kycRiskScore | Numeric | Account relationship (Know Your Customer) score used at time of account opening | tbl_customer | 2
income | Numeric | Annual income | tbl_customer | 32600
tenureMonths | Numeric | Account tenure in months | tbl_customer | 13
creditScore | Numeric | Credit bureau score | tbl_customer | 780
state | Categorical | Account billing address state | tbl_account | VT
nbrPurchases90d | Numeric | Number of purchases in last 90 days | tbl_transaction | 4
avgTxnSize90d | Numeric | Average transaction size in last 90 days | tbl_transaction | 28.61
totalSpend90d | Numeric | Total spend in last 90 days | tbl_transaction | 114.44
csrNotes | Text | Customer Service Representative notes and codes based on conversations with customer (cumulative) | tbl_customer_misc | call back password call back card password replace atm call back
nbrDistinctMerch90d | Numeric | Number of distinct merchants purchased at in last 90 days | tbl_transaction | 1
nbrMerchCredits90d | Numeric | Number of credits from merchants in last 90 days | tbl_transaction | 0
nbrMerchCredits-RndDollarAmt90d | Numeric | Number of credits from merchants in round dollar amounts in last 90 days | tbl_transaction | 0
totalMerchCred90d | Numeric | Total merchant credit amount in last 90 days | tbl_transaction | 0
nbrMerchCredits-WoOffsettingPurch | Numeric | Number of merchant credits without an offsetting purchase in last 90 days | tbl_transaction | 0
nbrPayments90d | Numeric | Number of payments in last 90 days | tbl_transaction | 3
totalPaymentAmt90d | Numeric | Total payment amount in last 90 days | tbl_account_bill | 114.44
overpaymentAmt90d | Numeric | Total amount overpaid in last 90 days | tbl_account_bill | 0
overpaymentInd90d | Numeric | Indicator that account was overpaid in last 90 days | tbl_account_bill | 0
nbrCustReqRefunds90d | Numeric | Number refund requests by the customer in last 90 days | tbl_transaction | 1
indCustReqRefund90d | Binary | Indicator that customer requested a refund in last 90 days | tbl_transaction | 1
totalRefundsToCust90d | Numeric | Total refund amount in last 90 days | tbl_transaction | 56.01
nbrPaymentsCashLike90d | Numeric | Number of cash like payments (e.g., money orders) in last 90 days | tbl_transaction | 0
maxRevolveLine | Numeric | Maximum revolving line of credit | tbl_account | 14000
indOwnsHome | Numeric | Indicator that the customer owns a home | tbl_transaction | 1
nbrInquiries1y | Numeric | Number of credit inquiries in the past year | tbl_transaction | 0
nbrCollections3y | Numeric | Number of collections in the past year | tbl_collection | 0
nbrWebLogins90d | Numeric | Number of logins to the bank website in the last 90 days | tbl_account_login | 7
nbrPointRed90d | Numeric | Number of loyalty point redemptions in the last 90 days | tbl_transaction | 2
PEP | Binary | Politically Exposed Person indicator | tbl_customer | 0
| aml-1-include |
Data integrity and quality are cornerstones for creating highly accurate predictive models. These sections describe the tools and visualizations DataRobot provides to ensure that your project doesn't suffer the "garbage in, garbage out" outcome.
| data-description |
### Business problem {: #business-problem }
A "readmission" event is when a patient is readmitted into the hospital within 30 days of being discharged. Readmissions are not only a reflection of uncoordinated healthcare systems that fail to sufficiently understand patients and their conditions, but they are also a tremendous financial strain on both healthcare providers and payers. In 2011, the United States Government estimated there were approximately 3.3 million cases of 30-day, all-cause hospital readmissions, incurring healthcare organizations a total cost of $41.3 billion.
The foremost challenge in mitigating readmissions is accurately anticipating patient risk from the point of initial admission up until discharge. Although a readmission is caused by a multitude of factors, including a patient’s medical history, admission diagnosis, and social determinants, the existing methods (i.e., LACE and HOSPITAL scores) used to assess a patient’s likelihood of readmission do not effectively consider the variety of factors involved. By only including limited considerations, these methods result in suboptimal health evaluations and outcomes.
## Solution value {: #solution-value }
AI provides clinicians and care managers with the information they need to nurture strong, lasting connections with their patients. It helps reduce readmission rates by predicting which patients are at risk and allowing clinicians to prescribe intervention strategies before, and after, the patient is discharged. AI models can ingest significant amounts of data and learn complex patterns behind why certain patients are likely to be readmitted. Model interpretability features offer personalized explanations for predictions, giving clinicians insight into the top risk drivers for each patient at any given time.
By acting as an artificial clinician that augments the care clinicians already provide, AI enables them to conduct intelligent interventions that improve patient health. Using the information they learn, clinicians can decrease the likelihood of patient readmission by carefully walking patients through their discharge paperwork in person, scheduling additional outpatient appointments (to give patients more confidence about their health), and providing additional interventions that help reduce readmissions.
### Problem framing {: #problem-framing }
One way to frame the problem is to determine how to measure ROI for the use case. Consider:
**Current cost of readmissions**:
Current readmissions annual rate x Annual hospital inpatient discharge volumes x Average cost of a hospital readmission
**New cost of readmissions**:
New readmissions annual rate x Annual hospital inpatient discharge volumes x Average cost of a hospital readmission
**ROI**:
Current cost of readmissions - New cost of readmissions
As a result, the top-down calculation for value estimates is:
**ROI**:
Current costs of readmissions x improvement in readmissions rate
For example, at a US national level, the top-down cost of readmissions for each healthcare provider is `$41.3 billion / 6,210 US providers = ~$6.7 million`.
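The same top-down arithmetic can be sketched directly; the 10% improvement rate below is an illustrative assumption rather than a figure from this use case.

``` python
# Top-down readmission cost arithmetic from the example above; the
# improvement rate is an illustrative assumption.
national_readmission_cost = 41.3e9     # USD, 2011 US estimate cited above
us_providers = 6_210

cost_per_provider = national_readmission_cost / us_providers
print(f"Cost per provider: ~${cost_per_provider/1e6:.1f}M")   # ~$6.7M

improvement_rate = 0.10                # assumed 10% reduction in readmissions
print(f"Estimated annual ROI: ~${cost_per_provider * improvement_rate/1e6:.2f}M")
```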
For illustrative purposes, this tutorial uses a sample dataset provided by a [medical journal](https://www.hindawi.com/journals/bmri/2014/781670/#supplementary-materials){ target=_blank } that studied readmissions across 70,000 inpatients with diabetes. The researchers of the study collected this data from the Health Facts database provided by Cerner Corporation, which is a collection of clinical records across providers in the United States. Health Facts allows organizations that use Cerner’s electronic health system to voluntarily make their data available for research purposes. All the data was cleansed of PII in compliance with HIPAA.
### Features and sample data {: #features-and-sample-data }
The features for this use case represent key factors for predicting readmissions. They encompass each patient’s background, diagnosis, and medical history, which will help DataRobot find relevant patterns across the patient’s medical profile to assess their re-hospitalization risk.
In addition to the features listed below, incorporate any additional data that your organization collects that might be relevant to readmission. (DataRobot can differentiate important from unimportant features, so extra data will not hurt modeling if it turns out not to improve it.)
Relevant features are generally stored across proprietary data sources available in your EMR system (for example, Epic or Cerner) and include:
* Patient data
* Diagnosis data
* Admissions data
* Prescription data
Other external data sources may also supply relevant data such as:
* Seasonal data
* Demographic data
* Social determinants data
Each record in the data represents a unique patient visit.
#### Target {: #target }
The target variable:
* `Readmitted`
This feature represents whether or not a patient was readmitted to the hospital within 30 days of discharge, using values such as `True/False`, `1/0`, etc. This choice of target makes this a binary classification problem.
### Sample feature list {: #sample-feature-list }
**Feature Name** | **Data Type** | **Description** | **Data Source** | **Example**
--- | --- | --- | --- | ---
**Readmitted** | **Binary (Target)** | Whether or not the patient readmitted after 30 days | Admissions Data | False |
Age | Numeric | Patient age group | Patient Data | 50-60 |
Weight | Categorical | Patient weight group | Patient Data | 50-75|
Gender | Categorical | Patient gender | Patient Data | Female |
Race | Categorical | Patient race | Patient Data | Caucasian |
Admissions Type | Categorical | Patient state during admission (Elective, Urgent, Emergency, etc.) | Admissions Data | Elective |
Discharge Disposition | Categorical | Patient discharge condition (Home, home with health services, etc.) | Admissions Data | Discharged to home |
Admission Source | Categorical | Patient source of admissions (Physician Referral, Emergency Room, Transfer, etc.) | Admissions Data | Physician Referral |
Days in Hospital | Numeric | Length of stay in hospital | Admissions Data | 1 |
Payer Code | Categorical | Unique code of patient’s payer | Admissions Data | CP |
Medical Specialty | Categorical | Medical specialty that patient is being admitted into | Admissions Data | Surgery-Neuro |
Lab Procedures | Numeric | Total lab procedures in the past | Admissions Data | 35 |
Procedures | Numeric | Total procedures in the past | Admissions Data | 4
Outpatient Visits | Numeric | Total outpatient visits in the past | Admissions Data | 0 |
ER Visits | Numeric | Total emergency room visits in the past | Admissions Data | 0 |
Inpatient Visits | Numeric | Total inpatient visits in the past | Admissions Data | 0 |
Diagnosis | Numeric | Total number of diagnoses | Diagnosis Data | 9 |
ICD10 Diagnosis Code(s) | Categorical | Patient’s ICD10 diagnosis on their condition; could be more than one (additional columns) | Diagnosis Data | M4802 |
ICD10 Diagnosis Description(s) | Categorical | Description on patient’s diagnosis; could be more than one (additional columns) | Diagnosis Data | Spinal stenosis, cervical region |
Medications | Numeric | Total number of medications prescribed to the patient | Prescription Data | 21 |
Prescribed Medication(s) | Binary | Whether or not the patient is prescribed a medication; could be more than one (additional columns) | Prescription Data | Metformin – No |
### Data preparation {: #data-preparation }
The original raw data consisted of 74 million unique visits that include 18 million unique patients across 3 million providers. This data originally contained both inpatient and outpatient visits, as it included medical records from both integrated health systems and standalone providers.
While the original data schema consisted of 41 tables with 117 features, the final dataset was filtered on relevant patients and features based on the use case. The patients included were limited to those with:
* Inpatient encounters
* Existing diabetic conditions
* 1–14 days of inpatient stay
* Lab tests performed during inpatient stay (or not)
* Medications prescribed during inpatient stay (or not)
All other features were excluded due to lack of relevance and/or poor data integrity.
Reference the [DataRobot documentation](data/index) to see details on how to connect DataRobot to your data source, perform feature engineering, follow best-practice data science techniques, and more.
## Modeling and insights {: #modeling-and-insights }
DataRobot automates many parts of the modeling pipeline, including processing and partitioning the dataset, as described [here](model-data). This use case skips the modeling section and moves straight to model interpretation. Reference the [DataRobot documentation](gs-dr-fundamentals) to see how to use DataRobot from start to finish and how to understand the data science methodologies embedded in its automation.
This use case creates one unified model that predicts the likelihood of readmission for patients with diabetic conditions.
### Feature Impact {: #feature-impact }
By taking a look at the [**Feature Impact**](feature-impact) chart, you can see that a patient’s number of past inpatient visits, discharge disposition, and the medical specialty of their diagnosis are the top three most impactful features that contribute to whether a patient will readmit.

### Feature Effects/Partial Dependence {: #partial-dependence }
In assessing the [partial dependence](feature-effects#partial-dependence-calculations) plots to further evaluate the marginal impact top features have on the predicted outcome, you can see that as a patient’s number of past inpatient visits increases from 0 to 2, their likelihood to readmit jumps from 37% to 53%. As the number of visits exceeds 4, the likelihood increases to roughly 59%.

### Prediction Explanations {: #prediction-explanations }
DataRobot’s [**Prediction Explanations**](pred-explain/index) provide a more granular view for interpreting model results—key drivers for each prediction generated. These explanations show why a given patient was predicted to readmit or not, based on the top predictive features.

### Post-processing {: #post-processing }
For the prediction results to be intuitive for clinicians to consume, instead of displaying them as a probability or binary value, they can be post-processed into different labels based on where they fall relative to predefined prediction thresholds. For instance, patients can be labeled as high risk, medium risk, or low risk depending on their risk of readmission.
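A minimal sketch of this post-processing step is shown below; the threshold values and column names are illustrative assumptions and should be agreed on with clinical stakeholders.

``` python
# Minimal sketch of post-processing predicted readmission probabilities into
# risk labels; thresholds and column names are illustrative assumptions.
import pandas as pd

predictions = pd.DataFrame({"readmission_probability": [0.12, 0.38, 0.74, 0.55]})

bins = [0.0, 0.30, 0.60, 1.0]                  # assumed cut points
labels = ["Low risk", "Medium risk", "High risk"]
predictions["risk_label"] = pd.cut(
    predictions["readmission_probability"], bins=bins, labels=labels, include_lowest=True
)
print(predictions)
```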
## Predict and deploy {: #predict-and-deploy }
After selecting the model that best learns patterns in your data to predict readmissions, you can deploy it into your desired decision environment. *Decision environments* are the ways in which the predictions generated by the model will be consumed by the appropriate organizational [stakeholders](#decision-stakeholders), and how these stakeholders will make decisions using the predictions to impact the overall process. This is a critical piece of implementing the use case as it ensures that predictions are used in the real-world for reducing hospital readmissions and generating clinical improvements.
At its core, DataRobot empowers clinicians and care managers with the information they need to nurture strong and lasting connections with the people they care about most: their patients. While there are use cases where decisions can be automated in a data pipeline, a readmissions model is geared to *augment* the decisions of your clinicians. It acts as an intelligent machine that, combined with the expertise of your clinicians, will help improve patients’ medical outcomes.
### Decision stakeholders {: #decision-stakeholders }
The following table lists potential decision stakeholders:
Stakeholder | Description | Examples
----------- | ----------- | --------
Decision executors | Clinical stakeholders who will consume decisions on a daily basis to identify patients who are likely to readmit and understand the steps they can take to intervene. | Nurses, physicians, care managers
Decision managers | Executive stakeholders who will monitor and manage the program to analyze the performance of the provider’s readmission improvement programs. | Chief medical officer, chief nursing officer, chief population health officer
Decision authors | Technical stakeholders who will set up the decision flow in place. | Clinical operations analyst, business intelligence analyst, data scientists
### Decision process {: #decision-process }
You can set thresholds to determine whether a prediction constitutes a foreseen readmission or not. Assign clear action items for each level of threshold so that clinicians can prescribe the necessary intervention strategies.

**Low risk:** Send an automated email or text that includes discharge paperwork, warning symptoms, and outpatient alternatives.
**Medium risk:** Send multiple automated emails or texts that include discharge paperwork, warning symptoms, and outpatient alternatives, with multiple reminders. Follow up with the patient 10 days post-discharge through email to gauge their condition.
**High risk:** Clinician briefs patient on their discharge paperwork in person. Send automated emails or texts that include discharge paperwork, warning symptoms, and outpatient alternatives, with multiple reminders. Follow up with the patient on a weekly basis post discharge through telephone or email to gauge their condition.
### Model deployment {: #model-deployment }
DataRobot provides clinicians with complete transparency on the top risk-drivers for every patient at any given time, enabling them to conduct intelligent interventions both before and after the patient is discharged. Reference the [DataRobot documentation](mlops/index) for an overview of model deployment.
#### No-Code AI Apps {: #no-code-ai-apps }
Consider building a custom application where stakeholders can interact with the predictions and record the outcomes of the investigation. Once the model is deployed, predictions can be consumed for use in the [decision process](#decision-process). For example, this [No-Code AI App](app-builder/index) is an easily shareable, AI-powered application using a no-code interface:

Click **Add new row** to enter patient data:

#### Other business systems {: #other-business-systems }
Predictions can also be integrated into other systems that are embedded in the provider’s day-to-day business workflow. Results can be integrated into the provider’s EMR system or BI dashboards. For the former, clinicians can easily see predictions as an additional column in the data they already view on a daily basis to monitor their assigned patients. They also get transparent interpretability of the predictions, helping them understand why the model predicts that a patient will or will not readmit.
Some common integrations:
* Display results through an Electronic Medical Record system (e.g., Epic)
* Display results through a business intelligence tool (e.g., Tableau, Power BI)
The following shows an example of how to integrate predictions with Microsoft Power BI to create a dashboard that can be accessed by clinicians to support decisions on which patients they should address to prevent readmissions.
The dashboard below displays the probability of readmission for each patient on the floor. It shows the patient’s likelihood to readmit and top factors on why the model made the prediction. Nurses and physicians can consume a dashboard similar to this one to understand which patients are likely to readmit and why, allowing them to implement a prevention strategy tailored to each patient’s unique needs.

### Model monitoring {: #model-monitoring }
Common decision operators—IT, system operations, and data scientists—would likely implement this use case as follows:
**Prediction Cadence**: Batch predictions generated on a daily basis.
**Model Retraining Cadence**: Models retrained once data drift reaches an assigned threshold; otherwise, retrain the models at the beginning of every new operating quarter.
Use DataRobot's [performance monitoring capabilities](monitor/index) (especially service health, data drift, and accuracy) to produce and distribute regular reports to stakeholders.
### Implementation considerations {: #implementation-considerations }
The following highlights some potential implementation risks, all of which are addressable once acknowledged:
Issue | Description
----------- | -----------
Access | Failure to make prediction results easy and convenient for clinicians to access (for example, if they have to open a separate web browser outside of the EHR they already use, or if they face information overload).
Understandability | Failure to make predictions intuitive for clinicians to understand.
Interpretability | Failure to help clinicians interpret the predictions and why the model thought a certain way.
Prescriptive | Failure to provide clinicians with prescriptive strategies to act on high risk cases.
### Trusted AI {: #trusted-ai }
In addition to traditional risk analysis, the following elements of AI Trust may require attention in this use case.
**Target leakage:** Target leakage describes information that should not be available at the time of prediction being used to train the model. That is, particular features may leak information about the eventual outcome that artificially inflates the performance of the model in training. This use case required the aggregation of data across 41 different tables and a wide timeframe, making it vulnerable to potential target leakage. In the design of this model and the preparation of data, it is pivotal to identify the point of prediction (discharge from the hospital) and ensure that no data from after that point is included. DataRobot additionally supports robust [target leakage detection](data-quality#target-leakage) in the second round of exploratory data analysis and in the selection of the Informative Features feature list during Autopilot.
**Bias & Fairness:** This use case leverages features that may be categorized as protected or sensitive (age, gender, race). It may be advisable to assess the equivalency of error rates across these protected groups; for example, compare whether patients of different races have equivalent false negative and false positive rates. The risk is that the system predicts with less accuracy for a certain protected group, failing to identify those patients as being at risk of readmission. Mitigation techniques may be explored at various stages of the modeling process if it is determined necessary. DataRobot's [bias and fairness resources](b-and-f/index) help identify bias before (or after) models are deployed.
| hospital-readmit-include |
## DRUM on Windows with WSL2 {: #drum-on-windows-with-wsl2 }
DRUM can be run on Windows 10 or 11 with WSL2 (Windows Subsystem for Linux), a native extension supported by the latest versions of Windows that allows you to easily install and run a Linux OS on a Windows machine. With WSL, you can develop custom tasks and custom models locally in an IDE on Windows, and then immediately test and run them on the same machine using DRUM via the Linux command line.
!!! tip
    You can use this [YouTube video](https://www.youtube.com/watch?v=wWFI2Gxtq-8){ target=_blank } for instructions on installing WSL on Windows 11 and updating Ubuntu.
The following phases are required to complete the Windows DRUM installation:
1. [Enable WSL](#enable-linux-wsl)
2. [Install `pyenv`](#install-pyenv)
3. [Install DRUM](#install-drum-on-windows)
4. [Install Docker Desktop](#install-docker-desktop)
### Enable Linux (WSL) {: #enable-linux-wsl }
1. From **Control Panel > Turn Windows features on or off**, check the option **Windows Subsystem for Linux**. After making changes, you will be prompted to restart.

2. Open [Microsoft store](https://aka.ms/wslstore){ target=_blank } and click to get Ubuntu.

3. Install Ubuntu and launch it from the start prompt. Provide a Unix username and password to complete installation. You can use any credentials, but be sure to record them, as they will be required later.

You can access Ubuntu at any time from the Windows start menu. Access files on the C drive under **/mnt/c/**.

### Install pyenv {: #install-pyenv }
Because Ubuntu in WSL comes without Python or virtual environments installed, you must install `pyenv`, a Python version management program used on macOS and Linux. (Learn about managing multiple Python environments [here](https://codeburst.io/how-to-install-and-manage-multiple-python-versions-in-wsl2-1131c4e50a58){ target=_blank }.)
In the Ubuntu terminal, run the following _commands_ (you can ignore the comments) line by line:
``` sh
cd $HOME
sudo apt update --yes
sudo apt upgrade --yes
sudo apt-get install --yes git
git clone https://github.com/pyenv/pyenv.git ~/.pyenv
#add pyenv to bashrc
echo '# Pyenv environment variables' >> ~/.bashrc
echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.bashrc
echo 'export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.bashrc
echo '# Pyenv initialization' >> ~/.bashrc
echo 'if command -v pyenv 1>/dev/null 2>&1; then' >> ~/.bashrc
echo ' eval "$(pyenv init -)"' >> ~/.bashrc
echo 'fi' >> ~/.bashrc
#restart shell
exec $SHELL
#install pyenv dependencies (copy as a single line; on older Ubuntu releases, python3-openssl may be named python-openssl)
sudo apt-get install --yes libssl-dev zlib1g-dev libbz2-dev libreadline-dev libsqlite3-dev llvm libncurses5-dev libncursesw5-dev xz-utils tk-dev libgdbm-dev lzma lzma-dev tcl-dev libxml2-dev libxmlsec1-dev libffi-dev liblzma-dev wget curl make build-essential python3-openssl
#install python 3.7 (it can take a while)
pyenv install 3.7.10
```
### Install DRUM on Windows {: #install-drum-on-windows }
To install DRUM, you first set up a Python environment where DRUM will run, and then install DRUM in that environment.
1. Create and activate a `pyenv` environment:
``` sh
cd $HOME
pyenv local 3.7.10
.pyenv/shims/python3.7 -m venv DR-custom-tasks-pyenv
source DR-custom-tasks-pyenv/bin/activate
```
2. Install DRUM and its dependencies into that environment:
``` sh
pip install datarobot-drum
exec $SHELL
```
3. Download the container environments, where DRUM will run, from GitHub:
`git clone https://github.com/datarobot/datarobot-user-models`
### Install Docker Desktop {: #install-docker-desktop }
While you can run DRUM directly in the `pyenv` environment, it is preferable to run it in a Docker container. This recommended procedure ensures that your tasks run in the same environment both locally and inside DataRobot, and it simplifies installation.
1. Download and install [Docker Desktop](https://www.docker.com/products/docker-desktop){ target=_blank }, following the default installation steps.
2. Set the Ubuntu distribution to use WSL2 by opening Windows PowerShell and running:
``` sh
wsl.exe --set-version Ubuntu 2
wsl --set-default-version 2
```

!!! note
You may need to download and install an [update](https://wslstorestorage.blob.core.windows.net/wslblob/wsl_update_x64.msi){ target=_blank }. Follow the instructions in the PowerShell until you see the **Conversion complete** message.
3. Enable access to Docker Desktop from Ubuntu:
    1. From the Windows taskbar, open Docker Dashboard, then access **Settings** (the gear icon).
2. Under **Resources > WSL integration > Enable integration with additional distros**, toggle on Ubuntu.
3. Apply changes and restart.
 | drum-for-windows |

| | Element | Description |
|--|---------|-------------|
|  | Include input features | Writes input features to the prediction results file alongside predictions. To add specific features, enable the **Include input features** toggle, select **Specific features**, and type feature names to filter for and then select features. To include every feature from the dataset, select **All features**. You can only append a feature (column) present in the original dataset, although the feature does not have to have been part of the feature list used to build the model. Derived features are *not* included. |
|  | Include Prediction Explanations | Adds columns for [Prediction Explanations](pred-explain/index) to your prediction output.<ul><li>**Number of explanations**: Enter the maximum number of explanations you want to request from the deployed model. You can request **100** explanations per prediction request.</li><li>**Low prediction threshold**: Enable and define this threshold to provide prediction explanations for _any_ values _below_ the set threshold value.</li><li>**High prediction threshold**: Enable and define this threshold to provide prediction explanations for _any_ values _above_ the set threshold value.</li><li>**Number of ngram explanations**: Enable and define the maximum number of text [ngram](glossary/index#n-gram) explanations to return per row of the dataset. The default (and recommended) setting is **all** (no limit).</li></ul> If you can't enable Prediction Explanations, see [Why can't I enable Prediction Explanations?](#include-prediction-explanations). |
|  | Include prediction outlier warning | Includes warnings for [outlier prediction values](humility-settings#prediction-warnings) (only available for regression model deployments).|
|  | Track data drift, accuracy, and fairness for predictions | Tracks [data drift](data-drift), [accuracy](deploy-accuracy), and [fairness](mlops-fairness) (if enabled for the deployment). |
|  | Chunk size | Adjusts the chunk size selection strategy. By default, DataRobot automatically calculates the chunk size; only modify this setting if advised by your DataRobot representative. For more information, see [What is chunk size?](#what-is-chunk-size) |
|  | Concurrent prediction requests | Limits the number of concurrent prediction requests. By default, prediction jobs utilize all available prediction server cores. To reserve bandwidth for real-time predictions, set a cap for the maximum number of concurrent prediction requests. |
|  | Include prediction status | Adds a column containing the status of the prediction. |
|  | Use default prediction instance | Lets you change the [prediction instance](pred-env#prediction-environments). Turn the toggle off to select a prediction instance. |
??? faq "Why can't I enable Prediction Explanations?"
If you can't enable <span id="include-prediction-explanations">**Include Prediction Explanations**</span>, it is likely for one of the following reasons:
* The model's validation partition doesn't contain the required number of rows.
* For a Combined Model, at least one segment champion validation partition doesn't contain the required number of rows. To enable Prediction Explanations, manually replace retrained champions before creating a model package or deployment.
??? faq "What is chunk size?"
The batch prediction process <span id="what-is-chunk-size">chunks</span> your data into smaller pieces and scores those pieces one by one, allowing DataRobot to score large batches. The **Chunk size** setting determines the strategy DataRobot uses to chunk your data. DataRobot recommends the default setting of **Auto** chunking, as it performs the best overall; however, other options are available:
* **Fixed**: DataRobot identifies an initial, effective chunk size and continues to use it for the rest of the model scoring process.
* **Dynamic**: DataRobot increases the chunk size while model scoring speed is acceptable and decreases the chunk size if the scoring speed falls.
* **Custom**: A data scientist sets the chunk size, and DataRobot continues to use it for the rest of the model scoring process.
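For reference, the settings above have counterparts when scoring through the `datarobot` Python client's batch prediction interface. The parameter names below (`passthrough_columns`, `max_explanations`, `chunk_size`, `num_concurrent`) are assumptions to verify against your client version, and the deployment ID and file names are placeholders—this is a minimal sketch, not a complete integration:

``` python
import datarobot as dr

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

job = dr.BatchPredictionJob.score(
    deployment="<DEPLOYMENT_ID>",                                   # placeholder deployment ID
    intake_settings={"type": "localFile", "file": "to_score.csv"},
    output_settings={"type": "localFile", "path": "predictions.csv"},
    passthrough_columns=["claim_id"],   # "Include input features" (specific features)
    max_explanations=3,                 # "Include Prediction Explanations"
    chunk_size="auto",                  # "Chunk size" (default strategy)
    num_concurrent=4,                   # "Concurrent prediction requests"
)
job.wait_for_completion()
```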
| prediction-options-include |
Log in to GitHub before accessing these GitHub resources.
| github-sign-in-plural |
## About final models {: #about-final-models }
The original ("final") model is trained without holdout data and therefore does not have the most recent data. Instead, it represents the first backtest. This is so that predictions match the insights, coefficients, and other data displayed in the tabs that help evaluate models. (You can verify this by checking the **Final model** representation on the **New Training Period** dialog to view the data your model will use.) If you want to use more recent data, retrain the model using [start and end dates](#start-end).
!!! note
Be careful retraining on all of your data. In time series it is very common for historical data to have a negative impact on current predictions, and there are good reasons not to retrain a model for deployment on 100% of the data. Think through how the training window can impact your deployments and ask yourself:
* Is all of my data actually relevant to my recent predictions?
* Are there historical changes or events in my data that may negatively affect how current predictions are made and that are no longer relevant?
* Is anything outside my Backtest 1 training window size _actually_ relevant?
## Retrain before deployment {: #retrain-before-deployment }
Once you have selected a model and unlocked holdout, you may want to retrain the model (with hyperparameters frozen) to ensure predictive accuracy. Because the original model is trained without the holdout data, it does not have the most recent data. You can verify this by checking the **Final model** representation on the **New Training Period** dialog to view the data your model will use.
To retrain the model, do the following:
1. On the Leaderboard, click the plus sign (**+**) to open the **New Training Period** dialog and change the training period.
2. View the final model and determine whether your model is trained on the most up-to-date data.
3. Enable **Frozen** run by clicking the slider.
4. Select **Start/End Date** and enter the dates for the retraining, including the dates of the holdout data. Remember to use the “+1” method (enter the date immediately after the final date you want to be included).
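For reference, a frozen retrain on an explicit start/end window can also be requested through the `datarobot` Python client. This is a minimal sketch: the method name is an assumption to verify against your client version, and the IDs and dates are placeholders:

``` python
import datarobot as dr
from datetime import datetime

model = dr.Model.get(project="<PROJECT_ID>", model_id="<MODEL_ID>")  # placeholder IDs
job = model.request_frozen_datetime_model(
    training_start_date=datetime(2015, 1, 1),
    training_end_date=datetime(2016, 7, 1),  # "+1": the day after the last date to include
)
retrained_model = job.get_result_when_complete()
```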
### Model retraining {: #model-retraining }
Retraining a model on the most recent data* results in the model not having [out-of-sample predictions](data-partitioning#what-are-stacked-predictions), which is what many of the Leaderboard insights rely on. That is, the child (recommended and rebuilt) model trained with the most recent data has no additional samples with which to score the retrained model. Because insights are a key component to both understanding DataRobot's recommendation and facilitating model performance analysis, DataRobot links insights from the parent (original) model to the child (frozen) model.

\* This situation is also possible when a model is trained into holdout ("slim-run" models also have no [stacked predictions](data-partitioning#what-are-stacked-predictions)).
The insights affected are:
* ROC Curve
* Lift Chart
* Confusion Matrix
* Stability
* Forecast Accuracy
* Series Insights
* Accuracy Over Time
* Feature Effect
| date-time-include-5 |
## Troubleshooting {: #troubleshooting }
Problem | Solution | Instructions
---------- | ----------- | ---------------
When attempting to execute an operation in DataRobot, the firewall requests that you clear the IP address each time. | Add all whitelisted IPs for DataRobot. | See [Source IP addresses for whitelisting](data-conn#source-ip-addresses-for-whitelisting). If you've already added the whitelisted IPs, check the existing IPs for completeness. | data-conn-trouble |
DataRobot detects the date and/or time format (<a target="_blank" href="https://docs.python.org/2/library/datetime#strftime-and-strptime-behavior">standard GLIBC strings</a>) for the selected feature. Verify that it is correct. If the format displayed does not accurately represent the date column(s) of your dataset, modify the original dataset to match the detected format and re-upload it.
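As a quick, non-DataRobot-specific illustration, a value such as `2016-05-12 12:15:02` corresponds to the format string `%Y-%m-%d %H:%M:%S` (standard strftime/strptime directives). You can sanity-check a detected format against a sample value with Python's standard library:

``` python
from datetime import datetime

# Illustrative only: parse a sample value with the detected format string.
sample = "2016-05-12 12:15:02"
fmt = "%Y-%m-%d %H:%M:%S"
print(datetime.strptime(sample, fmt))  # 2016-05-12 12:15:02
```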

Configure the backtesting partitions. You can set them from the dropdowns (applies global settings) or by clicking the [bars in the visualization](#change-backtest-partitions) (applies individual settings). Individual settings override global settings. Once you modify settings for an individual backtest, any changes to the global settings are not applied to the edited backtest.

??? info "Date/date range representation"
DataRobot uses <em>date points</em> to represent dates and date ranges within the data, applying the following principles:
* All date points adhere to ISO 8601, UTC (e.g., 2016-05-12T12:15:02+00:00), an internationally accepted way to represent dates and times, with some small variation in the duration format. Specifically, there is no support for ISO weeks (e.g., P5W).
* Models are trained on data between two ISO dates. DataRobot displays these dates as a date range, but inclusion decisions and all key boundaries are expressed as date points. When you specify a date, DataRobot includes start dates and excludes end dates.
* Once changes are made to formats using the date partitioning column, DataRobot converts all charts, selectors, etc. to this format for the project.
## Set backtest partitions globally {: #set-backtest-partitions-globally }
The following table describes global settings:
| | Selection | Description |
|---|---|---|
|  | [Number of backtests](#set-the-number-of-backtests) | Configures the number of backtests for your project, the time-aware equivalent of cross-validation (but based on time periods or durations instead of random rows). |
|  | [Validation length](#set-the-validation-length) | Configures the size of the testing data partition. |
|  | [Gap length](#set-the-gap-length) | Configures spaces in time, representing gaps between model training and model deployment.|
|  | [Sampling method](#set-rows-or-duration) | Sets whether to use duration or rows as the basis for partitioning, and whether to use random or latest data.|
See the table above for a description of the backtesting section's display elements.
!!! note
When changing partition year/month/day settings, note that month and year values rebalance to fit the larger unit (for example, 24 months becomes two years) when possible. However, because DataRobot cannot account for leap years or the varying number of days in a month in your data, it cannot convert days into a larger unit.
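For reference, the same global settings can be expressed when configuring a project through the `datarobot` Python client. This is a minimal sketch: the class and helper names are assumptions to check against your client version, and the column name and durations are illustrative:

``` python
import datarobot as dr
from datarobot.helpers.partitioning_methods import construct_duration_string

spec = dr.DatetimePartitioningSpecification(
    datetime_partition_column="date",
    number_of_backtests=3,                                     # Number of backtests
    validation_duration=construct_duration_string(months=1),   # Validation length
    gap_duration=construct_duration_string(days=7),            # Gap length
    # disable_holdout=True,                                    # uncheck "Add Holdout fold"
)
# Pass the specification as partitioning_method when starting the project, e.g.:
# project.analyze_and_model(target="sales", partitioning_method=spec)
```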
### Set the number of backtests {: #set-the-number-of-backtests }
You can change the number of [backtests](#understanding-backtests), if desired. The default number of backtests is dependent on the project parameters, but you can configure up to 20. Before setting the number of backtests, use the histogram to validate that the training and validation sets of each fold will have sufficient data to train a model. Requirements are:
* For OTV, backtests require at least 20 rows in each validation and holdout fold and at least 100 rows in each training fold. If you set a number of backtests that results in any of the partitions not meeting those criteria, DataRobot only runs the backtests that do meet the minimums (and marks the display with an asterisk).
* For time series, backtests require at least 4 rows in validation and holdout and at least 20 rows in the training fold. If you set a number of backtests that results in any of the partitions not meeting those criteria, the project could fail. See the [time series partitioning reference](ts-customization) for more information.

By default, DataRobot creates a holdout fold for training models in your project. [In some cases](ts-date-time#partition-without-holdout), however, you may want to create a project without a holdout set. To do so, uncheck the **Add Holdout fold** box. If you disable the holdout fold, the holdout score column does not appear on the Leaderboard (and you have no option to unlock holdout). Any tabs that provide an option to switch between Validation and Holdout will not show the Holdout option.
!!! note
If you build a project with a single backtest, the Leaderboard does not display a backtest column.
### Set the validation length {: #set-the-validation-length }
To modify the duration, perhaps because of a warning message, click the dropdown arrow in the **Validation length** box and enter duration specifics. Validation length can also be set by [clicking the bars](#change-backtest-partitions) in the visualization. Note the changes your modifications make in the testing representation:

### Set the gap length {: #set-the-gap-length }
Optionally, set the [gap](#understanding-gaps) length from the **Gap Length** dropdown. The gap length is initially set to zero, meaning DataRobot does not include a gap in testing. When set, DataRobot excludes the data that falls in the gap from use in training or evaluation of the model. Gap length can also be set by [clicking the bars](#change-backtest-partitions) in the visualization.

### Set rows or duration {: #set-rows-or-duration }
By default, DataRobot ensures that each backtest has the same _duration_, either the default or the values set from the dropdown(s) or via the [bars in the visualization](#change-backtest-partitions). If you want the backtest to use the same number of _rows_, instead of the same length of time, use the **Equal rows per backtest** toggle:

Time series projects also have an option to set rows or duration for the training data (used as the basis for feature engineering) in the [training window format](ts-customization#duration-and-row-count) section.
Once you have selected the mechanism for assigning data to backtests, select the sampling method, either **Random** or **Latest**, to choose how rows are assigned from the dataset.
Setting the sampling method is particularly useful if a dataset is not distributed equally over time. For example, if data is skewed to the most recent date, the results of using 50% of random rows versus 50% of the latest will be quite different. By selecting the data more precisely, you have more control over the data that DataRobot trains on.
## Change backtest partitions {: #change-backtest-partitions }
If you don't modify any settings, DataRobot disperses rows to backtests equally. However, you can customize an individual backtest's gap, training, validation, and holdout data by clicking the corresponding bar or the pencil icon () in the visualization. Note that:
* You can only set holdout in the Holdout backtest ("backtest 0"); you cannot change the training data size in that backtest.
* If, during the initial partitioning detection, the backtest configuration of the ordering (date/time) feature, series ID, or target results in insufficient rows to cover both validation and holdout, DataRobot automatically disables holdout. If other partitioning settings are changed (validation or gap duration, start/end dates, etc.), holdout is not affected unless manually disabled.
* When **Equal rows per backtest** is checked (which sets the partitions to row-based assignment), only the Training End date is applicable.
* When **Equal rows per backtest** is checked, the dates displayed are informative only (that is, they are approximate) and they include padding that is set by the feature derivation and forecast point windows.
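As a sketch of the same per-backtest customization in the `datarobot` Python client (the class names are assumptions to verify against your client version, and the dates and durations are illustrative):

``` python
import datarobot as dr
from datetime import datetime
from datarobot.helpers.partitioning_methods import construct_duration_string

# Customize a single backtest (index 0) while leaving the others on the global settings.
custom_backtest = dr.BacktestSpecification(
    index=0,
    gap_duration=construct_duration_string(days=7),
    validation_start_date=datetime(2016, 1, 1),
    validation_duration=construct_duration_string(months=1),
)
spec = dr.DatetimePartitioningSpecification(
    datetime_partition_column="date",
    number_of_backtests=3,
    backtests=[custom_backtest],  # individual settings override global settings
)
```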
### Edit individual backtests {: #edit-individual-backtests }
Regardless of whether you are setting training, gaps, validation, or holdout, elements of the editing screens function the same. Hover on a data element to display a tooltip that reports specific duration information:

Click a section (1) to open the tool for modifying the start and/or end dates; click in the box (2) to open the calendar picker.

Triangle markers provide indicators of corresponding boundaries. The larger blue triangle () marks the active boundary—the boundary that will be modified if you apply a new date in the calendar picker. The smaller orange triangle () identifies the other boundary points that can be changed but are not currently selected.
The current duration for training, validation, and gap (if configured) is reported under the date entry box:

Once you have made changes to a data element, DataRobot adds an **EDITED** label to the backtest.

There is no way to remove the **EDITED** label from a backtest, even if you manually reset the durations back to the original settings. If you want to be able to apply global duration settings across all backtests, [copy the project](manage-projects#project-actions-menu) and restart.
### Modify training and validation {: #modify-training-and-validation }
To modify the duration of the training or validation data for an individual backtest:
1. Click in the backtest to open the calendar picker tool.
2. Click the triangle for the element you want to modify—options are training start (default), training end/validation start, or validation end.
3. Modify dates as required.
### Modify gaps {: #modify-gaps }
A gap is a period between the end of the training set and the start of the validation set, resulting in data being intentionally ignored during model training. You can set the [gap](#gaps) length globally or for an individual backtest.
To set a gap, add time between training end and validation start. You can do this by ending training sooner, starting validation later, or both.
1. Click the triangle at the end of the training period.
2. Click the **Add Gap** link.

DataRobot adds an additional triangle marker. Although they appear next to each other, both the selected (blue) and inactive (orange) triangles represent the same date. They are slightly spaced to make them selectable.
3. Optionally, set the **Training End Date** using the calendar picker. The date you set will be the beginning of the gap period (training end = gap start).
4. Click the orange **Validation Start Date** marker; the marker changes to blue, indicating that it's selected.
5. Optionally, set the Validation Start Date (validation start = gap end).
The gap is represented by a yellow band; hover over the band to view the duration.
### Modify the holdout duration {: #modify-the-holdout-duration }
To modify the holdout length, click in the red (holdout area) of backtest 0, the holdout partition. Click the displayed date in the **Holdout Start Date** to open the calendar picker and set a new date. If you modify the holdout partition and the new size results in potential problems, DataRobot displays a warning icon next to the Holdout fold. Click the warning icon () to expand the dropdown and reset the duration/date fields.

### Lock the duration {: #lock-the-duration }
You may want to make backtest <em>date</em> changes without modifying the duration of the selected element. You can lock duration for training, for validation, or for the combined period. To lock duration, click the triangle at one end of the period. Next, hold the **Shift** key and select the triangle at the other end of the locked duration. DataRobot opens calendar pickers for each element:

Change the date in either entry. Notice that the other date updates to mirror the duration change you made.
## Interpret the display {: #interpret-the-display }
The date/time partitioning display represents the training and validation data partitions as well as their respective sizes/durations. Use the visualization to ensure that your models are validating on the area of interest. The chart shows, for each backtest, the specific time period of values for the training, validation, and if applicable, holdout and gap data. Specifically, you can observe, for each backtest, whether the model will be representing an interesting or relevant time period. Will the scores represent a time period you care about? Is there enough data in the backtest to make the score valuable?

The following table describes elements of the display:
| Element | Description |
|--------------|---------------|
| Observations | The [binned](lift-chart#lift-chart-binning) distribution of values (i.e., frequency), before downsampling, across the dataset. This is the same information as displayed in the feature’s histogram. |
| Available Training Data | The blue color bar indicates the training data available for a given fold. That is, all available data minus the validation or holdout data. |
| Primary Training Data | The dashed outline indicates the maximum amount of data you can train on to get scores from all backtest folds. You can later choose any time window for training, but depending on what you select, you may not then get all backtest scores. (This could happen, for example, if you train on data greater than the primary training window.) If you train on data less than or equal to the Primary Training Data value, DataRobot completes all backtest scores. If you train on data greater than this value, DataRobot runs fewer tests and marks the backtest score with an asterisk (\*). This value is dependent on (changed by) the number of configured backtests. |
| Gap | A gap between the end of the training set and the start of the validation set, resulting in the data being intentionally ignored during model training. |
| Validation | A set of data indicated by a green bar that is not used for training (because DataRobot selects a different section at each backtest). It is similar to traditional [validation](partitioning), except that it is time based. The validation set starts immediately at the end of the primary training data (or the end of the gap). |
| Holdout (only if **Add Holdout fold** is checked) | The reserved (never seen) portion of data used as a final test of model quality once the model has been trained and validated. When using date/time partitioning, [holdout](data-partitioning) is a duration or row-based portion of the training data instead of a random subset. By default, the holdout data size is the same as the validation data size and always contains the latest data. (Holdout size is user-configurable, however.) |
| Backtest*x* | Time- or row-based folds used for training models. The Holdout backtest is known as "backtest 0" and labeled as Holdout in the visualization. For small datasets and for the highest-scoring model from Autopilot, DataRobot runs all backtests. For larger datasets, the first backtest listed is the one DataRobot uses for model building. Its score is reported in the Validation column of the Leaderboard. Subsequent backtests are not run until manually initiated on the Leaderboard. |
Additionally, the display includes **Target Over Time** and **Observations** histograms. Use these displays to visualize the span of times where models are compared, measured, and assessed—to identify "regions of interest." For example, the displays help to determine the density of data over time, whether there are gaps in the data, etc.

In the displays, the green represents the selection of data that DataRobot is validating the model on. The "All Backtest" score is the average of this region. The gradation marks each backtest and its potential overlap with training data.
Study the **Target Over Time** graph to find interesting regions where there is some data fluctuation. It may be interesting to compare models over these regions. Use the **Observations** chart to determine whether, roughly speaking, the amount of data in a particular backtest is suitable.
Finally, you can click the red, locked holdout section to see where in the data the holdout scores are being measured and whether it is a consistent representation of your dataset.
| date-time-include-1 |
| | Element | Description |
|---|---|---|
|  | Selected word | Displays details about the selected word. (The term *word* here equates to an [*n-gram*](glossary/index#ngram), which can be a sequence of words.) <br><br>Mouse over a word to select it. Words that appear more frequently display in a larger font size in the **Word Cloud**, and those that appear less frequently display in smaller font sizes.|
|  | Coefficient | Displays the [coefficient](coefficients#coefficientpreprocessing-information-with-text-variables) value specific to the word.|
|  | Color spectrum | Displays a legend for the color spectrum and values for words, from blue to red, with blue indicating a negative effect and red indicating a positive effect. |
|  | Appears in # rows| Specifies the number of rows the word appears in. |
|  | Filter stop words | Removes stop words (commonly used terms that can be excluded from searches) from the display. |
|  | Export | Allows you to [export](export-results) the **Word Cloud**. |
|  | Zoom controls | Enlarges or reduces the image displayed on the canvas. Alternatively, double-click on the image. To move areas of the display into focus, click and drag. |
|  | Select class | For multiclass projects, selects the class to investigate using the **Word Cloud**. |
??? info "Word Cloud availability"
You can access **Word Cloud** from either the **Insights** page or the Leaderboard. Operationally, each version of the model behaves the same—use the Leaderboard tab to view a **Word Cloud** while investigating an individual model and the **Insights** page to access, and compare, each **Word Cloud** for a project. Additionally, they are available for multimodal datasets (i.e., datasets that mix images, text, categorical, etc.)—a **Word Cloud** is displayed for all text from the data.
The **Word Cloud** visualization is supported in the following model types and blueprints:
* Binary classification:
* All variants of ElasticNet Classifier (linear family models) with the exception of TinyBERT ElasticNet classifier and FastText ElasticNet classifier
* LightGBM on ElasticNet Predictions
* Text fit on Residuals
* Extended support for multimodal datasets (with single Auto-Tuned N-gram)
* Multiclass:
* Stochastic Gradient Descent with at least 1 text column with the exception of TinyBERT SGD classifier and FastText SGD classifier
* Regression:
* Ridge Regressor
* ElasticNet Regressor
* Lasso Regressor
* Single Auto-Tuned Multi-Modal
* LightGBM on ElasticNet Predictions
* Text fit on Residuals
* Keras
!!! note
The **Word Cloud** for a model is based on the data used to train that model, not on the entire dataset. For example, a model trained on a 32% sample size will result in a **Word Cloud** that reflects those same 32% of rows.
See [Text-based insights](analyze-insights#text-based-insights) for a description of how DataRobot handles single-character words.
| word-cloud-include |
??? info "Category Cloud availability"
The **Category Cloud** insight is available on the **Models > Insights** tab and on the **Data** tab. On the **Insights** page, you can compare word clouds for a project's categorically-based models. From the **Data** page you can more easily compare clouds across features. Note that the **Category Cloud** is not created when using a multiclass target.
Keys are displayed in a color spectrum from blue to red, with blue indicating a negative effect and red indicating a positive effect. Keys that appear more frequently are displayed in a larger font size, and those that appear less frequently are displayed in smaller font sizes.
Check the **Filter stop words** box to remove stopwords (commonly used terms that can be excluded from searches) from the display. Removing these words can improve interpretability if the words are not informative to the Auto-Tuned Summarized Categorical Model.
Mouse over a key to display the coefficient value specific to that key and to read its full name (displayed with the information to the left of the cloud). Note that the names of keys are truncated to 20 characters when displayed in the cloud and limited to 100 characters otherwise. | category-cloud-include |
## Predict and deploy {: #predict-and-deploy }
Once you identify the model that best learns patterns in your data to predict SARs, you can deploy it into your desired decision environment. *Decision environments* are the ways in which the predictions generated by the model will be consumed by the appropriate organizational [stakeholders](#decision-stakeholders), and how these stakeholders will make decisions using the predictions to impact the overall process. This is a critical step for implementing the use case, as it ensures that predictions are used in the real world to reduce false positives and improve efficiency in the investigation process.
The following applications of the alert-prioritization score from the false positive reduction model both automate and augment the existing rule-based transaction monitoring system.
* If the FCC (Financial Crime Compliance) team is comfortable with removing the low-risk alerts (very low prioritization score) from the scope of investigation, then the binary threshold selected during the model-building stage will be used as the cutoff to remove those no-risk alerts. The investigation team will only investigate alerts above the cutoff, which will still capture all the SARs based on what was learned from the historical data.
* Often regulatory agencies will consider auto-closure or auto-removal as an aggressive treatment for production alerts. If auto-closing is not the ideal way to use the model output, the alert prioritization score can still be used to triage alerts into different investigation processes, improving the operational efficiency.
### Decision stakeholders {: #decision-stakeholders }
The following table lists potential decision stakeholders:
Stakeholder | Description
----------- | -----------
Decision Executors | Financial Crime Compliance Team
Decision Managers |Chief Compliance Officer
Decision Authors | Data scientists or business analysts
### Decision process {: #decision-process }
Currently, the review process consists of a deep-dive analysis by investigators. The data related to the case is made available for review so that the investigators can develop a 360° view of the customer, including their profile, demographic, and transaction history. Additional data from third-party data providers and web crawling can supplement this information to complete the picture.
For transactions that do not get auto-closed or auto-removed, the model can help the compliance team create a more effective and efficient review process by triaging their reviews. The predictions and their explanations also give investigators a more holistic view when assessing cases.
**Risk-based Alert Triage:** Based on the prioritization score, the investigation team can take different investigation strategies.
* For no-risk or low-risk alerts—alerts can be reviewed on a quarterly basis, instead of monthly. The frequently alerted entities without any SAR risk will be reviewed once every three months, which will significantly reduce the time of investigation.
* For high-risk alerts with higher prioritization scores—investigations can fast-forward to the final stage in the alert escalation path. This will significantly reduce the effort spent on level 1 and level 2 investigations.
* For medium-risk alerts—the standard investigation process can still be applied.
**Smart Alert Assignment:** For an alert investigation team that is geographically dispersed, the alert prioritization score can be used to assign alerts to different teams in a more effective manner. High-risk alerts can be assigned to the team with the most experienced investigators, while low-risk alerts are assigned to the less-experienced team. This will mitigate the risk of missing suspicious activities due to a lack of competency during alert investigations.
For both approaches, the definition of high/medium/low risk could be either a set of hard thresholds (for example, High: score>=0.5, Medium: 0.5>score>=0.3, Low: score<0.3), or based on the percentile of the alert scores on a monthly basis (for example, High: above 80th percentile, Medium: between 50th and 80th percentile, Low: below 50th percentile).
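As a hypothetical post-processing sketch (the function name is invented for illustration; the thresholds simply mirror the hard-threshold example above):

``` python
# Convert alert prioritization scores into risk tiers using example hard thresholds.
def risk_tier(score: float) -> str:
    if score >= 0.5:
        return "High"    # fast-forward to the final stage of the escalation path
    if score >= 0.3:
        return "Medium"  # standard investigation process
    return "Low"         # reviewed quarterly instead of monthly

print([risk_tier(s) for s in (0.72, 0.41, 0.12)])  # ['High', 'Medium', 'Low']
```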
### Model deployment {: #model-deployment }
The predictions generated from DataRobot can be integrated with an alert management system which will let the investigation team know of high-risk transactions.

### Model monitoring {: #model-monitoring }
DataRobot will continuously monitor the model deployed on the dedicated prediction server. With DataRobot [MLOps](mlops/index), the modeling team can monitor and manage the alert prioritization model by tracking the distribution drift of the input features as well as performance degradation over time.

### Implementation considerations {: #implementation-considerations }
When operationalizing this use case, consider the following, which may impact outcomes and require model re-evaluation:
* Changes in the transactional behavior of money launderers.
* Novel information introduced to the transaction and customer records that was not seen by the machine learning models.
| aml-3-include |
Log in to GitHub before clicking this link.
| github-sign-in |
??? note "Time series blueprints with Scoring Code support"
<span id="ts-sc-blueprint-support">The following blueprints typically support Scoring Code:</span>
* AUTOARIMA with Fixed Error Terms
* ElasticNet Regressor (L2 / Gamma Deviance) using Linearly Decaying Weights with Forecast Distance Modeling
* ElasticNet Regressor (L2 / Gamma Deviance) with Forecast Distance Modeling
* ElasticNet Regressor (L2 / Poisson Deviance) using Linearly Decaying Weights with Forecast Distance Modeling
* ElasticNet Regressor (L2 / Poisson Deviance) with Forecast Distance Modeling
* Eureqa Generalized Additive Model (250 Generations)
* Eureqa Generalized Additive Model (250 Generations) (Gamma Loss)
* Eureqa Generalized Additive Model (250 Generations) (Poisson Loss)
* Eureqa Regressor (Quick Search: 250 Generations)
* eXtreme Gradient Boosted Trees Regressor
* eXtreme Gradient Boosted Trees Regressor (Gamma Loss)
* eXtreme Gradient Boosted Trees Regressor (Poisson Loss)
* eXtreme Gradient Boosted Trees Regressor with Early Stopping
* eXtreme Gradient Boosted Trees Regressor with Early Stopping (Fast Feature Binning)
* eXtreme Gradient Boosted Trees Regressor with Early Stopping (Gamma Loss)
* eXtreme Gradient Boosted Trees Regressor with Early Stopping (learning rate =0.06) (Fast Feature Binning)
* eXtreme Gradient Boosting on ElasticNet Predictions
* eXtreme Gradient Boosting on ElasticNet Predictions (Poisson Loss)
* Light Gradient Boosting on ElasticNet Predictions
* Light Gradient Boosting on ElasticNet Predictions (Gamma Loss)
* Light Gradient Boosting on ElasticNet Predictions (Poisson Loss)
* Performance Clustered Elastic Net Regressor with Forecast Distance Modeling
* Performance Clustered eXtreme Gradient Boosting on Elastic Net Predictions
* RandomForest Regressor
* Ridge Regressor using Linearly Decaying Weights with Forecast Distance Modeling
* Ridge Regressor with Forecast Distance Modeling
* Vector Autoregressive Model (VAR) with Fixed Error Terms
* IsolationForest Anomaly Detection with Calibration (time series)
* Anomaly Detection with Supervised Learning (XGB) and Calibration (time series)
While the blueprints listed above support Scoring Code, there are situations when Scoring Code is unavailable:
* Scoring Code might not be available for some models generated using [Feature Discovery](fd-time).
* Consistency issues can occur for non-day-level calendars when the event is not in the dataset; in these cases, Scoring Code is unavailable.
* Consistency issues can occur when inferring the forecast point in situations with a non-zero [blind history](glossary/index#blind-history); however, Scoring Code is still available in this scenario.
* Scoring Code might not be available for some models that use text tokenization involving the MeCab tokenizer.
* Differences in rolling sum computation can cause consistency issues in projects with a weight feature and models trained on feature lists with `weighted std` or `weighted mean`.
??? note "Time series Scoring Code capabilities"
The following capabilities are currently supported for time series Scoring Code:
* [Time series parameters](sc-time-series#time-series-parameters-for-cli-scoring) for scoring at the command line.
* [Segmented modeling](sc-time-series#scoring-code-for-segmented-modeling-projects)
* [Prediction intervals](sc-time-series#prediction-intervals-in-scoring-code)
* [Calendars](ts-adv-opt#calendar-files) (high resolution)
* [Cross-series](ts-adv-opt#enable-cross-series-feature-generation)
* [Zero inflated](ts-feature-lists#zero-inflated-models) / naïve binary
* [Nowcasting](nowcasting) (historical range predictions)
* ["Blind history" gaps](glossary/index#blind-history)
* [Weighted features](ts-adv-opt#apply-weights)
The following time series capabilities are not supported for Scoring Code:
* Row-based / irregular data
* Nowcasting (single forecast point)
* Intramonth seasonality
* Time series blenders
* Autoexpansion
* EWMA (Exponentially Weighted Moving Average) | scoring-code-consider-ts |
No-Code AI Apps allow you to build and configure AI-powered applications using a no-code interface to enable core DataRobot services without having to build models and evaluate their performance in DataRobot. Applications are easily shared and do not require users to own full DataRobot licenses in order to use them. Applications also offer a great solution for broadening your organization's ability to use DataRobot's functionality. | no-code-app-intro |
The **Over time** chart helps you identify trends and potential gaps in your data by displaying, for both the original modeling data and the derived data, how a feature changes over the primary date/time feature. It is available for all time-aware projects (OTV, single series, and multiseries). For time series, it is available for each user-configured forecast distance.
Using the page's tools, you can focus on specific time periods. Display options for OTV and single-series projects differ from those of multiseries. Note that to view the **Over time** chart you must first compute chart data. Once computed:
1. Set the chart's granularity. The resolution options are auto-detected by DataRobot. All project types allow you to set a resolution (this option is under **Additional settings** for multiseries projects).

2. Toggle the histogram display on and off to see a visualization of the bins DataRobot is using for [EDA1](eda-explained#eda1).
3. Use the date range slider below the chart to highlight a specific region of the time plot. For smaller datasets, you can drag the sliders to a selected portion. Larger data sets use block pagination.

4. For multiseries projects, you can set both the forecast distance and an individual series (or average across series) to plot:

For time series projects, the **Data** page also provides a [Feature Lineage](#feature-lineage-tab) chart to help understand the creation process for derived features.
## Partition without holdout {: #partition-without-holdout }
Sometimes, you may want to create a project without a holdout set, for example, if you have limited data points. Date/time partitioning projects have a minimum data ingest size of 140 rows. If **Add Holdout fold** is not checked, minimum ingest becomes 120 rows.
By default, DataRobot creates a holdout fold. When you toggle the switch off, the red holdout fold disappears from the representation (only the backtests and validation folds are displayed) and backtests recompute and shift to the right. Other configuration functionality remains the same—you can still modify the validation length and gap length, as well as the number of backtests. On the Leaderboard, after the project builds, you see validation and backtest scores, but no holdout score or **Unlock Holdout** option.
The following lists other differences when you do not create a holdout fold:
* Both the [**Lift Chart**](lift-chart#change-the-display) and [**ROC Curve**](pred-dist-graph#data-selection) can only be built using the validation set as their **Data Source**.
* The [**Model Info**](model-info) tab shows no holdout backtest or warnings related to holdout.
* You can only compute predictions for **All data** and the **Validation** set from the [**Predict**](predict#why-use-training-data-for-predictions) tab.
* The [**Learning Curves**](learn-curve) graph does not plot any models trained into Validation or Holdout.
* [**Model Comparison**](model-compare) uses results only from validation and backtesting.
| date-time-include-4 |
### Business problem {: #business-problem }
Because, on average, it takes roughly 20 days to process an auto insurance claim (which often frustrates policyholders), insurance companies look for ways to increase the efficiency of their claims workflows. Increasing the number of claim handlers is expensive, so companies have increasingly relied on automation to accelerate the process of paying or denying claims. Automation can increase Straight-Through Processing (STP) by more than 20%, resulting in faster claims processing and improved customer satisfaction.
However, as insurance companies increase the speed by which they process claims, they also increase their risk of exposure to fraudulent claims. Unfortunately, most of the systems widely used to prevent fraudulent claims from being processed either require high amounts of manual labor or rely on static rules.
## Solution value {: #solution-value }
While Business Rule Management Systems (BRMS) will always be required—they implement mandatory rules related to compliance—you can supplement these systems by improving the accuracy of predicting which incoming claims are fraudulent.
Using historical cases of fraud and their associated features, AI can apply learnings to new claims to assess whether they share characteristics of the learned fraudulent patterns. Unlike BRMS, which are static and have hard-coded rules, AI generates a probabilistic prediction and provides transparency on the unique drivers of fraud for each suspicious claim. This allows investigators to not only route and triage claims by their likelihood of fraud, but also enables them to accelerate the review process as they know which vectors of a claim they should evaluate. The probabilistic predictions also allow investigators to set thresholds that automatically approve or reject claims.
### Problem framing {: #problem-framing }
Work with [stakeholders](#decision-stakeholders) to identify and prioritize the decisions for which automation will offer the greatest business value. In this example, stakeholders agreed that achieving over 20% STP in claims payment was a critical success factor and that minimizing fraud was a top priority. Working with subject matter experts, the team developed a shared understanding of STP in claims payment and built decision logic for claims processing:
Step | Best practice
---- | -------------
Determine which decisions to automate. | Automate simple claims and send the more complex claims to a human claims processor.
Determine which decisions will be based on business rules and which will be based on machine learning. | Manage decisions that rely on compliance and business strategy with rules. Use machine learning for decisions that rely on experiences, including whether a claim is fraudulent and how much the payment will be.
Once the decision logic is in good shape, it is time to build business rules and machine learning models. Clarifying the decision logic reveals the true data needs, which helps decision owners see exactly what data and analytics drive decisions.
### ROI estimation {: #roi-estimation }
One way to frame the problem is to determine how to measure ROI. Consider:
For ROI, multiple AI models are involved in an STP use case. For example, fraud detection, claims severity prediction, and litigation likelihood prediction are common use cases for models that can augment business rules and human judgment. Insurers implementing fraud detection models have reduced payments to fraud by 15% to 25% annually, saving $1 million to $3 million.
To measure:
1. Identify the number of fraudulent claims that models detected but manual processing failed to identify (false negatives).
2. Calculate the monetary amount that would have been paid on these fraudulent claims if machine learning had not flagged them as fraud.
`100 fraudulent claims * $20,000 each on average = $2 million per year`
3. Identify fraudulent claims that manual investigation detected but machine learning failed to detect.
4. Calculate the monetary amount that would have been paid without manual investigation.
`40 fraudulent claims * $5,000 each on average = $0.2 million per year`
The difference between these two numbers would be the ROI.
`$2 million – $0.2 million = $1.8 million per year`
## Working with data {: #working-with-data }
For illustrative purposes, this guide uses a simulated dataset that resembles insurance company data. The dataset consists of 10,746 rows and 45 columns.

### Features and sample data {: #features-and-sample-data }
The target variable for this use case is whether or not a claim submitted is fraudulent. It is a binary classification problem. In this dataset 1,746 of 10,746 claims (16%) are fraudulent.
The target variable:
* `FRAUD`
### Data preparation {: #data-preparation }
Below are examples of 44 features that can be used to train a model to identify fraud. They consist of historical data on customer policy details, claims data including free-text description, and internal business rules from national databases. These features help DataRobot extract relevant patterns to detect fraudulent claims.
Beyond the features listed below, it might help to incorporate any additional data your organization collects that could be relevant to detecting fraudulent claims. For example, DataRobot is able to process image data as a feature together with numeric, categorical, and text features. Images of vehicles after an accident may be useful to detect fraud and help predict severity.
Data from the claim table, policy table, customer table, and vehicle table are merged with customer ID as a key. Only data known before or at the time of the claim creation is used, except for the target variable. Each record in the dataset is a claim.
### Sample feature list {: #sample-feature-list }
Feature name | Data type | Description | Data source | Example
------------ | --------- | ------------| ------------| --------
ID | Numeric | Claim ID | Claim | 156843
FRAUD | Numeric | Target | Claim | 0
DATE | Date | Date of Policy | Policy | 31/01/2013
POLICY_LENGTH | Categorical | Length of Policy | Policy | 12 month
LOCALITY | Categorical | Customer’s locality | Customer | OX29
REGION | Categorical | Customer’s region | Customer | OX
GENDER | Numeric | Customer’s gender | Customer | 1
CLAIM\_POLICY\_DIFF\_A | Numeric | Internal | Policy | 0
CLAIM\_POLICY\_DIFF\_B | Numeric | Internal | Policy | 0
CLAIM\_POLICY\_DIFF\_C | Numeric | Internal | Policy | 1
CLAIM\_POLICY\_DIFF\_D | Numeric | Internal | Policy | 0
CLAIM\_POLICY\_DIFF\_E | Numeric | Internal | Policy | 0
POLICY\_CLAIM\_DAY\_DIFF | Numeric | Number of days since policy taken | Policy, Claim | 94
DISTINCT\_PARTIES\_ON\_CLAIM | Numeric | Number of people on claim | Claim | 4
CLM\_AFTER\_RNWL | Numeric | Renewal History | Policy | 0
NOTIF\_AFT\_RENEWAL | Numeric | Renewal History | Policy | 0
CLM\_DURING\_CAX | Numeric | Cancellation claim | Policy | 0
COMPLAINT | Numeric | Customer complaint | Policy | 0
CLM\_before\_PAYMENT | Numeric | Claim before premium paid | Policy, Claim | 0
PROP\_before\_CLM | Numeric | Claim History | Claim | 0
NCD\_REC\_before\_CLM | Numeric | Claim History | Claim | 1
NOTIF\_DELAY | Numeric | Delay in notification | Claim | 0
ACCIDENT\_NIGHT | Numeric | Night time accident | Claim | 0
NUM\_PI\_CLAIM | Numeric | Number of personal injury claims | Claim | 0
NEW\_VEHICLE\_BEFORE\_CLAIM | Numeric | Vehicle History | Vehicle, Claim | 0
PERSONAL_INJURY_INDICATOR | Numeric | Personal Injury flag | Claim | 0
CLAIM\_TYPE\_ACCIDENT | Numeric | Claim details | Claim | 1
CLAIM\_TYPE\_FIRE | Numeric | Claim details | Claim | 0
CLAIM\_TYPE\_MOTOR\_THEFT | Numeric | Claim details | Claim | 0
CLAIM\_TYPE\_OTHER | Numeric | Claim details | Claim | 0
CLAIM\_TYPE\_WINDSCREEN | Numeric | Claim details | Claim | 0
LOCAL\_TEL\_MATCH | Numeric | Internal Rule Matching | Claim | 0
LOCAL\_M\_CLM\_ADD\_MATCH | Numeric | Internal Rule Matching | Claim | 0
LOCAL\_M\_CLM\_PERS\_MATCH | Numeric | Internal Rule Matching | Claim | 0
LOCAL\_NON\_CLM\_ADD\_MATCH | Numeric | Internal Rule Matching | Claim | 0
LOCAL\_NON\_CLM\_PERS\_MATCH | Numeric | Internal Rule Matching | Claim | 0
federal\_TEL\_MATCH | Numeric | Internal Rule Matching | Claim | 0
federal\_CLM\_ADD\_MATCH | Numeric | Internal Rule Matching | Claim | 0
federal\_CLM\_PERS\_MATCH | Numeric | Internal Rule Matching | Claim | 0
federal\_NON\_CLM\_ADD\_MATCH | Numeric | Internal Rule Matching | Claim | 0
federal\_NON\_CLM\_PERS\_MATCH | Numeric | Internal Rule Matching | Claim | 0
SCR\_LOCAL\_RULE\_COUNT | Numeric | Internal Rule Matching | Claim | 0
SCR\_NAT\_RULE\_COUNT | Numeric | Internal Rule Matching | Claim | 0
RULE\_MATCHES | Numeric | Internal Rule Matching | Claim | 0
CLAIM_DESCRIPTION | Text | Customer Claim Text | Claim | this via others themselves inc become within ours slow parking lot fast vehicle roundabout mall not indicating car caravan neck emergency
## Modeling and insights {: #modeling-and-insights }
DataRobot automates many parts of the modeling pipeline, including processing and partitioning the dataset, as described [here](model-data). That activity is not described here; instead, the following sections describe model interpretation. Reference the [DataRobot documentation](gs-dr-fundamentals) to see how to use DataRobot from start to finish and how to understand the data science methodologies embedded in its automation.
### Feature Impact {: #feature-impact }
[**Feature Impact**](feature-impact) reveals that the number of past personal injury claims (`NUM_PI_CLAIM`) and internal rule matches (`LOCAL_M_CLM_PERS_MATCH`, `RULE_MATCHES`, `SCR_LOCAL_RULE_COUNT`) are among the most influential features in detecting fraudulent claims.
### Feature Effects/partial dependence {: #partial-dependence }

The [partial dependence plot](feature-effects#partial-dependence-calculations) in **Feature Effects** shows that the larger the number of personal injury claims (`NUM_PI_CLAIM`), the higher the likelihood of fraud. As expected, when a claim matches internal red flag rules, its likelihood of being fraud increases greatly. Interestingly, `GENDER` and `CLAIM_TYPE_MOTOR_THEFT` (car theft) are also strong features.
### Word Cloud {: #word-cloud }

The current data includes `CLAIM_DESCRIPTION` as text. A [**Word Cloud**](word-cloud) reveals that customers who use the term "roundabout," for example, are more likely to be committing fraud than those who use the term "emergency." (The size of a word indicates how many rows include the word; the deeper red indicates the higher association it has to claims scored as fraudulent. Blue words are terms associated with claims scored as non-fraudulent.)
### Prediction Explanations {: #prediction-explanations }

[**Prediction Explanations**](pred-explain/index) provide up to 10 reasons for each prediction score. Explanations provide Special Investigation Unit (SIU) agents and claim handlers with useful information to check during investigation. For example, DataRobot not only predicts that Claim ID 8296 has a 98.5% chance of being fraudulent, but it also explains that this high score is due to a specific internal rule match (`LOCAL_M_CLM_PERS_MATCH`, `RULE_MATCHES`) and the policyholder’s six previous personal injury claims (`NUM_PI_CLAIM`). When claim advisors need to deny a claim, they can provide the reasons why by consulting Prediction Explanations.
### Evaluate accuracy {: #evaluate-accuracy }
There are several visualizations that help to evaluate accuracy.
#### Leaderboard {: #leaderboard }
Modeling results show that the ENET Blender is the most accurate model, with 0.93 AUC on cross validation. This is an ensemble of eight single models. The high accuracy indicates that the model has learned signals to distinguish fraudulent from non-fraudulent claims. Keep in mind, however, that blenders take longer to score compared to single models and so may not be ideal for real-time scoring.
The Leaderboard shows that the modeling accuracy is stable across Validation, Cross Validation, and Holdout. Thus, you can expect to see similar results when you deploy the selected model.

#### Lift Chart {: #lift-chart }
The steep increase in the average target value in the right side of the [**Lift Chart**](lift-chart) reveals that, when the model predicts that a claim has a high probability of being fraudulent (blue line), the claim tends to actually be fraudulent (orange line).

#### Confusion matrix {: #confusion-matrix }
The [confusion matrix](confusion-matrix) shows:
* Of 2,149 claims in the holdout partition, the model predicted 372 claims as fraudulent and 1,777 claims as legitimate.
* Of the 372 claims predicted as fraud, 275 were actually fraudulent (true positives), and 97 were not (false positives).
* Of 1,777 claims predicted as non-fraud, 1,703 were actually not fraudulent (true negatives) and 74 were fraudulent (false negatives).

Analysts can examine this table to determine if the model is accurate enough for business implementation.
### Post-processing {: #post-processing }
To convert model predictions into decisions, you determine the best threshold for classifying whether a claim is fraudulent.
#### ROC Curve {: #roc-curve }
Set the [**ROC Curve**](roc-curve-tab-use) threshold depending on how you want to use model predictions and business constraints. Some examples:
If... | Then...
----- | -------
...the main use of the fraud detection model is to automate payment | ...minimize the false negatives (the number of fraudulent claims mistakenly predicted as not fraudulent) by adjusting the threshold to classify prediction scores into fraud or not.
...the main use is to automate the transfer of the suspicious claims to SIU | ...minimize false positives (the number of non-fraudulent claims mistakenly predicted as fraudulent).
...you want to minimize the false negatives, but you do not want false positives to go over 100 claims because of the limited resources of SIU agents | ...lower the threshold just to the point where the number of false positives becomes 100.

#### Payoff matrix {: #payoff-matrix}
From the **Profit Curve** tab, use the [**Payoff Matrix**](profit-curve) to set thresholds based on simulated profit. For example:
Payoff value | Description
------------ | -----------
True positive = $20,000 | Average payment associated with a fraudulent claim.
False positive = -$20,000 | Assumes that a false positive consumes investigator time that would otherwise be spent detecting a real fraudulent claim.
True negative = $100 | Leads to auto-payment of the claim and saves money by eliminating manual claim processing.
False negative = -$20,000 | Cost of missing fraudulent claims.
DataRobot then automatically calculates the threshold that maximizes profit. You can also measure DataRobot ROI by creating the same payoff matrix for your existing business process and subtracting the max profit of the existing process from that calculated by DataRobot.
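As an illustration of the payoff arithmetic, using the payoff values above and the holdout confusion matrix from this use case (275 true positives, 97 false positives, 1,703 true negatives, 74 false negatives):

``` python
# Simulated total profit at one threshold, using this use case's example payoff values.
tp, fp, tn, fn = 275, 97, 1703, 74
payoff = tp * 20_000 + fp * (-20_000) + tn * 100 + fn * (-20_000)
print(payoff)  # 2250300
```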

Once the threshold is set, model predictions are converted into fraud or non-fraud according to the threshold. These classification results are integrated into BRMS and become one of the many factors that determine the final decision.
## Predict and deploy {: #predict-and-deploy }
After selecting the model that best learns patterns to predict fraud, you can deploy it into your desired decision environment. Decision environments are the ways in which the predictions generated by the model will be consumed by the appropriate organizational stakeholders, and how these stakeholders will make decisions using the predictions to impact the overall process.
### Decision stakeholders {: #decision-stakeholders }
The following table lists potential decision stakeholders:
Stakeholder | Description
----------- | -----------
Decision executors | The decision logic assigns claims that require manual investigation to claim handlers (executors) and SIU agents based on claim complexity. They investigate the claims, referring to insights provided by DataRobot, and decide whether to pay or deny. Each week, they report a summary of the claims received and their decisions to decision authors.
Decision managers | Managers monitor the KPI dashboard, which visualizes the results of following the decision logic. For example, they track the number of fraudulent claims identified and missed. They can discuss with decision authors how to improve the decision logic each week.
Decision authors | Senior managers in the claims department examine the performance of the decision logic using input from decision executors and decision managers. For example, decision executors report whether the fraudulent claims they receive are reasonable, and decision managers report whether the rate of fraud is as expected. Based on these inputs, decision authors update the decision logic each week.
### Decision process {: #decision-process }
This use case blends augmentation and automation for decisions. Instead of claim handlers manually investigating every claim, business rules and machine learning will identify simple claims that should be automatically paid and problematic claims that should be automatically denied. Fraud likelihood scores are sent to BRMS through the API and post-processed into high, medium, and low risk, based on set thresholds, and arrive at one of the following final decisions:
Action | Degree of risk
------ | -------------
SIU | High
Assign to claim handlers | Medium
Auto pay | Low
Auto deny | Low
Routing to claims handlers includes intelligent triage, in which claims handlers receive fewer claims, and only those better tailored to their skills and experience. For example, more complex claims can be identified and sent to more experienced claims handlers. SIU agents and claim handlers decide whether to pay or deny the claims after investigation.
### Model deployment {: #model-deployment }
Predictions are deployed through the API and sent to the BRMS.
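A hedged sketch of how scores might be retrieved over a deployment's prediction API before being forwarded to the BRMS—the URL pattern, headers, deployment ID, and feature payload below are placeholders; use the integration snippet provided with your deployment:

``` python
import requests

url = "https://<prediction-server>/predApi/v1.0/deployments/<DEPLOYMENT_ID>/predictions"
headers = {
    "Authorization": "Bearer <API_TOKEN>",
    "DataRobot-Key": "<DATAROBOT_KEY>",   # often required; depends on your prediction environment
    "Content-Type": "application/json",
}
claims = [{"NUM_PI_CLAIM": 6, "POLICY_LENGTH": "12 month", "CLAIM_DESCRIPTION": "..."}]

response = requests.post(url, headers=headers, json=claims)
response.raise_for_status()
print(response.json()["data"][0])  # prediction payload for the first claim
```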

### Model monitoring {: #model-monitoring }
Using DataRobot [MLOps](mlops/index), you can monitor, maintain, and update models within a single platform.
Each week, decision authors monitor the fraud detection model and retrain the model if [data drift](data-drift) reaches a certain threshold. In addition, along with investigators, decision authors can regularly review the model decisions to ensure that data are available for future retraining of the fraud detection model. Based on the review of the model's decisions, the decision authors can also update the decision logic. For example, they might add a repair shop to the red flags list and improve the threshold to convert fraud scores into high, medium, or low risk.
DataRobot provides tools for managing and monitoring the deployments, including accuracy and data drift.
### Implementation considerations {: #implementation-considerations }
Business goals should determine decision logic, not data. The project begins with business users building decision logic to improve business processes. Once decision logic is ready, true data needs will become clear.
Integrating business rules and machine learning into production systems can be problematic. Business rules and machine learning models need to be updated frequently. Externalizing the rules engine and machine learning allows decision authors to make frequent improvements to decision logic. When the rules engine and machine learning are embedded in production systems, updating decision logic becomes difficult because it requires changes to those systems.
Trying to automate all decisions will not work. It is important to decide which decisions to automate and which decisions to assign to humans. For example, business rules and machine learning cannot identify fraud 100% of the time; human involvement is still necessary for more complex claim cases.
| fraud-claims-include |
The execution environment limit allows you to control how many custom model environments a user can add to the [Custom Model Workshop](custom-model-workshop/index). In addition, the execution environment _version_ limit allows you to control how many versions a user can add to _each_ of those environments. These limits can be:
1. **Directly applied to the user**: Set in a user's permissions. Overrides the limits set in the group and organization permissions.
2. **Inherited from a user group**: Set in the permissions of the group a user belongs to. Overrides the limits set in organization permissions.
3. **Inherited from an organization**: Set in the permissions of the organization a user belongs to.
If the environment or environment version limits are defined for an organization or a group, the users within that organization or group inherit the defined limits. However, a more specific definition of those limits at a lower level takes precedence. For example, an organization may have the environment limits set to 5, a group to 4, and the user to 3; in this scenario, the final limit for the individual user is 3.
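As a rough illustration of this precedence, the following sketch resolves the effective limit from whichever level defines it most specifically. The function and values are illustrative only; they are not DataRobot API calls.

``` python
# Minimal sketch of limit precedence: the most specific setting (user, then
# group, then organization) wins. Values are illustrative only.

def effective_limit(user_limit=None, group_limit=None, org_limit=None):
    """Return the most specific limit that is defined."""
    for limit in (user_limit, group_limit, org_limit):
        if limit is not None:
            return limit
    return None  # no limit configured at any level

# Example from the text: organization=5, group=4, user=3 -> effective limit is 3.
print(effective_limit(user_limit=3, group_limit=4, org_limit=5))  # 3
print(effective_limit(group_limit=4, org_limit=5))                # 4
```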
For more information on adding custom model execution environments, see the [Custom model environments documentation](custom-environments).
To manage the execution environment limits in the platform settings:
| ex-env-limits |
## Feature considerations {: #feature-considerations }
Consider the following when working with Scoring Code:
* Using Scoring Code in production requires additional development efforts to implement model management and model monitoring, which the DataRobot API provides out of the box.
* Exportable Java Scoring Code requires extra RAM during model building. As a result, to use this feature, you should keep your training dataset under 8GB. Projects larger than 8GB may fail due to memory issues. If you get an out-of-memory error, decrease the sample size and try again. The memory requirement _does not apply during model scoring_. During scoring, the only limitation on the dataset is the RAM of the machine on which the Scoring Code is run.
### Model support {: #model-support }
* Scoring Code is available for models containing only _supported_ built-in tasks. It is not available for [custom models](custom-inf-model) or models containing one or more [custom tasks](cml-custom-tasks).
* Scoring Code is not supported in multilabel projects.
* Keras models do not support Scoring Code by default; however, support can be enabled by having an administrator activate the **Enable Scoring Code Support for Keras Models** feature flag. If enabled, note that these models are not compatible with Scoring Code for Android and Snowflake.
Additional instances in which Scoring Code generation is not available include:
* Naive Bayes
* Text tokenization involving the MeCab tokenizer
* Visual AI and Location AI
### Time series support {: #time-series-support }
* The following time series capabilities are not supported for Scoring Code:
* Row-based / irregular data
* Nowcasting (single forecast point)
* Intramonth seasonality
* Time series blenders
* Autoexpansion
* EWMA (Exponentially Weighted Moving Average)
* Scoring Code is not supported in time series binary classification projects.
* Scoring Code is not typically supported in time series anomaly detection models; however, it is supported for IsolationForest and some XGBoost-based anomaly detection model blueprints. For a list of supported time series blueprints, see the [Time series blueprints with Scoring Code support](#ts-sc-blueprint-support) note.
{% include 'includes/scoring-code-consider-ts.md' %}
### Prediction Explanations support {: #prediction-explanations-support }
Consider the following when working with Prediction Explanations for Scoring Code:
* To download Prediction Explanations with Scoring Code, you _must_ select **Include Prediction Explanations** during [Leaderboard download](sc-download-leaderboard#leaderboard-download) or [Deployment download](sc-download-deployment#deployment-download). This option is _not_ available for [Legacy download](sc-download-legacy).
* Scoring Code _doesn't_ support Prediction Explanations for time series models.
* Scoring Code _only_ supports [XEMP-based](xemp-pe) prediction explanations. [SHAP-based](shap-pe) prediction explanations aren't supported.
| scoring-code-consider |
!!! note
Some [DataRobot University](https://university.datarobot.com){ target=_blank } courses require subscriptions.
| dru-subscription |
The [metrics values](#metrics-explained) on the ROC curve display might not always match those shown on the Leaderboard. For ROC curve metrics, DataRobot keeps up to 120 of the calculated thresholds that best represent the distribution. Because of this, minute details might be lost. For example, if you select **Maximize MCC** as the [display threshold](threshold#set-the-display-threshold), DataRobot preserves the top 120 thresholds and calculates the maximum among them. This value is usually very close but may not exactly match the metric value. | max-metrics-roc |
## Time-aware models on the Leaderboard {: #time-aware-models-on-the-leaderboard }
Once you click **Start**, DataRobot begins the model-building process and returns results to the Leaderboard.
!!! note
Model parameter selection has not been customized for date/time-partitioned projects. Though automatic parameter selection yields good results in most cases, [**Advanced Tuning**](adv-tuning) may significantly improve performance for some projects that use the Date/Time partitioning feature.
While most elements of the Leaderboard are the same, DataRobot's calculation and assignment of [recommended models](model-rec-process) differs. Also, the **Sample Size** function is different for date/time-partitioned models. Instead of reporting the percentage of the dataset used to build a particular model, under **Feature List & Sample Size**, the default display lists the sampling method (random/latest) and either:
* The start/end date (either manually added or automatically assigned for the recommended model):

* The duration used to build the model:

* The number of rows:

* The **Project Settings** label, indicating custom backtest configuration:

You can filter the Leaderboard display by time window sample percent, sampling method, and feature list using the dropdown available from **Feature List & Sample Size**. Use this, for example, to easily select models in a single Autopilot stage.

Autopilot does not optimize the amount of data used to build models when using Date/Time partitioning. Different length training windows may yield better performance by including more data (for longer model-training periods) or by focusing on recent data (for shorter training periods). You may improve model performance by adding models based on shorter or longer training periods. You can customize the training period with the **Add a Model** option on the Leaderboard.
Another partitioning-dependent difference is the origination of the Validation score. With date partitioning, DataRobot initially builds a model using only the first backtest (the partition displayed just below the holdout test) and reports the score on the Leaderboard. When calculating the holdout score (if enabled) for row count or duration models, DataRobot trains on the first backtest, freezes the parameters, and then trains the holdout model. In this way, models have the same relationship (i.e., the duration from the end of backtest 1 training to the start of its validation equals the duration from the end of holdout training data to the start of holdout).
Note, however, that backtesting scores depend on the selected [sampling method](#set-rows-or-duration). DataRobot only scores all backtests for a limited number of models (you must manually run the others). The automatically run backtests are determined as follows:
* With *random*, DataRobot always backtests the best blueprints on the max available sample size. For example, if `BP0 on P1Y @ 50%` has the best score, and BP0 has been trained on `P1Y@25%`, `P1Y@50%` and `P1Y` (the 100% model), DataRobot will score all backtests for BP0 trained on P1Y.
* With *latest*, DataRobot preserves the exact training settings of the best model for backtesting. In the case above, it would score all backtests for `BP0 on P1Y @ 50%`.
Note that when the model used to score the validation set was trained on less data than the training size displayed on the Leaderboard, the score displays an asterisk. This happens when training size is equal to full size minus holdout.
Just like [cross-validation](data-partitioning), you must initiate a separate build for the other configured backtests (if you initially set the number of backtests to greater than 1). Click a model’s **Run** link on the Leaderboard, or use **Run All Backtests for Selected Models** from the Leaderboard menu. (You can use this option to run backtests for single or multiple models at one time.)

The resulting score displayed in the **All Backtests** column represents an average score for all backtests. See the description of [**Model Info**](model-info) for more information on backtest scoring.

### Change the training period {: #change-the-training-period }
!!! note
Consider [retraining your model on the most recent data](otv#retrain-before-deployment) before final deployment.
You can change the training range and sampling rate and then rerun a particular model for date-partitioned builds. Note that you cannot change the duration of the validation partition once models have been built; that setting is only available from the **Advanced options** link before the building has started. Click the plus sign (**+**) to open the **New Training Period** dialog:

The **New Training Period** box has multiple selectors, described in the table below:

| | Selection | Description |
|---|---|---|
|  | Frozen run toggle | [Freeze the run](frozen-run) |
|  | Training mode | Rerun the model using a different training period. Before setting this value, see [the details](ts-customization#duration-and-row-count) of row count vs. duration and how they apply to different folds. |
|  | Snap to | "Snap to" predefined points to facilitate entering values and avoid manual scrolling or calculation. |
|  | [Enable time window sampling](#time-window-sampling) | Train on a subset of data within a time window for a duration or [start/end](#setting-the-start-and-end-dates) training mode. Check to enable and specify a percentage. |
|  | [Sampling method](#set-rows-or-duration) | Select the sampling method used to assign rows from the dataset. |
| | Summary graphic | View a summary of the observations and testing partitions used to build the model. |
|  | Final Model | View an image that changes as you adjust the dates, reflecting the data to be used in the model you will make predictions with (see the [note](#about-final-models) below). |
Once you have set a new value, click **Run with new training period**. DataRobot builds the new model and displays it on the Leaderboard.
#### Setting the duration {: #setting-the-duration}
To change the training period a model uses, select the **Duration** tab in the dialog and set a new length. Duration is measured from the beginning of validation working back in time (to the left). With the Duration option, you can also enable [time window sampling](#time-window-sampling).
DataRobot returns an error for any period of time outside of the observation range. Also, the units available depend on the time format (for example, if the format is `%d-%m-%Y`, you won't have hours, minutes, and seconds).

#### Setting the row count {: #setting-the-row-count }
The row count used to build a model is reported on the Leaderboard as the Sample Size. To vary this size, click the **Row Count** tab in the dialog and enter a new value.

#### Setting the start and end dates {: #setting-the-start-and-end-dates }
If you enable [Frozen run](frozen-run) by clicking the toggle, DataRobot reuses the parameter settings it established in the original model run on the newly specified sample. Enabling Frozen run unlocks a third training criterion, Start/End Date. Use this selection to manually specify which data DataRobot uses to build the model. With this setting, after unlocking holdout, you can train a model into the Holdout data. (The Duration and Row Count selectors do not allow training into holdout.) Note that if holdout is locked and your date range overlaps it, model building will fail. With the start and end dates option, you can also enable [time window sampling](#time-window-sampling).

When setting start and end dates, note the following:
* DataRobot does not run backtests because some of the data may have been used to build the model.
* The end date is excluded when extracting data. In other words, if you want data through December 31, 2015, you must set end-date to January 1, 2016.
* If the validation partition (set via Advanced options before initial model build) occurs after the training data, DataRobot displays a validation score on the Leaderboard. Otherwise, the Leaderboard displays N/A.
* Similarly, if any of the holdout data is used to build the model, the Leaderboard displays N/A for the Holdout score.
* Date/time partitioning does not support dates before 1900.
Click **Start/End Date** to open a clickable calendar for setting the dates. The dates displayed on opening are those used for the existing model. As you adjust the dates, check the **Final model** graphic to view the data your model will use.

### Time window sampling {: #time-window-sampling }
If you do not want to use all data within a time window for a date/time-partitioned project, you can train on a subset of data within a time window specification. To do so, check the **Enable time window sampling** box and specify a percentage. DataRobot takes a uniform sample over the time range using that percentage of the data. This feature helps with larger datasets that may need the full time window to capture seasonality effects, but could otherwise face runtime or memory limitations.

## View summary information {: #view-summary-information }
Once models are built, use the [**Model Info**](model-info) tab for the model overview, backtest summary, and resource usage information.

Some notes:
* Hover over the folds to display rows, dates, and duration, as they may differ from the values shown on the Leaderboard. The values displayed are the actual values DataRobot used to train the model. For example, suppose you request a [Start/End Date](#setting-the-start-and-end-dates) model from 6/1/2015 to 6/30/2015 but your dataset only contains data from 6/7/2015 to 6/14/2015. In that case, the hover display indicates the actual dates, 6/7/2015 through 6/15/2015, for start and end dates, with a duration of eight days.
* The **Model Overview** is a summary of row counts from the validation fold (the first fold under the holdout fold).
* If you created duration-based testing, the validation summary could result in differences in numbers of rows. This is because the number of rows of data available for a given time period can vary.
* A message of **Not Yet Computed** for a backtest indicates that no data was available for the validation fold (for example, because of gaps in the dataset). In this case, because not all backtests completed, DataRobot displays an asterisk on the backtest score.
* The “reps” listed at the bottom correspond to the backtests above and are ordered in the sequence in which they finished running.
| date-time-include-3 |
The following sections describe the components of making predictions in DataRobot:
* [Using the Prediction API](../../../api/reference/predapi/index)
* [Using Scoring Code](../../../predictions/scoring-code/index)
* [Using batch scoring](../../../predictions/batch/index)
* [Using the UI to make predictions](../../../predictions/ui/index)
* [Using the Portable Prediction Server](portable-pps)
!!! info "Availability information"
DataRobot’s exportable models and independent prediction environment option, which allows a user to export a model from a model building environment to a dedicated and isolated prediction environment, is not available for managed AI Platform deployments.
## Predictions overview {: #predictions-overview }
DataRobot offers several different methods for getting predictions on new data (also known as scoring) from a model.
### Predictions with the UI {: #predictions-with-the-ui }
The simplest method for making predictions is to [use the UI](../../../predictions/ui/index) to score data. This option is great, for example, for those who use Excel to create quarterly reports. You can upload new data to score, let DataRobot make predictions, and then download the results.

### Prediction server to deploy a model {: #prediction-server-to-deploy-a-model }
You can use a [DataRobot prediction server with a REST API](predapi/index) to access a more automated, real-time scoring method. This method can be easily integrated with other IT systems, applications, or code to query a DataRobot model and return predictions. The prediction server can be hosted in both SaaS and Self-Managed AI Platform environments. You can also use the [Portable Prediction Server](portable-pps), which runs disconnected from main installation environments outside of DataRobot.
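As a rough sketch of what a REST integration can look like, the example below posts a CSV file to a prediction endpoint with the `requests` library. The URL, authentication header, and payload format here are placeholders, not the documented DataRobot Prediction API; use the integration snippet provided with your deployment for the exact values.

``` python
# Minimal sketch of calling a deployed model over REST.
# URL, headers, and payload format are placeholders; consult your deployment's
# integration snippet for the real endpoint, headers, and response schema.
import requests

API_TOKEN = "YOUR_API_TOKEN"                                       # placeholder
PREDICTION_URL = "https://example.datarobot.com/.../predictions"   # placeholder

with open("claims_to_score.csv", "rb") as f:
    response = requests.post(
        PREDICTION_URL,
        headers={
            "Authorization": f"Bearer {API_TOKEN}",  # assumption: token-based auth
            "Content-Type": "text/csv",
        },
        data=f,
    )
response.raise_for_status()
print(response.json())  # predictions for each scored row
```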
### Scoring Code to deploy a model {: #scoring-code-to-deploy-a-model }
You can export [Scoring Code](../../../predictions/scoring-code/index) from DataRobot in Java or Python to make predictions. Scoring Code is portable and executable in any computing environment. This method is useful for low-latency applications that cannot fully support REST API performance or lack network access.
### Make predictions and monitor model health {: #make-predictions-and-monitor-model-health }
If you use any of the methods mentioned above, DataRobot allows you to deploy a model and monitor its prediction output and performance over a selected time period.
A critical part of the model management process is to identify when a model starts to deteriorate and quickly address it. Once trained, models can then make predictions on new data that you provide. However, prediction data changes over time—businesses expand to new cities, new products enter the market, policy or processes change—any number of changes can occur. This can result in [data drift](data-drift), the term used to describe when newer data moves away from the original training data, which can result in poor or unreliable prediction performance over time.
Use the [deployment dashboard](../../../mlops/index) to analyze a model's performance metrics: prediction response time, model health, accuracy, data drift analysis, and more. When models deteriorate, the common action is to retrain a new model. Deployments allow you to replace models without re-deploying them, so you don't need to change your code, and DataRobot can track and represent the entire history of a model used for a particular use case.
| pred-tab |
The sample dataset contains patient data.

The goal is to predict the likelihood of patient readmission to the hospital. The target feature is `readmitted`.
| tu-dataset-patient-data |
DataRobot's Workbench interface streamlines the modeling process, minimizing time-to-value while still leveraging cutting-edge ML techniques. It is designed to match the data scientist's iterative workflows with a cleaner interface for easier project creation and model review, smooth navigation, and all key insights in one place.
The Workbench user interface lets you group, organize, and share your modeling assets to better leverage DataRobot for enhanced experimentation. These assets—datasets, experiments, notebooks, and no-code apps—are housed within folder-like containers known as _Use Cases_.
Because the modeling process extends beyond just model training, Workbench incorporates prepping data, training models, and leveraging results to make business decisions. It supports the idea of _experiments_ to iterate through potential solutions until an outcome is reached. In other words, Workbench minimizes the time it takes to prep data, model, learn from modeling, prep more data, and model again, through as many iterations as it takes until a model is chosen and findings can be presented to stakeholders.
Specifically, Workbench improves experimentation by letting you:
* Accelerate iteration and collaboration with repeatable, measured experiments.
* Convert raw data into modeling-ready prepared, partitioned data.
* Automate to quickly generate key insights and predictions from the best models.
* Share model reports and dashboards with key stakeholders for feedback and approval.
* Access from both an intuitive user interface and a notebook environment.
## Architecture {: #architecture }
The following diagram illustrates the general components that make up the Workbench hierarchy:

## Navigation {: #navigation }
DataRobot provides breadcrumbs to help with navigation and asset selection.

Click on any asset in the path to return to a location. For the final asset in the path, DataRobot provides a dropdown of same-type assets within the Use Case, to quickly access different assets without backtracking.
## Use Case assets {: #use-case-assets }
A Use Case is composed of zero or more of the following assets:
Asset (symbol) | Read more
-------------- | ---------
 Datasets | [Data preparation](wb-dataprep/index)
 Experiments | [Experiments](wb-experiment/index)
 Apps | [No-Code AI Apps](wb-apps/index)
 Notebooks | [DataRobot Notebooks](wb-notebook/index) (Public Preview/disabled by default)
## Workbench directory {: #workbench-directory }
To get started with Workbench, click the icon in the top navigation bar of the DataRobot application.

DataRobot opens to your Workbench directory. The directory is the platform's landing page, providing a listing of Use Cases you are a member of and a button for creating new Use Cases.
On first entry, the landing page provides a welcome and displays quick highlights of what you can do in Workbench. After your first Use Case is started, the directory lists all Use Cases either owned by or shared with you.
See additional information on [creating, managing, and sharing Use Cases](wb-build-usecase).
## Sample workflow {: #sample-workflow }
The following workflow shows different ways you can navigate through DataRobot's Workbench:
``` mermaid
flowchart TB
A[Open Workbench] --> B((Create/open a Use Case));
B --> C[Add a dataset];
B --> D[Add an experiment];
B --> E[Add a notebook];
C -. optional .-> F[Wrangle your data];
F -.-> G[Create an experiment];
G --> H[Set the target];
D --> L[Select a dataset];
L --> H;
H --> I[Start modeling];
I --> J[Evaluate models];
J --> K[Make predictions];
```
## Next steps {: #next-steps }
From here, you can:
* [Build a Use Case](wb-usecase/index).
* [Add and wrangle data](wb-dataprep/index).
* [Create experiments](wb-experiment/index). | wb-overview |
!!! note
When ingesting data into a Pipeline Workspace via the AI Catalog Import module, if DataRobot cannot interpret a column as a date, time, or timestamp, it is converted to a string column and the data type must be manually updated. Data type detection accepts the same [date and time formats](file-types#date-and-time-formats) as the AI Catalog. However, if **Force Column Types to Strings** on the module's **Details** tab is enabled, date and time values are also converted to string columns.
| pl-temp-data |
The following table provides an evolving comparison of capabilities available in DataRobot Classic and Workbench.
Feature | DataRobot Classic | Workbench
------- |------------------ | ---------
_**General platform features**_ | :~~: | :~~:
Sharing | Data, projects | Data, Use Cases
Business-wide solution | No, single projects | Yes, experiments in a Use Case
[Authentication](authentication/index){ target=_blank } | SSO, 2FA, API key management | SSO, 2FA, API key management
_**Data-related capabilities**_ | :~~: | :~~:
Data sources | [Certified JDBC Connectors](data-sources/index){ target=_blank }, local file upload, [URL](import-to-dr#import-a-dataset-from-a-url){ target=_blank } | Snowflake, local file upload, static and snapshot datasets created in DataRobot Classic
Data preparation | No | [Wrangling](wb-wrangle-data/index){ target=_blank }
[Feature Discovery](feature-discovery/index){ target=_blank } | Yes | No
[Data Quality Assessment](data-quality){ target=_blank } | Yes | No
Data storage | [AI Catalog](ai-catalog/index){ target=_blank } | [Data Registry](wb-data-registry){ target=_blank }
_**Modeling-related capabilities**_ | :~~: | :~~:
Modeling types | Binary classification, regression, multiclass, multilabel, clustering, anomaly detection | Binary classification, regression
[Partitioning](partitioning){ target=_blank } | Random, Partition Feature, Group, Date/Time, Stratified | Random, Stratified, Date/Time
[TVH partitioning](data-partitioning){ target=_blank } | Yes | Yes
[Feature lists](feature-lists){ target=_blank } | Automatic and manual | Automatic (Informative and Raw, Univariate Selections, Reduced Features)
[Modeling modes](model-data#set-the-modeling-mode){ target=_blank } | Quick, full Autopilot, Comprehensive, Manual | Quick
[Advanced options](additional){ target=_blank } | Yes | Partitioning only
Time-aware | [Time series](time/index){ target=_blank } and [OTV](otv){ target=_blank } | Yes
Blenders | Yes, with [option enabled](additional){ target=_blank } | No
[Retraining](creating-addl-models){ target=_blank } | Yes | New feature list or sample size only
[Model Repository](repository){ target=_blank } | Yes | Yes
[Composable ML](cml/index){ target=_blank } | Yes | No
[Visual AI](visual-ai/index){ target=_blank } | Yes | No
[Bias and Fairness](b-and-f/index){ target=_blank } | Yes | No
[Text AI](textai-resources){ target=_blank } | Yes | Yes, for supported model types
[Location AI](location-ai/index){ target=_blank } | Yes | No
Model insights | [See the full list](analyze-models/index){ target=_blank } | Feature Impact, Blueprint, ROC Curve, Lift Chart, Residuals, Accuracy Over Time, Stability
Unlocking holdout | Automatically for the recommended model or anything prepared for deployment | Automatically for all models
Downloads | Data, Leaderboard, Scoring Code, Compliance Report, exportable charts | Compliance Report
_**Prediction-related capabilities**_ | :~~: | :~~:
Predictions | [Predictions](predictions/index){ target=_blank } | Model Overview > Make predictions
_**MLOps (coming soon)**_ | :~~: | :~~:
[MLOps](mlops/index){ target=_blank } | Yes | No
_**No-Code AI Apps**_ | :~~: | :~~:
[No-Code AI Apps](app-builder/index){ target=_blank } | Yes | Yes
_**DataRobot Notebooks**_ | :~~: | :~~:
[DataRobot Notebooks](dr-notebooks/index){ target=_blank } | Yes | Yes | wb-capability-matrix |
---
title: Solution accelerators
description: This section provides access to the catalog of use case-based solution accelerators, segmented by industry.
---
# Solution accelerators {: #solution-accelerators }
Solution accelerators provide a packaged solution, based on best practices and patterns, to address the most common use cases.
These sections provide a variety of industry-based solutions:
Topic | Provides solutions for...
----- | ------
[Banking](banking/index) | The banking industry.
[Healthcare](healthcare/index) | Healthcare providers, healthcare payers, and life sciences practitioners.
| index |
## Understanding backtests {: #understanding-backtests }
Backtesting is conceptually the same as cross-validation in that it provides the ability to test a predictive model using existing historical data. That is, you can evaluate how the model would have performed historically to estimate how the model will perform in the future. Unlike cross-validation, however, backtests allow you to select specific time periods or durations for your testing instead of random rows, creating in-sequence, instead of randomly sampled, “trials” for your data. So, instead of saying “break my data into 5 folds of 1000 random rows each,” with backtests you say “simulate training on 1000 rows, predicting on the <em>next</em> 10. Do that 5 times.” Backtests simulate training the model on an older period of training data, then measure performance on a newer period of validation data. After models are built, through the Leaderboard you can [change the training](#change-the-training-period) range and sampling rate. DataRobot then retrains the models on the shifted training data.
If the goal of your project is to predict forward in time, backtesting gives you a better understanding of model performance (on a time-based problem) than cross-validation. For time series problems, this equates to more confidence in your predictions. Backtesting confirms model robustness by allowing you to see whether a model consistently outperforms other models across all folds.
The number of backtests that DataRobot defaults to depends on the project parameters, but you can configure the build to include up to 20 backtests for additional model accuracy. Additional backtests provide more trials of your model so that you can be more confident in your estimates. You can carefully configure the duration and dates so that you can, for example, generate “10 two-month predictions.” Once the backtests are configured to avoid specific periods, you can ask “Are the predictions similar?” or, for two similar months, “Are the errors the same?”
Large gaps in your data can make backtesting difficult. If your dataset has long periods of time without any observed data, it is prudent to review where these gaps fall in your backtests. For example, if a validation window has too few data points, choosing a longer data validation window will ensure more reliable validation scores. While using more backtests may give you a more reliable measure of model performance, it also decreases the maximum training window available to the earliest backtest fold.
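The sketch below illustrates the in-sequence "trials" described above by generating training and validation index ranges that walk back in time. The window sizes and the number of backtests are illustrative only; DataRobot configures these for you based on your partitioning settings.

``` python
# Minimal sketch of the backtesting idea: in-sequence trials rather than
# random folds. Window sizes and the number of backtests are illustrative.

def backtest_windows(n_rows, train_size, validation_size, n_backtests):
    """Yield (train_rows, validation_rows) index ranges, newest backtest first."""
    end = n_rows
    for _ in range(n_backtests):
        validation = range(end - validation_size, end)
        train = range(end - validation_size - train_size, end - validation_size)
        yield train, validation
        end -= validation_size  # slide the window back in time

# "Simulate training on 1000 rows, predicting on the next 10. Do that 5 times."
for train, validation in backtest_windows(n_rows=1050, train_size=1000,
                                          validation_size=10, n_backtests=5):
    print(f"train rows {train.start}-{train.stop - 1}, "
          f"validate rows {validation.start}-{validation.stop - 1}")
```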
## Understanding gaps {: #understanding-gaps }
Configuring gaps allows you to reproduce time gaps usually observed between model training and model deployment (a period for which data is not to be used for training). It is useful in cases where, for example:
* Only older data is available for training (because [ground truth](https://en.wikipedia.org/wiki/Ground_truth#Statistics_and_machine_learning){ target=_blank } is difficult to collect).
* A model’s validation and subsequent deployment takes weeks or months.
* Predictions must be delivered in advance for review or action.
A simple example: in insurance, it can take roughly a year for a claim to "develop" (the time between filing and determining the claim payout). For this reason, an actuary is likely to price 2017 policies based on models trained with 2015 data. To replicate this practice, you can insert a one-year gap between the training set and the validation set. This makes model evaluation more realistic. Other examples include when pricing needs regulator approval, retail sales for a seasonal business, and pricing estimates that rely on delayed reporting.
| date-time-include-6 |
| | Element | Description |
|---|---|---|
|  | Filter | Allows you to select a specific [class](vai-ref#image-embeddings) to display. All classes display by default.|
|  | Image display | Displays projections of images in two dimensions to help you visualize similarities between groups of images and to identify outliers.|
|  | Target and Actual tooltip | Displays, on hover, the target and the actual value for the image. Use these tooltips to compare images to see whether DataRobot is grouping images as you would expect.|
|  | Zoom controls | Enlarges or reduces the image displayed on the canvas. Alternatively, double-click on the image. To move areas of the display into focus, click and drag. |
See the [reference material](vai-ref#image-embeddings) for more information.
| image-embeddings-include |
## Business problem {: #business-problem }
After the 2008 financial crisis, the IASB (International Accounting Standard Board) and FASB (Financial Accounting Standards Board) reviewed accounting standards. As a result, they updated policies to require estimated Expected Credit Loss (ECL) to maintain enough regulatory capital to handle any Unexpected Loss (UL). Now, every risk model undergoes tough scrutiny, and it is important to be aware of the regulatory guidelines while trying to deliver an AI model. This use case focuses on credit risk, which is defined as the likelihood that a borrower would not repay their lender.
Credit risk can arise for individuals, SMEs, and large corporations, and each is responsible for calculating ECL. Depending on the asset class, different companies take different strategies and the components of the calculation differ, but they involve:
* Probability of Default (PD)
* Loss Given Default (LGD)
* Exposure at Default (EAD)
The most common approach for calculating the ECL is the following (see more about these factors in [problem framing](#problem-framing)):
`ECL = PD * LGD * EAD`
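For example, with an assumed PD of 5%, LGD of 40%, and EAD of $200,000 (illustrative values only), the expected credit loss is 0.05 * 0.40 * $200,000 = $4,000.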
This use case builds a PD model for a consumer loan portfolio and provides some suggestions related to LGD and EAD modeling. Sample training datasets for using some of the techniques described here are publicly available on [Kaggle](https://www.kaggle.com/c/home-credit-default-risk){ target=_blank }, but for interpretability, the examples do not exactly represent the Kaggle datasets.
[Click here](#working-with-data) to jump directly to the hands-on sections that begin with working with data. Otherwise, the following several paragraphs describe the business justification and problem framing for this use case.
## Solution value {: #solution-value }
Many credit decisioning systems are driven by scorecards, which are very simplistic rule-based systems. These are built by end-user organizations through industry knowledge or through simple statistical systems. Some organizations go a step further and obtain scorecards from third parties, which may not be customized for an individual organization’s book.
An AI-based approach can help financial institutions learn signals from their own book and assess risk at a more granular level. Once the risk is calculated, a strategy may be implemented to use this information for interventions. If you can predict someone is going to default, this may lead to intervention steps, such as sending earlier notices or rejecting loan applications.
### Problem framing {: #problem-framing }
Banks deal with different types of risks, like credit risk, market risk, and operational risk. Calculating the ECL using `ECL = PD * LGD * EAD` is the most common approach. Risk is defined in financial terms as the chance that an outcome or investment’s actual gains will differ from an expected outcome or return.
There are many ways you can position the problem, but in this specific use case you will be building **_Probability of Default (PD)_** models and will provide some guidance related to LGD and EAD modeling. For the PD model, the **target variable** is `is_bad`. In the training data, `0` indicates the borrower did pay and `1` indicates they defaulted.
Here is additional guidance on the definition of each component of the ECL equation.
_Probability of Default (PD)_
* The borrower’s inability to repay their debt in full or on time.
* Target is normally defined as 90 days delinquency.
* Machine learning models generally give good results if adequate data is available for a particular asset class.
_Loss Given Default (LGD)_
* The proportion of the total exposure that cannot be recovered by the lender once a default has occurred.
* Target is normally defined as the recovery rates and the value lies between 0 and 1.
* Machine learning models for this problem normally use Beta regression, which is not very common and therefore not supported in a lot of statistical software. The modeling can be divided into two stages because many of the target values are zero.
* Stage 1—the model predicts the likelihood of recovery greater than zero.
* Stage 2—the model predicts the rate for all loans with the likelihood of recovery greater than zero.
_Exposure at Default (EAD)_
* The total value that a lender is exposed to when a borrower defaults.
* Target is the proportion from the original amount of the loan that is still outstanding at the moment when the borrower defaulted.
* Generally, machine learning models with MSE as loss are used.
### ROI estimation {: #roi-estimation }
The ROI for implementing this solution can be estimated by considering the following factors:
* ROI varies with the size of the business and the portfolio. For example, the ROI for secured loans would be quite different from that for a credit card portfolio.
* If you are moving from one compliance framework to another, consider whether to model new and existing portfolios separately and, if so, make appropriate adjustments to the ROI calculations.
* ROI depends on the decisioning system. If it is a binary (yes or no) decision on loan approval, you can assign dollar values to the counts of true positives, false positives, true negatives, and false negatives; the sum total is the value at a given threshold (see the sketch after this list). If there is an existing model, the difference in results between the existing and new models is the ROI captured.
* If the decisioning is non-binary, then at every decision point, evaluate the difference between the loan amount provided and the collections made.
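As a rough sketch of the binary-decision valuation mentioned in the list above, the following assigns hypothetical dollar values to the confusion-matrix counts at a given threshold and compares two models. All counts and values are illustrative assumptions, not outputs of DataRobot.

``` python
# Minimal sketch of valuing a binary approve/deny decision at one threshold.
# Counts come from the confusion matrix at that threshold; dollar values come
# from business estimates. All numbers here are hypothetical.

def threshold_value(tp, fp, tn, fn, value_tp, value_fp, value_tn, value_fn):
    """Total dollar value of decisions at one threshold."""
    return (tp * value_tp) + (fp * value_fp) + (tn * value_tn) + (fn * value_fn)

new_model = threshold_value(tp=120, fp=30, tn=800, fn=50,
                            value_tp=5_000, value_fp=-20_000,
                            value_tn=0, value_fn=-1_000)
existing_model = threshold_value(tp=100, fp=45, tn=785, fn=70,
                                 value_tp=5_000, value_fp=-20_000,
                                 value_tn=0, value_fn=-1_000)
print(new_model - existing_model)  # difference in value = ROI captured
```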
## Working with data {: #working-with-data }
For illustrative purposes, this use case simplifies the sample datasets provided by Home Credit Group, which are publicly available on [Kaggle](https://www.kaggle.com/c/home-credit-default-risk){ target=_blank }.
### Sample feature list {: #sample-feature-list }
**Feature Name** | **Data Type** | **Description** | **Data Source** | **Example**
---------------- | -------------- | ----------------| --------------- | -----------
Amount_Credit | Numeric | Credit taken by a person | Application | 20,000
Flag_Own_Car | Categorical | Flag if applicant owns a car | Application | 1
Age | Numeric | Age of the applicant | Application | 25
CreditOverdue | Binomial | Whether credit is overdue | Bureau | TRUE
Channel | Categorical | Channel through which credit taken | PreviousApplication | Online
Balance | Numeric | Balance in credit card | CreditCard | 2,500
**Is_Bad** | **Numeric (target)** | **Whether the borrower defaulted, 0 or 1** | **Bureau** | **1 (default)**
## Modeling and insights {: #modeling-and-insights }
DataRobot automates many parts of the modeling pipeline, including processing and partitioning the dataset, as described [here](model-data). This use case skips the modeling section and moves straight to model interpretation.
DataRobot provides a variety of insights to [interpret results](#interpret-results) and [evaluate accuracy](#evaluate-accuracy).
### Interpret results {: #interpret-results }
After automated modeling completes, the Leaderboard ranks each model. By default, DataRobot uses LogLoss as the evaluation metric.
#### Feature Impact {: #feature-impact }
[**Feature Impact**](feature-impact) reveals the association between each feature and the model target. For example:

#### Feature Effects {: #feature-effects }
To understand the direction of impact and the default risk at different levels of the input feature, DataRobot provides partial dependence plots as part of the [**Feature Effects**](feature-effects) visualization. It depicts how the likelihood of default changes when the input feature takes different values.
In this example, which plots `AMT_CREDIT` (loan amount), as the loan amount increases above $300K, the default risk increases in a step from 6% to 7%, and then in another step to 7.8% when the loan amount is around $500K.

#### Prediction Explanations {: #prediction-explanations }
In the [**Prediction Explanations**](pred-explain/index) visualization, DataRobot provides a human-interpretable rationale for each record scored and prioritized by the model. In the example below, the record with ID=3606 gets a very high likelihood of turning into a loan default (prediction=51.2%). The main reasons are information from external sources (`EXT_SOURCE_2` and `EXT_SOURCE_3`) and the source of income (`NAME_INCOME_TYPE`) being `pension`.
**Prediction Explanations** also help maintain regulatory compliance by providing the reasons why a particular loan decision was made.

### Evaluate accuracy {: #evaluate-accuracy }
The following insights help evaluate accuracy.
#### Lift Chart {: #lift-chart }
The [**Lift Chart**](lift-chart) shows you how effective the model is at separating the default and non-default applications. Each record in the out-of-sample partition gets scored by the trained model and assigned a default probability. In the **Lift Chart**, records are sorted based on the predicted probability, broken down into 10 deciles, and displayed from lowest to highest. For each decile, DataRobot computes the average predicted risk (blue line/plus) as well as the average actual risk (orange line/circle), and displays the two lines together. In general, the steeper the actual line is, and the more closely the predicted line matches the actual line, the better the model. A consistently increasing line is another good indicator.

#### ROC Curve {: #roc-curve }
Once you know the model is performing well, select an explicit threshold to make a binary decision based on the continuous default risk predicted by DataRobot. The [ROC Curve](roc-curve-tab-use) tools provide a variety of information to help make some of the important decisions in selecting the optimal threshold:
* The false negative rate has to be as small as possible. False negatives are the applications that are flagged as not defaults but actually end up defaulting on payment. Missing a true default is dangerous and expensive.
* Ensure the selected threshold is working not only on the seen data, but on the unseen data too.

### Post-processing {: #post-processing }
In some cases where there are fewer regulatory considerations, straight-through processing (STP) may be possible, where an automated yes or no decision can be made based on the predictions.
But the more common approach is to convert the risk probability into a score (i.e., a credit score like those determined by organizations such as Experian and TransUnion). The scores are derived from the exposure within probability buckets and from SME knowledge.
Most of the machine learning models used for credit risk require approval from the Model Risk Management (MRM) team; to address this, DataRobot's [**Compliance Report**](compliance) provides comprehensive evidence and rationale for each step in the model development process.


## Predict and deploy {: #predict-and-deploy }
After finding the right model that best learns patterns in your data, you can deploy the model into your desired decision environment. _Decision environments_ are the ways in which the predictions generated by the model will be consumed by the appropriate stakeholders in your organization, and how these stakeholders will make decisions using the predictions to impact the overall process.
Decisions can be a blend of automated and straight-through processing or manual interventions. The degree of automation depends on the portfolio and business maturity. For example, retail loans or peer-to-peer portfolios in banks and fintechs are highly automated; some fintechs promote their low loan-processing times. In contrast, high-ticket items, such as mortgages and corporate loans, may require manual intervention.
### Decision stakeholders {: #decision-stakeholders }
The following table lists potential decision stakeholders:
Stakeholder | Description
----------- | -----------
Decision Executors | The underwriting team will be the direct consumers of the predictions. These can be direct systems in the case of straight-through processing or an underwriting team sitting in front offices in the case of manual intervention.
Decision Managers | Decisions often flow through to the Chief Risk Officer, who is responsible for the ultimate risk of the portfolio. However, there generally are intermediate managers (based on the structure of the organization).
Decision Authors | Data scientists in credit risk teams drive the modeling. The model risk monitoring team is also a key stakeholder.
### Decision process {: #decision-process }
Generally, models do not result in a direct yes or no decision, except in cases where models are used in less-regulated environments. Instead, the risk is converted to a score that, in turn, affects the interest rate or credit amount offered to the customer.
### Model monitoring {: #model-monitoring }
Predictions are done in real time or batch mode based on the nature of the business. Regular monitoring and alerting is critical for [data drift](data-drift). This is particularly important from a model risk perspective. These models are designed to be robust and last longer, so recalibration may be less frequent than in other industries.
### Implementation considerations {: #implementation-considerations }
* Credit risk models normally require integrations with third-party solutions like Experian and FICO. Ask about deployment requirements and if it is possible to move away from legacy tools.
* Credit risk models require approval from the validation team, which can take significant time (for example, convincing them to adopt new model approval methods if they have not previously approved machine learning models).
* Model validation teams can have strict requirements for specific assets, for example, requiring that models generate an equation and, therefore, that model code can be exported. Be certain to discuss any questions in advance with the modeling team before taking the final model to validation teams.
* Discuss alternative approaches for assets where the default rate was historically low, as the model might not be accurate enough to prove ROI.
In addition to traditional risk analysis, [target leakage](data-quality#target-leakage) may require attention in this use case. Target leakage happens when information that would not be available at the time of prediction is used to train the model. That is, particular features leak information about the eventual outcome, which artificially inflates the performance of the model in training. While the implementation outlined in this document involves relatively few features, it is important to be mindful of target leakage, which can arise from improper joins whenever merging multiple datasets. DataRobot supports robust target leakage detection in the second round of exploratory data analysis (EDA) and the selection of the Informative Features feature list during Autopilot.
| loan-defaults-include |
!!! note
If you add a [secondary dataset](fd-overview) with images to a primary tabular dataset, the augmentation options described above are not available. Instead, if you have access to Composable ML, you can [modify each needed blueprint](cml-blueprint-edit) by adding an image augmentation vertex directly after the raw image input (as the first vertex in the image branch) and configure augmentation from there.
| image-augmentation-include |
Because of the complexity of many machine learning techniques, models can sometimes be difficult to interpret directly. **Feature Fit** and **Feature Effects** provide similar model detail insights on a per-feature basis. Feature Fit, under the [Evaluate](evaluate/index) tab, ranks features based on the importance score. Feature Effects, under the [Understand](understand/index) tab, ranks features based on the feature impact score.
See the individual explanations for [**Feature Fit**](#feature-fit-explained) and [**Feature Effects**](#feature-effects-explained) to better understand the differences in the insights. Also, see below for information on [interpreting](#interpret-the-displays) the displays and the source of the values, noting the following:
* Both displays can be computed for numeric and categorical features. If you have a text-only dataset or model type, the tabs are grayed out.
* Feature Fit does not support multiclass projects.
* You must run the **Feature Fit** and/or **Feature Effects** computation for each model you are investigating.
* Depending on the model (the number of features and number of values for a feature), it may take several minutes for all features of the model to become available.
## Feature Fit explained {: #feature-fit-explained }
When using **Feature Fit**, features are sorted in order of model-agnostic importance—that is, based on the [**Importance**](model-ref#data-summary-information) score, a univariate comparison, which is calculated during EDA2 (and displayed on the **Data** page). It answers the question, "for this feature of interest, where did my model do well or do poorly?" Clicking **Compute Feature Fit** causes DataRobot to run **Feature Fit** calculations, using the importance score to prioritize the order. By displaying results with higher scores—those features likely to be of more interest—first, you can more quickly view chart results without having to wait for all features to finish.

See [Display options](#display-options) for descriptions of display elements.
Because Importance scores are pre-modeling calculations, they are <em>projections</em> of which individual features might be important in the dataset, based on the chosen target. For a given model, a feature with a high Importance score might not be as impactful as projected, for example, if its signal is captured similarly by another feature.
You can then evaluate the fit of the model as a function of input by clicking through each value of a specific feature and comparing the model's predicted and actual target values.
**Feature Fit** can help identify if there are parts of your data where the model is systematically mis-predicting. If the insight shows larger differences between predicted and actual for a specific feature, it may suggest you need additional data to help explain the discrepancy.
## Feature Effects explained {: #feature-effects-explained }
**Feature Effects** shows the effect of changes in the value of each feature on the model’s predictions. It displays a graph depicting how a model "understands" the relationship between each feature and the target, with the features sorted by [**Feature Impact**](feature-impact). The insight is communicated in terms of [partial dependence](#partial-dependence-logic), which illustrates how a change in a feature's value, while keeping all other features as they were, impacts a model's predictions. In other words, it answers "what is the feature's effect; how is *this* model using *this* feature?" To compare the model evaluation methods side by side:
* **Feature Fit** helps to evaluate the overall fit of a model, from the perspective of each feature.
* **Feature Impact** conveys the relative impact of each feature on a specific model.
* **Feature Effects** (with partial dependence) conveys how changes to the value of each feature change model predictions.
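To make the partial dependence idea concrete, here is a minimal, model-agnostic sketch: hold all other features as they are, set the feature of interest to each candidate value, and average the predictions. The names and the scikit-learn-style `predict()` call are assumptions for illustration; DataRobot computes this for you on the **Feature Effects** tab.

``` python
# Minimal sketch of partial dependence: change only the feature of interest and
# average the model's predictions for each candidate value.
import numpy as np
import pandas as pd

def partial_dependence(model, X: pd.DataFrame, feature: str, values):
    """Average prediction for each candidate value of `feature`."""
    averages = []
    for value in values:
        X_modified = X.copy()
        X_modified[feature] = value          # all other features stay as they were
        averages.append(model.predict(X_modified).mean())
    return np.array(averages)

# Usage sketch (assumes a fitted model exposing .predict()):
# pd_curve = partial_dependence(model, X_validation, "AMT_CREDIT",
#                               values=np.percentile(X_validation["AMT_CREDIT"],
#                                                    np.arange(0, 101, 5)))
```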
Clicking **Compute Feature Effects** causes DataRobot to first compute **Feature Impact** (if not already computed for the project) and then run the **Feature Effects** calculations for the model:

## Display options {: #display-options }
The following table describes the elements of the displays:
| | Element | Description |
|---|---|---|
|  | [Search for features](#list-of-features) | Lists the top features that have a nonzero influence on the model, based on the feature importance score (**Feature Fit**) or Feature Impact (**Feature Effects**) score. |
|  | Score *Feature Fit* | Displays a visual indicator of the importance of the feature in predicting the target variable. This is the value displayed on the [**Data**](model-ref#data-summary-information) page in the **Importance** column. |
|  | [Score](#feature-effects-score) *Feature Effects* | Reports the relevance to the target feature. This is the value displayed in the [**Feature Impact**](feature-impact) display. |
|  | [Target range](#target-range-y-axis) | Displays the value range for the target; the Y-axis values can be adjusted with the [scaling](#more-options) option.|
|  | [Feature values](#feature-values-x-axis) | Displays individual values of the selected feature.|
|  | [Feature values tooltip](#feature-value-tooltip)| Provides summary information for a feature's binned values. |
|  | [Feature value count](#feature-value-count)| Displays, for the selected feature, the distribution of values within the selected partition fold. |
|  | [Display controls](#display-controls) | Sets filters that control the values plotted in the display (partial dependence, predicted, and/or actual). |
|  | [Sort by](#sort-options) | Provides controls for sorting. |
|  | [Bins](#set-the-number-of-bins) | For qualifying feature types, sets the binning resolution for the feature value count display. |
|  | [Data Selection](#select-the-partition-fold) | Controls which partition fold is used as 1) the basis of the Predicted and Actual values and 2) the sample used for the computation of Partial Dependence. Options for [OTV projects](#select-the-partition-fold) differ slightly. |
|  | [Select Class](#select-class) (multiclass only) | Provides controls to display graphed results for a particular class within the target feature. |
|  | [**More**](#more-options) | Controls whether to display missing values and changes the Y-axis scale.|
|  | [**Export**](#export) | Provides options for downloading data. |
See below for [more information](#more-info) on how DataRobot calculates values, explanation of tips for using the displays, and how Exposure and Weight change output.
### List of features {: #list-of-features }
To the left of the graph, DataRobot displays a list of the top 500 predictors, sorted by [feature importance](model-ref#data-summary-information) (**Feature Fit**) or [feature impact](feature-impact) (**Feature Effects**) score. Use the arrow keys or scroll bar to scroll through features, or the search field to find by name. If all the sample rows are empty for a given feature, the feature is not available in the list. Selecting a feature in the list updates the display to reflect results for that feature.
### Feature Effects score {: #feature-effects-score }
Each feature in the list is accompanied by its [feature impact](feature-impact) score. Feature impact measures, for each of the top 500 features, the importance of that feature to the target prediction. It is estimated by calculating the difference in predictions before and after shuffling the values of that feature across the selected rows (while leaving other columns unchanged). DataRobot normalizes the scores so that the value of the most important column is 1 (100%). A score of 0% indicates that there was no calculated relationship.
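As a rough illustration of this shuffle-based estimate (outside of DataRobot), the sketch below scores the data once, permutes one column at a time, measures the change in the metric, and normalizes so the largest score is 1. The `model` and `metric` arguments are placeholders for whatever the project uses.

``` python
# Minimal sketch of permutation-based feature impact: score, shuffle one column,
# re-score, take the change, and normalize so the top feature scores 1.
import numpy as np
import pandas as pd

def feature_impact(model, X: pd.DataFrame, y, metric, seed=0):
    rng = np.random.default_rng(seed)
    baseline = metric(y, model.predict(X))
    raw = {}
    for column in X.columns:
        X_shuffled = X.copy()
        X_shuffled[column] = rng.permutation(X_shuffled[column].to_numpy())
        raw[column] = abs(baseline - metric(y, model.predict(X_shuffled)))
    top = max(raw.values()) or 1.0
    return {column: score / top for column, score in raw.items()}  # max = 1 (100%)
```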
### Target range (Y-axis) {: #target-range-y-axis }
The Y-axis represents the value range for the target variable. For binary classification problems, this is a value between 0 and 1. For non-binary projects, the axis displays from min to max values. Note that you can use the [scaling feature](#more-options) to change the Y-axis and bring greater focus to the display.
### Feature values (X-axis) {: #feature-values-x-axis }
The X-axis displays the values found for the feature selected in the [list of features](#list-of-features). The selected [sort order](#sort-and-export) controls how the values are displayed.
#### For numeric features {: #for-numeric-features }
The logic for a numeric feature depends on whether you are displaying Predicted/Actual or Partial Dependence.
#### Predicted/Actual logic {: #predictedactual-logic }
* If the value count in the selected partition fold is greater than 20, DataRobot bins the values based on their distribution in the fold and computes Predicted and Actual for each bin.
* If the value count is 20 or less, DataRobot plots Predicted/Actuals for the top values present in the fold selected.
#### Partial Dependence logic {: #partial-dependence-logic }
* If the value count of the feature in the entire dataset is greater than 99, DataRobot computes Partial Dependence on the percentiles of the distribution of the feature in the entire dataset.
* If the value count is 99 or less, DataRobot computes Partial Dependence on all values in the dataset (excluding outliers).
#### Chart-specific logic {: #chart-specific-logic }
* **Feature Fit**: DataRobot bins the values for the computation of Predicted/Actual. The X-axis may additionally display a `==Missing==` bin, which contains all rows with missing feature values (i.e., NaN as the value of one of the features).
* **Feature Effect**: Partial Dependence feature values are derived from the percentiles of the distribution of the feature across the entire dataset. The X-axis may additionally display a `==Missing==` bin, which contains the effect of missing values. The Partial Dependence calculation always includes "missing values," even if the feature has no missing values in the dataset. The display shows what *would be* the average predictions if the feature were missing; DataRobot doesn't need the feature to actually be missing, it's just a "what if."
#### For categorical features {: #for-categorical-features }
For categorical features, the X-axis displays the 25 most frequent values for Predicted, Actual, and Partial Dependence in the selected partition fold. The categories can include, as applicable:
* `=All Other=`: For categorical features, a single bin containing all values other than the 25 most frequent values. No partial dependence is computed for `=All Other=`. DataRobot uses one-hot encoding and ordinal encoding preprocessing tasks to automatically group low-frequency levels.
For both tasks you can use the `min_support` [advanced tuning](adv-tuning) parameter to group low-frequency values. By default, DataRobot uses a value of 10 for the one-hot encoder and 5 for the ordinal encoder. In other words, any category level that appears in fewer than 10 rows (one-hot encoder) or 5 rows (ordinal encoder) is combined into a single group.
* `==Missing==`: A single bin containing all rows with missing feature values (that is, NaN as the value of one of the features).
* `==Other Unseen==`: A single bin containing all values that were not present in the Training set. No partial dependence is computed for `==Other Unseen==`. See the [explanation below](#binning-and-top-values) for more information.

### Feature value tooltip {: #feature-value-tooltip }
For each bin, to display a feature's calculated values and row count, hover in the display area above the bin. For example, this tooltip:

Indicates that for the feature `number diagnoses`, when the value is `7`, the partial dependence average was `0.366`, the predicted average was `0.381`, and the actual values average was `0.3`. These averages were calculated from the `20` rows in the dataset in which the number of diagnoses was seven.
### Feature value count {: #feature-value-count }
The bar graph below the X-axis provides a visual indicator, for the selected feature, of each of the feature's value frequencies. The bars are mapped to the feature values listed above them, and so changing the sort order also changes the bar display. This is the same information as that presented in the [**Frequent Values**](histogram#frequent-values-chart) chart on the **Data** page. For qualifying feature types, you can use the [**Bins**](#set-the-number-of-bins) dropdown to set the number of bars (determine the binning).
### Display controls {: #display-controls }
Use the display control links to set the display of plotted data. Actual values are represented by open orange circles, predicted values by blue crosses, and partial dependence points by solid yellow circles. In this way, points can overlap without obscuring one another. Click a label in the legend to toggle it and focus on a particular aspect of the display. See below for information on how DataRobot [calculates](#partial-dependence-calculations) and displays the values.
### Sort options {: #sort-options }
The **Sort by** dropdown provides sorting options for plot data. For categorical features, you can sort alphabetically, by frequency, or by size of the effect (partial dependence). For numeric features, sort is always numeric.
### Set the number of bins {: #set-the-number-of-bins }
The **Bins** setting allows you to set the binning resolution for the display. This option is only available when the selected feature is a numeric or continuous variable; it is not available for categorical features or numeric features with few unique values. Use the [feature value tooltip](#feature-value-tooltip) to view bin statistics.
### Select the partition fold {: #select-the-partition-fold }
You can set the partition fold used for Predicted, Actual, and Partial Dependence value plotting with the **Data Selection** dropdown—Training, Validation, and, if unlocked, Holdout. While it may not be immediately obvious, there are [good reasons](#training-data-as-the-viewing-subset) to investigate the training dataset results.

When you select a partition fold, that selection applies to all three display controls, whether or not the control is checked. Note, however, that while performed on the same partition fold, the [partial dependence calculation](#partial-dependence-calculations) uses a different range of the data.
Note that **Data Selection** options differ depending on whether or not you are investigating a time-aware project:
<em>For non-time-aware projects:</em> In all cases you can select the Training or Validation set; if you have unlocked holdout, you also have an option to select the Holdout partition.
<em>For time-aware projects:</em> You can select Training, Validation, and/or Holdout (if available), as well as a specific backtest. See the section on [time-aware Data Selection](#data-selection-for-time-aware-projects) settings for details.
### Select the class (multiclass only) {: #select-the-class-multiclass-only }
In a multiclass project, you can additionally set the display to chart per-class results for each feature in your dataset.

By default, DataRobot calculates effects for the top 10 features. To view per-class results for features ranked lower than 10, click **Compute** next to the feature name:

### Export {: #export }
The **Export** button allows you to [export](export-results) the graphs and data associated with the model's details and for individual features. If you choose to export a ZIP file, you will get all of the chart images and the CSV files for partial dependence and predicted vs actual data.
### More options {: #more-options }
The **Feature Fit** and **Feature Effects** insights provide tools for re-displaying the chart to help you focus on areas of importance.
!!! note
This option is only available when one of the following conditions is met: there are missing values in the dataset, the chart's axis is scalable, or the project is binary classification.
Click the gear setting to view the choices:

Check or uncheck the following boxes to activate:
* **Show Missing Values**: Shows or hides the effect of missing values. This selection is available for numeric features only. The bin corresponding to missing values is labeled as **=Missing=**.
* **Auto-scale Y-axis**: Resets the Y-axis range, which is then used to chart the actual data, the prediction, and the partial dependence values. When checked (the default), the values on the axis span the highest and lowest values of the target feature. When unchecked, the scale spans the entire eligible range (for example, 0 through 1 for binary projects).
* **Log X-Axis**: Toggles between the different X-axis representations. This selection is available for numeric features with highly skewed distributions (where one tail is longer than the other) and values greater than zero.
## More info... {: #more-info }
The following sections describe:
* How DataRobot calculates [average values](#average-value-calculations) and [partial dependence](#partial-dependence-calculations)
* [Interpreting](#interpret-the-displays) the displays
* [Time-aware Data Selection](#data-selection-for-time-aware-projects)
* Understanding [unseen values](#binning-and-top-values)
* How [Exposure and Weight](#how-exposure-changes-output) change output
### Average value calculations {: #average-value-calculations }
For the predicted and actual values in the display, DataRobot plots the average values. The following simple example explains the calculation.
In the following dataset, Feature A has two possible values—1 and 2:
| Feature A | Feature B | Target |
|-----------:|----------:|-------:|
| 1 | 2 | 4 |
| 2 | 3 | 5 |
| 1 | 2 | 6 |
| 2 | 4 | 8 |
| 1 | 3 | 1 |
| 2 | 2 | 2 |
In this fictitious dataset, the X-axis would show two values: 1 and 2. When A=1, DataRobot calculates the average actual value as (4+6+1)/3. When A=2, the average is (5+8+2)/3. So the actual and predicted points on the graph show the average target for each aggregated feature value.
Specifically:
* For numeric features, DataRobot generates bins based on the feature domain. For example, for the feature `Age` with a range of 16-101, bins (the user selects the number) would be based on that range.
* For categorical features, for example `Gender`, DataRobot generates bins based on the top unique values (perhaps 3 bins—`M`, `F`, `N/A`).
DataRobot then calculates the average values of prediction in each bin and the average of the actual values of each bin.
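A minimal pandas sketch of the same calculation, using the example table above (illustrative only, not DataRobot code):

```python
import pandas as pd

# The example dataset from the table above.
df = pd.DataFrame({
    "Feature A": [1, 2, 1, 2, 1, 2],
    "Feature B": [2, 3, 2, 4, 3, 2],
    "Target":    [4, 5, 6, 8, 1, 2],
})

# Average actual target per Feature A value:
# A=1 -> (4 + 6 + 1) / 3, A=2 -> (5 + 8 + 2) / 3
actual_averages = df.groupby("Feature A")["Target"].mean()
print(actual_averages)
```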
### Interpret the displays {: #interpret-the-displays }
In the **Feature Effects** and **Feature Fit** displays, categorical features are represented as points; numerical features are represented as connected points. This is because each numerical value can be seen in relation to the other values, while categorical features are not linearly related. A dotted line indicates that there were not enough values to plot.
!!! note
If you are using the [Exposure](additional#set-exposure) parameter feature available from the **Advanced options** tab, [line calculations differ](#how-exposure-changes-output).
Consider the following **Feature Effects** display (calculations are the same for **Feature Fit**):

The orange open circles depict, for the selected feature, the *average target value* for the aggregated **number_diagnoses** feature values. In other words, when the target is **readmitted** and the selected feature is **number_diagnoses**, a patient with two diagnoses has, on average, a roughly 23% chance of being readmitted. Patients with three diagnoses have, on average, a roughly 35% chance of readmittance.
The blue crosses depict, for the selected feature, the *average prediction* for a specific value. From the graph you can see that DataRobot averaged the predicted feature values and calculated a 25% chance of readmittance when **number_diagnoses** is two. Comparing the actual and predicted lines can identify segments where model predictions differ from observed data. This typically occurs when the segment size is small. In those cases, for example, some models may predict closer to the overall average.
The yellow **Partial Dependence** line depicts the marginal effect of a feature on the target variable after accounting for the average effects of all other predictive features. It indicates how, holding all other variables *except* the feature of interest as they were, the value of this feature affects your prediction. The value of the feature of interest is then reassigned to each possible value, calculating the average predictions for the sample at each setting. (In terms of the simple example above, DataRobot would set Feature A to 1 for every row in the sample and average the predictions, then repeat with Feature A set to 2.) These values help determine how the value of each feature affects the target. The shape of the yellow line "describes" the model’s view of the marginal relationship between the selected feature and the target. See the discussion of [partial dependence calculation](#partial-dependence-calculations) for more information.
Tips for using the displays:
* To evaluate model accuracy, uncheck the partial dependence box. You are left with a visual indicator that charts actual values against the model's predicted values.
* To understand partial dependence, uncheck the actual and predicted boxes. Set the sort order to **Effect Size**. Consider the partial dependence line carefully. Isolating the effect of important features can be very useful in optimizing outcomes in business scenarios.
* If there are not enough observations in the sample at a particular level, the partial dependency computation may be missing for a specific feature value.
* A dashed instead of solid predicted (blue) and actual (orange) line indicates that there are no rows in the bins created at the point in the chart.
* For numeric variables, if there are more than 18 values, DataRobot calculates partial dependence on values derived from the percentiles of the distribution of the feature across the entire data set. As a result, the value is not displayed in the hover tooltip.
#### Training data as the viewing subset {: #training-data-as-the-viewing-subset }
Viewing **Feature Fit** or **Feature Effect** for training data provides a few benefits. It helps to determine how well a trained model fits the data it used for training. It also lets you compare the difference between seen and unseen data in the model performance. In other words, viewing the training results is a way to check the model against known values. If the predicted vs the actual results from the training set are weak, it is a sign that the model is not appropriately selected for the data.
When considering partial dependence, using training data means the values are calculated based on training samples and compared against the maximum possible feature domain. It provides the option to check the relationship between a single feature (by removing marginal effects from other features) and the target across the entire range of the data. For example, suppose the validation set covers January through June but you want to see partial dependence in December. Without that month's data in validation, you wouldn't be able to. However, by setting the data selection subset to **Training**, you could see the effect.
### Partial dependence calculations {: #partial-dependence-calculations }
Predicted/Actual and Partial Dependence are computed very differently for continuous data. For Predicted/Actual, DataRobot creates bins (for example, (1-40], (40-50], ...) so that each bin contains enough rows to compute meaningful averages; the bins are based on the distribution of the feature in the selected partition fold.
Partial dependence, on the other hand, uses single values (e.g., 1, 5, 10, 20, 40, 42, 45...) that are percentiles of the distribution of the feature across the entire dataset. It uses a sample of up to 1,000 rows to determine the scale of the curve. To make the scale comparable with Predicted/Actual, the sample is drawn from the data of the selected fold. In other words, partial dependence is *calculated* for the maximum possible range of values from the entire dataset but scaled based on the **Data Selection** fold setting.
For example, consider a feature "year." For Partial Dependence, DataRobot computes values based on all the years in the data. For Actual/Predicted, computation is based on the years in the selected fold. If the dataset dates range from 2001-01-01 to 2010-01-01, DataRobot uses that span for partial dependence calculations. Predicted and Actual calculations, in contrast, contain only the data from the corresponding, selected fold/backtest. You can see this difference when viewing all three control displays for a selected fold:

### Data selection for time-aware projects {: #data-selection-for-time-aware-projects }
When working with time-aware projects, the **Data Selection** dropdown works a bit differently because of backtests. Select the **Feature Fit** or **Feature Effects** tab for your model of interest. If you haven't already computed values for the tab, you are prompted to compute for **Backtest 1** (Validation).
!!! note
If the model you are viewing uses start and end dates (common for the recommended model), backtest selection is not available.
When DataRobot completes the calculations, the insight displays with the following **Data Selection** setting:

#### Calculate backtests {: #calculate-backtests }
The results of clicking on the backtest name depend on whether backtesting has been run for the model. DataRobot automatically computes backtests for the highest scoring models; for lower-scoring models, you must select **Run** from the Leaderboard to initiate backtesting:

For comparison, the following illustrates when backtests have not been run and when they have:

When calculations are complete, you must then run **Feature Fit** or **Feature Effect** calculations for each backtest you want to display, as well as for the Holdout fold, if applicable. From the dropdown, click a backtest that is not yet computed and DataRobot provides a button to initiate calculations.
#### Set the partition fold {: #set-the-partition-fold }
Once backtest calculations are complete for your needs, use the **Data Selection** control to choose the backtest and partition for display. The available partition folds depend on the selected backtest:
* For numbered backtests: Validation and Training for each calculated backtest
* For the Holdout Fold: Holdout and Training
Click the down arrow to open the dialog and select a partition:

Or, click the right and left arrows to move through the options for the currently selected partition—Validation or Training—plus Holdout. If you move to an option that has yet to be computed, DataRobot provides a button to initiate the calculation:

### Binning and top values {: #binning-and-top-values }
By default, DataRobot calculates the top features listed in **Feature Fit** and **Feature Effects** using the Training dataset. For categorical feature values, displayed as discrete points on the X-axis, the segmentation is affected if you select a different data source. To understand the segmentation, consider the illustration below and the table describing the segments:

| As illustrated in chart | Label in chart | Description |
|-------------------------|------------------|---------------|
| Top-*N* values | <*feature_value*\> | Values for the selected feature, with a maximum of 20 values. For any feature with more than 10 values, DataRobot further filters the results, as described in the example below. |
| Other values | `==All Other==` | A single bin containing all values other than the Top-*N* most frequent values. |
| Missing values | `==Missing==` | A single bin containing all records with missing feature values (that is, NaN as the value of one of the features). |
| Unseen values | <*feature_value*\> `(Unseen)` | Categorical feature values that were not "seen" in the Training set but qualified as Top-*N* in Validation and/or Holdout. |
| Unseen values | `==Other Unseen==` | Categorical feature values that were not "seen" in the Training set and did not qualify as Top-*N* in Validation and/or Holdout. |
A simple example to explain Top-<em>N</em>:
Consider a dataset with categorical feature `Population` and a world population of 100. DataRobot calculates Top-<em>N</em> as follows:
1. Ranks countries by their population.
2. Selects up to the top-20 countries with the highest population.
3. In cases with more than 10 values, DataRobot further filters the results so that the cumulative frequency is >95%. In other words, DataRobot displays on the X-axis the countries whose cumulative population reaches 95% of the world population (illustrated in the sketch after this list).
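A minimal pandas sketch of this Top-<em>N</em> filtering, with the maximum value count and cumulative cutoff taken from the description above (illustrative only):

```python
import pandas as pd

def top_n_categories(values: pd.Series, max_values: int = 20, cumulative_cutoff: float = 0.95):
    """Rank categories by frequency, keep at most `max_values`, and, when more than
    10 remain, trim to the categories whose cumulative frequency reaches the cutoff."""
    freq = values.value_counts(normalize=True).head(max_values)
    if len(freq) > 10:
        cumulative = freq.cumsum()
        n_keep = int((cumulative < cumulative_cutoff).sum()) + 1  # include the level that crosses 95%
        freq = freq.head(n_keep)
    return list(freq.index)
```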
A simple example to explain <em>Unseen</em>:
Consider a dataset with the categorical feature `Letters`. The complete list of values for `Letters` is A, B, C, D, E, F, G, H. After filtering, DataRobot determines that Top-<em>N</em> equals three values. Note that, because the feature is categorical, there is no `Missing` bin.
| Fold/set | Values found | Top-3 values | X-axis values |
|----------------|--------------|--------------|----------------|
| Training set | A, B, C, D | A, B, C | A, B, C, `=All Other=` |
| Validation set | B, C, F, G+ | B, C, F* | B, C, F (unseen), `=All Other=`, `Other Unseen`+ |
| Holdout set | C, E, F, H+ | C, E*, F* | C, E (unseen), F (unseen), `=All Other=`, `Other Unseen`+ |
<sup>*</sup> A new value in the top 3 but not present in the Training set, flagged as `Unseen`
<sup>+</sup> A new value not present in Training or in top-3, flagged as `Other Unseen`
### How Exposure changes output {: #how-exposure-changes-output }

If you used the [Exposure](additional#set-exposure) parameter when building models for the project, the **Feature Fit** and **Feature Effects** tabs display the graph adjusted to exposure. In this case:
* The orange line depicts the <em>sum of the target divided by the sum of exposure</em> for a specific value. The label and tooltip display <em>Sum of Actual/Sum of Exposure</em>, which indicates that exposure was used during model building.
* The blue line depicts the <em>sum of predictions divided by the sum of exposure</em> and the legend label displays <em>Sum of Predicted/Sum of Exposure</em>.
* The marginal effect depicted in the yellow <em>partial dependence</em> is divided by the sum of exposure of the 1000-row sample. This adjustment is useful in insurance, for example, to understand the relationship between annualized cost of a policy and the predictors. The label tooltip displays <em>Average partial dependency adjusted by exposure</em>.
### How Weight changes output {: #how-weight-changes-output }
If you set the **Weight** parameter for the project, DataRobot weights the average and sum operations as described above.
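A minimal sketch of how the exposure- and weight-adjusted points described above could be computed for each feature value. The `actual`, `predicted`, `exposure`, and `weight` column names are placeholders, not DataRobot internals:

```python
import pandas as pd

def exposure_adjusted_points(df: pd.DataFrame, feature: str) -> pd.DataFrame:
    """Per feature value: sum(actual)/sum(exposure) and sum(predicted)/sum(exposure)."""
    grouped = df.groupby(feature)
    return pd.DataFrame({
        "Sum of Actual/Sum of Exposure": grouped["actual"].sum() / grouped["exposure"].sum(),
        "Sum of Predicted/Sum of Exposure": grouped["predicted"].sum() / grouped["exposure"].sum(),
    })

def weighted_actual_average(df: pd.DataFrame, feature: str) -> pd.Series:
    """With a Weight column, the plain averages become weighted averages."""
    return df.groupby(feature).apply(
        lambda g: (g["actual"] * g["weight"]).sum() / g["weight"].sum()
    )
```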
| ff-fe |
## Build time-aware models {: #build-time-aware-models }
Once you click **Start**, DataRobot begins the model-building process and returns results to the Leaderboard. Because time series modeling uses date/time partitioning, you can run backtests, change window sampling, change training periods, and more from the Leaderboard.
!!! note
Model parameter selection has not been customized for date/time-partitioned projects. Though automatic parameter selection yields good results in most cases, [**Advanced Tuning** ](adv-tuning) may significantly improve performance for some projects that use the Date/Time partitioning feature.
### Date duration features {: #date-duration-features }
Because having raw dates in modeling can be risky (overfitting, for example, or tree-based models that do not extrapolate well), DataRobot generally excludes them from the Informative Features list if date transformation features were derived. Instead, for OTV projects, DataRobot creates duration features calculated from the difference between date features and the primary date. It then adds the duration features to an optimized Informative Features list. The automation process creates:
* New duration features
* New feature lists
#### New duration features {: #new-duration-features }
When derived features (hour of day, day of week, etc.) are created, the newly derived features are no longer of type date; instead, they become categorical or numeric, for example. To ensure that models learn time distances better, DataRobot computes the duration between primary and non-primary dates, adds that calculation as a feature, and then drops all non-primary dates.
Specifically, when date derivations happen in an OTV project, DataRobot creates one or more new features calculated from the duration between dates. The new features are named `duration(<from date>, <to date>)`, where the `<from date>` is the primary date. The var type, displayed on the **Data** page, displays `Date Duration`.

The transformation applies even if the time units differ. In that case, DataRobot computes durations in seconds and displays the information on the **Data** page (potentially as huge integers). In some cases, the value is negative because the `<to date>` may be before the primary date.
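A minimal pandas sketch of the duration calculation, using hypothetical `purchase_date` (primary) and `signup_date` (non-primary) columns:

```python
import pandas as pd

df = pd.DataFrame({
    "purchase_date": pd.to_datetime(["2019-01-05", "2019-02-01"]),  # primary date
    "signup_date": pd.to_datetime(["2018-12-25", "2019-03-01"]),    # non-primary date
})

# duration(purchase_date, signup_date): difference expressed in seconds;
# negative when the <to date> falls before the primary date (first row).
df["duration(purchase_date, signup_date)"] = (
    df["signup_date"] - df["purchase_date"]
).dt.total_seconds()
print(df)
```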
#### New feature lists {: #new-feature-lists }
The new feature lists, automatically created based on Informative Features and Raw Features, are a copy of the originals with the duration feature(s) added. They are named the same, but with "optimized for time-aware modeling" appended. (For univariate feature lists, `duration` features are only added if the original date feature was part of the original univariate list.)

When you run full or Quick Autopilot, new feature lists are created later in the [EDA2](eda-explained#eda2) process. DataRobot then switches the Autopilot process to use the new, optimized list. To use one of the non-optimized lists, you must rerun Autopilot specifying the list you want.
| date-time-include-2 |
??? faq "Drift metric support"
<span id="drift-metric-support">While the DataRobot UI only supports the Population Stability Index (PSI) metric, the API supports Kullback-Leibler Divergence, Hellinger Distance, Kolmogorov-Smirnov, Histogram Intersection, Wasserstein Distance, and Jensen–Shannon Divergence. In addition, using the Python API client, you can [retrieve a list of supported metrics](https://datarobot-public-api-client.readthedocs-hosted.com/reference/mlops/deployment.html#data-drift){ target=_blank }.</span> | drift-metrics-support |
Compare the characteristics and capabilities of the two types of custom models below:
Model type | Characteristics | Capabilities
------------------|-----------------|--------------
Structured | <ul><li>Uses a target type known to DataRobot (e.g., regression, binary classification, multiclass, and anomaly detection).</li><li>Required to conform to a request/response schema.</li><li>Accepts [structured input and output data](structured-custom-models#structured-custom-model-requirements).</li></ul> | <ul><li>Full deployment capabilities.</li><li>Accepts training data after deployment.</li></ul>
Unstructured | <ul><li>Uses a custom target type, unknown to DataRobot.</li><li>Not required to conform to a request/response schema.</li><li>Accepts unstructured input and output data.</li></ul> | <ul><li>Limited deployment capabilities. Doesn't support data drift and accuracy statistics, challenger models, or humility rules.</li><li>Doesn't accept training data after deployment.</li></ul> | structured-vs-unstructured-cus-models |
!!! note
Specified pairwise interactions are not guaranteed to appear in a model's output. Only the interactions that add signal to a model according to the algorithm will be featured in the output. For example, if you specify an interaction group of features A, B, and C, then AxB, BxC, and AxC are the interactions considered during model training. If only AxB adds signal to the model, then only AxB is included in the model's output (excluding BxC and AxC).
| pairwise-warning |
## Modeling and insights {: #modeling-and-insights }
DataRobot automates many parts of the modeling pipeline, including processing and partitioning the dataset, as described [here](model-data). This document starts with the visualizations available once modeling has started.
### Exploratory Data Analysis (EDA) {: #exploratory-data-analysis-eda }
Navigate to the **Data** tab to learn more about your data—summary statistics based on sampled data known as [EDA](eda-explained). Click each feature to see a variety of information, including a [histogram](histogram) that represents the relationship of the feature with the target.

### Feature Associations {: #feature-associations }
While DataRobot is running Autopilot to find the champion model, use the [**Data > Feature Associations**](feature-assoc) tab to view the feature association matrix and understand the correlations between each pair of input features. For example, the features `nbrPurchases90d` and `nbrDistinctMerch90d` (top-left corner) have strong associations and are, therefore, ‘clustered’ together (where each color block in this matrix is a cluster).

DataRobot provides a variety of insights to [interpret results](#interpret-results) and [evaluate accuracy](#evaluate-accuracy).
### Leaderboard {: #leaderboard }
After Autopilot completes, the Leaderboard ranks each model based on the selected optimization metrics (LogLoss in this case).
The outcome of Autopilot is not only a selection of best-suited models, but also the identification of a recommended model—the model that best understands how to predict the target feature `SAR`. Choosing the best model is a balance of accuracy, metric performance, and model simplicity. See the [model recommendation process](model-rec-process) description for more detail.
Autopilot will continue building models until it selects the best predictive model for the specified target feature. This model is at the top of the Leaderboard, marked with the **Recommended for Deployment** badge.

To reduce false positives, you can choose other metrics like Gini Norm to sort the Leaderboard based on how good the models are at giving SAR a higher rank than the non-SAR alerts.
### Interpret results {: #interpret-results }
There are many visualizations within DataRobot that provide insight into why an alert might be SAR. Below are the most relevant for this use case.
#### Blueprint {: #blueprint }
Click on a model to reveal the model [blueprint](blueprints)—the pipeline of preprocessing steps, modeling algorithms, and post-processing steps used to create the model.

#### Feature Impact {: #feature-impact }
[**Feature Impact**](feature-impact) reveals the association between each feature and the target. DataRobot identifies the top three most impactful features (which enable the machine to differentiate SAR from non-SAR alerts) as `total merchant credit in the last 90 days`, `number refund requests by the customer in the last 90 days`, and `total refund amount in the last 90 days`.

#### Feature Effects {: #feature-effects }
To understand the direction of impact and the SAR risk at different levels of the input feature, DataRobot provides partial dependence graphs (within the [**Feature Effects**](feature-effects) tab) to depict how the likelihood of being a SAR changes when the input feature takes different values. In this example, the total merchant credit amount in the last 90 days is the most impactful feature, but the SAR risk is not linearly increasing when the amount increases.
* When the amount is below $1000, the SAR risk remains relatively low.
* SAR risk surges significantly when the amount is above $1000.
* SAR risk increase slows when the amount approaches $1500.
* SAR risk rises again until it peaks and plateaus at around $2200.

The partial dependence graph makes it very straightforward to interpret the SAR risk at different levels of the input features. This could also be converted to a data-driven framework to set up risk-based thresholds that augment the traditional rule-based system.
#### Prediction Explanations {: #prediction-explanations }
To turn the machine-made decisions into human-interpretable rationale, DataRobot provides [**Prediction Explanations**](pred-explain/index) for each alert scored and prioritized by the machine learning model. In the example below, the record with `ID=1269` has a very high likelihood of being a suspicious activity (prediction=90.2%), and the three main reasons are:
* Total merchant credit amount in the last 90 days is significantly greater than the others.
* Total spend in the last 90 days is much higher than average.
* Total payment amount in the last 90 days is much higher than average.

**Prediction Explanations** can also be used to cluster alerts into subgroups with different types of transactional behaviors, which could help triage alerts to different investigation approaches.
#### Word Cloud {: #word-cloud }
The [**Word Cloud**](word-cloud) allows you to explore how text fields affect predictions. The Word Cloud uses a color spectrum to indicate the word's impact on the prediction. In this example, red words indicate the alert is more likely to be associated with a SAR.

### Evaluate accuracy {: #evaluate-accuracy }
The following insights help evaluate accuracy.
#### Lift Chart {: #lift-chart }
The [**Lift Chart**](lift-chart) shows how effective the model is at separating the SAR and non-SAR alerts. After an alert in the out-of-sample partition gets scored by the model, it is assigned a risk score that measures the likelihood of the alert being a SAR risk or becoming a SAR. In the **Lift Chart**, alerts are sorted based on the SAR risk, broken down into 10 deciles, and displayed from lowest to the highest. For each decile, DataRobot computes the average predicted SAR risk (blue plus) as well as the average actual SAR event (orange circle) and depicts the two lines together. For the champion model built for this false positive reduction use case, the SAR rate of the top decile is 55%, which is a significant lift from the ~10% SAR rate in the training data. The top three deciles capture almost all SARs, which means that the 70% of alerts with very low predicted SAR risk rarely result in SAR.

#### ROC Curve {: #roc-curve }
Once you know the model is performing well, you select an explicit threshold to make a binary decision based on the continuous SAR risk predicted by DataRobot. The [**ROC Curve**](roc-curve-tab-use) tools provide a variety of information to help make some of the important decisions in selecting the optimal threshold:
* The false negative rate has to be as small as possible. False negatives are the alerts that DataRobot determines are not SARs which then turn out to be true SARs. Missing a true SAR is very dangerous and would potentially result in an MRA (matter requiring attention) or regulatory fine.
This case takes a conservative approach. To have a false negative rate of 0, the threshold has to be low enough to capture all the SARs.
* Keep the alert volume as low as possible in order to reduce false positives. In this context, all past alerts that are not SARs are the de facto false positives. The machine learning model is likely to assign lower scores to those non-SAR alerts; therefore, pick a threshold high enough to screen out as many false positive alerts as possible.
* Ensure the selected threshold works not only on the seen data, but also on unseen data, so that when the model is deployed to the transaction monitoring system for ongoing scoring, it can still reduce false positives without missing any SARs.
Evaluating different thresholds on the cross-validation data (the data used for model training and validation) shows that `0.03` is the optimal threshold, since it satisfies the first two criteria. On the one hand, the false negative rate is 0; on the other hand, the alert volume is reduced from `8000` to `2142`, reducing false positive alerts by 73% (`5858/8000`) without missing any SARs.

For the third criterion (does the threshold also work on unseen alerts?), you can quickly validate it in DataRobot. By changing the data selection to Holdout and applying the same threshold (`0.03`), the false negative rate remains 0, and the false positive reduction rate remains at 73% (`1457/2000`). This shows that the model generalizes well and will perform as expected on unseen data.
#### Payoff matrix {: #payoff-matrix}
From the **Profit Curve** tab, use the [**Payoff Matrix**](profit-curve) to set thresholds based on simulated profit. If the bank has a specific risk tolerance for missing a small portion of historical SAR, they can also apply the **Payoff Matrix** to pick up the optimal threshold for the binary cutoff. For example:
Field | Example | Description
----- | ------- | -----------
False Negative | FN=`-$200` | Reflects the cost of remediating a SAR that was not detected.
False Positive | FP=`-$50` | Reflects the cost of investigating an alert that proved a "false alarm."
Metrics | False Positive Rate, False Negative Rate, and Average Profit | Provides standard statistics to help describe model performance at the selected display threshold.
By setting the cost per false positive to `$50` (cost of investigating an alert) and the cost per false negative to `$200` (cost of remediating a SAR that was not detected), the threshold is optimized at `0.1183` which gives a minimum cost of `$53k ($6.6 * 8000)` out of 8000 alerts and the highest ROI of `$347k ($50 * 8000 - $53k)`.
On the one hand, the false negative rate remains low (only 5 SARs were not detected); on the other hand, the alert volume is reduced from 8000 to 1988, meaning the number of investigations is reduced by more than 75% (6012/8000).
The threshold is optimized at `0.0619`, which gives the highest ROI of $300k out of 8000 alerts. By setting this threshold, the bank will reduce false positives by 74.3% (`5940/8000`) at the risk of missing only 3 SARs.

See the [deep dive](#deep-dive-imbalanced-targets) for information on handling class imbalance problems.
### Post-processing {: #post-processing }
Once the modeling team decides on the champion model, they can download [compliance documentation](compliance/index) for the model. The resulting Microsoft Word document provides a 360-degree view of the entire model-building process, as well as all the challenger models that are compared to the champion model. Most of the machine learning models used for the Financial Crime Compliance domain require approval from the Model Risk Management (MRM) team. The compliance document provides comprehensive evidence and rationale for each step in the model development process.

| aml-2-include |
To create a deployment from the Leaderboard:
1. From the Leaderboard, select the model to use for generating predictions and click **Predict > Deploy**. The **Deploy model** page lets you create a new deployment for the selected model. In this example, the model is both recommended for deployment and prepared for deployment:

!!! note
The **Deploy** tab behaves differently in environments without a dedicated prediction server, as described in the section on [shared modeling workers](#use-shared-modeling-workers), below.
2. If the model is not [prepared for deployment](model-rec-process#prepare-a-model-for-deployment) as in the example below, best practice recommends that you click **Prepare for deployment**. DataRobot runs feature impact, retrains the model on a reduced feature list, trains on a higher sample size and then the full sample size (latest data for date/time partitioned projects).

3. If using a binary classification model, set the [**Prediction threshold**](threshold) before proceeding. (New models added to the Leaderboard are assigned the default value 0.5.)
!!! note
If you set the prediction threshold before the [deployment preparation process](model-rec-process), the value does not persist through the process. When deploying the prepared model, if you want it to use a value other than the default, set the value after the model has the **Prepared for Deployment** badge applied.
4. To deploy a prepared model, click **Deploy model**.
5. Add [deployment information and create the deployment](add-deploy-info). | deploy-leaderboard |
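The same Leaderboard deployment can also be scripted. A minimal sketch with the DataRobot Python client, assuming you already have an API token, access to a prediction server, and the ID of a model prepared for deployment (exact method names can vary across client versions):

```python
import datarobot as dr

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

prediction_server = dr.PredictionServer.list()[0]
deployment = dr.Deployment.create_from_learning_model(
    model_id="MODEL_ID",  # a Leaderboard model prepared for deployment
    label="Leaderboard deployment",
    description="Deployed from the Leaderboard",
    default_prediction_server_id=prediction_server.id,
)
print(deployment.id)
```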
DataRobot offers portable prediction methods, allowing you to execute prediction jobs outside of the DataRobot application. The portable prediction methods are detailed below:
Method | Description
------ | ------------
[Scoring Code](scoring-code/index) | You can export Scoring Code from DataRobot in Java or Python to make predictions. Scoring Code is portable and executable in any computing environment. This method is useful for low-latency applications that cannot fully support REST API performance or lack network access.
[Portable Prediction Server](pps/index) | A remote DataRobot execution environment for DataRobot model packages (`MLPKG` files) distributed as a self-contained Docker image.
[DataRobot Prime](prime/index) (Disabled) | The ability to create new DataRobot Prime models has been removed from the application. This does not affect existing Prime models or deployments. | port-pred-options |
The **Time Series** tab provides a variety of options for customizing time series projects.
Using the advanced options settings can impact DataRobot's feature engineering and how it models data. There are a few reasons to work with these options, although for most users, the defaults that DataRobot selects provide optimized modeling. The following describes the available options, which vary depending on the project type:
Option | Description
------- | ----------
[Use multiple time series](multiseries#set-the-series-id) | Set or change the series ID for multiseries modeling.
[Allow partial history in predictions](#allow-partial-history) | Allow predictions that are based on feature derivation windows with incomplete historical data.
[Enable cross-series feature generation](#enable-cross-series-feature-generation) | Set cross-series feature derivation for regression projects.
[Add features as known in advance (KA)](#set-known-in-advance-ka) | Add features that do not need to be lagged.
[Exclude features from derivation](#exclude-features-from-derivation) | Identify features that will have automatic time-based feature engineering disabled.
[Add calendar](#calendar-files) | Upload, add from the catalog, or generate an event file that specifies dates or events that require additional attention.
[Customize splits](#customize-model-splits) | Specify the number of groupings for model training (based on the number of workers).
[Treat as exponential trend](#treat-as-exponential-trend) | Apply a log-transformation to the target feature.
[Exponentially weighted moving average](#exponentially-weighted-moving-average) | Set a smoothing factor for EWMA.
[Apply differencing](#apply-differencing) | Set DataRobot to apply differencing to make the target stationary prior to modeling.
[Weights](#apply-weights) | Set weights to indicate a row's relative importance.
[Use supervised feature reduction](#use-supervised-feature-reduction) | Control whether DataRobot discards low-impact derived features.

After setting any advanced options, scroll to the top of the page to begin modeling.
## Use multiple time series {: #use-multiple-time-series }
For multiseries modeling (automatically detected when the data has multiple rows with the same timestamp), you initially set the series identifier from the start page. You can, however, change it before modeling either by [editing it on that page](multiseries#set-the-series-id) or editing on this section of the **Advanced Options > Time Series** tab:

## Allow partial history {: #allow-partial-history }
Not all blueprints are designed to predict on new series with only partial history, as it can lead to suboptimal predictions. This is because for those blueprints the full history is needed to derive the features for specific forecast points. "Cold start" is the ability to model on series that were not seen in the training data; partial history refers to prediction datasets with series history that is only partially known (historical rows are partially available within the feature derivation window). When **Allow partial history** is checked, this option "instructs" Autopilot to run those blueprints optimized for cold start and also for partial history modeling, eliminating models with less accurate results for partial history support.

## Enable cross-series feature generation {: #enable-cross-series-feature-generation }
In multiseries datasets, time series features are derived, by default, based on historical observations of each series independently. For example, a feature “Sales (7 day mean)” calculates the average sales for each store in the dataset.
It may be desirable, however, to have features that consider historical observations across series to better capture signals in the data, a common need for retail or financial market forecasting. To address this, DataRobot allows you to extract rolling statistics on the total target across all series in a regression project. Using this method allows a model to leverage information across multiple series, potentially yielding insight into recent overall market trends. Some examples of derived features using this capability (see the sketch after this list):
* Sales (total) (28 day mean): total sales across all stores within a 28 day window
* Sales (total) (1st lag): latest value of total sales across all stores
* Sales (total) (naive 7 day seasonal value): total sales 7 days ago
* Sales (total) (7 day diff) (28 day mean): average of 7-day differenced total sales in a 28 day window
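A minimal pandas sketch of the idea behind these cross-series aggregates, using a hypothetical store/sales frame (illustrative only; DataRobot derives these features automatically):

```python
import pandas as pd

df = pd.DataFrame({
    "date": pd.to_datetime(["2020-01-01", "2020-01-01", "2020-01-02", "2020-01-02"]),
    "store": ["A", "B", "A", "B"],
    "sales": [10.0, 20.0, 12.0, 18.0],
})

# "Sales (total)": total sales across all stores on each date...
total = df.groupby("date")["sales"].sum().rename("sales (total)")

# ...then lags and rolling statistics are taken on that cross-series total.
features = pd.DataFrame({
    "sales (total) (1st lag)": total.shift(1),                    # latest value of total sales
    "sales (total) (28 day mean)": total.rolling("28D").mean(),   # 28 day rolling mean of the total
})
```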
!!! note
Cross-series feature generation is an advanced feature and most likely should only be used if hierarchical models are needed. Use caution when enabling it as it may result in errors at prediction time. If you do choose to use the feature, all series must be present and have the same start and end date, at both training and prediction time.

To enable the feature, select **Enable cross-series feature generation**. Once selected:
* Set the aggregation method to either total or average target value. As it builds models, DataRobot will generate, in addition to the diffs, lags, and statistics it generates for the target itself, features labeled <code><em>target</em> (average) ...</code> or <code><em>target</em> (total) ...</code>, based on your selection.
* Optionally, set a column to base group aggregation on, for use when there are columns that are meaningful in addition to a series ID. For example, consider a dataset that consists of stock prices over time and that includes a column labeling the industry of the given stock (for example, tech, healthcare, manufacturing, etc.). By entering `industry` as the optional grouping column, target values will be aggregated by industry as well as by the total or average across all series.
When there is no cross-series group-by feature selected, there is only one group—all series.
The resulting features DataRobot builds are named in the format <code><em>target</em>(<em>groupby-feature</em> average)</code> or <code><em>target</em> (<em>groupby-feature</em> total)</code>.
If the "group-by" field is left blank, the target is only aggregated across all series.
* Hierarchical models are enabled for datasets with non-negative target values when cross-series features are generated by using total aggregation. These two-stage models generate the final predictions by first predicting the total target aggregated across series, then predicting the proportion of the total to allocate to each series. DataRobot's hierarchical blueprints apply reconciliation methods to the results, correcting for results where the prediction proportions don't add up to 1. To do this, DataRobot creates a new hierarchical feature list. When running Autopilot, DataRobot only runs hierarchical models using the hierarchical feature list. For user-initiated model builds, you can select any feature list to run a hierarchical model or you can use the hierarchical feature list on other model types. Be aware, however, that these options may not yield the best results.
!!! note
If cross-series aggregation is enabled, all series data must be included in the training dataset. That is, you cannot introduce new series data at prediction time.
## Set "known in advance" (KA) {: #set-known-in-advance-ka }
Variables for which you know the value in advance (KA) and that do not need to be lagged can be added for different handling from **Advanced options** prior to model building. You can add any number of _original_ (parent) features to this list of known variables (i.e., user-transformed or derived features cannot be handled as KA). By informing DataRobot that some variables are known in advance and providing them at prediction time, forecast accuracy is significantly improved (e.g., better forecasts for public holidays or during promotions).
If a feature is flagged as known, its future value needs to be provided at prediction time or predictions will fail. While KA features can have missing values in the prediction data inside of the forecast window, that configuration may affect prediction accuracy. DataRobot surfaces a warning and also an information message beneath the affected dataset.
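When configuring a project through the DataRobot Python client, KA features can be declared with feature settings. A minimal sketch, assuming a hypothetical `promotion_flag` column and a `date` partition column (method and parameter names may vary across client versions):

```python
import datarobot as dr

# Mark "promotion_flag" as known in advance; its future values must then be
# supplied at prediction time.
feature_settings = [dr.FeatureSettings("promotion_flag", known_in_advance=True)]

spec = dr.DatetimePartitioningSpecification(
    datetime_partition_column="date",
    use_time_series=True,
    feature_settings=feature_settings,
)
# The spec is then passed as the partitioning method when starting the project,
# for example: project.analyze_and_model(target="sales", partitioning_method=spec)
```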
??? tip "Deep dive: Known in advance "
For time series problems, DataRobot takes original features, lags them, and creates rolling statistics from the history available. Some features, however, are known in advance and their future value can be provided and used at prediction time. For those features, in addition to the lags and rolling statistics, DataRobot will use the actual value as the modeling data.
Holidays are a good example of this—Christmas is on December 25 this year. Or, last week was Christmas. Is December 25 a holiday (1=true, 0=false)? Because the value of that variable will always be true on that date, you can use the actual, known, real-time value. Sales promotions are another good example. Knowing that a promotion is planned for next week, by flagging the promotion variable as a "known in advance" variable, DataRobot will:
* exploit the information from the past (that a promotion also happened last week)
* when generating the forecast for next week, leverage the knowledge that a promotion is planned.
If the variable is not flagged as known in advance, DataRobot will ignore the promotion schedule knowledge and forecast quality might be affected.
Because DataRobot cannot know which variables are known in advance, the default for *forecasting* is that no features are marked as such. [Nowcasting](nowcasting#features-known-in-advance), by contrast, adds all covariate features to the KA list by default (although the list can be modified).
!!! tip
See the section [below](#calendar-file-or-ka) for information that helps determine whether to use a calendar event file or to manually add the calendar event and set it to known in advance.
## Exclude features from derivation {: #exclude-features-from-derivation }
DataRobot's time series functionality derives new features from the modeling data and creates a new [modeling dataset](ts-create-data#create-the-modeling-dataset). There are times, however, when you do not want to automate time-based feature engineering (for example, if you have extracted your own time-oriented features and do not want further derivation performed on them). For these features, you can exclude them from derivation from the **Advanced options** link. Note that the standard [automated transformations](auto-transform), part of EDA1, are still performed.
You can exclude any feature from further derivation with the exception of:
* Series identifier
* Primary date/time

See the section on adding features, immediately below. Also, consider excluding features from modeling [feature lists](ts-feature-lists#excluding-features-from-feature-lists) after derivation.
### Add/identify known or excluded features {: #addidentify-known-or-excluded-features }
To add a feature, either:
* Begin typing in the box to filter feature names to match your string, select the feature, and click **Add**. Repeat for each desired feature.
* Click **Add All Features** to add every feature from the dataset.

To remove a feature, click the **x** to the right of the feature name; to clear all features click **Clear Selections**.
* Or, from the EDA1 data table (data prior to clicking **Start**), check the boxes to the left of one or more features and, from the menu, choose **Actions > Toggle _x_ features as...** (known in advance or excluded from derivation). To undo, check the box and toggle the selection again.

Known in advance and excluded from derivation features must be set separately.
Features that are known in advance or excluded from derivation are marked as such in the raw features list prior to pressing **Start**:

## Calendar files {: #calendar-files }
Calendars provide a way to specify dates or events in a dataset that require additional attention. A calendar file lists different (distinct) dates and their labels, for example:

    date,holiday
    2019-01-01,New Year's Day
    2019-01-21,Martin Luther King, Jr. Day
    2019-02-18,Washington's Birthday
    2019-05-27,Memorial Day
    2019-07-04,Independence Day
    2019-09-02,Labor Day
    ...

When provided, DataRobot automatically derives and creates special features based on the calendar events (e.g., time until the next event, labeling the most recent event). The [**Accuracy Over Time**](aot#identify-calendar-events) chart provides a visualization of calendar events along the timeline, helping to provide context for predicted and actual results.
[Multiseries calendars](multiseries#multiseries-calendars) (supported for uploaded calendars only) provide additional capabilities for multiseries projects, allowing you to add events per series.
!!! tip
See the section [below](#calendar-file-or-ka) for information that helps determine whether to use a calendar event file or to manually add the calendar event and set it to KA.
### Specify a calendar file {: #specify-a-calendar-file }
You can specify a calendar file containing a list of events relevant to your dataset in one of two ways:
* [Use your own file](#upload-your-own-calendar-file), either by uploading a local file or using a calendar saved to the **AI Catalog**.

* Generate a [preloaded calendar](#use-a-preloaded-calendar-file) based on country code.

Once used, regardless of the selection method, all calendars are stored in the **AI Catalog**. From there, you can view and download any calendar. See the [**AI Catalog**](catalog#upload-calendars) for complete information.
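Calendars can also be registered programmatically. A minimal sketch with the DataRobot Python client, assuming a hypothetical local `us_holidays_2019.csv` file (method names may vary across client versions):

```python
import datarobot as dr

# Upload the calendar file, then reference it by ID when configuring
# time series partitioning for a project.
calendar = dr.CalendarFile.create("us_holidays_2019.csv", calendar_name="US holidays 2019")

spec = dr.DatetimePartitioningSpecification(
    datetime_partition_column="date",
    use_time_series=True,
    calendar_id=calendar.id,
)
```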
### Upload your own calendar file {: #upload-your-own-calendar-file }
When uploading your own file, you can define calendar events in the best format for your data (one that aligns with DataRobot's [recognized formats](file-types#date-and-time-formats)) or, optionally, specify them in [ISO 8601](https://www.iso.org/iso-8601-date-and-time-format.html){ target=_blank } format.
The date/time format must be consistent across all rows. The following table shows sample dates and durations.
Date | Event name | Event Duration*
---- | ---------- | --------------
2017-01-05T09:30:00.000 | Start trading | P0Y0M0DT8H0M0S
2017-01-08T00:00:00.003 | Sensor on | PT10H
2017-12-25T00:00:00.000 | Christmas holiday |
2018-01-01T00:00:00.000 | New Year's day | P1D
\* There is no support for ISO weeks (e.g., P5W).
The event duration field is optional. If not specified, DataRobot assigns a duration based on the time unit found in the uploaded data.
When the detected time unit for the uploaded data is... | Default event duration, if not specified, is...
-------------------------------------- | --------------------------------------
Year, quarter, month, day | 1 day (P1D)
Hour, minute, second, millisecond | 1 millisecond (PT0.001S)
See the [calendar file requirements](#calendar-file-requirements) for more detail.
??? tip "Deep dive: Setting duration"
When determining duration (to/from, next/previous), derived features reference the present point in time, where (1) is the duration to next calendar event and (2) is the duration from the previous/current calendar event.

These features can provide information such as:
* How long since the promotion ended?
* When should the next machine downtime be scheduled?
To ensure accuracy, DataRobot provides guardrails to support calendar-derived features based on calendar events that can overlap. In the event of overlap, DataRobot first considers the event of shorter duration and then uses lexical order as the tiebreaker.

Following are examples of how DataRobot uses event duration and lexical order to handle overlapping events:
**Derived features referenced to (a):**
* (1) Next calendar event type = Event G, as it has the shortest duration. If Event G is absent, lexical order promotes Event A over Event D.
* (2) Previous calendar event type = Event B
**Derived features referenced to (b):**
* (3) Next calendar event type = Event B
* (4) Previous calendar event type = Event C
**Derived features referenced to (c):**
* (5) Next calendar event type = Event E
* (6) Previous calendar event type = Event G, as it has the shortest duration. If Event G is absent, lexical order promotes Event A over Event D.
#### Calendar file requirements {: #calendar-file-requirements }
When uploading your own calendar file, note that the file:
* Must have one date column.
* The date/time format should be consistent across all rows.
* Must span the entire training data date range, as well as all future dates for which any models will be forecasting.
* If directly uploaded via a local file, must be in CSV or XLSX format with a header row. If it comes from the **AI Catalog**, it can be from any supported file format as long as it meets the other data requirements and the columns are named.
* Cannot be updated in an active project. Specify all future calendar events at project start; if you did not, train a new project to include them.
* Can optionally include a second column that provides the event name or type.
* Can optionally include a column named `Event Duration` that specifies the duration of calendar events.
* Can optionally include a series ID column that specifies which series an event is applicable to. This column name must match the name of the column set as the series ID.
* Multiseries ID columns let you specify different sets of events for different series (e.g., holidays for different regions).
* Values of the series ID may be absent for specific events. This means that the event is valid for all series in the project dataset (e.g., New Year's Day is a holiday in all series).
* If a multiseries ID column is not provided, all listed events will be applicable to all series in the project dataset.
Within the app, click **See file requirements** to display an infographic summarizing the format of the calendar file.
#### Best practice column order {: #best-practice-column-order }
* Single series calendars: Date/time, Calendar Events, **Event Duration**.
* [Multiseries calendars](multiseries#multiseries-calendars): Date/time, Calendar Events, Series ID, **Event Duration**.
Note that the duration column must be named **Event Duration**; other columns have no naming requirements.
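As a sketch of the programmatic route, a calendar file like the one above can be registered with the DataRobot Python client; this assumes an installed, authenticated client, and the file and calendar names are illustrative.
``` python
import datarobot as dr

dr.Client(token="YOUR_API_TOKEN", endpoint="https://app.datarobot.com/api/v2")

# Upload the calendar file; for multiseries calendars, also pass
# multiseries_id_columns=["<series ID column>"].
calendar = dr.CalendarFile.create(
    "trading_calendar.csv",
    calendar_name="Trading calendar",
)
print(calendar.id)
```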
### Use a preloaded calendar file {: #use-a-preloaded-calendar-file }
To use a preloaded calendar, simply select the country code from the dropdown. DataRobot automatically generates a calendar that covers the span of the dataset (start and end dates).
Preloaded calendars are not available for multiseries projects. To include series-specific events, use the **Attach Calendar** method.
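If you prefer to script this step, the DataRobot Python client also appears to expose a helper for generating preloaded calendars from a country code. Treat the method name and signature below as an assumption and verify them against your installed client version.
``` python
import datarobot as dr
from datetime import datetime

# Assumed helper for preloaded (country-code) calendars; confirm in the client docs.
calendar = dr.CalendarFile.create_calendar_from_country_code(
    "US",
    start_date=datetime(2021, 1, 1),
    end_date=datetime(2024, 1, 1),
)
```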
### Calendar file or KA? {: #calendar-file-or-ka }
There are times when you can handle dates either by uploading a calendar event file *or* by manually adding the calendar event as a categorical feature and setting it as known in advance (KA). In other words, you can:
1. Enter calendar events as columns in your dataset and set them as KA.
2. Import events as a calendar.
The following are differences to consider when choosing a method:
* Calendars must be daily; if you need a more granular time step, you must use KA.
* DataRobot generates additional features from calendars, such as "days until" or "days after" a calendar event.
* Calendar events must be known into the future at training time; KA features must be known into the future at predict time.
* For KA, when deploying predictions you must generate the KA features for each prediction request.
* Calendar events in a multiseries project can apply to a specific series or to all series.
## Customize model splits {: #customize-model-splits }
Use the **Customize splits** option to set the number of splits (groups of models trained) used by a given model. Set this advanced option based on the number of available workers in your organization. With fewer workers, you may want fewer splits so that some workers remain available for other processing. If you have a large number of workers, you can set the number higher, which results in more jobs in the queue.
!!! note
The maximum number of splits is dependent on DataRobot version. Managed AI Platform users can configure a maximum of five splits; Self-Managed AI Platform users can configure up to 10.
A split is a group of models trained on a set of derived features that have been downsampled. Configuring more splits results in less downsampling of derived features and therefore training on more of the post-processed data. Working with more post-processed data, however, results in longer training times.

## Treat as exponential trend {: #treat-as-exponential-trend }
Accounting for an exponential trend is valuable when your target values rise or fall at increasingly higher rates. A classic example of an exponential trend can be seen in forecasting population size—the size of the population in the next generation is proportional to the size of the previous generation. What will the population be in five generations?
When DataRobot detects an exponential trend, it automatically applies a log transformation to the target feature; you can force a setting if desired. To determine whether DataRobot applied a log transform (i.e., detected an exponential trend), review the derived, post-EDA2 data. If the transform was applied, features involving the target have a `(log)` suffix (for example, `Sales (log) (naive 7 day value)`). If you want a different outcome, reload the data and set exponential trends to **No**.
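The toy series below (not DataRobot code) shows why the log transform helps: it turns multiplicative, exponential growth into an approximately linear trend.
``` python
import numpy as np
import pandas as pd

# Toy series that grows ~5% per day (an exponential trend)
dates = pd.date_range("2023-01-01", periods=60, freq="D")
sales = pd.Series(100 * 1.05 ** np.arange(60), index=dates)

# After a log transform, the period-over-period change is a constant
# (~log(1.05)); the trend becomes linear and easier to model.
log_sales = np.log(sales)
print(log_sales.diff().dropna().round(4).unique())
```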
## Exponentially weighted moving average {: #exponentially-weighted-moving-average }
An exponentially weighted moving average (EWMA) is a moving average that places a greater weight and significance on the most recent data points, measuring trend direction over time. The "exponential" aspect indicates that the weighting factor of previous inputs decreases exponentially. This is important because otherwise a very recent value would have no more influence on the variance than an older value.
In regression projects, specify a value between 0 and 1; it is applied as a smoothing factor (lambda). Each value is weighted by a multiplier, and each weight is a constant multiple of the prior time step's weight (a numeric sketch follows the list below).

With this value set, DataRobot creates:
* New derived features, identified by the addition of `ewma` to the feature name.
* An additional feature list: [With Differencing (ewma baseline)](ts-feature-lists#automatically-created-feature-lists).
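As an illustration of the weighting (not DataRobot's exact parameterization of the derived `ewma` features), pandas computes the same recursion when given a smoothing factor between 0 and 1:
``` python
import pandas as pd

y = pd.Series([10, 12, 11, 15, 14, 18, 17, 21])
alpha = 0.3  # smoothing factor; larger values weight recent points more heavily

ewma = y.ewm(alpha=alpha, adjust=False).mean()

# Equivalent recursion: each prior weight is multiplied by (1 - alpha)
manual = [float(y.iloc[0])]
for value in y.iloc[1:]:
    manual.append(alpha * value + (1 - alpha) * manual[-1])

print(ewma.tolist())
print(manual)
```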
## Apply differencing {: #apply-differencing }
DataRobot automatically detects whether a project's target is stationary, that is, whether the statistical properties of the target are constant over time. If the target is *not* stationary, DataRobot attempts to make it stationary by applying a differencing strategy prior to modeling; a short sketch of the simple and seasonal strategies follows the table below. This improves the accuracy and robustness of the underlying models.
If you want to force a differencing selection, choose one of the following:
| Setting | Description |
|-----------|----------------|
| Auto-detect (default) | Allows DataRobot to apply differencing if it detects that the data is non-stationary. Depending on the data, DataRobot applies either simple differencing or seasonal differencing if periodicity is detected. |
| Simple | Sets differencing based on the delta from the most recent, available value inside the feature derivation window. |
| Seasonal | Sets differencing using the specified time step instead of using the delta from the last available value. The increment of the time step is based on the detected time unit of the data. |
| No | Disable differencing for this project. |
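A minimal pandas sketch of the two strategies; the 7-day seasonal period is purely illustrative, as DataRobot selects the period from detected periodicity.
``` python
import pandas as pd

y = pd.Series(
    [100, 103, 101, 110, 112, 111, 120, 123, 121, 130],
    index=pd.date_range("2023-01-01", periods=10, freq="D"),
)

# Simple differencing: delta from the most recent available value
simple_diff = y.diff(1)

# Seasonal differencing: delta from the value one seasonal period back
seasonal_diff = y.diff(7)

print(simple_diff.dropna().head())
print(seasonal_diff.dropna().head())
```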
## Apply weights {: #apply-weights }
In some time series projects, the ability to define row weights is critical to the accuracy of the model. To apply weights to a time series project, use the [**Additional**](additional) tab of advanced options.
Once set, weights are included (as applicable) as part of the derived feature creation. The weighted feature is appended with `(actual)` and the **Importance** column identifies the selection:

The actual row weighting happens during model training. Time decay weight blueprints, if any, are multiplied with your configured weight to produce the final modeling weights.
The following derived time series features take weights into account (when applicable); a brief sketch comparing weighted and unweighted rolling averages follows the lists below:
* Rolling average
* Rolling standard deviation
The following time series features are derived as usual, ignoring weights:
* Rolling min
* Rolling max
* Rolling median
* Rolling lags
* Naive predictions
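A rough sketch of the difference, with hypothetical row weights (DataRobot's internal calculation may differ in detail):
``` python
import pandas as pd

values = pd.Series([10.0, 12.0, 11.0, 15.0, 14.0])
weights = pd.Series([1.0, 1.0, 2.0, 2.0, 3.0])  # hypothetical row weights
window = 3

# Plain rolling average: every row in the window contributes equally
plain = values.rolling(window).mean()

# Weighted rolling average: rows with larger weights contribute more
weighted = (values * weights).rolling(window).sum() / weights.rolling(window).sum()

print(pd.DataFrame({"plain": plain, "weighted": weighted}))
```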
## Use supervised feature reduction {: #use-supervised-feature-reduction }
Enabled by default, supervised feature reduction discards low-impact features prior to modeling. When identified features are removed, the resulting optimized feature set provides better runtimes with similar accuracy. Model interpretability is also improved as focus is on only impactful features. When disabled, the feature generation process results in more features but also longer model build times.
| ts-adv-opt-include |
If you expect to be able to increase your worker count but cannot, the reasons may be:
* You have hit your [worker limit](#worker-limit).
* Your workers are part of a [shared pool](#pooled-workers).
* Your workers are [in use by another project](#workers-in-use).
#### Worker limit {: #worker-limit }
[Modeling worker allocations](admin-overview#modeling-worker-allocation) are set by your administrator. Each worker processes a modeling job. This job count applies across all projects in the cluster; that is, multiple browser windows building models all count against your personal worker allocation—opening more windows does not provide more workers.
#### Pooled workers {: #pooled-workers }
If you are in an organization, it may implement a [shared pool of workers](admin-overview#what-are-organizations). In this case, workers are allocated across all jobs for all users in the organization on a first-in, first-out basis. While you may not have to wait for other users' jobs to complete before submitting your own, your (and their) jobs are placed in the queue and processed in the order they were received.
#### Workers in use {: #workers-in-use }
If you believe you should be able to increase the worker count but cannot (for example, the queue reports "using X of Y workers"), there are two values to consider when debugging. If Y is lower than you expect, check your worker limit and the [org limit](#worker-limit). If X is lower than you expect, check whether workers are being allocated to other projects or users in your organization.
To check worker use in your projects, navigate to the [**Manage Projects** inventory](manage-projects#manage-projects-control-center) and look for queued jobs. You can identify them by:
* An icon and count in the inventory.

* The list that results from using [**Filter Projects**](manage-projects#filter-projects) and selecting **Running or queued**.
If you find a project with queued jobs, you can stop Worker Queue processing.
1. Click on the project in the inventory to make it the active project.
2. In the Worker Queue, click the X icon to remove all tasks. This removes all in-progress or queued models.

You can also pause the project. As workers complete active jobs, they will become available to pick up jobs from other projects.
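You can also free up workers programmatically with the DataRobot Python client; a minimal sketch, assuming an authenticated client and a placeholder project ID:
``` python
import datarobot as dr

dr.Client(token="YOUR_API_TOKEN", endpoint="https://app.datarobot.com/api/v2")

# Placeholder project ID; find busy projects in the Manage Projects inventory
project = dr.Project.get("YOUR_PROJECT_ID")

# Lower this project's worker count so other projects can pick up workers...
project.set_worker_count(1)

# ...or pause its Autopilot so no new jobs are queued as current ones finish.
project.pause_autopilot()
```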
| worker-queue-tbsht-include |
## DRUM on Mac {: #drum-on-mac }
The following instructions describe installing DRUM with `conda` (although you can use other tools if you prefer) and then using DRUM to test a task locally. Before you begin, DRUM requires:
* An installation of [`conda`](https://docs.conda.io/en/latest/miniconda.html){ target=_blank }.
* A Python 3.7+ environment (Python is also required for R).
### Install DRUM on Mac {: #install-drum-mac }
1. Create and activate a virtual environment with Python 3.7+. For example, for Python 3.8, run the following in the terminal:
``` sh
conda create -n DR-custom-tasks python=3.8 -y
conda activate DR-custom-tasks
```
2. Install DRUM:
``` sh
conda install -c conda-forge uwsgi -y
pip install datarobot-drum
```
3. To set up the environment, install [Docker Desktop](https://www.docker.com/products/docker-desktop){ target=_blank } and download from GitHub the DataRobot [drop-in environments](https://github.com/datarobot/datarobot-user-models/tree/master/public_dropin_environments){ target=_blank } where your tasks will run. This recommended procedure ensures that your tasks run in the same environment both locally and inside DataRobot.
Alternatively, if you plan to run your tasks in a local `python` environment, install packages used by your custom task into the same environment as DRUM.
| drum-for-mac |
| Host: https://app.datarobot.com | Host: https://app.eu.datarobot.com |
|---------------------------------|------------------------------------|
| 100.26.66.209 | 18.200.151.211 |
| 54.204.171.181 | 18.200.151.56 |
| 54.145.89.18 | 18.200.151.43 |
| 54.147.212.247 | 54.78.199.18 |
| 18.235.157.68 | 54.78.189.139 |
| 3.211.11.187 | 54.78.199.173 |
| 3.214.131.132 | |
| 3.89.169.252 | |
!!! note
These IP addresses are reserved for DataRobot use only.
| whitelist-ip |
# DRUM CLI tool {: #drum-cli-tool }
DataRobot user model (DRUM) is a CLI tool that allows you to work with Python, R, and Java custom models and to quickly test [custom tasks](cml-custom-tasks), [custom models](custom-models/index), and [custom environments](custom-environments) locally before uploading into DataRobot. Because it is also used to run custom tasks and models inside of DataRobot, if they pass local tests with DRUM, they are compatible with DataRobot. You can download DRUM from <a target="_blank" href="https://pypi.org/project/datarobot-drum/">PyPi</a>.
DRUM can also:
* Run performance and memory usage testing for models.
* Perform model validation tests (for example, checking model functionality on corner cases, like null values imputation).
* Run models in a Docker container.
You can install DRUM for [Ubuntu](#drum-on-ubuntu), [Windows](#drum-on-windows-with-wsl2), or [MacOS](#drum-on-mac).
!!! note
DRUM is not regularly tested on Windows or Mac. These steps may differ depending on the configuration of your machine. | drum-tool |
This visualization supports sliced insights. Slices allow you to define a user-configured subpopulation of a model's data based on feature values, which helps to better understand how the model performs on different segments of data. See the full [documentation](sliced-insights) for more information. | slices-viz-include |
!!! note "Time of Prediction"
The <span id="time-of-prediction">Time of Prediction</span> value differs between the [Data Drift](data-drift) and [Accuracy](deploy-accuracy) tabs and the [Service Health](service-health) tab:
* On the Service Health tab, the "time of prediction request" is _always_ the time the prediction server _received_ the prediction request. This method of prediction request tracking accurately represents the prediction service's health for diagnostic purposes.
* On the Data Drift and Accuracy tabs, the "time of prediction request" is, _by default_, the time you _submitted_ the prediction request, which you can override with the prediction timestamp in the [Prediction History](add-deploy-info#prediction-history-and-service-health) settings.
| service-health-prediction-time |
| Prediction method | Details | File size limit |
|-------------------|---------|-----------------|
| Leaderboard predictions | To make predictions on a non-deployed model using the UI, expand the model on the Leaderboard and select [**Predict > Make Predictions**](predict). Upload predictions from a local file, URL, data source, or the AI Catalog. You can also upload predictions using the modeling predictions API, also called the "[V2 predictions API](https://datarobot-public-api-client.readthedocs-hosted.com/en/v2.26.1/entities/predict_job.html){ target=_blank }." Use this API to test predictions using your modeling workers on small datasets. Predictions can be limited to 100 requests per user, per hour, depending on your DataRobot package. | 1GB |
| Batch predictions (UI) | To make batch predictions using the UI, deploy a model and navigate to the deployment's [**Make Predictions**](batch-pred) tab (requires MLOps). | 5GB |
| Batch predictions (API) | The [Batch Prediction API](batch-prediction-api/index) is optimized for high-throughput and contains production grade connectivity options that allow you to not only push data through the API, but also connect to the AI catalog, cloud storage, databases, or data warehouses (requires MLOps). | Unlimited |
| Prediction API (real-time) | To make real-time predictions on a deployed model, use the [Prediction API](dr-predapi). A request sketch follows this table. | 50 MB |
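A minimal real-time request sketch, assuming the standard deployment integration pattern; the host, deployment ID, and key values are placeholders, and the exact endpoint and headers should be confirmed on your deployment's Prediction API integration snippet.
``` python
import requests

API_KEY = "YOUR_API_TOKEN"                    # DataRobot API key
DATAROBOT_KEY = "YOUR_DATAROBOT_KEY"          # per-organization prediction key (if required)
DEPLOYMENT_ID = "YOUR_DEPLOYMENT_ID"
PREDICTION_SERVER = "https://example.orm.datarobot.com"  # placeholder prediction host

url = f"{PREDICTION_SERVER}/predApi/v1.0/deployments/{DEPLOYMENT_ID}/predictions"
headers = {
    "Content-Type": "text/csv; charset=UTF-8",
    "Authorization": f"Bearer {API_KEY}",
    "DataRobot-Key": DATAROBOT_KEY,
}

# Scoring data must stay under the 50 MB real-time limit noted above.
with open("scoring.csv", "rb") as f:
    response = requests.post(url, data=f, headers=headers)
print(response.json())
```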
| pred-limits-include |
This section provides preliminary documentation for features currently in the public preview pipeline. If not enabled for your organization, the feature is not visible.
Although these features have been tested within the engineering and quality environments, they should not be used in production at this time. Note that public preview functionality is subject to change and that any Support SLA agreements are not applicable.
!!! info "Availability information"
Contact your DataRobot representative or administrator for information on enabling or disabling public preview features.
| pub-preview-notice-include |
To configure the **Time series options**, under **Time series prediction method**, select [**Forecast point** or **Forecast range**](ts-predictions#forecast-settings).
=== "Forecast point"
Select **Forecast point** to choose the specific date from which you want to begin making predictions, and then select a **Forecast point selection method**:
* **Automatic**: DataRobot sets the forecast point for you based on the scoring data.

* **Relative**: You set a forecast point relative to the start time of a scheduled prediction job. Then, you set the **Offset**. Select the number of **Months**, **Days**, **Hours**, and **Minutes** to offset from scheduled job runtime. Click **Before job time** or **After job time**, depending on how you want to apply the offset.

* **Fixed**: You set the forecast point date.

=== "Forecast range"
Select **Forecast range** if you intend to make bulk, historical predictions (instead of forecasting future rows from the forecast point). Then, select a forecast range:
* **Use all dates from prediction source**: Predictions use all forecast distances within the selected time range.
* **Use specific date range**: Set a specific date range using the date selector.

| batch-pred-jobs-ts-options-include |
## Business problem {: #business-problem }
Claim payments and claim adjustment are typically an insurance company’s largest expenses. For long-tail lines of business, such as workers’ compensation (which covers medical expenses and lost wages for injured workers), the true cost of a claim may not be known for many years, until it is paid in full. However, claim adjustment activities start as soon as the insurer becomes aware of a claim.
Typically, when an employee is injured at work (`Accident Date`), the employer (the insured) files a claim with its insurance company (`Report Date`), and a claim record is created in the insurer's claim system with all available information at the time of reporting. The claim is then assigned to a claim adjuster. This assignment could be purely random or based on roughly defined business rules. During the life cycle of a claim, the assignment may be re-evaluated multiple times and the claim re-assigned to a different adjuster.
This process, however, has costly consequences:
* It is well-known in insurance that 20% of claims account for 80% of the total claim payouts. Randomly assigning claims wastes resources.
* Early intervention is critical to optimal claim results. Without the appropriate assignment of resources as early as possible, seemingly mild claims can become substantial.
* Claims of low severity and complexity must wait to be processed alongside all other claims, often leading to a poor customer experience.
* A typical claim adjuster can receive several hundred new claims every month, in addition to any existing open claims. When a claim adjuster is overloaded, it is unlikely they can process every assigned claim. If too much time passes, the claimant is more likely to obtain an attorney to assist in the process, driving up the cost of the claim unnecessarily.
## Solution value {: #solution-value }
* **Challenge:** Help insurers assess claim complexity and severity as early as possible so that:
- Claims of low severity and low complexity are routed to straight-through-processing, avoiding the wait and improving the customer experience.
- Claims of high complexity get the required attention of experienced claim adjusters and nurse case managers.
- The improved communication between claimants and the insured leads to minimized attorney involvement.
- The transfer of knowledge between experienced and junior adjusters is improved.
* **Desired Outcome**
- Reduce loss adjustment expenses by more efficiently allocating claim resources.
- Reduce claims’ costs by effectively assigning nurse case managers and experienced adjusters to claims that they can impact the most.
* **How can DataRobot help?**
- Machine learning models using claim- and policy-level attributes at First Notice of Loss (FNOL) can help you understand the complicated relationship between claim severity and various policy attributes at an early stage of a claim's life cycle. Model predictions are used to rank new claims from least severe to most severe. Thresholds can be determined by the business based on the perceived level of low-, medium-, high-severity or volume of claims that a claim adjuster's bandwidth can handle. You can also create thresholds based on a combination of claim severity and claim volume. Use these thresholds and model predictions to route claims in an efficient manner.
Topic | Description
------|-----------
Use case type | Insurance / Claim Triage |
Target audience | Claim adjusters |
Metrics / KPIs| <ul><li>False positive/negative rate</li> <li>Total expense savings (in terms of both labor and more accurate adjudication of claims)</li><li>Customer satisfaction</li></ul> |
Sample dataset | [Download here](https://s3.amazonaws.com/datarobot_public_datasets/DR_Demo_Statistical_Case_Estimates.csv) |
### Problem framing {: #problem-framing }
A machine learning model learns complex patterns from historically observed data. Those patterns can be used to make predictions on new data. In this use case, historical insurance claim data is used to build the model. When a new claim is reported, the model makes a prediction on it.
Depending on how the problem is framed, the prediction can have different meanings. The goal of this claim triage use case is to have a model evaluate the workers' compensation claim severity as early as possible, ideally at the moment a claim is reported (the first notice of loss, or FNOL). The target feature is related to the total payment for a claim and the modeling unit is each individual claim.
When the total payment for a claim is treated as the target, the use case is framed as a regression problem because you are predicting a quantity. The predicted total payment can then be compared with thresholds for low and high severity claims defined by business need, which classifies each claim as low-, medium-, or high-severity.
Alternatively, you can frame this use case as a classification problem. To do so, apply the aforementioned thresholds to the total claim payment first and convert it to a categorical feature with levels "Low", "Medium" and "High". You can then build a classification model that uses this categorical variable as the target. The model instead predicts the probability a claim is going to be low-, medium- or high-severity.
Regardless of how the problem is framed, the ultimate goal is to route each claim appropriately.
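As a sketch of the classification framing, the continuous target in the sample dataset (the `Incurred` column) can be bucketed with thresholds; the threshold values below are placeholders that the business would set.
``` python
import pandas as pd

# Sample dataset referenced in this use case
df = pd.read_csv(
    "https://s3.amazonaws.com/datarobot_public_datasets/DR_Demo_Statistical_Case_Estimates.csv"
)

# Placeholder severity thresholds; choose values based on your book of business.
low_threshold, high_threshold = 5_000, 50_000

df["Severity"] = pd.cut(
    df["Incurred"],
    bins=[-float("inf"), low_threshold, high_threshold, float("inf")],
    labels=["Low", "Medium", "High"],
)
print(df["Severity"].value_counts())
```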
### ROI estimation {: #roi-estimation }
For this use case, direct return on investment (ROI) comes from improved claim handling results and expense savings. Indirect ROI stems from improved customer experience which in turn increases customer loyalty. The steps below focus on the direct ROI calculation based on the following assumptions:
* 10,000 claims every month
* Category I: 30% (3000) of claims are routed to straight through processing (STP)
* Category II: 60% (6000) of claims are handled normally
* Category III: 10% (1000) of claims are handled by experienced claim adjusters
* Average Category I claim severity is 250 without the model; 275 with the model
* Average Category II claim severity is 10K without the model; 9500 with the model
* Saved labor: 3 full-time employees with an average annual salary of 65000
`Total annual ROI` = `65000 x 3 + [3000 x (250-275) + 1000 x (10000 - 9500)] x 12` = `$5295000`
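As a quick arithmetic check, the estimate above can be reproduced directly from the formula:
``` python
# Reproduces the ROI formula above
saved_labor = 65_000 * 3                  # three full-time employees
cat1_monthly = 3_000 * (250 - 275)        # Category I claims: severity rises slightly with STP
cat3_monthly = 1_000 * (10_000 - 9_500)   # 1,000 claims with a 500 reduction in average severity

total_annual_roi = saved_labor + (cat1_monthly + cat3_monthly) * 12
print(total_annual_roi)  # 5295000
```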
## Working with data {: #working-with-data}
The sample data for this use case is a synthetic dataset from a workers' compensation insurer's claims database, organized at the individual claim level. Most claims databases in an insurance company contain transactional data, i.e., one claim may have multiple records in the claims database. When a claim is first reported, it is recorded in the claims system along with the initial information available at that time. Depending on the insurer's practice, a case reserve may be set up; the case reserve is then adjusted as claim payments are made or as newly collected information indicates a need to change it.
Policy-level information can be predictive as well. This type of information includes class, industry, job description, employee tenure, size of the employer, and whether there is a return to work program. Policy attributes should be joined with the claims data to form the modeling dataset, although they are ignored in this example.
When it comes to claim triage, insurers would like to know as early as possible how severe a claim potentially is, ideally at the moment a claim is reported (FNOL). However, an accurate estimate of a claim's severity may not be feasible at FNOL due to insufficient information. Therefore, in practice, a series of claim triage models are needed to predict the severity of a claim at different stages of that claim's life cycle, e.g., FNOL, 30 days, 60 days, 90 days, etc.
For each of the models, the goal is to predict the severity of a claim; therefore, the target feature is the total payment on a claim. The features included in the training data are the claim attributes and policy attributes at different snapshots. For example, for an FNOL model, features are limited to what is known about a claim at FNOL. For insurers still using legacy systems which may not record the true FNOL data, an approximation is often made between 0-30 days.
### Features overview {: #features-overview }
The following table outlines the prominent features in the [sample training dataset](https://s3.amazonaws.com/datarobot_public_datasets/DR_Demo_Statistical_Case_Estimates.csv).
|Feature Name | Data Type | Description | Data Source |
|:------------|:----------|:------------|:------------|
|ReportingDelay | Numeric | Number of days between the accident date and report date | Claims|
|AccidentHour | Numeric | Time of day that the accident occurred | Claims |
|Age | Numeric | Age of claimant | Claims |
|Weekly Rate | Numeric | Weekly salary | Claims |
|Gender | Categorical | Gender of the claimant | Claims |
|Marital Status | Categorical | Whether the claimant is married or not | Claims |
|HoursWorkedPerWeek | Numeric | The usual number of hours worked per week by the claimant | Claims |
|DependentChildren | Numeric | Claimant's number of dependent children | Claims |
|DependentsOther | Numeric | Claimant's number of dependents who are not children | Claims |
|PartTimeFullTime | Numeric | Whether the claimant works part time or full time | Claims |
|DaysWorkedPerWeek | Numeric | Number of days per week worked by the claimant | Claims |
|DateOfAccident | Date | Date that the accident occurred | Claims |
|ClaimDescription | Text | Text description of the accident and injury | Claims |
|ReportedDay | Numeric | Day of the week that the claim was reported to the insurer | Claims |
|InitialCaseEstimate | Numeric | Initial case estimate set by claim staff | Claims |
|Incurred | Numeric | Target: final cost of the claim (all payments made by the insurer) | Claims |
| triage-insurance-claims-include |
## DRUM on Ubuntu {: #drum-on-ubuntu }
The following describes the DRUM installation workflow. Consider the language prerequisites before proceeding.
| Language | Prerequisites | Installation command |
|----------|----------------------|------------------------------|
| Python | Python 3 recommended | `pip install datarobot-drum` |
| Java | JRE ≥ 11 | `pip install datarobot-drum` |
| R | <ul><li>Python ≥ 3.6</li><li>R framework installed</li></ul>DRUM uses the rpy2 package to run R (the latest version is installed by default). You may need to adjust the rpy2 and pandas versions for compatibility. | `pip install datarobot-drum[R]` |
To install DRUM with support for Python and Java models, use the following command:
``` sh
pip install datarobot-drum
```
To install DRUM with support for R models:
``` sh
pip install datarobot-drum[R]
```
!!! note
If you are using a Conda environment, install the wheels with a `--no-deps` flag. If any dependencies are required for a Conda environment, install them with Conda tools. | drum-for-ubuntu |
To configure the **Time series options**, under **Time series prediction method**, select [**Forecast point** or **Forecast range**](ts-predictions#forecast-settings).
=== "Forecast point"
Select **Forecast point** to choose the specific date from which you want to begin making predictions, and then select a **Forecast point selection method**:
* **Automatic**: DataRobot sets the forecast point for you based on the scoring data.

* **Fixed**: You set the forecast point date.

=== "Forecast range"
Select **Forecast range** if you intend to make bulk, historical predictions (instead of forecasting future rows from the forecast point). Then, select a forecast range:
* **Use all dates from prediction source**: Predictions use all forecast distances within the selected time range.
* **Use specific date range**: Set a specific date range using the date selector.

| batch-pred-ts-options-include |
---
title: Managed AI Platform releases
description: Read release announcements for DataRobot's generally available and public preview features released in June, 2023.
---
# Managed AI Platform releases {: #managed-ai-platform-releases }
This page provides announcements of newly released features available in DataRobot's SaaS single- and multi-tenant AI Platform, with links to additional resources.
## This month's deployment {: #this-months-deployment }
_June 28, 2023_
With the latest deployment, DataRobot's AI Platform delivered the new GA and Public Preview features listed below. From the release center you can also access:
* [Monthly deployment announcement history](cloud-history/index)
* [Public preview features](public-preview/index)
* [Self-Managed AI Platform release notes](archive-release-notes/index)
### In the spotlight {: #in-the-spotlight }
#### Foundational Models for Text AI {: #foundational-models-for-text-ai }
With this deployment, DataRobot brings foundational models for [Text AI](textai-resources) to general availability. Foundational models—large AI models trained on a vast quantity of unlabeled data at scale—provide extra accuracy and diversity and allow you to leverage large pre-trained deep learning methods for Text AI.
While DataRobot has already implemented some foundational models, such as [TinyBERT](v7.1.0-aml#tiny-bert-pretrained-featurizer-implementation-extends-nlp), those models operate at the word level, which requires additional computation (converting rows of text requires computing the embeddings for each token and then averaging their vectors). The new models—Sentence Roberta for English and MiniLM for multilingual use cases—can be adapted to a wide range of downstream tasks. They are available in pre-built blueprints in the Repository and can also be added to any blueprint via blueprint customization (via embeddings) to leverage these foundational models and improve accuracy.
#### Workbench now generally available {: #workbench-now-generally-available }
With this month’s deployment, Workbench, the DataRobot experimentation platform, moves from Public Preview to general availability. Workbench provides an intuitive, guided, machine learning workflow, helping you to experiment and iterate, as well as providing a frictionless collaboration environment. In addition to Workbench becoming GA, other Public Preview features are introduced this month, as described in the sections below.
See the [capability matrix](wb-capability-matrix) for an evolving comparison of capabilities available in Workbench and DataRobot Classic.
### June release {: #june-release }
The following table lists each new feature:
??? abstract "Features grouped by capability"
Name | GA | Public Preview
---------- | ---- | ---
**Admin** | :~~: | :~~:
[Custom role-based access control (RBAC)](#custom-role-based-access-control-rbac) | ✔ |
**Applications** | :~~: | :~~:
[New app experience in Workbench](#new-app-experience-in-workbench) | | ✔
[Prefilled application templates](#prefilled-application-templates) | | ✔
[Build Streamlit applications for DataRobot models](#build-streamlit-applications-for-datarobot-models) | | ✔
**Data** | :~~: | :~~:
[Share secure configurations](#share-secure-configurations) | ✔ |
[New driver versions](#new-driver-versions) | ✔ |
**Modeling** | :~~: | :~~:
[Foundational Models for Text AI Sentence Featurizers](#foundational-models-for-text-ai) | ✔ | |
[Tune hyperparameters for custom tasks ](#tune-hyperparameters-for-custom-tasks) | | ✔ |
[Expanded data slice support and new features](#expanded-data-slice-support-and-new-features)| ✔ | |
[Improvements to XEMP Prediction Explanation calculations](#improvements-to-xemp-prediction-explanation-calculations)| ✔ | |
[Document AI brings PDF documents](#document-ai-brings-pdf-documents-as-a-data-source) | | ✔ |
[GPU support for deep learning](#gpu-support-for-deep-learning)| | ✔ |
[Blueprint repository and Blueprint visualization](#blueprint-repository-and-blueprint-visualization) | | ✔ |
[Slices in Workbench](#slices-in-workbench) | | ✔ |
[Slices for time-aware projects (Classic)](#slices-for-time-aware-projects-classic) | | ✔ |
**Notebooks** | :~~: | :~~:
[DataRobot Notebooks](#datarobot-notebooks) | ✔ | |
**Predictions and MLOps** | :~~: | :~~:
[GitHub Actions for custom models](#github-actions-for-custom-models) | ✔ |
[Prediction monitoring jobs](#prediction-monitoring-jobs) | ✔ |
[Spark API for Scoring Code](#spark-api-for-scoring-code) | ✔ |
[Extend compliance documentation with key values](#extend-compliance-documentation-with-key-values) | | ✔
**API** | :~~: | :~~:
[DataRobotX](#datarobotx) | | ✔ |
### GA {: #ga }
#### Share secure configurations {: #share-secure-configurations}
IT admins can now configure OAuth-based authentication parameters for a data connection, and then securely share them with other users without exposing sensitive fields. This allows users to easily connect to their data warehouse without needing to reach out to IT for data connection parameters.

For more information, see the [full documentation](secure-config).
#### Custom role-based access control (RBAC) {: #custom-role-based-access-control-rbac }
Now generally available, custom RBAC is a solution for organizations with use cases that are not addressed by default roles in DataRobot. Administrators can create roles and define access at a more granular level, and assign them to users and groups.

You can access custom RBAC from **User Settings > User Roles**, which lists each available role an admin can assign to a user in their organization, including DataRobot default roles.

For more information, see the [full documentation](custom-roles).
#### New driver versions {: #new-driver-versions}
With this release, the following driver versions have been updated:
- MySQL==8.0.32
- Microsoft SQL Server==12.2.0
- Snowflake==3.13.29
See the complete list of [supported driver versions](data-sources/index) in DataRobot.
#### GitHub Actions for custom models {: #github-actions-for-custom-models }
Now generally available, the custom models action manages custom inference models and their associated deployments in DataRobot via GitHub CI/CD workflows. These workflows allow you to create or delete models and deployments and modify settings. Metadata defined in YAML files enables the custom model action's control over models and deployments. Most YAML files for this action can reside in any folder within your custom model's repository. The YAML is searched, collected, and tested against a schema to determine if it contains the entities used in these workflows. For more information, see the [custom-models-action](https://github.com/datarobot-oss/custom-models-action){ target=_blank } repository.
The [quickstart](custom-model-github-action#github-actions-quickstart) example uses a [Python Scikit-Learn model template](https://github.com/datarobot/datarobot-user-models/tree/master/model_templates/python3_sklearn){ target=_blank } from the [datarobot-user-model](https://github.com/datarobot/datarobot-user-models/tree/master/model_templates){ target=_blank } repository. After you configure the workflow and create a model and a deployment in DataRobot, you can access the commit information from the model's version info and package info and the deployment's overview:
=== "Model version info"

=== "Model package info"

=== "Deployment overview"

For more information, see [GitHub Actions for custom models](custom-model-github-action).
#### Prediction monitoring jobs {: #prediction-monitoring-jobs }
Now generally available, monitoring job definitions allow DataRobot to monitor deployments running and storing feature data, predictions, and actuals outside of DataRobot. For example, you can create a monitoring job to connect to Snowflake, fetch raw data from the relevant Snowflake tables, and send the data to DataRobot for monitoring purposes. The GA release of this feature provides a [dedicated API for prediction monitoring jobs](api-monitoring-jobs) and the ability to [use aggregation](ui-monitoring-jobs#set-aggregation-options) for external models with [large-scale monitoring enabled](agent-use#enable-large-scale-monitoring):

For more information, see [Prediction monitoring jobs](pred-monitoring-jobs/index).
#### Spark API for Scoring Code {: #spark-api-for-scoring-code }
The Spark API for Scoring Code library integrates DataRobot Scoring Code JARs into Spark clusters. This update makes it easy to use Scoring Code in PySpark and Spark Scala without writing boilerplate code or including additional dependencies in the classpath, while also improving the performance of scoring and data transfer through the API.
This library is available as a [PySpark API](sc-apache-spark#pyspark-api) and a [Spark Scala API](sc-apache-spark#spark-scala-api). In previous versions, the Spark API for Scoring Code consisted of multiple libraries, each supporting a specific Spark version. Now, one library includes all supported Spark versions:
* The PySpark API for Scoring Code is included in the [`datarobot-predict`](https://pypi.org/project/datarobot-predict/){ target=_blank } Python package, released on PyPI. The PyPI project description contains documentation and usage examples.
* The Spark Scala API for Scoring Code is published on Maven as [`scoring-code-spark-api`](https://central.sonatype.com/artifact/com.datarobot/scoring-code-spark-api){ target=_blank } and documented in the [API reference](https://javadoc.io/doc/com.datarobot/scoring-code-spark-api_3.0.0/latest/com/datarobot/prediction/spark30/Predictors$.html){ target=_blank }.
For more information, see [Apache Spark API for Scoring Code](sc-apache-spark).
#### DataRobot Notebooks {: #datarobot-notebooks }
Now generally available, DataRobot includes an in-browser editor to create and execute notebooks for data science analysis and modeling. Notebooks display computation results in various formats, including text, images, graphs, plots, tables, and more. You can customize the output display by using open-source plugins. Cells can also contain Markdown rich text for commentary and explanation of the coding workflow. As you develop and edit a notebook, DataRobot stores a history of revisions that you can return to at any time.
DataRobot Notebooks offer a dashboard that hosts notebook creation, upload, and management. Individual notebooks have containerized, built-in environments with commonly used machine learning libraries that you can easily set up in a few clicks. Notebook environments seamlessly integrate with DataRobot's API, allowing a robust coding experience supported by keyboard shortcuts for cell functions, in-line documentation, and saved environment variables for secrets management and automatic authentication.

#### Expanded data slice support and new features {: #expanded-data-slice-support-and-new-features }
[Data slices](sliced-insights) allow you to define filters for categorical features, numeric features, or both. By configuring filters that select a feature, operators, and values, you narrow the data returned, letting you view and compare insights based on segments of a project’s data and better understand how models perform on different subpopulations. As part of the general availability release, several improvements were made:
* Feature Effects now supports slices.
* A [quick-compute](feature-impact#quick-compute) option replaces the sample size modal for setting sample size in **Feature Impact**.
* Manual initiation of slice calculation starts with slice validation and prevents accidental launching of computations.
#### Improvements to XEMP Prediction Explanation calculations {: #improvements-to-xemp-prediction-explanation-calculations }
An additional benefit of the [Pandas library upgrade](may2023-announce#upgrades-to-pandas-libraries) from version 0.23.4 to 1.3.5 in May is an improvement to the way DataRobot calculates [XEMP Prediction Explanations](xemp-pe). Calculation differences introduced by the newer, more accurate version of Pandas result in accuracy improvements in the insight.
### Public Preview {: #public-preview }
#### Document AI brings PDF documents as a data source {: #document-ai-brings-pdf-documents-as-a-data-source }
[Document AI](doc-ai/index) provides a way to build models on raw PDF documents without additional, manually intensive data preparation steps. Before Document AI, data preparation requirements presented a challenging barrier to efficient use of documents as a data source, sometimes making them inaccessible altogether, because information is spread across a large corpus in a variety of inconsistent formats. Not only does Document AI ease the data prep aspect of working with documents, but DataRobot brings its automation to projects that rely on documents as the data source, including comparing models on the Leaderboard, model explainability, and access to a full repository of blueprints.
With two new user-selectable tasks added to the model blueprint, DataRobot can now extract embedded text (with the Document Text Extractor task) or text of scans (with the Tesseract OCR task) and then use PDF text for model building. DataRobot automatically chooses a task type based on the project but allows you the flexibility to modify that task if desired. Document AI works with many project types, including regression, binary and multiclass classification, multilabel, clustering, and anomaly detection, but also provides multimodal support for text, images, numerical, categorical, etc., within a single blueprint.
To help you see and understand the unique nature of a document's text elements, DataRobot introduces the **Document Insights** visualization. It is useful for double-checking which information DataRobot extracted from the document and whether you selected the correct task:

Support of `document` types has been added to several other data and model visualizations as well.
**Required feature flags:** Enable Document Ingest, Enable OCR for Document Ingest
#### GPU support for deep learning {: #gpu-support-for-deep-learning }
Support for deep learning models (Large Language Models, for example) is increasingly important in an expanding number of business use cases. While some of these models can be run on CPUs, others require GPUs to achieve reasonable training times. To efficiently train, host, and predict using these "heavier" deep learning models, DataRobot leverages Nvidia GPUs within the application. When GPU support is enabled, DataRobot detects blueprints that contain certain tasks and potentially uses GPU workers to train them; if the sample size minimum is not met, the blueprint is routed to the CPU queue. Additionally, a heuristic determines which blueprints will train with low runtime on CPU workers.

**Required feature flag:** Enable GPU Workers
Public preview [documentation](gpus).
#### Blueprint repository and Blueprint visualization {: #blueprint-repository-and-blueprint-visualization }
With this deployment, Workbench introduces the [blueprint repository](wb-experiment-add#blueprint-repository)—a library of modeling blueprints. After running Quick Autopilot, you can visit the repository to select blueprints that DataRobot did not run by default. After choosing a feature list and sample size (or training period for time-aware), DataRobot will then build the blueprints and add the resulting model(s) to the Leaderboard and your experiment.

Additionally, the [Blueprint visualization](wb-experiment-evaluate#blueprint) is now available. The Blueprint tab provides a graphical representation of the preprocessing steps (tasks), modeling algorithms, and post-processing steps that go into building a model.

#### Slices in Workbench {: #slices-in-workbench }
[Data slices](sliced-insights), the capability that allows you to configure filters that create subpopulations of project data, is now available in [select Workbench insights](wb-experiment-evaluate#insights). From the **Data slice** dropdown you can select a slice or access the modal for creating new filters.
**Required feature flag:** Slices in Workbench
#### Prefilled application templates {: #prefilled-application-templates }
Previously, when you created a new application, the application opened to a blank template with limited guidance on how to begin building and generating predictions. Now, applications are populated after creation using training data to help highlight, showcase, and collaborate on the output of your models immediately.
**Required feature flag:** Enable Prefill NCA Templates with Training Data
Public preview [documentation](app-prefill).
#### New app experience in Workbench {: #new-app-experience-in-workbench }
Now available for public preview, DataRobot introduces a new, streamlined application experience in Workbench that provides leadership teams, COE teams, business users, data scientists, and more with the unique ability to easily view, explore, and create valuable snapshots of information. This release introduces the following improvements:
- Applications have a new, simplified interface to make the experience more intuitive.
- You can access model insights, including Feature Impact and Feature Effects, from all new Workbench apps.
- Applications created from an experiment in Workbench no longer open outside of Workbench in the application builder.

**Required feature flag:** Enable New No-Code AI Apps Edit Mode
**Recommended feature flag:** Enable Prefill NCA Templates with Training Data
Public preview [documentation](wb-app-edit).
#### Slices for time-aware projects (Classic) {: #slices-for-time-aware-projects-classic }
Now available for public preview, DataRobot brings the creation and application of data slices to time-aware (OTV and time series) projects in DataRobot Classic. Sliced insights provide the option to view a subpopulation of a model's derived data based on feature values. Viewing and comparing insights based on segments of a project’s data helps to understand how models perform on different subpopulations. Use the segment-based accuracy information gleaned from sliced insights, or compare the segments to the "global" slice (all data), to improve training data, create individual models per segment, or augment predictions post-deployment.
**Required feature flag:** Sliced Insights for Time Aware Projects
#### Extend compliance documentation with key values {: #extend-compliance-documentation-with-key-values }
Now available for public preview, you can create key values to reference in compliance documentation templates. Adding a key value reference includes the associated data in the generated template, limiting the manual editing needed to complete the compliance documentation. Key values associated with a model in the Model Registry are key-value pairs containing information about the registered model package:

When you [build custom compliance documentation templates](template-builder), you can include string, numeric, boolean, image, and dataset key values:

Then, when you [generate compliance documentation for a model package](reg-compliance) with a custom template referencing a supported key value, DataRobot inserts the matching values from the associated model package; for example, if the key value has an image attached, that image is inserted.
**Required feature flag:** Enable Extended Compliance Documentation
Public preview [documentation](model-registry-key-values).
#### Tune hyperparameters for custom tasks {: #tune-hyperparameters-for-custom-tasks }
You can now tune hyperparameters for custom tasks. You can provide two values for each hyperparameter: the `name` and `type`. The type can be one of `int`, `float`, `string`, `select`, or `multi`, and all types support a `default` value. See [Model metadata and validation schema](cml-validation) for more details and example configuration of hyperparameters.
Public preview [documentation](cml-hyperparam#configure-hyperparameters-for-custom-tasks).
#### Build Streamlit applications for DataRobot models {: #build-streamlit-applications-for-datarobot-models }
You can now build Streamlit applications using DataRobot models, allowing you to easily incorporate DataRobot insights into your Streamlit dashboard.
For information on what’s included and setup instructions, see the [dr-streamlit GitHub repository](https://github.com/datarobot/dr-streamlit).
### API {: #api }
#### DataRobotX {: #datarobotx }
Now available for public preview, DataRobotX, or DRX, is a collection of DataRobot extensions designed to enhance your data science experience. DRX provides a streamlined experience for common workflows but also offers new, experimental high-level abstractions.
DRX offers unique experimental workflows, including the following:
* Smart downsampling with Pyspark
* Enrich datasets using LLMs
* Feature importance rank ensembling (FIRE)
* Deploy custom models
* Track experiments in MLFlow

Public preview [documentation](https://drx.datarobot.com/){ target=_blank }.
| index |
---
title: Modeling
description: Learn about the modeling process. Covers setting modeling parameters before building, modeling workflow, managing models and projects, and exporting data.
---
# Modeling {: #modeling }
The sections described below provide information to help you easily navigate the ML modeling process.
## Build models {: #build-models }
Topic | Describes...
----- | ------
[Build models](build-basic/index) | Elements of the basic modeling workflow.
[Advanced options](adv-opt/index) | Setting advanced modeling parameters prior to building.
[Manage projects](manage-projects) | Manage models and projects, and export data.
## Model insights (Leaderboard tabs) {: #model-insights-leaderboard-tabs }
Topic | Describes...
----- | ------
[Evaluate tabs](evaluate/index) | View key plots and statistics needed to judge and interpret a model’s effectiveness.
[Understand tabs](understand/index) | Understand what drives a model’s predictions.
[Describe tabs](describe/index) | View model building information and feature details.
[Predict tabs](predictions/index) | Make predictions in DataRobot using the UI or API.
[Compliance tabs](compliance/index) | Compile model development documentation that can be used for regulatory validation.
[Comments tab](catalog-asset#add-comments) | Add comments to assets in the **AI Catalog**.
[Bias and Fairness tabs](bias/index) | Identify if a model is biased and why the model is learning bias from the training data.
[Other model tabs](other/index) | Compare models across a project.
## Specialized workflows {: #specialized-workflows }
Topic | Describes...
----- | ------
[Date/time partitioning](otv) | Build models with time-relevant data (not time series).
[Unsupervised learning](unsupervised/index) | Work with unlabeled or partially labeled data to build anomaly detection or clustering models.
[Composable ML](cml/index) | Build blueprints using built-in DataRobot tasks and custom Python or R code.
[Visual AI](visual-ai/index) | Use image-based datasets.
[Location AI](location-ai/index) | Use geospatial datasets.
## Time series modeling {: #time-series-modeling }
Topic | Describes...
----- | ------
[What is time-based modeling?](whatis-time) | The basic modeling process and a recommended reading path.
[Time series workflow overview](ts-flow-overview) | The workflow for creating a time series project.
[Time series insights](ts-leaderboard) | Visualizations available to help interpret your data and models.
[Time series predictions](ts-predictions) | Making predictions with time series models.
[Multiseries modeling](multiseries) | Modeling with datasets that contain multiple time series.
[Segmented modeling](ts-segmented) | Grouping series into segments, creating multiple projects for each segment, and producing a single Combined Model for the data.
[Nowcasting](nowcasting) | Making predictions for the present and very near future (very short-range forecasting).
[Enable external prediction comparison](cyob) | Comparing model predictions built outside of DataRobot against DataRobot predictions.
[Advanced time series modeling](ts-adv-modeling/index) | Modifying partitions, setting advanced options, and understanding window settings.
[Time series modeling data](ts-modeling-data/index) | Working with the time series modeling dataset:<br /><ul><li>Creating the modeling dataset</li><li>Using the data prep tool</li><li>Restoring pruned features</li></ul>
[Time series reference](ts-reference/index) | How to customize time series projects as well as a variety of deep-dive reference material for DataRobot time series modeling.
## Modeling reference {: #modeling-reference }
Topic | Describes...
----- | ------
[Data and sharing](data-sharing/index) | Dataset requirements, sharing assets, and permissions.
[Modeling details](reference/model-detail/index) | The Leaderboard and the processes that drive model building, including partitioning and feature derivation.
[Eureqa advanced tuning](reference/eureqa-ref/index) | Tune Eureqa models by modifying building blocks, customizing the target expression, and modifying other model parameters.
[Modeling FAQ](general-modeling-faq) | A list of frequently asked modeling questions, including building models and model insights, with brief answers and links to more complete documentation. | index |
---
title: Modeling FAQ
dataset_name: N/A
description: Provides a list of frequently asked questions, and brief answers about general modeling, building models, and model insights in DataRobot. Answers link to more complete documentation.
domain: core modeling
expiration_date: 10-10-2024
owner: jen@datarobot.com
url: docs.datarobot.com/docs/tutorials/create-ai-models/general-modeling-faq.html
---
# Modeling FAQ {: #modeling-faq }
The following addresses questions and answers about modeling in general and then more specifically about building models and using model insights.
## General modeling {: #general-modeling }
??? faq "What types of models does DataRobot build?"
DataRobot supports Tree-based models, Deep Learning models, Support Vector Machines (SVM), Generalized Linear Models, Anomaly Detection models, Text Mining models, and more. See the list of [specific model types](model-ref#modeling-modes) for more information.
??? faq "What are modeling workers?"
DataRobot uses different types of workers for different types of jobs; modeling workers are for training models and creating insights. You can adjust these workers in the [Worker Queue](worker-queue), which can speed model building and allow you to allocate across projects.
??? faq "Why can't I add more workers?"
You may have reached your maximum, if in a shared pool your coworkers may be using them, or they may be in use with another project. See the [troubleshooting tips](worker-queue#troubleshoot-workers) for more information.
??? faq "What is the difference between a model and a blueprint?"
A *modeling algorithm* fits a model to data, which is just one component of a blueprint. A *blueprint* represents the high-level end-to-end procedure for fitting the model, including any preprocessing steps, modeling, and post-processing steps. Read about accessing the [graphical representation of a blueprint](blueprints).
??? faq "What is smart downsampling?"
[Smart downsampling](smart-ds) is a technique to reduce the total size of the dataset by reducing the size of the majority class, enabling you to build models faster without sacrificing accuracy.
??? faq "What are EDA1 and EDA2?"
[Exploratory data analysis](eda-explained), or EDA, is DataRobot's approach to analyzing datasets and summarizing their main characteristics. It consists of two phases. EDA1 describes the state of your project after data finishes uploading, providing summary statistics based on up to 500MB of your data. In EDA2, DataRobot does additional calculations on the target column using the entire dataset (excluding holdout) and recalculates summary statistics and ACE scores.
??? faq "What does a Leaderboard asterisk mean?"
An [asterisk on the Leaderboard](leaderboard-ref#asterisked-scores) indicates that the scores are computed from [stacked predictions](data-partitioning#what-are-stacked-predictions) on the model's training data.
??? faq "What does the Leaderboard snowflake icon mean?"
The snowflake next to a model indicates that the model is the result of a [frozen run](frozen-run). In other words, DataRobot “froze” parameter settings from a model’s early, small sample size-based run. Because parameter settings based on smaller samples tend to also perform well on larger samples of the same data, DataRobot can piggyback on its early experimentation.
??? faq "What is cross-validation?"
Cross-validation is a [partitioning method](data-partitioning#k-fold-cross-validation-cv) for evaluating model performance. It is run automatically for datasets with fewer than 50,000 rows and can be started manually from the Leaderboard for larger datasets.
??? faq "What do the modes and sample sizes mean?"
There are several [modeling mode options](model-data#set-the-modeling-mode); the selected mode determines the sample size(s) of the run. Autopilot is DataRobot's "survival of the fittest" modeling mode that automatically selects the best predictive models for the specified target feature and runs them at ever-increasing sample sizes.
??? faq "Why are the sample sizes shown in the repository not the standard Autopilot sizes?"
The sample size available when adding models from the [Repository](repository) differs depending on the size of the dataset. It defaults to the last Autopilot stage, either 64% or 500MB of data, whichever is smaller. In other words, it is the maximal [training size](repository#notes-on-sample-size) without stepping into validation.
??? faq "Are there modeling guardrails?"
DataRobot provides guardrails to help ensure ML best practices and instill confidence in DataRobot models. Some examples include a substantive data [quality assessment](data-quality), a feature list with [target leakage features removed](feature-lists#automatically-created-feature-lists), and automated [data drift tracking](data-drift).
??? faq "How are missing values handled?"
DataRobot handles missing values differently, depending on the model and/or value type. There are [certain patterns](model-ref#missing-values) recognized and handled as missing, as well as [disguised missing value](data-quality#disguised-missing-values) handling.
## Build models {: #build-models }
??? faq "Does DataRobot support feature transformations?"
In AutoML, DataRobot performs [automatic feature transformations](auto-transform) for features recognized as type “date,” adding these new features to the modeling dataset. Additionally, you can create [manual transformations](feature-transforms) and change the variable type. For image datasets, the [train-time image augmentation](ttia-lists) process creates new training images. The [time series feature derivation](ts-create-data#create-the-modeling-dataset) process creates a new modeling dataset. [Feature Discovery](fd-overview) discovers and generates new features from multiple datasets to consolidate datasets. Or, use a [Spark SQL query](spark) from the AI Catalog to prepare a new dataset from a single dataset or blend two or more datasets. Transformed features are marked with an info icon on the **Data** page.
??? faq "Can I choose which optimization metric to use?"
The optimization metric defines how to score models. DataRobot selects a metric best-suited for your data from a [comprehensive set of choices](opt-metric), but also computes alternative metrics. After [EDA1](eda-explained#eda1) completes, you can [change the selection](additional#change-the-optimization-metric) from the **Advanced Options > Additional** tab. After [EDA2](eda-explained#eda2) completes, you can redisplay the Leaderboard listing based on a different computed metric.
??? faq "Can I change the project type?"
Once you enter a target feature, DataRobot automatically analyzes the training dataset, determines the project type (classification if the target has categories or regression if the target is numerical), and displays the distribution of the target feature. If the project is classified as regression and [eligible for multiclass conversion](multiclass#change-regression-projects-to-multiclass), you can change the project to a classification project, and DataRobot will interpret values as classes instead of continuous values.
??? faq "How do I control how to group or partition my data for model training?"
By default, DataRobot splits your data into a 20% holdout (test) partition and an 80% cross-validation (training and validation) partition, which is divided into five sub-partitions. You can change these values after loading data and selecting a target from the **Advanced Options > Partitioning** tab. From there, you can [set the method](partitioning), sizes for data partitions, number of partitions for cross-validation, and the method by which those partitions are created.
??? faq "What do the green 'importance' bars represent on the Data tab?"
The [Importance green bars](model-ref#importance-score), based on ["Alternating Conditional Expectations"](https://www.jds-online.com/files/JDS-156.pdf) (ACE) scores, show the degree to which a feature is correlated with the target. Importance has two components—Value and Normalized Value—and is calculated independently for each feature in the dataset.
??? faq "Does DataRobot handle natural language processing (NLP)?"
When text fields are detected in your data, DataRobot automatically detects the language and applies appropriate preprocessing. This may include advanced tokenization, data cleaning (stop word removal, stemming, etc.), and vectorization methods. DataRobot supports n-gram matrix (bag-of-words, bag-of-characters) analysis as well as word embedding techniques such as Word2Vec and fastText with both CBOW and Skip-Gram learning methods. Additional capabilities include Naive Bayes SVM and cosine similarity analysis. For visualization, there are per-class word clouds for text analysis. You can see the applied language preprocessing steps in the [model blueprint](blueprints).
??? faq "How do I restart a project with the same data?"
If your data is stored in the AI Catalog, you can [create and recreate](catalog#create-a-project) projects from that dataset. To recreate a project—using either just the data or the data and the settings (i.e., to duplicate the project)—use the [**Actions** menu](manage-projects#duplicate-a-project) in the project control center.
??? faq "Do I have to use the UI or can I interact programmatically?"
DataRobot provides both a UI and a REST API. The UI and REST API provide nearly matching functionality. Additionally, [Python and R clients](https://docs.datarobot.com/en/api/) provide a subset of what you can do with the full API.
??? faq "Does DataRobot provide partner integrations?"
DataRobot offers an [Alteryx](alteryx) add-in and a [Tableau](tableau) extension. A [Snowflake integration](fd-overview#snowflake-integration) allows joint users to execute Feature Discovery projects in DataRobot while performing computations in Snowflake for minimized data movement.
=== "SaaS"
??? faq "What is the difference between prediction and modeling servers?"
Modeling servers power all the creation and model analysis done from the UI and from the R and Python clients. Prediction servers are used solely for making predictions and handling prediction requests on deployed models.
=== "Self-Managed"
??? faq "What is the difference between prediction and modeling servers?"
Modeling servers power all the creation and model analysis done from the UI and from the R and Python clients. Modeling worker resources are reported in the [Resource Monitor](resource-monitor). Prediction servers are used solely for making predictions and handling prediction requests on deployed models.
## Model insights {: #model-insights}
??? faq "How do I directly compare model performance?"
There are many ways to compare model performance. Some starting points:
* Look at the Leaderboard to compare [Validation, Cross-Validation, and/or Holdout scores](leaderboard-ref#columns-and-tools).
* Use [Learning Curves](learn-curve) to help determine whether it is worthwhile to increase the size of your dataset for a given model. The results help identify which models may benefit from being trained into the Validation or Holdout partition.
* [Speed vs Accuracy](speed) compares multiple models by measuring the tradeoff between runtime and predictive accuracy. If prediction latency is important for model deployment, this comparison helps you find the most effective model.
* [Model Comparison](model-compare) lets you select a pair of models and compare a variety of insights (Lift Charts, Profit Curve, ROC Curves).
??? faq "How does DataRobot choose the recommended model?"
As part of the Autopilot modeling process, DataRobot identifies the most accurate non-blender model and [prepares it for deployment](model-rec-process). Although Autopilot recommends and prepares a single model for deployment, you can initiate the Autopilot recommendation and deployment preparation stages for any Leaderboard model.
??? faq "Why not always use the most accurate model?"
There could be several reasons, but the two most common are:
* Prediction latency—This means the speed at which predictions are made. Some business applications of a model will require very fast predictions on new data. The most accurate models are often blender models, which are usually slower at making predictions.
* Organizational readiness—Some organizations favor linear models and/or decision trees for perceived interpretability reasons. Additionally, there may be compliance reasons for favoring certain types of models over others.
??? faq "Why doesn’t the recommended model have text insights?"
One common reason that text models are not built is because DataRobot removes single-character "words" when model building, a common practice in text mining. If this causes a problem, look at your [model log](log) and consider the [documented workarounds](analyze-insights#text-based-insights).
??? faq "What is model lift?"
Lift is the ratio of points correctly classified as positive in a model versus the 45-degree line (or baseline model) represented in the [Cumulative Gain](cumulative-charts#cumulative-gain-chart) plot. The cumulative charts show, for a given % of top predictions, how much more effective the selected model is at identifying the positive class versus the baseline model.
??? faq "What is the ROC Curve chart?"
The [ROC Curve](roc-curve-tab/index) tab provides extensive tools for exploring classification, performance, and statistics related to a selected model at any point on the probability scale. Documentation discusses prediction thresholds, the Matthews Correlation Coefficient (MCC), as well as interpreting the ROC Curve, Cumulative Gain, and Profit Curve charts.
??? faq "Can I tune model hyperparameters?"
You can tune model hyperparameters in the [Advanced Tuning](adv-tuning) tab for a particular model. From here, you can manually set model parameters, overriding the DataRobot selections. However, consider whether it is instead better to spend “tuning” time doing feature engineering, for example using [Feature Discovery](fd-overview) for automated feature engineering.
??? faq "How is Tree-Based Variable Importance different from Feature Impact?"
[Feature Impact](feature-impact) shows, at a high level, which features are driving model decisions. It is computed by permuting the rows of a given feature while leaving the rows of the other features unchanged, and measuring the impact of the permutation on the model's predictive performance. [Tree-based Variable Importance](analyze-insights#tree-based-variable-importance) shows how much gain each feature adds to a model--the relative importance of the key features. It is only available for tree/forest models (for example, Gradient Boosted Trees Classifier or Random Forest).
??? faq "How can I find models that produced coefficients?"
Any model that produces coefficients can be identified on the Leaderboard with a Beta tag. Those models allow you to [export the coefficients](coefficients#generate-output) and transformation parameters necessary to verify steps and make predictions outside of DataRobot. When a blueprint has coefficients but is not marked with the Beta tag, it indicates that the coefficients are not exact (e.g., they may be rounded).
??? faq "What is the difference between 'text mining' and 'word clouds'?"
The [Text Mining](analyze-insights#text-mining-insights) and [Word Cloud](analyze-insights#word-cloud-insights) insights demonstrate text importance in different formats. *Text Mining* shows text coefficient effect (numeric value) and direction (positive=red or negative=blue) in a bar graph format. The Word Cloud shows the normalized version of those coefficients in a cloud format using text size and color.
??? faq "Why are there variables in some insights that are not in the dataset?"
DataRobot performs a variety of data preprocessing, such as [automatic transformations](auto-transform) and deriving features (for example, ratios and differences). When building models, it uses all useful features, which includes both original and derived variables.
??? faq "Why does Feature Effects show missing partial dependence values when my dataset has none?"
Partial dependence (PD) is reported as part of Feature Effects. It shows how dependent the prediction value is on different values of the selected feature. Prediction values are affected by all features, though, not just the selected feature, so PD must measure how predictions change given different values of the other features as well. When computing, DataRobot adds “missing” as one of the values [calculated](feature-effects#partial-dependence-calculations) for the selected feature, to show how the absence of a value will affect the prediction. The end result is the average effect of each value on the prediction, given other values, and following the distribution of the training data.
??? faq "How do I determine how long it will take to calculate Feature Effects?"
It can take a long time to compute Feature Effects, particularly if blenders are involved. As a rough estimate of the runtime, use the [Model Info](model-info) tab to check the time it takes, in seconds, for your model to score 1000 rows. Multiply this number by 0.5-1.0 hours. Note that the actual runtime may be longer if you don’t assign enough workers to work on all Feature Effects sub-jobs simultaneously.
??? faq "Why is a feature’s impact different depending on the model?"
Autopilot builds a wide selection of models to capture varying degrees of underlying complexity and each model has its strengths and weaknesses in addressing that complexity. [A feature’s impact](feature-impact) shouldn't be drastically different, however, so while the ordering of features will change, the overall inference is often not impacted. Examples:
* A model that is not capable of detecting nonlinear relationships or interactions will use the variables one way, while a model that can detect these relationships will use the variables another way. The result is different feature impacts from different models.
* If two variables are highly correlated, a regularized linear model will tend to use only one of them, while a tree-based method will tend to use both, and at different splits. With the linear model, one of these variables will show up high in feature importance and the other will be low, while with the tree-based model, both will be closer to the middle.
| general-modeling-faq |
---
title: Workbench
description: Workbench provides an organizational hierarchy that, from a Use Case as the top-level asset, supports experimentation and sharing.
---
# Workbench {: #workbench }
The components and workflow that comprise DataRobot's Workbench interface are summarized in the following sections:
Topic | Describes...
---------|-----------------
[Getting started](wb-getstarted/index) | What is Workbench? Learn about the interface [architecture](wb-overview#architecture) and see a [glossary of terms](wb-glossary).
[Use Cases](wb-usecase/index) | Learn about Workbench Use Cases.
[Data preparation](wb-dataprep/index) | Learn how to connect, import, and wrangle your data.
[Experiments](wb-experiment/index) | Learn how to create, compare, and manage experiments.
[Predictions](wb-predict/index) | Learn how to make predictions in Workbench.
[No-Code AI Apps](wb-apps/index) | Learn how to configure AI-powered applications using a no-code interface.
[Notebooks](wb-notebook/index) | Learn how to access the in-browser editor to create and execute code for data science analysis and modeling.
[Capability matrix](wb-capability-matrix) | An evolving comparison of capabilities available in DataRobot Classic and Workbench.
| index |
---
title: Notebook reference
description: Answers questions and provides tips for working with DataRobot Notebooks in DataRobot's Workbench.
section_name: Notebooks
---
{% include 'includes/notebooks/notebook-ref.md' %}
| dr-notebook-ref |
---
title: DataRobot Notebooks
description: Read documentation for DataRobot's notebook platform.
section_name: Notebooks
---
# Notebooks {: #notebooks }
{% include 'includes/notebooks/nb-index-main.md' %}
## Browser compatibility {: #browser-compatibility }
{% include 'includes/browser-compatibility.md' %}
| index |
---
title: DataRobot API resources
description: Use the REST, Python, and R APIs as a programmatic alternative to the UI for creating and managing DataRobot projects.
---
# DataRobot API resources {: #api-documentation-home }
DataRobot supports REST, Python, and R APIs as a programmatic alternative to the UI for creating and managing DataRobot projects. The APIs allow you to automate processes, iterate more quickly, and use DataRobot with scripted control, and they provide an intuitive modeling and prediction interface. You can use the API with DataRobot-supported clients in either R or Python, or with your own custom code. The clients are supported in Windows, UNIX, and OS X environments. Additionally, you can generate predictions with the prediction and batch prediction APIs, and build DataRobot blueprints in the blueprint workshop.
Review the sections of the API documentation below to find the best resources for your needs:
- New users can review the [API quickstart](#api-quickstart) guide to configure their environments and get started with DataRobot's APIs.
- The [API user guide](#api-user-guide) details examples of machine learning workflows and use cases that address common data science problems.
- The [API reference](#reference-documentation) provides documentation on the DataRobot REST API, Python client, R client, blueprint workshop, Prediction API, and Batch Prediction API.
## API quickstart {: #api-quickstart }
Review the [API quickstart guide](api-quickstart/index) to configure your environment to use the API and try a sample problem with examples in Python, R, and cURL.
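To illustrate the end-to-end flow, the following is a minimal sketch using the Python client. The endpoint, API token, dataset path, project name, and target column are placeholders, and the exact methods and attributes available depend on your client version; see the quickstart and client documentation for the authoritative workflow.

```python
import datarobot as dr

# Placeholders: substitute your own endpoint, API token, dataset, and target.
dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

# Upload a dataset and create a project.
project = dr.Project.create("training_data.csv", project_name="Example project")

# Start Autopilot on the chosen target, then wait for it to finish.
project.set_target(target="target_column")
project.wait_for_autopilot()

# List the Leaderboard models with their validation scores.
for model in project.get_models():
    print(model.model_type, model.metrics[project.metric]["validation"])
```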
## API user guide {: #api-user-guide }
Browse [user guide](api/guide/index) topics to find complete examples of common data science and machine learning workflows. The API user guide includes overviews, Jupyter notebooks, and task-based tutorials.
Topic | Describes...
----- | ------
[Modeling workflow overview](modeling-workflow) | Learn how to use DataRobot's clients, both Python and R, to train and experiment with models. Notebook downloads: [Python](guide/python-modeling.ipynb), [R](guide/r-modeling.ipynb)
[Common use cases](guide/common-case/index) | Review Jupyter notebooks that outline common use cases and machine learning workflows using DataRobot's Python client.
[Python code examples](python/index) | Browse Python code examples for common data science workflows.
[R code examples](r-nb/index) | Review R code examples that outline common data science workflows.
[REST API code examples](restapi/index) | Review REST API code examples that outline common data science workflows.
<!--private start-->
## Reference documentation {: #reference-documentation }
DataRobot offers [reference documentation](api/reference/index) for the following programmatic tools.
Topic | Describes...
-------- | -----------
[DataRobot REST API](reference/public-api/index) | The DataRobot REST API provides a programmatic alternative to the UI for creating and managing DataRobot projects. You can also access the [legacy REST API docs](/apidocs/).
[Open API specification](/api/v2/openapi.yaml) | Reference the OpenAPI specification for the DataRobot REST API. It helps automate client generation for languages for which DataRobot doesn't offer a client, and assists with designing, implementing, and testing integrations with DataRobot's REST API using a variety of automated OpenAPI-compatible tools.
[Python client](https://datarobot-public-api-client.readthedocs-hosted.com/){ target=_blank } | Installation, configuration, and reference documentation for working with the Python client library.<br /> <ul><li> <a href="https://pypi.org/project/datarobot/" target="_blank">Access the client package.</a> </li> <li> <a href="https://datarobot-public-api-client.readthedocs-hosted.com/" target="_blank">Read client documentation.</a> </li> </ul>
[R client](https://cran.r-project.org/package=datarobot){ target=_blank } | Installation, configuration, and reference documentation for working with the R client library.<br /> <ul><li> <a href="https://cran.r-project.org/package=datarobot" target="_blank">Access the R package</a> </li> <li> <a href="https://cran.r-project.org/web/packages/datarobot/datarobot.pdf" target="_blank">Read R client documentation</a> </li> </ul>
[Blueprint workshop](https://blueprint-workshop.datarobot.com/index.html){ target=_blank } | Construct and modify DataRobot blueprints and their tasks using a programmatic interface.
[Prediction API](dr-predapi) | Generate predictions with a deployment by submitting JSON or CSV input data via a POST request.
[Batch Prediction API](reference/batch-prediction-api/index) | Score large datasets with flexible options for intake and output using the prediction servers you have deployed via the Batch Prediction API.
<!--private end-->
<!--public start-->
## Reference documentation {: #reference-documentation }
DataRobot offers [reference documentation](api/reference/index) for the following programmatic tools.
Topic | Describes...
-------- | -----------
[DataRobot REST API](reference/public-api/index) | The DataRobot REST API provides a programmatic alternative to the UI for creating and managing DataRobot projects.
[Python client](https://datarobot-public-api-client.readthedocs-hosted.com/){ target=_blank } | Installation, configuration, and reference documentation for working with the Python client library.<br /> <ul><li> <a href="https://pypi.org/project/datarobot/" target="_blank">Access the Python package.</a> </li> <li> <a href="https://datarobot-public-api-client.readthedocs-hosted.com/" target="_blank">Read client documentation.</a> </li> </ul>
[R client](https://cran.r-project.org/package=datarobot){ target=_blank } | Installation, configuration, and reference documentation for working with the R client library.<br /> <ul><li> <a href="https://cran.r-project.org/package=datarobot" target="_blank">Access the R package</a> </li> <li> <a href="https://cran.r-project.org/web/packages/datarobot/datarobot.pdf" target="_blank">Read R client documentation</a> </li> </ul>
[Blueprint workshop](https://blueprint-workshop.datarobot.com/index.html){ target=_blank } | <a href="https://blueprint-workshop.datarobot.com/index.html" target="_blank">Construct and modify</a> DataRobot blueprints and their tasks using a programmatic interface.
[Prediction API](dr-predapi) | Generate predictions with a deployment by submitting JSON or CSV input data via a POST request.
[Batch Prediction API](reference/batch-prediction-api/index) | Score large datasets with flexible options for intake and output using the prediction servers you have deployed via the Batch Prediction API.
<!--public end-->
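Because the Prediction API listed above accepts plain HTTP POST requests, any HTTP client can submit scoring data. The following is a minimal sketch using Python's `requests` library; the prediction server URL, deployment ID, credentials, and header requirements (such as the `DataRobot-Key` header used on managed AI Platform prediction servers) are placeholders that vary by deployment, so confirm them against the Prediction API documentation for your deployment.

```python
import requests

# Placeholders: substitute the values for your own deployment.
PREDICTION_SERVER = "https://example-prediction-server.datarobot.com"
DEPLOYMENT_ID = "YOUR_DEPLOYMENT_ID"
API_TOKEN = "YOUR_API_TOKEN"
DATAROBOT_KEY = "YOUR_DATAROBOT_KEY"  # typically required on managed AI Platform prediction servers

url = f"{PREDICTION_SERVER}/predApi/v1.0/deployments/{DEPLOYMENT_ID}/predictions"
headers = {
    "Content-Type": "text/csv; charset=UTF-8",
    "Authorization": f"Bearer {API_TOKEN}",
    "DataRobot-Key": DATAROBOT_KEY,
}

# Submit CSV scoring data via POST and print the JSON response containing predictions.
with open("scoring_data.csv", "rb") as scoring_file:
    response = requests.post(url, headers=headers, data=scoring_file)
response.raise_for_status()
print(response.json())
```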
### AI accelerators {: #ai-accelerators }
[AI Accelerators](accelerators/index) are designed to help speed up model experimentation, development, and production using the DataRobot API. They codify and package data science expertise in building and delivering successful machine learning projects into repeatable, code-first workflows and modular building blocks. AI Accelerators are ready out of the box, work with the notebook of your choice, and can be combined to suit your needs.
## Self-Managed AI Platform API resources {: #self-managed-ai-platform-api-resources }
To access current and past Python and R clients and documentation, use the following links:
* Python
* [Current client](https://pypi.org/project/datarobot/) from PyPI.
* [Archived clients](https://pypi.org/project/datarobot/#history){ target=_blank } from PyPI.
* [Documentation](https://datarobot-public-api-client.readthedocs-hosted.com/){ target=_blank } from ReadTheDocs; use the version selector in the bottom left of the page to see past versions.
* R
* Current client and documentation from [CRAN](https://cran.r-project.org/package=datarobot){ target=_blank }.
* [Archived client and docs](https://cran.r-project.org/src/contrib/Archive/datarobot/) (docs are included in the client itself).
The table below outlines which versions of DataRobot's SDKs correspond to DataRobot's Self-Managed AI Platform versions.
| Self-Managed AI Platform version | Python SDK version | R SDK version |
|----------------------------- | ------------------ | ------------- |
v9.0 | v3.1 | v2.29 (Public Preview) |
v8.0 | v2.28 | v2.18.2 |
v7.3 | v2.27.3 | v2.18.2 |
v7.2 | v2.26.0 | v2.18.2 |
v7.1 | v2.25.1 | v2.18.2 |
v7.0 | v2.24.0 | v2.18.2 |
v6.3 | v2.23.0 | v2.17.1 |
v6.2 | v2.22.1 | v2.17.1 |
v6.1 | v2.21.5 | v2.17.1 |
v6.0 | v2.20.2 | v2.17.1 |
v5.3 | v2.19.0 | v2.17.1 |
v5.2 | v2.18.0 | v2.17.1 |
v5.1 | v2.17.0 | v2.17.1 |
v5.0 | v2.15.1 | v2.15.0 |
v4.5 | v2.14.2 | v2.14.2 |
v4.4 | v2.13.3 | v2.13.1 |
v4.3 | v2.11.2 | v2.11.0 |
v4.2 | v2.9.3 | v2.9.0 |
v4.0 | v2.8.3 | v2.8.0 |
v3.1 | v2.7.3 | v2.7.1 |
v3.0 | v2.6.2 | v2.6.0 |
v2.9 | v2.4.3 | v2.4.0 |
v2.8 | v2.0.37 | v2.0.30 |
!!! note
Both the backend and clients use versioning in the format Major.Minor.Patch (e.g., v2.3.1), but there is no relationship between the patch version of the backend and the patch version of the clients. There is a requirement, however, that the backend version has a major.minor version equal to or greater than the client version. For example, a v2.2 client can "talk" to either a v2.2 backend or a v2.4 backend, but cannot be used with a v2.0 backend.
### Install commands {: #install-commands }
Use the tabs below to view the install commands for Python and R. The commands are grouped by major version (v5.x, v4.x, etc.).
#### v8.0 {: #v80 }
Python: `pip install "datarobot>=2.28,<2.29"`
R:
```
mkdir -p ~/datarobot_2.18.2 && tar -xvzf ~/Downloads/datarobot_2.18.2.tar.gz -C ~/datarobot_2.18.2
install.packages('devtools') # (If you don't already have devtools on your system.)
devtools::install('~/datarobot_2.18.2/datarobot')
```
#### v7.x {: #v7x }
=== "v7.3"
Python: `pip install "datarobot>=2.27.4,<2.28"`
R:
```
mkdir -p ~/datarobot_2.18.2 && tar -xvzf ~/Downloads/datarobot_2.18.2.tar.gz -C ~/datarobot_2.18.2
install.packages('devtools') # (If you don't already have devtools on your system.)
devtools::install('~/datarobot_2.18.2/datarobot')
```
=== "v7.2"
Python: `pip install "datarobot>=2.26.0,<2.27"`
R:
```
mkdir -p ~/datarobot_2.18.2 && tar -xvzf ~/Downloads/datarobot_2.18.2.tar.gz -C ~/datarobot_2.18.2
install.packages('devtools') # (If you don't already have devtools on your system.)
devtools::install('~/datarobot_2.18.2/datarobot')
```
=== "v7.1"
Python: `pip install "datarobot>=2.25.1,<2.26"`
R:
```
mkdir -p ~/datarobot_2.18.2 && tar -xvzf ~/Downloads/datarobot_2.18.2.tar.gz -C ~/datarobot_2.18.2
install.packages('devtools') # (If you don't already have devtools on your system.)
devtools::install('~/datarobot_2.18.2/datarobot')
```
=== "v7.0"
Python: `pip install "datarobot>=2.24.0,<2.25.1"`
R:
```
mkdir -p ~/datarobot_2.18.2 && tar -xvzf ~/Downloads/datarobot_2.18.2.tar.gz -C ~/datarobot_2.18.2
install.packages('devtools') # (If you don't already have devtools on your system.)
devtools::install('~/datarobot_2.18.2/datarobot')
```
#### v6.x {: #v6x }
=== "v6.3"
Python: `pip install "datarobot>=2.23,<2.24"`
R:
```
mkdir -p ~/datarobot_2.17.1 && tar -xvzf ~/Downloads/datarobot_2.17.1.tar.gz -C ~/datarobot_2.17.1
install.packages('devtools') # (If you don't already have devtools on your system.)
devtools::install('~/datarobot_2.17.1/datarobot')
```
=== "v6.2"
Python: `pip install "datarobot>=2.22.1,<2.23"`
R:
```
mkdir -p ~/datarobot_2.17.1 && tar -xvzf ~/Downloads/datarobot_2.17.1.tar.gz -C ~/datarobot_2.17.1
install.packages('devtools') # (If you don't already have devtools on your system.)
devtools::install('~/datarobot_2.17.1/datarobot')
```
=== "v6.1"
Python: `pip install "datarobot>=2.21.5,<2.22.1"`
R:
```
mkdir -p ~/datarobot_2.17.1 && tar -xvzf ~/Downloads/datarobot_2.17.1.tar.gz -C ~/datarobot_2.17.1
install.packages('devtools') # (If you don't already have devtools on your system.)
devtools::install('~/datarobot_2.17.1/datarobot')
```
=== "v6.0"
Python: `pip install "datarobot>=2.20.2,<2.21.5"`
R:
```
mkdir -p ~/datarobot_2.17.1 && tar -xvzf ~/Downloads/datarobot_2.17.1.tar.gz -C ~/datarobot_2.17.1
install.packages('devtools') # (If you don't already have devtools on your system.)
devtools::install('~/datarobot_2.17.1/datarobot')
```
#### v5.x {: #v5x }
=== "v5.3"
Python: `pip install "datarobot>=2.19.0,<2.20"`
R:
```
mkdir -p ~/datarobot_2.17.1 && tar -xvzf ~/Downloads/datarobot_2.17.1.tar.gz -C ~/datarobot_2.17.1
install.packages('devtools') # (If you don't already have devtools on your system.)
devtools::install('~/datarobot_2.17.1/datarobot')
```
=== "v5.2"
Python: `pip install "datarobot>=2.18,<2.19"`
R:
```
mkdir -p ~/datarobot_2.17.1 && tar -xvzf ~/Downloads/datarobot_2.17.1.tar.gz -C ~/datarobot_2.17.1
install.packages('devtools') # (If you don't already have devtools on your system.)
devtools::install('~/datarobot_2.17.1/datarobot')
```
=== "v5.1"
Python: `pip install "datarobot>=2.17,<2.18"`
R:
```
mkdir -p ~/datarobot_2.17.1 && tar -xvzf ~/Downloads/datarobot_2.17.1.tar.gz -C ~/datarobot_2.17.1
install.packages('devtools') # (If you don't already have devtools on your system.)
devtools::install('~/datarobot_2.17.1/datarobot')
```
=== "v5.0"
Python: `pip install "datarobot>=2.15,<2.16"`
R:
```
mkdir -p ~/datarobot_2.15.0 && tar -xvzf ~/Downloads/datarobot_2.15.0.tar.gz -C ~/datarobot_2.15.0
install.packages('devtools') # (If you don't already have devtools on your system.)
devtools::install('~/datarobot_2.15.0/datarobot')
```
#### v4.x {: #v4x }
=== "v4.5"
Python: `pip install "datarobot>=2.14,<2.15"`
R:
```
mkdir -p ~/datarobot_2.14.0 && tar -xvzf ~/Downloads/datarobot_2.14.0.tar.gz -C ~/datarobot_2.14.0
install.packages('devtools') # (If you don't already have devtools on your system.)
devtools::install('~/datarobot_2.14.0/datarobot')
```
=== "v4.4"
Python: `pip install "datarobot>=2.13,<2.14"`
R:
```
mkdir -p ~/datarobot_2.13.0 && tar -xvzf ~/Downloads/datarobot_2.13.0.tar.gz -C ~/datarobot_2.13.0
install.packages('devtools') # (If you don't already have devtools on your system.)
devtools::install('~/datarobot_2.13.0/datarobot')
```
=== "v4.3.1"
Python: `pip install "datarobot>=2.12,<2.13"`
R:
```
mkdir -p ~/datarobot_2.12.1 && tar -xvzf ~/Downloads/datarobot_2.12.1.tar.gz -C ~/datarobot_2.12.1
install.packages('devtools') # (If you don't already have devtools on your system.)
devtools::install('~/datarobot_2.12.1/datarobot')
```
=== "v4.3"
Python: `pip install "datarobot>=2.11,<2.12"`
R:
```
mkdir -p ~/datarobot_2.11.0 && tar -xvzf ~/Downloads/datarobot_2.11.0.tar.gz -C ~/datarobot_2.11.0
install.packages('devtools') # (If you don't already have devtools on your system.)
devtools::install('~/datarobot_2.11.0/datarobot')
```
=== "v4.2"
Python: `pip install "datarobot>=2.9,<2.10"`
R:
```
mkdir -p ~/datarobot_2.9.0 && tar -xvzf ~/Downloads/datarobot_2.9.0.tar.gz -C ~/datarobot_2.9.0
install.packages('devtools') # (If you don't already have devtools on your system.)
devtools::install('~/datarobot_2.9.0/datarobot')
```
=== "v4.0"
Python: `pip install "datarobot>=2.8,<2.9"`
R:
```
mkdir -p ~/datarobot_2.8.0 && tar -xvzf ~/Downloads/datarobot_2.8.0.tar.gz -C ~/datarobot_2.8.0
install.packages('devtools') # (If you don't already have devtools on your system.)
devtools::install('~/datarobot_2.8.0/datarobot')
```
#### v3.x {: #v3x }
=== "v3.1"
Python: `pip install "datarobot>=2.7,<2.8"`
R:
```
mkdir -p ~/datarobot_2.7.0 && tar -xvzf ~/Downloads/datarobot_2.7.0.tar.gz -C ~/datarobot_2.7.0
install.packages('devtools') # (If you don't already have devtools on your system.)
devtools::install('~/datarobot_2.7.0/datarobot')
```
=== "v3.0"
Python: `pip install "datarobot>=2.6,<2.7"`
R:
```
mkdir -p ~/datarobot_2.6.0 && tar -xvzf ~/Downloads/datarobot_2.6.0.tar.gz -C ~/datarobot_2.6.0
install.packages('devtools') # (If you don't already have devtools on your system.)
devtools::install('~/datarobot_2.6.0/datarobot')
```
#### v2.x {: #v2x }
=== "v2.9"
Python: `pip install "datarobot>=2.4,<2.5"`
R:
```
mkdir -p ~/datarobot_2.4.0 && tar -xvzf ~/Downloads/datarobot_2.4.0.tar.gz -C ~/datarobot_2.4.0
install.packages('devtools') # (If you don't already have devtools on your system.)
devtools::install('~/datarobot_2.4.0/datarobot')
```
=== "v2.8"
Python: `pip install "datarobot>=2.0,<2.1"`
R: `install.packages("datarobot", type="source")`
| index |
---
title: Dataset requirements
description: Detailed dataset requirements for file size and format, rows, columns, encodings and characters sets, column length and name conversion, and more.
---
# Dataset requirements {: #dataset-requirements }
This section provides information on dataset requirements:
* [General requirements](#general-requirements)
* [Ensuring acceptable file sizes](#ensure-acceptable-file-import-size)
* [AutoML file import sizes](#automl-file-import-sizes)
* [Time series (AutoTS) file import sizes](#time-series-file-import-sizes)
* [Feature Discovery file import sizes](#feature-discovery-file-import-sizes)
* [Pipeline data requirements](#pipeline-data-requirements)
* [File formats](#file-formats)
* [Encodings and character sets](#encodings-and-character-sets)
* [Special column detection](#special-column-detection)
* [Length](#length) and [name conversion](#column-name-conversions)
* [File download sizes](#file-download-sizes)
See the associated [considerations](data/index#feature-considerations) for important additional information.
## General requirements {: #general-requirements }
Consider the following dataset requirements for AutoML, [time series](time/index), and [Visual AI](visual-ai/index) projects. See additional information about [preparing your dataset](vai-model#prepare-the-dataset) for Visual AI.
| Requirement | Solution | Dataset type | Visual AI |
|---------------|-------------|----------------|--------------|
| Dataset minimum row requirements for non-date/time projects: <ul><li>For regression projects, 20 data rows plus header row</li><li>For binary classification projects: <ul><li>minimum 20 minority- and 20 majority-class rows</li><li>minimum 100 total data rows, plus header row</li></ul><li>For multiclass classification projects, minimum 100 total data rows, plus header row</li></ul>| Error displays number of rows found; add rows to the dataset until project meets the minimum rows (plus the header).| Training | Yes |
| [Date/time partitioning-based projects (Time series and OTV)](ts-date-time#partition-without-holdout) have specific row [requirements](#time-series-file-import-sizes). | Error displays number of rows found; add rows to the dataset until project meets the minimum rows (plus the header).| Training | Yes |
| Dataset used for predictions via the GUI must have at least one data row plus header row. | Error displays zero rows found; add a header row and one data row. | Predictions | Yes |
| Dataset cannot have more than 20,000 columns. | Error message displays column count and limit; reduce the number of columns to less than 20,000. | Training, predictions | Yes |
| Dataset must have headers. | Lack of header generally leads to bad predictions or ambiguous column names; add headers. | Training, predictions | Yes for CSV. If ZIP upload contains one folder of images per class, then technically there are not headers and so this is not always true for Visual AI. |
| Dataset must meet deployment type and release size limits. | Error message displays dataset size and configured limit. Contact <a target="_blank" href="https://support.datarobot.com">DataRobot Support</a> for size limits; reduce dataset size by trimming rows and/or columns. | Training, predictions | Yes <br> **Managed AI Platform:** 5GB; 100k 224x224 pixels / 50kB images <br>**Self-Managed AI Platform:** 10GB; 200k 224x224 pixels / 50kB images |
| The number of columns in the header row must be greater than or equal to the number of columns in all data rows. For any data row with fewer columns than the maximum, DataRobot assumes a value of NA/NULL for the missing fields in that row. | Error displays the line number of the first row that failed to parse; check the row reported in the error message. Quoting around text fields is a common reason for this error. | Training, predictions | Yes |
| Dataset cannot have more than one blank (empty) column name. Typically the first blank column is the first column, due to the way some tools write the index column. | Error displays column index of the second blank column; add a label to the column. | Training, predictions | Yes |
| Dataset cannot have any column names containing only whitespace. A single blank column name (no whitespace) is allowed, but columns such as “(space)” or "(space)(space)" are not allowed. | Error displays index of the column that contained only space(s); remove the space, or rename the column. | Training, predictions | Yes |
| All dataset feature names must be unique. No feature name can be used for more than one column, and feature names must differ from each other beyond just their use of special characters `(e.g., -, $, ., {, }, \n, \r, ", or ')`. | Error displays the two columns that resolved to the same name after sanitization; rename one column name. Example: `robot.bar` and `robot$bar` both resolve to `robot\_bar`. | Training, predictions | Yes |
| Dataset must use a supported [encoding](#encodings-and-character-sets). Because UTF-8 processes the fastest, it is the recommended encoding. | Error displays that the detected encoding is not supported or could not be detected; save the dataset to a CSV/delimited format, via another program, and change encoding. | Training, predictions | Yes |
| Dataset files must have one of the following delimiters: comma (,), tab (\t), semicolon (;), or pipe ( \| ). | Error displays a malformed CSV/delimited message; save the dataset to another program (e.g., Excel) and modify to use a supported delimiter. A problematic delimiter that is one of the listed values indicates a quoting problem. For text datasets, if strings are not quoted there may be issues detecting the proper delimiter. Example: in a tab separated dataset, if there are commas in text columns that are not quoted, they may be interpreted as a delimiter. See this [note](#ensure-acceptable-file-import-size) for a related file size issue. | Training, predictions | Yes |
| Excel datasets cannot have date times in the header. | Error displays the index of the column and approximation of the column name; rename the column (e.g., “date” or “date-11/2/2016”). Alternatively, save the dataset to CSV/delimited format. | Training, predictions | Yes |
| Dataset must be a single file. | Error displays that the specified file contains more than one dataset. This most commonly occurs with archive files (tar and zip); uncompress the archive and make sure it contains only one file. | Training, predictions | Yes |
| User must have read permissions to the dataset when using URL or HDFS ingest. | Error displays that user does not have permission to access the dataset. | Training, predictions | Yes |
| All values in a [date](#special-column-detection) column must have the same format or be a null value. | Error displays the value that did not match and the format itself; find the unmatched value in the date column and change it. | Training, predictions | Yes, this applies to the dataset whenever there is a date column, with no dependence on an image column. |
| Text features can contain up to 5 million characters (for a single cell); in some cases up to 10 million characters are accepted. In other words, there is no practical limit; the total size of the dataset is more likely the limiting factor. | N/A | Training, predictions | Yes, this applies to the dataset whenever there is a text column, with no dependence on an image column. |
## Ensure acceptable file import size {: #ensure-acceptable-file-import-size }
!!! note
All file size limits represent the uncompressed size.
When ingesting a dataset, its actual on-disk size might be different inside of DataRobot.
* If the original dataset source is a CSV, then the size may differ slightly from the original size due to data preprocessing performed by DataRobot.
* If the original dataset source is *not* a CSV (e.g., is SAS7BDAT, JDBC, XLSX, GEOJSON, Shapefile, etc.), the on-disk size will be that of the dataset when converted to a CSV. SAS7BDAT, for example, is a binary format that supports different encoding types. As a result, it is difficult to estimate the size of data when converted to CSV based only on the input size as a SAS7BDAT file.
* XLSX, due to its structure, is read in as a single, whole document, which can cause out-of-memory (OOM) issues when parsing. CSV, by contrast, is read in chunks to reduce memory usage and prevent errors. Best practice recommends not exceeding 150MB for XLSX files.
* If the original dataset source is an archive or a compressed CSV (e.g., .gzip, .bzip2, .zip, .tar, .tgz), the actual on-disk size will be that of the uncompressed CSV after preprocessing is performed.
Keep the following in mind when considering file size:
* Some of the preprocessing steps that are applied consist of converting the dataset encoding to UTF-8, adding quotation marks for the field data, normalizing missing value representation, converting geospatial fields, and sanitizing column names.
* In the case of image archives or other similar formats, additional preprocessing will be done to add the images file contents to the resulting CSV. This potentially will make the size of the final CSV drastically different from the original uploaded file.
* File size limitations are applied to files *once they have been converted* to CSV. If you upload a zipped file into DataRobot, when DataRobot extracts the file, the file must be less than the file size limits.
* If a delimited dataset (CSV, TSV, etc.) is close to the upload limit prior to ingest, it is best to do the CSV conversion outside of DataRobot. This helps ensure that the file import does not exceed the limit. If a non-comma-delimited file is near the size limit, it may be best to convert it to a comma-delimited CSV outside of DataRobot as well.
* When converting to CSV outside of DataRobot, be sure to use commas as the delimiter, newline as the record separator, and UTF-8 as the encoding type to avoid discrepancies between the uploaded file size and the size counted against DataRobot's maximum file size limit (a minimal conversion sketch follows this list).
* Consider modifying optional feature flags in some cases:
* **Disable Early Limit Checking**: By selecting, you disable the estimate-based early limit checker and instead use an exact limit checker. This may help allow ingestion of files that are close to the limit in case the estimate is slightly off. Note however that if the limit _is_ exceeded, projects will fail later in the ingest process.
* **Enable Minimal CSV Quoting**: Sets the conversion process to be more conservative when quoting the converted CSV, allowing the CSV to be smaller. Be aware, however, that doing so may make projects non-repeatable. This is because if you ingest the dataset with and without this setting enabled, the EDA samples and/or partitioning may differ, which can lead to subtle differences in the project. (By contrast, ingesting the same dataset with the same setting will result in a repeatable project.)
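As a sketch of the conversion recommended above, the following assumes pandas is available and that the source file is semicolon-delimited and cp1252-encoded (both assumptions); it rewrites the data as a comma-delimited, UTF-8 CSV using pandas defaults for the delimiter and record separator.

```python
import pandas as pd

# Assumed placeholders: a semicolon-delimited, cp1252-encoded source file.
df = pd.read_csv("source_data.csv", sep=";", encoding="cp1252")

# Write a comma-delimited, UTF-8 encoded CSV (pandas defaults to commas and newline
# record separators), matching the recommendations above.
df.to_csv("converted_for_upload.csv", index=False, encoding="utf-8")
```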
## AutoML file import sizes {: #automl-file-import-sizes }
The following sections describe file import size requirements based on deployment type.
!!! note
File size upload is dependent on your DataRobot package, and in some cases the number and size of servers deployed. See tips to ensure [acceptable file size](#ensure-acceptable-file-import-size) for more assistance.
=== "SaaS"
File type | Maximum size | Notes
--------- | ------------ | ------
CSV (training) | 2GB | Base Package
CSV (training) | 5GB | Premium Package
CSV (training) | 5GB | Enterprise Package
CSV (training) | Up to 10GB\* | Business Critical Package
XLSX | 150MB | See [note](#ensure-acceptable-file-import-size)
\* Up to 10GB applies to AutoML projects; [considerations apply](data/index#feature-considerations).
=== "Self-Managed"
File type | Maximum size | Release availability | Notes
--------- | ------------ | -------------------- | ---
CSV (training) | Up to 10GB | All | Varies based on your DataRobot package and available hardware resources.
XLS | 150MB | 3.0.1 and later |
### OTV requirements {: #otv-requirements }
For [out-of-time validation (OTV)](otv) modeling, datasets must be less than 5GB.
OTV backtests require at least 20 rows in each of the validation and holdout folds and at least 100 rows in each training fold. If you set a number of backtests that results in any of the runs not meeting that criteria, DataRobot only runs the number of backtests that do meet the minimums (and marks the display with an asterisk). For example:
* With one backtest, no holdout, minimum 100 training rows and 20 validation rows (120 total).
* With one backtest and holdout, minimum 100 training rows, 20 validation rows, 20 holdout rows (140 total).
## Prediction file import sizes {: #prediction-file-import-sizes }
{% include 'includes/pred-limits-include.md' %}
## Time series file import sizes {: #time-series-file-import-sizes }
When using [time series](time/index), datasets must meet the following size requirements:
File type | Single series maximum size | Multiseries/segmented maximum size | Release availability | Notes
--------- | ------------ | ------------ | -------------------- | -----
CSV (training) | 500MB | 5GB | N/A | Managed AI Platform (SaaS)
CSV (training) | 500MB | 1GB | 5.3 | 30GB modeler configuration
CSV (training) | 500MB | 2.5GB | 6.0 | 30GB modeler configuration
CSV (training) | 500MB | 5.0GB | 6.0 | 60GB modeler configuration
If you set a number of backtests that results in any of the runs not meeting the minimum row requirements below, DataRobot only runs the number of backtests that do meet the minimums (and marks the display with an asterisk). Requirements for specific time series features:
Feature | Requirement
--------- | ------------
*Minimum rows per backtest* | :~~:
Data ingest: Regression | 20 rows for training and 4 rows for validation
Data ingest: Classification | 75 rows for training and 12 rows for validation
Post-feature derivation: Regression | Minimum 35 rows
Post-feature derivation: Classification | 100 rows
*Calendars* | :~~:
Calendar event files | Less than 1MB and 10K rows
*Multiseries modeling*\* | :~~:
External baseline files for model comparison | Less than 5GB
\* Self-Managed AI Platform versions 5.0 or later are limited to 100,000 series; versions 5.3 or later are limited to 1,000,000 series.
!!! note
There are times that you may want to [partition without holdout](ts-leaderboard#partition-without-holdout), which changes the minimum ingest rows and also the output of various visualizations.
For releases 4.5, 4.4 and 4.3, datasets must be less than 500MB. For releases 4.2 and 4.0, datasets must be less than 10MB for time series and less than 500MB for OTV. Datasets must be less than 5MB for projects using Date/Time partitioning in earlier releases.
## Feature Discovery file import sizes {: #feature-discovery-file-import-sizes }
When using Feature Discovery, the following requirements apply:
* Secondary datasets must be either uploaded files or JDBC sources registered in the **AI Catalog**.
* You can have a maximum of 30 datasets per project.
* The sum of all dataset sizes (both primary and secondary) cannot exceed 100GB, and individual dataset sizes cannot exceed 11GB. See the [download limits](#file-download-sizes) mentioned below.
## Data formats {: #data-formats }
DataRobot supports the following formats and types for data ingestion. See also the supported [data types](model-ref#data-summary-information).
### File formats {: #file-formats }
* .csv, .dsv, or .tsv* (preferred formats)
* database tables
* .xls/.xlsx
* .sas7bdat
* .parquet**
* .avro**
\*The file must be a comma-, tab-, semicolon-, or pipe-delimited file with a header for each data column. Each row must have the same number of fields, some of which may be blank.
\*\*These file types are supported only if enabled for users in your organization. Contact your DataRobot representative for more information.
### Location AI file formats {: #location-ai-file-formats }
The following [Location AI](lai-ingest) file types are supported only if enabled for users in your organization:
* ESRI Shapefiles
* GeoJSON
* ESRI File Geodatabase
* Well Known Text (embedded in table column)
* PostGIS Databases (The file must be a comma-delimited, tab-delimited, semicolon-delimited, or pipe-delimited file and must have a header for each data column. Each row must have the same number of fields (columns), some of which may be blank.)
### Compression formats {: #compression-formats }
- .gz
- .bz2
### Archive format {: #archive-format }
- .tar
### Compression and archive formats {: #compression-and-archive-formats }
- .zip
- .tar.gz/.tgz
- .tar.bz2
Both compression and archive are accepted. Archive is preferred, however, because it allows DataRobot to know the uncompressed data size and therefore to be more efficient during data intake.
### Decimal separator {: #decimal-separator }
The period (.) character is the only supported decimal separator—DataRobot does not support locale-specific decimal separators such as the comma (,). In other words, a value of `1.000` is equal to one (1), and cannot be used to represent one thousand (1000). If a different character is used as the separator, the value is treated as categorical.
A _numeric_ feature can be positive, negative, or zero, and must meet one of the following criteria:
* Contains no periods or commas.
* Contains a single period (values with more than one period are treated as <em>categorical</em>).
The table below provides sample values and their corresponding variable type:
| Feature value | Data type |
|---------------|---------------|
| 1000000 | Numeric |
| 0.1 | Numeric |
| 1,000.000 | Categorical |
| 1.000.000 | Categorical |
| 1,000,000 | Categorical |
| 0,1000 | Categorical |
| 1000.000… | Categorical |
| 1000,000… | Categorical |
| (0,100) | Categorical |
| (0.100) | Categorical |
!!! tip
Attempting a feature transformation (on features considered categorical based on the separator) from categorical to numeric will result in an empty numeric feature.
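If a source file uses a locale-specific decimal separator (for example, comma decimals with period thousands separators), one option is to normalize the values before upload. The snippet below is a sketch using pandas; the file names and separator choices are assumptions.

```python
import pandas as pd

# Assumed placeholders: a file where comma is the decimal separator and period the
# thousands separator (e.g., "1.000,01" for one thousand and one hundredth).
df = pd.read_csv("locale_data.csv", decimal=",", thousands=".")

# Writing back out uses the period decimal separator that DataRobot treats as numeric.
df.to_csv("normalized_data.csv", index=False)
```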
## Encodings and character sets {: #encodings-and-character-sets }
Datasets must adhere to the following encoding requirements:
- The data file cannot have any extraneous characters or escape sequences (from URLs).
- Encoding must be consistent throughout the entire dataset. For example, if a data file is encoded as UTF-8 for the first 100MB, but later in the file there are non-UTF-8 characters, ingest can fail due to incorrect detection from the first 100MB.
Data must adhere to one of the following encodings:
- ascii
- cp1252
- utf-8
- utf-8-sig
- utf-16
- utf-16-le
- utf-16-be
- utf-32
- utf-32-le
- utf-32-be
- Shift-JIS
- ISO-2022-JP
- EUC-JP
- CP932
- ISO-8859-1
- ISO-8859-2
- ISO-8859-5
- ISO-8859-6
- ISO-8859-7
- ISO-8859-8
- ISO-8859-9
- windows-1251
- windows-1256
- KOI8-R
- GB18030
- Big5
- ISO-2022-KR
- IBM424
- windows-1252
## Special column detection {: #special-column-detection }
Note that these special columns will be detected if they meet the criteria described below, but `currency`, `length`, `percent`, and `date` cannot be selected as the target for a project. However, `date` can be selected as a partition feature.
### Date and time formats {: #date-and-time-formats }
Columns are detected as date fields if they match any of the formats containing a date listed below. If they are strictly time formats (for example, `%H:%M:%S`), they are detected as time. See the <a target="_blank" href="https://docs.python.org/2/library/datetime#strftime-and-strptime-behavior">Python definition table</a> for descriptions of the directives. The following table provides examples using the date and time January 25, 1999 at 1:01 p.m. (specifically, 59 seconds and 0 microseconds past 1:01 p.m.).
| String | Example |
|--------------------------|--------------------------------|
| %H:%M | 13:01 |
| %H:%M:%S | 13:01:59 |
| %I:%M %p | 01:01 PM |
| %I:%M:%S %p | 01:01:59 PM |
| %M:%S | 01:59 |
| %Y %m %d | 1999 01 25 |
| %Y %m %d %H %M %S | 1999 01 25 13 01 59 |
| %Y %m %d %I %M %S %p | 1999 01 25 01 01 59 PM |
| %Y%m%d | 19990125 |
| %Y-%d-%m | 1999-25-01 |
| %Y-%m-%d | 1999-01-25 |
| %Y-%m-%d %H:%M:%S | 1999-01-25 13:01:59 |
| %Y-%m-%d %H:%M:%S.%f | 1999-01-25 13:01:59.000000 |
| %Y-%m-%d %I:%M:%S %p | 1999-01-25 01:01:59 PM |
| %Y-%m-%d %I:%M:%S.%f %p | 1999-01-25 01:01:59.000000 PM |
| %Y-%m-%dT%H:%M:%S | 1999-01-25T13:01:59 |
| %Y-%m-%dT%H:%M:%S.%f | 1999-01-25T13:01:59.000000 |
| %Y-%m-%dT%H:%M:%S.%fZ | 1999-01-25T13:01:59.000000Z |
| %Y-%m-%dT%H:%M:%SZ | 1999-01-25T13:01:59Z |
| %Y-%m-%dT%I:%M:%S %p | 1999-01-25T01:01:59 PM |
| %Y-%m-%dT%I:%M:%S.%f %p | 1999-01-25T01:01:59.000000 PM |
| %Y-%m-%dT%I:%M:%S.%fZ %p | 1999-01-25T01:01:59.000000Z PM |
| %Y-%m-%dT%I:%M:%SZ %p | 1999-01-25T01:01:59Z PM |
| %Y.%d.%m | 1999.25.01 |
| %Y.%m.%d | 1999.01.25 |
| %Y/%d/%m %H:%M:%S.%f | 1999/25/01 13:01:59.000000 |
| %Y/%d/%m %H:%M:%S.%fZ | 1999/25/01 13:01:59.000000Z |
| %Y/%d/%m %I:%M:%S.%f %p | 1999/25/01 01:01:59.000000 PM |
| %Y/%d/%m %I:%M:%S.%fZ %p | 1999/25/01 01:01:59.000000Z PM |
| %Y/%m/%d | 1999/01/25 |
| %Y/%m/%d %H:%M:%S | 1999/01/25 13:01:59 |
| %Y/%m/%d %H:%M:%S.%f | 1999/01/25 13:01:59.000000 |
| %Y/%m/%d %H:%M:%S.%fZ | 1999/01/25 13:01:59.000000Z |
| %Y/%m/%d %I:%M:%S %p | 1999/01/25 01:01:59 PM |
| %Y/%m/%d %I:%M:%S.%f %p | 1999/01/25 01:01:59.000000 PM |
| %Y/%m/%d %I:%M:%S.%fZ %p | 1999/01/25 01:01:59.000000Z PM |
| %d.%m.%Y | 25.01.1999 |
| %d.%m.%y | 25.01.99 |
| %d/%m/%Y | 25/01/1999 |
| %d/%m/%Y %H:%M | 25/01/1999 13:01 |
| %d/%m/%Y %H:%M:%S | 25/01/1999 13:01:59 |
| %d/%m/%Y %I:%M %p | 25/01/1999 01:01 PM |
| %d/%m/%Y %I:%M:%S %p | 25/01/1999 01:01:59 PM |
| %d/%m/%y | 25/01/99 |
| %d/%m/%y %H:%M | 25/01/99 13:01 |
| %d/%m/%y %H:%M:%S | 25/01/99 13:01:59 |
| %d/%m/%y %I:%M %p | 25/01/99 01:01 PM |
| %d/%m/%y %I:%M:%S %p | 25/01/99 01:01:59 PM |
| %m %d %Y %H %M %S | 01 25 1999 13 01 59 |
| %m %d %Y %I %M %S %p | 01 25 1999 01 01 59 PM |
| %m %d %y %H %M %S | 01 25 99 13 01 59 |
| %m %d %y %I %M %S %p | 01 25 99 01 01 59 PM |
| %m-%d-%Y | 01-25-1999 |
| %m-%d-%Y %H:%M:%S | 01-25-1999 13:01:59 |
| %m-%d-%Y %I:%M:%S %p | 01-25-1999 01:01:59 PM |
| %m-%d-%y | 01-25-99 |
| %m-%d-%y %H:%M:%S | 01-25-99 13:01:59 |
| %m-%d-%y %I:%M:%S %p | 01-25-99 01:01:59 PM |
| %m.%d.%Y | 01.25.1999 |
| %m.%d.%y | 01.25.99 |
| %m/%d/%Y | 01/25/1999 |
| %m/%d/%Y %H:%M | 01/25/1999 13:01 |
| %m/%d/%Y %H:%M:%S | 01/25/1999 13:01:59 |
| %m/%d/%Y %I:%M %p | 01/25/1999 01:01 PM |
| %m/%d/%Y %I:%M:%S %p | 01/25/1999 01:01:59 PM |
| %m/%d/%y | 01/25/99 |
| %m/%d/%y %H:%M | 01/25/99 13:01 |
| %m/%d/%y %H:%M:%S | 01/25/99 13:01:59 |
| %m/%d/%y %I:%M %p | 01/25/99 01:01 PM |
| %m/%d/%y %I:%M:%S %p | 01/25/99 01:01:59 PM |
| %y %m %d | 99 01 25 |
| %y %m %d %H %M %S | 99 01 25 13 01 59 |
| %y %m %d %I %M %S %p | 99 01 25 01 01 59 PM |
| %y-%d-%m | 99-25-01 |
| %y-%m-%d | 99-01-25 |
| %y-%m-%d %H:%M:%S | 99-01-25 13:01:59 |
| %y-%m-%d %H:%M:%S.%f | 99-01-25 13:01:59.000000 |
| %y-%m-%d %I:%M:%S %p | 99-01-25 01:01:59 PM |
| %y-%m-%d %I:%M:%S.%f %p | 99-01-25 01:01:59.000000 PM |
| %y-%m-%dT%H:%M:%S | 99-01-25T13:01:59 |
| %y-%m-%dT%H:%M:%S.%f | 99-01-25T13:01:59.000000 |
| %y-%m-%dT%H:%M:%S.%fZ | 99-01-25T13:01:59.000000Z |
| %y-%m-%dT%H:%M:%SZ | 99-01-25T13:01:59Z |
| %y-%m-%dT%I:%M:%S %p | 99-01-25T01:01:59 PM |
| %y-%m-%dT%I:%M:%S.%f %p | 99-01-25T01:01:59.000000 PM |
| %y-%m-%dT%I:%M:%S.%fZ %p | 99-01-25T01:01:59.000000Z PM |
| %y-%m-%dT%I:%M:%SZ %p | 99-01-25T01:01:59Z PM |
| %y.%d.%m | 99.25.01 |
| %y.%m.%d | 99.01.25 |
| %y/%d/%m %H:%M:%S.%f | 99/25/01 13:01:59.000000 |
| %y/%d/%m %H:%M:%S.%fZ | 99/25/01 13:01:59.000000Z |
| %y/%d/%m %I:%M:%S.%f %p | 99/25/01 01:01:59.000000 PM |
| %y/%d/%m %I:%M:%S.%fZ %p | 99/25/01 01:01:59.000000Z PM |
| %y/%m/%d | 99/01/25 |
| %y/%m/%d %H:%M:%S | 99/01/25 13:01:59 |
| %y/%m/%d %H:%M:%S.%f | 99/01/25 13:01:59.000000 |
| %y/%m/%d %H:%M:%S.%fZ | 99/01/25 13:01:59.000000Z |
| %y/%m/%d %I:%M:%S %p | 99/01/25 01:01:59 PM |
| %y/%m/%d %I:%M:%S.%f %p | 99/01/25 01:01:59.000000 PM |
| %y/%m/%d %I:%M:%S.%fZ %p | 99/01/25 01:01:59.000000Z PM |
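Before uploading, you can confirm that a date column's values parse with one of the directives above. A minimal sketch using the Python standard library (the sample value and format are illustrative):

```python
from datetime import datetime

# "%Y-%m-%d %H:%M:%S" is one of the supported formats listed above.
sample = "1999-01-25 13:01:59"
parsed = datetime.strptime(sample, "%Y-%m-%d %H:%M:%S")
print(parsed.isoformat())  # 1999-01-25T13:01:59
```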
### Percentages {: #percentages }
Columns that have numeric values ending with `%` are treated as percentages.
### Currencies {: #currencies }
Columns that contain values with the following currency symbols are treated as currency.
- $
- EUR, USD, GBP
- £
- ￡ (fullwidth)
- €
- ¥
- ￥ (fullwidth)
Also, note the following regarding currency interpretation:
* The currency symbol can precede ($1) or follow (1EUR) the value, but its placement must be consistent across the feature.
* Both comma (`,`) and period (`.`) can be used as a [separator](#decimal-separator) for thousands or cents, but must be consistent across the feature (e.g., 1000 dollars and 1 cent can be represented as 1,000.01 or 1.000,01).
* Leading `+` and `-` symbols are allowed.
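If a currency column mixes symbol placements or separators, it will not be interpreted consistently; one option is to normalize it to plain numerics before upload. A minimal sketch, assuming pandas is available and that the comma is the thousands separator in your data:

```python
import pandas as pd

# Strip currency symbols and thousands separators so the column is read as numeric.
values = pd.Series(["$1,000.01", "$25.50", "+$3,200.00"])
numeric = (
    values.str.replace(r"[^\d.,+-]", "", regex=True)  # drop currency symbols
          .str.replace(",", "", regex=False)          # assumes comma = thousands separator
          .astype(float)
)
print(numeric.tolist())  # [1000.01, 25.5, 3200.0]
```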
### Length {: #length }
Columns that contain values matching the convention <*feet*>’ <*inches*>” are displayed as variable type `length` on the **Data** page. DataRobot converts the length to a number in inches and then treats the value as a numeric in blueprints. If your dataset has other length values (for example, 12cm), the feature is treated as categorical. If a feature has mixed values that show the measurement (5m, 72in, and 12cm, for example), it is best to clean and normalize the dataset before uploading.
## Column name conversions {: #column-name-conversions }
During data ingestion, DataRobot converts the following characters to underscores (`_`): `-`, `$`, `.`, `{`, `}`, `"`, `\n`, and `\r`.
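To anticipate how column names will look after ingestion (for example, when writing code against the project's feature names), you can apply the same substitution locally. This is a minimal illustration of the rule above, not DataRobot's internal implementation:

```python
import re

# Characters DataRobot converts to underscores during ingestion.
SPECIAL_CHARS = r'[-$.{}"\n\r]'

def normalize_column(name: str) -> str:
    return re.sub(SPECIAL_CHARS, "_", name)

print(normalize_column('total.sales-$usd'))  # total_sales__usd
```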
## File download sizes {: #file-download-sizes }
Consider the following when downloading datasets:
* There is a 10GB file size limit.
* Datasets are downloaded as CSV files.
* The downloaded dataset may differ from the one initially imported because DataRobot applies the conversions mentioned above.
| file-types |
---
title: Data
description: How to manage data for machine learning, including importing and transforming data, and connecting to data sources.
---
# Data {: #data }
{% include 'includes/data-description.md' %}
See the associated [considerations](#feature-considerations) for important additional information. See also the [dataset requirements](file-types).
Topic | Describes...
----- | ------
[Dataset requirements](file-types) | Dataset requirements, data type definitions, file formats and encodings, and special column treatments.
[Connect to data sources](connect-data/index) | Set up database connections and manage securely stored credentials for reuse when accessing secure data sources.
[AI Catalog](ai-catalog/index) | Import data into the AI Catalog, where you can transform data using SQL and create and schedule snapshots of your data. Then, create a DataRobot project from a catalog asset.
[Import data](import-data/index) | Import data from a variety of sources.
[Transform data](transform-data/index) | Transform primary datasets and perform Feature Discovery on multiple datasets.
[Analyze data](analyze-data/index) | Investigate data using reports and visualizations created after EDA1 and EDA2.
[Data FAQ](data-faq) | A list of frequently asked data preparation and management questions with brief answers and links to more complete documentation.
## Feature considerations {: #feature-considerations }
The following are the data-related considerations for working in DataRobot.
### General considerations {: #general-considerations }
For non-time series projects (see time series considerations [here](ts-consider)):
* Ingestion of XLSX files often does not work as well as using the corresponding CSV format. The XLSX format requires loading the entire file into RAM before processing can begin, which can cause RAM availability errors. Even when successful, performance is poorer than CSV (which can begin processing before the entire file is loaded). As a result, XLSX file size limits are suggested. For larger file sizes than those listed below, convert your Excel file to CSV for importing (a conversion sketch follows this list). See the [dataset requirements](file-types#ensure-acceptable-file-import-size) for more information.
* When using the prediction API, there is a 50MB body size limitation to the request. A request of more than 50MB made to dedicated prediction workers fails with the HTTP response `413: Entity Too Large`.
* Exportable Java Scoring Code uses extra RAM during model building; therefore, the dataset size should be less than 8GB.
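As noted above, converting XLSX files to CSV locally usually avoids the memory and performance issues associated with Excel ingest. A minimal sketch, assuming pandas and an Excel engine such as openpyxl are installed; the file names are placeholders:

```python
import pandas as pd

# Convert the first sheet of an Excel workbook to CSV before importing into DataRobot.
df = pd.read_excel("training_data.xlsx", sheet_name=0)
df.to_csv("training_data.csv", index=False, encoding="utf-8")
```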
### 10GB Cloud ingest {: #10gb-cloud-ingest }
!!! info "Availability information"
The 10GB ingest option is only available for licensed users of the DataRobot Business Critical package and only available for AutoML (not time series) projects.
Consider the following when working with the 10GB ingest option for AutoML projects:
* Certain modeling activities may deliver less than 10GB availability, as described below.
* The capability is available for regression, binary classification, and multiclass AutoML projects.
* Project creation with datasets close to 10GB may take several hours, depending on the data structure and features enabled.
In some situations, depending on the data or the nature of the modeling activity, 10GB datasets can cause out-of-memory (OOM) errors. The following conditions have resulted in OOM errors during testing:
* Models built from the Repository; retry the model using a smaller sample size.
* **Feature Impact** insights; rerun the **Feature Impact** job using a smaller sample size.
* Using **Advanced Tuning**, particularly tunings that: a) add more trees to XGBoost/LGBM models or b) run deep grid searches of many parameters.
* Retraining models at larger sample sizes.
* Multiclass projects with more than 5-10 classes.
* **Feature Effects** insight; try reducing the number of features.
* Anomaly detection models, especially for datasets > 2.5GB.
Specific areas of the application may have a limit lower than 10GB. Notably:
* **Location AI** (geospatial modeling) is limited to 100,000 rows and 500 numeric columns. Datasets that exceed those limits will run as regular AutoML modeling projects but the Spatial Neighborhood Featurizer will not run (resulting in no geospatial-specific models).
* Out-of-time validation (OTV) modeling supports datasets up to 5GB.
| index |
---
title: Data FAQ
dataset_name: N/A
description: Provides a list, with brief answers, of frequently asked data preparation and management questions. Answers links to more complete documentation.
domain: platform
expiration_date: 10-10-2025
owner: josh.klaben.finegold@datarobot.com
url: docs.datarobot.com/docs/tutorials/prep-learning-data/data-faq.html
---
# Data FAQ {: #data-faq }
??? faq "What is the AI Catalog?"
The [AI Catalog](ai-catalog/index) is a DataRobot tool for importing, registering, and sharing data and other assets. The catalog supports browsing and searching registered assets, including definitions and relationships with other assets.
??? faq "What is Data Prep?"
[Data Prep](companion-tools/index) is a DataRobot tool for cleaning and transforming data to be used in machine learning. Data Prep lets you prepare data from [multiple sources](companion-tools/index). You can save and share your data, as well as the steps used to prepare it.
??? faq "What file types can DataRobot ingest?"
DataRobot can ingest text, Excel, SAS, and various compressed or archive files. [Supported file formats](file-types#data-formats) are listed at the bottom of the project (Start) page. You can [import files directly into DataRobot](import-to-dr) or you can [import them into the AI Catalog](catalog).
??? faq "What data sources can DataRobot connect to?"
DataRobot can ingest from [JDBC-enabled data sources](data-conn), as well as S3, Azure Blob, Google Cloud Storage, and URLs, among others.
??? faq "What is a histogram used for?"
[Histograms](histogram#histogram-chart) bucket numeric feature values into equal-sized ranges to show a rough distribution of the variable (feature). Access a feature's histogram by expanding the feature in the **Data** tab.
??? faq "What do yellow triangles mean on the **Data** tab?"
Upon uploading data, DataRobot automatically detects and identifies common data quality issues. The [Data Quality Assessment](data-quality) report denotes these data quality issues with yellow triangle warnings. Hover over the triangles to see the specific quality issues, such as excess zeros or outliers.
??? faq "How can I share a dataset?"
Use the AI Catalog to [share a dataset](sharing) with users, groups, and organizations. You can select a role for the users who will share the asset—they can be an owner (can view, edit, and administer), an editor (can view and edit), or a consumer (can view).
??? faq "How does DataRobot reduce features?"
DataRobot automatically implements feature reduction at multiple stages of the modeling life cycle:
1. During [EDA1](eda-explained#eda1): After uploading your data, DataRobot creates an informative feature list by excluding non-informative features, such as those with too many unique values.
2. After [EDA2](eda-explained#eda2): After clicking Start, DataRobot removes features with target leakage (i.e., features with a high correlation to the target) and features with an ACE score less than 0.0005 (i.e., features with a marginal correlation to the target).
3. During model training and analysis: DataRobot removes all redundant features and retrains the model, keeping features with a cumulative feature importance score over 0.95.
4. A step in the model's blueprint: Some algorithms, including LASSO and ENET, offer intrinsic feature reduction by shrinking coefficients to 0.
5. [Automated Feature Discovery](fd-gen): Feature Discovery projects explore and generate features based on the secondary dataset(s), and then perform [supervised feature reduction](fd-overview#feature-reduction) to only keep features with an estimated cumulative feature importance score over 0.98.
For more information, see the [documentation for data transformations](transform-data/index).
??? faq "What are informative features?"
Informative features are those that are potentially valuable for modeling. DataRobot generates an [informative features list](feature-lists#automatically-created-feature-lists) where features that will not be useful are removed. Some examples include reference IDs, features that contain empty values, and features that are derived from the target. DataRobot also creates features, such as date type features, and if valuable, includes them in the informative features list.
??? faq "What is a snapshot?"
You can create a *[snapshot](catalog#create-a-snapshot)* of your data in the AI Catalog, in which case DataRobot stores a copy of your data in the catalog. You can then [schedule the snapshot](snapshot) to be refreshed periodically. If you don't create a snapshot, the data is *dynamic*—DataRobot samples for profile statistics but does not keep a copy of the data. Instead, the catalog stores a pointer to the data and pulls it upon request, for example, when you create a project.
??? faq "What are the green "importance" bars on the **Data** tab?"
The [importance bars](model-ref#importance-score) show the degree to which a feature is correlated with the target. These bars are based on "Alternating Conditional Expectations" (ACE) scores which detect non-linear relationships with the target, but are unable to detect interaction effects between features. Importance measures the information content of the feature; this calculation is done independently for each feature in the project.
??? faq "How large can my datasets be?"
[File size requirements](file-types#ensure-acceptable-file-import-size) vary depending on deployment type (Cloud versus on premise) and whether you are using [AutoML](file-types#automl-file-import-sizes), [time series](file-types#time-series-file-import-sizes), and/or [Feature Discovery](file-types#feature-discovery-file-import-sizes).
??? faq "How do I remove rows and columns from my dataset?"
You can use the [Data Prep](companion-tools/index) tool to remove rows or columns from your dataset. If you have the same data in multiple rows, you can [deduplicate](companion-tools/index). You can use a [Filtergram](companion-tools/index) to select rows for removal and you can use the [Columns tool](companion-tools/index) to remove columns.
| data-faq |
---
title: Get started
description: Get started with DataRobot's value-driven AI. Analyze data, create and deploy models, and leverage code-first accelerators and notebooks.
---
# Get started {: #get-started }
Get started with DataRobot's value-driven AI. Analyze data, create and deploy models, and leverage code-first accelerators and notebooks.
Topic | Describes...
---------------|---------------
[DataRobot in 5](gs-dr5/index) | Understand the 5 basic steps to building and deploying AI models.
[Workbench](gs-workbench/index) | Understand the components of Workbench, work with data, build and explore models; compare Workbench capabilities with DataRobot Classic.
[DataRobot Classic](gs-classic/index) | Understand modeling project types, prepare modeling data, build and explore models, and deploy, monitor, and manage models in production.
[Work with notebooks](gs-code) | Learn how to get started coding with DataRobot and how to leverage AI accelerators to quickly engage in code-first machine learning workflows.
[Get help](gs-get-help/index) | Review troubleshooting tips and view quick, task-based instructions for success in modeling.
| index |
---
title: Predictions reference
description: Learn the file size limits for different methods of making predictions. Prediction file size limits depend on whether the model is deployed or not and whether you use the UI or an API.
---
# Prediction reference {: #prediction-reference }
DataRobot supports many methods of making predictions, including the DataRobot UI and APIs—for example, Python, R, and REST. The prediction methods you use depend on factors like the size of your prediction data, whether you're validating a model prior to deployment or using and monitoring it in production, whether you need immediate low-latency predictions, or if you want to schedule batch prediction jobs. This page hosts considerations, limits, and other helpful information to reference before making predictions.
## File size limits {: #file-size-limits }
!!! note
Prediction file size limits vary for Self-Managed AI Platform installations and limits are configurable.
{% include 'includes/pred-limits-include.md' %}
## Monitor model health {: #monitor-model-health }
If you use any of the prediction methods mentioned above, DataRobot allows you to deploy a model and monitor its prediction output and performance over a selected time period.
A critical part of the model management process is to identify when a model starts to deteriorate and to quickly address it. Once trained, models can then make predictions on new data that you provide. However, prediction data changes over time—businesses expand to new cities, new products enter the market, policy or processes change—any number of changes can occur. This can result in [data drift](data-drift), the term used to describe when newer data moves away from the original training data, which can result in poor or unreliable prediction performance over time.
Use the [MLOps deployment dashboard](mlops/index) to analyze a model's performance metrics: prediction response time, model health, accuracy, data drift analysis, and more. When models deteriorate, the common action to take is to retrain a new model. Deployments allow you to replace models without re-deploying them, so not only do you not need to change your code, but DataRobot can track and represent the entire history of a model used for a particular use case.
## Avoiding common mistakes {: #avoiding-common-mistakes }
The section on [dataset guidelines](file-types) provides important information about DataRobot's dataset requirements. In addition, consider:
1. *Under-trained models*. The most common prediction mistake is to use models in production without retraining them beyond the initial training set. Best practice suggests the following workflow:
* Select the best model based on the validation set.
* Retrain the best model, including the validation set.
* [Unlock holdout](unlocking-holdout), and use the holdout to validate that the retrained model performs as well as you expect.
* Note that this does not apply if you are using the model DataRobot selects as “Recommended for Deployment." DataRobot automates all three of these steps for the recommended model and trains it to 100% of the data.
2. *File encoding issues*. Be certain that you properly format your data to avoid prediction errors. For example, unquoted newline characters and commas in CSV files often cause problems. JSON can be a better choice for data that contains large amounts of text because JSON is more standardized than CSV. CSV can be faster than JSON, but only when it is properly formatted.
3. *Insufficient cores*. When making predictions, keep the number of threads or processes less than or equal to the number of prediction worker cores you have and make synchronous requests. That is, the number of concurrent predictions should generally not exceed the number of prediction worker cores on your dedicated prediction server(s). If you are not sure how many prediction cores you have, contact <a target="_blank" href="https://support.datarobot.com">DataRobot Support</a>.
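One way to respect the core limit above is to bound client-side concurrency explicitly. The sketch below is illustrative only: the endpoint URL, headers, worker-core count, and payloads are placeholders, and you should copy the real request snippet from your deployment's **Prediction API** tab:

```python
from concurrent.futures import ThreadPoolExecutor

import requests

# Placeholders: copy the real URL and headers from your deployment's Prediction API tab.
PREDICTION_URL = "https://example.datarobot.com/predApi/v1.0/deployments/<DEPLOYMENT_ID>/predictions"
HEADERS = {"Authorization": "Bearer <API_TOKEN>", "Content-Type": "text/csv"}
WORKER_CORES = 4  # assumed number of prediction worker cores

def score(csv_payload: str) -> dict:
    response = requests.post(PREDICTION_URL, headers=HEADERS, data=csv_payload)
    response.raise_for_status()
    return response.json()

# Keep each payload under the request size limit (50MB on the managed platform).
payloads = ["col_a,col_b\n1,2\n", "col_a,col_b\n3,4\n"]
with ThreadPoolExecutor(max_workers=WORKER_CORES) as pool:
    results = list(pool.map(score, payloads))
```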
!!! warning
When performing predictions, the positive class has multiple representations that DataRobot can choose from: the original positive class as written in the dataset, a user-specified choice in the frontend, or the positive class as provided by the prediction set. Currently, DataRobot's internal rules for choosing among these are not obvious, which can lead to automation issues such as `str("1.0")` being returned as the positive class instead of `int(1)`. This issue will be addressed by standardizing the internal ruleset in a future release.
## Prediction speed {: #prediction-speed }
1. *Model scoring speed*. Scoring time differs by model and not all models are fast enough for "real-time" scoring. Before going to production with a model, verify that the model you select is fast enough for your needs. Use the [Speed vs. Accuracy](speed) tab to display model scoring time.
2. *Understanding the model cache*. A dedicated prediction server scores quickly because of its in-memory model cache. As a result, the first few requests using a new model may be slower because the model must first be retrieved.
3. *Computing predictions with Prediction Explanations*. Computing predictions with XEMP Prediction Explanations requires a significantly higher number of operations than only computing predictions. Expect higher runtimes, although actual speed is model-dependent. Reducing the number of features used or avoiding blenders and text variables may increase speed. Increased computation costs *do not* apply to SHAP Prediction Explanations.
| pred-file-limits |
---
title: Predictions
description: Learn the methods and DataRobot components for getting predictions (“scoring”) on new data from a model. To make predictions, you can use real-time predictions, batch predictions, or portable prediction methods.
---
# Predictions {: #predictions }
DataRobot offers several methods for getting predictions on new data from a model (also known as scoring). You can read an [overview of the available methods](#predictions-overview) below. Before proceeding with a prediction method, be sure to review the [prediction file size limits](pred-file-limits).
Topic | Describes...
----- | ------
[Real-time predictions](realtime/index) | Make real-time predictions by connecting to HTTP and requesting predictions for a model via a synchronous call. After DataRobot receives the request, it immediately returns a response containing the prediction results.
[Batch predictions](batch/index) | Score large datasets in batches with one asynchronous prediction job.
[Portable predictions](port-pred/index) | Execute predictions outside of the DataRobot application using [Scoring Code](port-pred/scoring-code/index) or the [Portable Prediction Server](port-pred/pps/index).
[Monitor external predictions](pred-monitoring-jobs/index) | To integrate more closely with external data sources, monitoring job definitions allow DataRobot to monitor deployments running and storing feature data and predictions outside of DataRobot.
## Predictions overview {: #predictions-overview }
DataRobot offers several methods for getting predictions on new data. Select a tab to learn about these methods:
=== "Real-time predictions"
Make real-time predictions by connecting to HTTP and requesting predictions for a model via a synchronous call. After DataRobot receives the request, it makes the predictions and immediately returns a response containing the results.
### Use a deployment
The simplest method for making real-time predictions is to deploy a model from the Leaderboard and make prediction requests with the [Prediction API](dr-predapi).
After deploying a model, you can also navigate to a deployment's [**Prediction API**](code-py) tab to access and configure scripting code to make simple scoring requests. The deployment also hosts [integration snippets](integration-code-snippets).
=== "Batch predictions"
Both batch prediction methods stem from deployments. After deploying a model, you can make batch predictions via the UI by accessing the deployment, or use the [Batch Prediction API](../../api/reference/batch-prediction-api/index).
### Use the Make Predictions tab
Navigate to a deployment's [**Make Predictions** tab](batch-pred) and use the interface to [configure batch prediction jobs](batch-pred-jobs).
### Use the batch prediction API
The [Batch Prediction API](../../api/reference/batch-prediction-api/index) provides flexible options for intake and output when scoring large datasets using the prediction servers you have already deployed. The API is exposed through the DataRobot Public API and can be consumed using any REST-enabled client or the [DataRobot Python package Public API bindings](https://datarobot-public-api-client.readthedocs-hosted.com/page/){ target=_blank }.
=== "Portable predictions"
Portable predictions allow you to execute prediction jobs outside of the DataRobot application. The portable prediction methods are detailed below.
### Use Scoring Code
You can export [Scoring Code](scoring-code/index) from DataRobot in Java or Python to make predictions. Scoring Code is portable and executable in any computing environment. This method is useful for low-latency applications that cannot fully support REST API performance or lack network access.
!!! info "Availability information"
DataRobot’s exportable models and independent prediction environment option, which allows a user to export a model from a model building environment to a dedicated and isolated prediction environment, is not available for managed AI Platform deployments.
### Use the Portable Prediction Server
The [Portable Prediction Server](port-pred/pps/index) (PPS) is a remote DataRobot execution environment for DataRobot model packages (`MLPKG` files) distributed as a self-contained Docker image. It can host one or more production models. The models are accessible through DataRobot's Prediction API for predictions and Prediction Explanations.
### Use RuleFit models {: #use-rulefit-models }
DataRobot RuleFit models generate fast Python or Java Scoring Code, which can be run anywhere with no dependencies. Once created, you can export these models as a Python module or a Java class, and [run the exported script](rulefit-examples).
| index |
---
title: Predictions testing
description: To make predictions and assess model performance prior to deployment, you can make predictions on an external test dataset (i.e., external holdout) or on training data (i.e., validation and/or holdout).
---
# Predictions on test and training data {: #predictions-on-test-and-training-data }
Use the [**Make Predictions**](predict) tab to make predictions and assess model performance prior to deployment. You can [make predictions on an external test dataset](#make-predictions-on-an-external-test-dataset) (i.e., external holdout) or you can [make predictions on training data](#make-predictions-on-training-data) (i.e., validation and/or holdout).
## Make predictions on an external test dataset {: #make-predictions-on-an-external-test-dataset }
To better evaluate model performance, you can upload any number of additional test datasets after project data has been partitioned and models have been trained. An external test dataset is one that:
* Contains [actuals](glossary/index#actuals) (values for the target).
* Is _not_ part of the original dataset (you didn't train on any part of it).
Using an external test dataset allows you to compare the model's predictions against actual target values on data that was never used in training.
By uploading an external dataset and using the original model's dataset partitions, you can compare metric scores and visualizations to ensure consistent performance prior to deployment. Select the external test set as if it were a partition in the original project data. Support for external test sets is available for all project types except supervised time series. Unsupervised (anomaly detection) time series is supported.
To make predictions on an external test set:
1. [Upload new test data](predict#upload-pred-dataset) in the same way you would upload a prediction dataset. For supervised learning, the external set must contain the target column and all columns present in the training dataset (although additional columns [can be added](predict#step-add-columns)). The workflow is slightly different for [anomaly detection projects](#supply-actual-values-for-anomaly-detection-projects).
2. Once uploaded, you'll see the label **EXTERNAL TEST** below the dataset name. Click **Run external test** to calculate predicted values and compute statistics that compare the actual target values to the predicted values. The external test is queued and job status appears in the Worker Queue on the right sidebar.

3. When calculations complete, click **Download predictions** to save prediction results to a CSV file.

!!! note
In a binary classification project, when you click **Run external test**, the current value of the Prediction Threshold is used for computation of the predicted labels. In the downloaded predictions, the labels correspond to that threshold, even if you updated the threshold between computing and downloading. DataRobot displays the threshold that was used in the calculation in the dataset listing.
4. To view external test scores, from the Leaderboard menu select **Show external test column**.

The Leaderboard now includes an **External test** column.
5. From the **External test** column, choose the test data to display results for, or click **Add external test** to return to the **Make Predictions** tab and add additional test data.

You can now sort models by external test scores or calculate scores for more models.
### Supply actual values for anomaly detection projects {: #supply-actual-values-for-anomaly-detection-projects }
In [anomaly detection](anomaly-detection) (non-time series) projects, you must set an actuals column that identifies the outcome or future results to compare to predicted results. This provides a measure of accuracy for the event you are predicting on. The prediction dataset must contain the same columns as those in the training set with at least one column for known anomalies. Select the known anomaly column as the **Actuals** value.

### Compare insights with external test sets {: #compare-insights-with-external-test-sets }
Expand the **Data Selection** dropdown to select an external test set as if it were a partition in the original project data.

This option is available when using the following insights:
* [Lift Chart](lift-chart)
* [ROC Curve](roc-curve)
* [Profit Curve](profit-curve)
* [Confusion Matrix](multiclass)
* [Accuracy Over Time](aot) (OTV only)
* [Stability](stability) (OTV only)
* [Residuals](residuals)
Note the following:
* Insights are not computed if an external dataset has fewer than 10 rows; however, metric scores are computed and displayed on the Leaderboard.
* The **ROC Curve** insight is disabled if the external dataset only contains single class actuals.
## Make predictions on training data {: #make-predictions-on-training-data }
Less commonly (although there are [reasons](#why-use-training-data-for-predictions)), you may want to download predictions for your original training data, which DataRobot automatically imports. From the dropdown, select the [partition(s)](data-partitioning) to use when generating predictions.

For small datasets, predictions are calculated by doing [stacked predictions](data-partitioning#what-are-stacked-predictions) and therefore can use all partitions. Because those calculations are too “expensive” to run on large datasets (750MB and higher by default), predictions are based on holdout and/or validation partitions, as long as the data wasn’t used in training.
| Dropdown option | Description for small datasets | Description for large datasets |
|------------------------|-------------|--------------|
| All data | Predictions are calculated by doing stacked predictions on training, validation, and holdout partitions, regardless of whether they were used for training the model or if holdout has been unlocked. | Not available |
| Validation and holdout | Predictions are calculated using the validation and holdout partitions. If validation was used in training, this option is disabled. | Predictions are calculated using the validation and holdout partitions. If validation was used in training or the project was created without a holdout partition, this option is not available. |
| Validation | If the project was created without a holdout partition, this option replaces the *Validation and holdout* option. | If the project was created without a holdout partition, this option replaces the *Validation and holdout* option. |
| Holdout | Predictions are calculated using the holdout partition only. If holdout was used in training, this option is not available (only the All data option is valid). | Predictions are calculated using the holdout partition only. If holdout was used in training, predictions are not available for the dataset. |
!!! note
For [OTV](glossary/index#otv) projects, holdout predictions are generated using a model retrained on the holdout partition. If you upload the holdout as an external test dataset instead, the predictions are generated using the model from backtest 1. In this case, the predictions from the external test will not match the holdout predictions.
Select **Compute predictions** to generate predictions for the selected partition on the existing dataset. Select **Download predictions** to save results as a CSV.
!!! note
The `Partition` field of the exported results indicates the source partition name or fold number of the cross-validation partition. The value `-2` indicates the row was "discarded" (not used in [TVH](data-partitioning#training-validation-and-holdout-tvh)). This could be because the target was missing, the [partition column](partitioning) (Date/Time-, Group, or Partition Feature-partitioned projects) was missing, or [smart downsampling](smart-ds) was enabled, and those rows were discarded from the majority class as part of downsampling.
### Why use training data for predictions? {: #why-use-training-data-for-predictions }
Although less common, there are times when you want to make predictions on your original training dataset. The most common application of the functionality is for use on large datasets. Because running [stacked predictions](data-partitioning#what-are-stacked-predictions) on large datasets is often too computationally expensive, the **Make Predictions** tab allows you to download predictions using data from the validation and/or holdout partitions (as long as they weren't used in training).
Some sample use cases:
*Clark the software developer* needs to know the full distribution of his predictions, not just the mean. His dataset is large enough that stacked predictions are not available. With weekly modeling using the R API, he downloads holdout and validation predictions onto his local machine and loads them into R to produce the report he needs.
*Lois the data scientist* wants to verify that she can reproduce model scores exactly as well in DataRobot as when using an in-house metric. She partitions the data, specifying holdout during modeling. After modeling completes, she unlocks holdout, selects the top model, and computes and downloads predictions for just the holdout set. She then compares predictions of that brief exercise to the result of her previous many-month-long project.
| pred-test |
---
title: Create applications
description: Create No-Code AI Apps to enable core DataRobot services while using a no-code interface.
---
# Create applications {: #create-applications}
You can create applications in DataRobot from the [**Applications**](#from-the-applications-tab) tab, a [model on the Leaderboard](#from-the-leaderboard), or a [deployment](#from-a-deployment). If you're creating an application from a time series deployment, see the documentation for [time series applications](ts-app).
!!! note "Multiclass projects with over 1000 classes"
For [unlimited multiclass](multiclass#unlimited-multiclass) projects with more than 1000 classes, by default, DataRobot keeps the top 999 most frequent classes and aggregates the remainder into a single "other" bucket. You can, however, configure the aggregation parameters to ensure all classes necessary to your project are represented.
!!! note
You can create multiple applications from the same deployment.
## Template options {: #template-options }
Before creating an application, consider the purpose of the app, review the template options (Predictor, What-if, or Optimizer), and note whether the deployment you intend to use is time series or non-time series. Templates only determine the initial configuration of the application; selecting a template does not restrict the app to that purpose. Time series applications, however, require additional setup. See the documentation for [time series applications](ts-app).

The table below describes each template option:
| Template | Description | Default configuration| Time series |
| ---------- | ----------- | ---- | ----- |
| Predictor | Makes predictions for a target feature based on the information provided when the app is created and deployed. | Hides the **What-if and Optimizer** widget. | ✔ |
| What-if | Creates and compares multiple prediction scenarios side-by-side to determine the option with the best outcome. | Displays the **What-if and Optimizer** widget with only the what-if functionality enabled. | ✔ |
| Optimizer | Runs simulations to optimize an outcome for a given goal. This is most effective when you want to optimize for a single row. | Displays the **What-if and Optimizer** widget with only the optimizer functionality enabled.<br><br>The **All Rows** widget displays an **Optimized Prediction** column. | |
## From the Applications tab {: #from-the-applications-tab}
When creating an application from the **Applications** tab, DataRobot uses an active deployment as the basis of the app. To create an application from the tab:
1. Navigate to the **Applications** tab.
2. The available application templates are listed at the top of the page. Click **Use template** next to the template best suited for your use case.

3. A dialog box appears, prompting you to name the application and choose a [sharing option](app-settings#permissions)—_Anyone With the Sharing Link_ automatically generates a link that can be shared with non-DataRobot users while _Invited Users Only_ limits sharing to other DataRobot users, groups, and organizations. The access option determines the initial configuration of the sharing permissions, which can be changed in the [application settings](app-settings).

4. Click **Next: select deployment**.
5. Select a deployment for the application and click **Create**. Note that you must be an owner of the deployment in order to launch an application from it.

After signing in with DataRobot and authorizing access, you are taken to the **Applications** tab while the application builds.

## From the Leaderboard {: #from-the-leaderboard }
To create an application from a specific model on the Leaderboard:
1. After your models are built, navigate to the **Leaderboard** and select the model you want to use to build an application.

2. Click the **Build app** tab and select the appropriate template for your use case.

3. Name the application and select a [sharing option](app-settings#permissions) from the dropdown. Click **Create** when you're done.

4. The new application appears on the **Leaderboard** below the model's app templates as well as on the **Applications** tab.

## From a deployment {: #from-a-deployment}
To create an application from a deployed model:
1. Navigate to the **Deployment** inventory and select the deployment you want to launch the application from.

2. Select **Create Application** from the action menu of your desired deployment.

3. Select the application template you would like to use and click **Next: add app info**.

4. Name the application and choose a [sharing option](app-settings#permissions) from the dropdown. When you're done, click **Create**.

The application is available for use on the **Applications** tab.
### Deployments with an association ID {: #deployments-with-an-association-id }
When creating an application from a deployment with an association ID, note the following:
* Accuracy and data drift are tracked for all single and batch predictions made using the application.
* Accuracy and data drift are _not_ tracked for synthetic predictions (simulations) made in the application using the **What-If and Optimizer** widget.
* You cannot add an association ID to deployments that have already been used to create an application.
In the deployment **Settings**, [add an association ID](accuracy-settings#select-an-association-id). If **Require association ID in prediction requests** is enabled, this setting cannot be disabled after the application is created.

If an application is created from a deployment with an association ID, the association ID is added as a required field to make single predictions in the application. This field cannot be removed in **Build** mode.

| create-app |
---
title: Manage applications
description: View, share, and delete current No-Code AI Apps.
---
# Manage applications {: #manage-applications }
In addition to creating apps from the **Applications** tab, you can view all existing applications that you have created or have been shared with you.

The table below describes the elements and available actions on the **Applications** tab when populated:
| | Element | Description |
|---|---|---|
|  | Templates | Deploys a new application using a template. For more information, see the section on [templates and creating applications](create-app). |
|  | Open | Opens an application where you can then access the following pages:<ul><li>[Application](use-apps/index): The end-user application where you test different configurations before sharing.</li><li>[**Build** page](edit-apps/index): Allows you to edit the configuration of an application.</li><li>[**Settings** page](app-settings): Allows you to edit the general configuration and permissions, as well as view app usage information.</li></ul> |
|  | Actions menu | Duplicates, shares, or deletes an application. |
## Duplicate applications {: #duplicate-applications }
The duplicate functionality allows you to create a copy of an existing application along with any predictions made in it. This is useful if you want to share an application with another user, but don't want their changes to affect your application, or when creating multiple iterations of an application.
1. Click the menu icon  next to the app you want to duplicate and select **Duplicate**.

2. This opens the **Duplicate Application** window, where you can name the application and enter a description.

3. Select the box next to **Copy Predictions** to carry over any predictions made with the original application.
4. To finish creating a copy of the application, click **Duplicate**.
## Share applications {: #share-applications}
The sharing capability allows [appropriate user roles](roles-permissions#role-priority-and-sharing) to manage permissions and share an application with users, groups, and organizations, as well as recipients outside of DataRobot. This is useful, for example, for allowing others to use your application without requiring them to have the expertise to create one.
!!! warning
When multiple users have access to the same application, each user can see, edit, and overwrite changes or predictions made by another user, as well as view their uploaded datasets.
You can access sharing functionality from three different areas:
- The **Applications** tab.
- The application's [**Home**](use-apps/index#ui-overview) page.
- The application's [**Settings > Permissions**](app-settings#permissions) tab in Build mode.
To share from the **Applications** tab, click the menu icon  next to the app you want to share and select **Share**.
This opens the **Share** dialog, which lists each associated user and their role. Editors can share an application with one or more users or groups, or the entire organization. Additionally, you can share an application with non-DataRobot users with a sharing link.
=== "Users"
1. To add a new user, enter their username in the **Share with** field.

2. Choose their role from the dropdown.

3. Select **Send notification** to send an email notification and **Add note** to add additional details to the notification.

4. Click **Share**.
=== "Groups and organizations"
1. Select either the **Groups** or **Organizations** tab in the **Share** dialog.

2. Enter the group or organization name in the **Share with** field.
3. Determine the role for permissions.
4. Click **Share**. The app is shared with—and the role is applied to—every member of the designated group or organization.
=== "Anyone With a Sharing Link"
The link that appears at the top of the **Share** dialog allows you to share No-Code AI Apps with end-users who don't have access to DataRobot.

You can revoke access to a sharing link by generating a new link in **Permissions**. To do so, open the application and click **Build**. Then, navigate to [**Settings > Permissions**](app-settings). Under the sharing link, click **Generate new link**.
The following actions are also available in the **Share** dialog:
* To remove a user, click the **X** button to the right of their role.
* To re-assign a user's role, click on the assigned role and assign a new one from the dropdown.

See the [Sharing](sharing) section for more information.
## Delete an application {: #delete-an-application}
If you have the appropriate [permissions](roles-permissions#no-code-ai-app-roles), you can delete an application by opening the menu  and clicking **Delete**. This action initiates an email notification to all users with sharing privileges for the application.
| current-app |
---
title: Time series applications
description: Use No-Code AI Apps to consume time series insights.
---
# Time series applications {: #time-series-applications }
With No-Code AI Apps, you can create Predictor and What-if applications from time series deployments—single series and multiseries. Time series deployments require [additional setup](#configure-a-time-series-deployment) before creating an application and offer [unique insights](#what-if-widget) to time series use cases, including creating simulations that visualize how adjusting known in advance features affects a forecasted prediction and comparing predicted vs. actuals for a given time range.
## Configure a time series deployment {: #configure-a-time-series-deployment }
When creating an application from a time series deployment, there are some additional settings required. To configure the time series deployment, go to the **Deployment** inventory, select a time series deployment, and navigate to the **Settings** tab.

Use the table below to configure the appropriate deployment settings for a time series application:
| Setting | Description |
| ---------- | ----------- |
| Association ID | **Required**. [Enter the Series ID in the **Association ID** field](accuracy-settings#association-ids-for-time-series-deployments). |
| Require association ID in prediction requests | Toggle on to require an association ID in batch predictions. This prevents you from uploading a dataset without an association ID, which may affect the accuracy of the predictions. |
| Enable target monitoring | **Required**. Must be toggled on for time series applications. |
| Enable feature drift tracking | **Required**. Must be toggled on for time series applications. |
| Enable automatic actuals feedback for time series models | Toggle on to have DataRobot add actuals based on prediction file data. |
| Track attributes for segmented analysis of training data and predictions | **Required**. Must be toggled on for time series applications. |
## Create an application {: #create-an-application}
Before starting, review the [considerations](app-builder/index#considerations) and [deployment settings](#configure-a-time-series-deployment) for time series applications because some options must be set up prior to model building. [Known in advance](nowcasting#features-known-in-advance) features are also required for What-if applications.
[Create an application](create-app#from-a-deployment) from your time series deployment. Depending on the template you select—either Predictor or What-if—the application includes the following widgets:
Widget | Description | Predictor | What-if
---------- | ----------- | ------ | ----------
Add Data | Allows you to upload and score prediction files. | ✔ | ✔
All Rows | Displays individual predictions. | ✔ | ✔
[Time Series Forecasting Widget](#time-series-forecasting-widget) | Visualizes predictions, actuals, residuals, and forecast data on the **Predicted vs Actuals** and **Prediction Explanations over time** charts. | ✔ | ✔
[What-if](#what-if-widget) | Allows you to create, adjust, and save simulations using known in advance features. | | ✔
??? note "Time series vs. non-time series Predictor apps"
There are a few key differences between time series and non-time series Predictor applications:
* The default configuration for time series applications only includes a **Home** page and the **Add Data** widget for batch predictions.
* You cannot add or remove features from the Time Series widget; you can only modify its appearance, including the chart name and line colors, in the [**Properties** tab](app-widgets#configure-widgets).
* Non-time series applications cannot add a Time Series Forecasting widget in Build mode.
### Customize time series widgets {: #customize-time-series-widgets }
All widgets are pre-configured based on the template you select; however, you can further customize each widget by clicking **Build** and selecting a widget. In addition to the customization options described in [Widgets](app-widgets), each time series widget includes unique customization options:
=== "What-if widget"
On the **Data** tab, you can specify which known in advance features can be used to create scenarios. Click **Manage** to add or remove features.

On the **Properties** tab, you can enable the option to add aggregate predictions for each scenario and choose the aggregation method.

=== "Time Series Forecasting Widget"
If event calendars were uploaded during project creation, you have the option to display events on the widget charts. To display events, enter **Build** mode. With the **Time Series Forecasting Widget** selected, click the **Properties** tab and select the box next to **Show events**.

## Score predictions {: #score-predictions }
Initially, time series widgets do not display data or visualizations unless the deployment has already scored predictions.
??? note "Association IDs in prediction files"
If the association ID is not configured properly in the prediction file, the Time Series Forecasting widget will not display prediction information. Consider the following before uploading a prediction file:
- For single series projects, the association ID entered for the deployment must match the name of a dataset column containing dates.
- For multiseries projects, the association ID entered for the deployment must match the name of a "combined" dataset column—a column with values that are a combination of the series name and date, for example, `Boston_2014_09_12`. The "combined" column is only required in prediction files, not the training dataset.
To score new predictions:
1. Drag-and-drop a prediction file into the **Add Data** widget. A prediction line appears on both charts and Prediction Explanations are calculated (**Predicted vs Actual** chart shown here).
In this example, the file contains sales predictions for `6/14/2014` to `6/20/2014`.


2. Click **Deployments** and select your time series deployment in the **Deployment** inventory.

3. Click **Settings > Data**. Scroll down to **Actuals** and [upload the dataset containing actuals](accuracy-settings#add-actuals). In this example, the file contains actuals for `6/14/2014` to `6/20/2014`. The actuals file must contain the association ID, which you can also find on the Settings page.

!!! note
If the forecast file contains actuals for the range of the initial prediction file, you do not need to upload actuals to the deployment and can proceed to step 5.
In this example, the forecast file would need to contain actuals for `6/14/2014` to `6/20/2014`, the range of the initial prediction, in addition to predictions for `6/21/2014` to `6/27/2014`.
The application displays an _Actuals_ line and calculates _Residuals_—the difference between the prediction and the actuals for a given range—in the Time Series Forecasting widget.
4. Navigate back to your time series application.
5. Drag-and-drop a second prediction file, or forecast file, into the **Add Data** widget. A forecast line appears on both widgets. This forecast file contains predictions for `6/21/2014` to `6/27/2014`.

### What-if widget {: #what-if-widget }
Once the application finishes scoring predictions, the What-if widget displays a forecast line and you can begin creating new scenarios; click **Add Scenario**.
In the resulting window, select a date range for the scenario using the date selector (1), or by populating the date fields (2).

The project's known in advance features are listed on the right side of the widget. In the time series What-if widget, these features serve as your [flexible features](whatif-opt#flexible-features)—features you have control over, for example, launching a marketing campaign on a holiday versus a non-holiday. To create your scenario, enter new values for the features and click **Save**.

??? faq "Where are the rest of my features?"
If some of your features are missing from the widget, go to **Build** mode, select the **What-if** widget, and add them in the **Data** tab. For more information, see the documentation on [adding flexible features](whatif-opt#flexible-features).
Continue creating new scenarios one at a time by adjusting these values until you find one that maximizes the prediction score.

#### Bulk edit scenarios {: #bulk-edit-scenarios }
After scoring predictions and adding scenarios to the What-if widget, if you need to modify the same known-in-advance feature for multiple scenarios, you can do so with the bulk edit feature.
Click **Manage Scenarios** at the top of the What-if widget.

Select the box next to the scenarios you want to edit or **Select All** to modify all existing scenarios. Then, click the pencil icon to the right of **Select All**.

!!! note
If you are modifying a single scenario, click the pencil icon to the right of the scenario you're editing. If you're editing multiple scenarios, click the pencil icon to the right of Select All. When editing multiple scenarios, clicking the pencil icon to the right of a specific scenario only edits that scenario.
Use the (1) slider to select a date range and (2) modify the known in advance features for the selected date range.

Click (3) **Save** and (4) **Update scenario**. Once DataRobot finishes processing the batch prediction job, the updated scenarios appear on the What-if chart.
### Time Series Forecasting widget {: #time-series-forecasting-widget }
The **Time Series Forecasting** widget surfaces insights using two charts: **Predicted vs Actual** and **Prediction Explanations** over time. Use the tabs below to learn more about each chart:
=== "Predicted vs Actual chart"
Similar to [Accuracy Over Time](aot), the **Predicted vs Actual** chart helps you visualize how predictions change over time by plotting predicted and actual values against time, based on forecast distances.

| | Setting | Description |
| ---------- | ----------- | ------- |
|  | Filter data | Hide target (actual) and residual information from the chart. Prediction information cannot be hidden. |
|  | Resolution | View the results by day, week, month, quarter, or year. |
|  | Prediction line | Represents scored predictions. |
|  | Actual line | Represents actual values for the target. |
|  | Forecast line | Represents the prediction based on time, into the future using recent inputs to predict future values. |
|  | Residuals | Represents the difference between predicted and actual values. |
|  | Date range | Drag the handles on the preview panel to bring specific areas into focus on the main chart. |
Hover over any point to view the date, prediction values, and top 3 Prediction Explanations.

=== "Prediction Explanations chart"
To view Prediction Explanations over time for your scored predictions, click the **Prediction Explanations** tab. Every point on the chart represents a separate prediction, and therefore has its own set of Prediction Explanations. Every Prediction Explanation also has its own unique color, allowing you to explore trends in the data.

| | Setting | Description |
| ---------- | ----------- | ------- |
|  | Fade explanations | Allows you to hide either all positive or negative Prediction Explanations. Select the box next to **Fade explanations** and select an option from the dropdown. |
|  | Highlight explanations | Highlights specific features in the Prediction Explanations based on its unique color. Click **Highlight explanations** and select features from the dropdown. |
|  | Resolution | View the results by day, week, month, quarter, or year. |
|  | [Enable segment analysis](ts-segmented) | Creates the specified number of additional rows for the forecast value. Select **Enable segment analysis** and choose a **Forecast distance** from the dropdown. |
|  | Prediction Explanations | Each point represents a separate prediction and each prediction has its own set of explanations, which are grouped by color. |
|  | Prediction line | Represents scored predictions. |
|  | Forecast line | Represents the prediction based on time, into the future using recent inputs to predict future values. |
|  | Date range | Drag the handles on the preview panel to bring specific areas into focus on the main chart. |
#### Forecast Details page {: #forecast-details-page }
The **Forecast Details** page allows you to view additional forecast details, including the average prediction values and up to 10 Prediction Explanations for a selected date, as well as segmented analysis for each forecast distance within the forecast window.

In the **Time Series Forecasting** widget, click a prediction in either the **Predicted vs Actual** or **Prediction Explanations** chart to view the following forecast details for the selected date:


| Description
---------- | -----------
 | The average prediction value in the forecast window.
 | Up to 10 Prediction Explanations for each prediction.
 | Segmented analysis for each forecast distance within the forecast window.
 | Prediction Explanations for each forecast distance included in the segmented analysis.
Click the arrow next to **Forecast details** to return to the main application.
#### Download predictions {: #download-predictions }
After scoring a batch prediction, the **Download** button appears in the Time Series Forecasting widget, allowing you to download the prediction results as a CSV.

!!! note
When downloading predictions from the Time Series Forecasting widget, non-KA features will always be empty.
| ts-app |
# Overview
Applications are highly customizable and shareable; as a result, there are many different ways you can modify and use apps, depending on your deployment type, user role, and use case.
This page provides a basic overview of how you might navigate through an application built from a binary classification model as an "Owner", from configuring your application to using it.
1. After [creating an application](create-app), navigate to the **Applications** tab.
2. Click **Open** to the right of the application you just created.
3. You will be prompted to sign in with DataRobot and authorize access.
4. After doing so, the application opens in Consume mode.
Applications have two modes: **Build** mode, where you customize and configure your applications, and **Consume** mode, where you use the application to make predictions and analyze insights.
5. To modify the application, click **Build** in the upper-right corner; Build mode opens to the page you were viewing in Consume mode. In this case, that's the Home page. Note that each application is preconfigured based on the template you selected upon creation.
6. Applications are made up of widgets; some are required (default widgets) and others are optional. However, all widgets can be customized by selecting them.
Click to select the All Rows widget (a default widget). A panel opens on the left where you can configure the widget. Configuration options vary depending on the widget.
7. On the **Data** tab, you can add or remove features from the view. Click **Manage**, add the **Country** feature, and click **Save**. Notice that Country now appears under Selected features and as a column in the All Rows widget.
8. Click the **Properties** tab. On this tab, you can configure the widget's appearance and behavior.
9. Open the **Widgets** menu and drag-and-drop the Bar chart (an optional widget) onto the canvas.
| app-overview |
---
title: AI Apps
description: Create and configure AI-powered applications using a no-code interface to enable core DataRobot services without having to build models and evaluate their performance in DataRobot.
---
# AI Apps {: #ai-apps}
{% include 'includes/no-code-app-intro.md' %}
The following sections describe the documentation available for DataRobot No-Code AI Apps:
Topic | Describes...
----- | ------
[Create applications](create-app) | Create applications from the **Applications** tab or **Deployment** inventory.
[Manage applications](current-app) | Launch, share, duplicate, or delete applications from the **Applications** tab.
[Edit applications](edit-apps/index) | Configure your application widgets, pages, settings, and more.
[Use applications](use-apps/index) | Use a configured application to make predictions and interpret insights from your data.
[Time series applications](ts-app) | Create time series applications and consume insights in the What-if forecasting widget.
## Considerations {: #considerations}
Consider the following before deploying an application.
* The following project types are supported by No-Code AI Apps:
* Binary classification
* Regression
* Time series
* Geospatial
* Multiclass
* No-Code AI Apps do not support features generated by DataRobot.
* Exponentiation (i.e., `**`) is not a supported feature transformation for custom features.
* Users accessing applications via a sharing link cannot:
* Make batch predictions from assets in the AI Catalog.
* Submit a batch prediction to create the forecast or scenarios on the time series What-If app.
* Chart widgets display two types of data:
* Raw data (training dataset file): Chart widgets will only display training data when the app is created from a project in the AI Catalog.
* Prediction data: The prediction results from all single and batch predictions made in the application.
* Note the following when creating an application from a Leaderboard model:
* You cannot create an application from time series models on the Leaderboard.
* You cannot duplicate apps that have been created from Leaderboard models.
* Organizations are limited to 200 applications. To remove this limit, contact your DataRobot representative.
* For users with _Read_ access only, Prediction Explanations must be manually computed for the model. If a user has _User_ access to the project, Prediction Explanations are automatically computed.
* While there is no limit to the number of flexible features you can specify in Optimizer applications, if the Grid Search algorithm is selected, the grid cannot contain more than 10,000 points; DataRobot returns an error if the grid exceeds this limit. DataRobot does not recommend using Grid Search if you are optimizing more than three features.
### Time series applications {: #time-series-applications }
Note the following before creating a time series application:
* The project must use `hours`, `days`, `weeks`, `months`, `quarters`, or `years` as the time unit.
* To include calendar events in the widget, [add a calendar file](ts-adv-opt#calendar-files) to your project and include calendar events for the timeline of the training dataset and forecasting window.
* [Known in advance (KA)](ts-adv-opt#set-known-in-advance-ka) features must be set during project creation and supported by the deployed model. Some models, for example Baseline Only models, do not support KA features even if the project is configured to use them.
* You cannot train a model on the [_Time Series Informative Features_](ts-feature-lists#automatically-created-feature-lists) list after the model has been deployed to production.
* The project must be deployed to a DataRobot prediction server.
* The deployment must have an association ID and the appropriate deployment settings configured.
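For reference, the following is a minimal sketch, assuming the DataRobot Python client, placeholder IDs, and a hypothetical association ID column, of deploying a model to a prediction server and configuring its association ID settings:

```python
import datarobot as dr

dr.Client(token="YOUR_API_TOKEN", endpoint="https://app.datarobot.com/api/v2")

# Deploy the model to the first available prediction server (placeholder model ID).
prediction_server = dr.PredictionServer.list()[0]
deployment = dr.Deployment.create_from_learning_model(
    model_id="MODEL_ID",
    label="Time series What-if deployment",
    default_prediction_server_id=prediction_server.id,
)

# Configure the association ID required by time series applications (hypothetical column name).
deployment.update_association_id_settings(
    column_names=["association_id"],
    required_in_prediction_requests=True,
)
```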
| index |
---
title: Reduce 30-day readmissions rate
description: Build ML models with the DataRobot UI to reduce the 30-Day readmissions rate by predicting at-risk patients.
---
# Reduce 30-day readmissions rate {: #reduce-30day-readmissions-rate }
This page outlines a use case to reduce the 30-day hospital readmission rate by predicting at-risk patients. It is captured below as a UI-based walkthrough. It is also available as a [Jupyter notebook](readmission.ipynb) that you can download and execute.
{% include 'includes/hospital-readmit-include.md' %}
### No-Code AI Apps {: #no-code-ai-apps }
Consider building a custom application where stakeholders can interact with the predictions and record the outcomes of the investigation. Once the model is deployed, predictions can be consumed for use in the [decision process](#decision-process). For example, this [No-Code AI App](app-builder/index) is an easily shareable, AI-powered application using a no-code interface:


### Notebook demo {:#notebook-demo}
See the notebook version of this accelerator [here](readmission.ipynb).
| readmit |
---
title: Likelihood of a loan default
description: AI models for predicting the likelihood of a loan default can be deployed within the review process to score and rank all new flagged cases.
---
# Likelihood of a loan default {: #likelihood-of-a-loan-default }
This page outlines the use case to reduce defaults and minimize risk by predicting the likelihood that a borrower will not repay their loan. It is captured below as a UI-based walkthrough. It is also available as a [Jupyter notebook](loan-default-nb.ipynb) that you can download and execute.
{% include 'includes/loan-defaults-include.md' %}
### No-Code AI Apps {: #no-code-ai-apps }
A [no-code](app-builder/index) or Streamlit app can be useful for showing aggregate results of the model (e.g., risky transactions at an entity level). Consider building a custom application where stakeholders can interact with the predictions and record the outcomes of the investigation. Once the model is deployed, predictions can be consumed for use in the [decision process](#decision-process). For example, this [No-Code AI App](app-builder/index) is an easily shareable, AI-powered application using a no-code interface:


### Notebook demo {:#notebook-demo}
See the notebook version of this accelerator [here](loan-default-nb.ipynb).
| loan-default |
---
title: Triage insurance claims
description: Evaluate the severity of an insurance claim in order to triage it effectively.
---
# Triage insurance claims {: #triage-insurance-claims }
This page outlines a use case that assesses claim complexity and severity as early as possible to optimize claim routing, ensure the appropriate level of attention, and improve claimant communications. It is captured below as a UI-based walkthrough. It is also available as a [Jupyter notebook](claims.ipynb) that you can download and execute.
{% include 'includes/triage-insurance-claims-include.md' %}
### Data preparation {: #data-preparation }
The example data is organized at the claim level; each row is a claim record, with all claim attributes captured at first notice of loss (FNOL). The target variable, `Incurred`, is the total payment for a claim when it is closed, so there are no open claims in the data.
A workers’ compensation insurance carrier’s claim database is usually stored at the transaction level. That is, a new record is created for each change to a claim, such as partial claim payments and reserve changes. This use case snapshots the claim (and all attributes related to the claim) when it is first reported, and again when the claim is closed to capture the target (total payment). Policy-level information can be predictive as well, such as class, industry, job description, employee tenure, size of the employer, and whether there is a return-to-work program. Policy attributes should be joined with the claims data to form the modeling dataset.
### Data evaluation {: #data-evaluation }
Once the modeling data is uploaded to DataRobot, [EDA](eda-explained) produces a brief summary of the data, including descriptions of feature type, summary statistics for numeric features, and the distribution of each feature. A [data quality assessment](data-quality) helps ensure that only appropriate data is used in the modeling process. Navigate to the **Data** tab to learn more about your data.
#### Exploratory Data Analysis {: #exploratory-data-analysis }
Click each feature to see [histogram](histogram) information such as the summary statistics (min, max, mean, std) of numeric features or a histogram that represents the relationship of a feature with the target.

DataRobot automatically performs data quality checks. In this example, it has detected outliers for the target feature. Click **Show Outliers** to view them all (outliers are common in insurance claims data). To avoid bias introduced by the outlier, a common practice is to cap the target, such as capping it to the 95th percentile. This cap is especially important for linear models.
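For example, a minimal sketch of capping the target before upload, assuming a pandas DataFrame with an `Incurred` column and a hypothetical file name:

```python
import pandas as pd

claims = pd.read_csv("claims_fnol.csv")                 # hypothetical training file
cap = claims["Incurred"].quantile(0.95)                 # 95th-percentile cap for the target

# Clip the target so extreme outliers do not dominate training (especially for linear models).
claims["Incurred"] = claims["Incurred"].clip(upper=cap)
claims.to_csv("claims_fnol_capped.csv", index=False)
```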
#### Feature Associations {: #feature-associations }
Use the [**Feature Associations**](feature-assoc) tab to visualize the correlations between each pair of the input features. For example, in the plot below, the features `DaysWorkedPerWeek` and `PartTimeFullTime` (top-left corner) have strong associations and are therefore "clustered" together. Each color block in this matrix is a cluster.

## Modeling and insights {: #modeling-and-insights }
After modeling completes, you can begin interpreting the model results.
### Feature Impact {: #feature-impact }
[**Feature Impact**](feature-impact) reveals the association between each feature and the model target—the key drivers of the model. **Feature Impact** ranks features based on feature importance, from the most important to the least important, and also shows the relative importance of those features. In the example below we can see that `InitialCaseEstimate` is the most important feature for this model, followed by `ClaimDescription`, `WeeklyRate`, `Age`, `HoursWorkedPerWeek`, etc.

This example indicates that features after `MaritalStatus` contribute little to the model. For example, `gender` has minimal contribution to the model, indicating that claim severity doesn't vary by the gender of the claimant. If you create a new feature list that does not include gender (and other features less impactful than `MaritalStatus`) and only includes the most impactful features, the model accuracy should not be significantly impacted. A natural next step is to [create a new feature list](work-with-feature-lists#create-a-feature-list) with only the top features and rerun the model. DataRobot automatically creates a new feature list, "DR Reduced Features", by including features that have a cumulative feature impact of 95%.
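As an illustration, the sketch below, assuming the DataRobot Python client, placeholder IDs, and an illustrative impact cutoff, builds a reduced feature list from Feature Impact and retrains the model on it:

```python
import datarobot as dr

dr.Client(token="YOUR_API_TOKEN", endpoint="https://app.datarobot.com/api/v2")

project = dr.Project.get("PROJECT_ID")                  # placeholder IDs
model = dr.Model.get(project.id, "MODEL_ID")

# Keep features above an illustrative normalized-impact cutoff, dropping gender explicitly.
impact = model.get_or_request_feature_impact()
top_features = [
    row["featureName"]
    for row in impact
    if row["impactNormalized"] >= 0.05 and row["featureName"] != "gender"
]

reduced_list = project.create_featurelist("Top features (no gender)", top_features)
model.train(featurelist_id=reduced_list.id)             # queue retraining on the reduced list
```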
### Partial Dependence plot {: #partial-dependence-plot }
Once you know which features are important to the model, it is useful to know how each feature affects predictions. This can be seen in [**Feature Effects**](feature-effects) and in particular a model's partial dependence plot. In the example below, notice the partial dependence for the `WeeklyRate` feature. You can observe that claimants with lower weekly pay have lower claim severity, while claimants with higher weekly pay have higher claim severity.

### Prediction Explanations {: #prediction-explanations }
When a claims adjuster sees a low prediction for a claim, they are likely to initially ask what the drivers are behind such a low prediction. The [**Prediction Explanation**](pred-explain/index) insight, provided at an individual prediction level, can help claim adjusters understand how a prediction is made, increasing confidence in the model. By default, DataRobot provides the top three explanations for each prediction, but you can request up to 10 explanations. Model predictions and explanations can be downloaded as a CSV and you can control which predictions are populated in the CSV by specifying the thresholds for high and low predictions.
The graph below shows the top three explanations for the 3 highest and lowest predictions. The graph shows that, generally, high predictions are associated with older claimants and higher weekly salary, while the low predictions are associated with a lower weekly salary.

### Word Cloud {: #word-cloud }
The feature `ClaimDescription` is an unstructured text field. DataRobot builds text mining models on textual features, and the output from those text-mining models is used as input into subsequent modeling processes. Below is a [**Word Cloud**](word-cloud) for `ClaimDescription`, which shows the keywords parsed out by DataRobot. The size of a word indicates how frequently it appears in the data: `strain` appears very often while `fractured` does not. Color indicates severity: both `strain` and `fractured` (red words) are associated with high severity claims while `finger` and `eye` (blue words) are associated with low severity claims.

## Evaluate accuracy {: #evaluate-accuracy }
The following insights help evaluate accuracy.
### Lift Chart {: #lift-chart }
The [**Lift Chart**](lift-chart) shows how effective the model is at differentiating lowest risks (on the left) from highest risks (on the right). In the example below, the blue curve represents the average predicted claim cost, and the orange curve indicates the average actual claim cost. The upward slope indicates the model has effectively differentiated the claims of low severity (close to 0) on the left and those of high severity (~45K) on the right. The fact that the actual values (orange curve) closely track the predicted values (blue curve) tells you that the model fits the data well.

Note that DataRobot only displays lift charts on validation or holdout partitions.
## Post-processing {: #post-processing }
A prediction for claim severity can be used for multiple different applications, requiring different post-processing steps for each. Primary insurers may use the model predictions for claim triage, initial case reserve determination, or reinsurance reporting. For example, for claim triage at FNOL, the model prediction can be used to determine where the claim should be routed. A workers’ compensation carrier may decide:
* All claims with predicted severity under $5000 go to straight-through processing (STP).
* Claims between $5000 and $20,000 go through the standard process.
* Claims over $20,000 are assigned a nurse case manager.
* Claims over $500,000 are also reported to a reinsurer, if applicable.
Another carrier may decide to pass 40% of claims to STP; 55% to regular process; and 5% get assigned a nurse case manager so that thresholds can be determined accordingly. These thresholds can be programmed into the business process so that claims go through the predesigned pipeline once reported and then get routed appropriately. Note that companies with STP should carefully design their claim monitoring procedures to ensure unexpected claim activities are captured.
In order to test these different assumptions, design single or multiple A/B tests and run them in sequence or parallel. The power analysis and p-value need to be set before the tests in order to determine the number of observations required before stopping the test. In designing the test, think carefully about the drivers of profitability. Ideally you want to allocate resources based on the change they can effect, not just on the cost of the claim. For example, fatality claims are relatively costly but not complex, and so often can be assigned to a very junior claims handler. Finally, at the end of the A/B tests, you can identify the best combination based on the profit of each test.
## Predict and deploy {: #predict-and-deploy }
You can use the DataRobot UI or REST API to deploy a model, depending on how ready it is to be put into production. However, before the model is fully integrated into production, a pilot may be beneficial for:
* Testing the model performance using new claims data.
* Monitoring unexpected scenarios so a formal monitoring process can be designed or modified accordingly.
* Increasing the end users’ confidence in using the model outputs to assist business decision making.
Once stakeholders feel comfortable about the model and also the process, integration of the model with production systems can maximize the value of the model. The outputs from the model can be customized to meet the needs of claim management.
### Decision process {: #decision-process }
Deploy the selected model into your desired decision environment to embed the predictions into your regular business decisions. Insurance companies often have a separate system for claims management. For this particular use case, it may be in the best interest of the users to integrate the model with the claims management system, and with visualization tools such as Power BI or Tableau.
If a model is integrated within an insurer’s claim management system when a new claim is reported, FNOL staff can record all the available information in the system. The model can then be run in the background to evaluate the ultimate severity. The estimated severity can help suggest initial case reserves and appropriate route for further claim handling (i.e., STP, regular claim adjusting, or experienced claim adjusters, possibly with nurse case manager involvement and/or reinsurance reporting).
Carriers will want to include rules-based decisions as well, to capture decisions that are driven by considerations other than ultimate claim severity.
Most carriers do not set initial reserves for STP claims. For those claims beyond STP, you can use model predictions to set initial reserves at the first notice of loss. Claims adjusters and nurse case managers will only be involved for claims over certain thresholds. The reinsurance reporting process may benefit from the model predictions as well; instead of waiting for claims to develop to very high severity, the reporting process may start at FNOL. Reinsurers will certainly appreciate the timely reporting of high severity claims, which will further improve the relationship between primary carriers and reinsurers.
### Decision stakeholders {: #decision-stakeholders }
Consider the following to serve as decision stakeholders:
* Claims management team
* Claims adjusters
* Reserving actuaries
### Model monitoring {: #model-monitoring }
Carriers implementing a claims severity model usually have strictly defined business rules to ensure abnormal activities will be captured before they get out of control. Triggers based on abnormal behavior (for example, abnormally high predictions, too many missing inputs, etc.) can trigger manual reviews. Use the [performance monitoring capabilities](monitor/index)—especially service health, data drift, and accuracy to produce and distribute regular reports to stakeholders.
### Implementation considerations {: #implementation-considerations }
A claim severity model at FNOL should be one of a series of models built to monitor claim severity over time. Besides the FNOL Model, build separate models at different stages of a claim (e.g., 30 days, 90 days, 180 days) to leverage the additional information available and further evaluate the claim severity. Additional information comes in over time regarding medical treatments and diagnoses and missed work, allowing for improved accuracy as a claim matures.
### No-Code AI Apps {: #no-code-ai-apps }
Consider building a custom application where stakeholders can interact with the predictions and record the outcomes of the investigation. Once the model is deployed, predictions can be consumed for use in the [decision process](#decision-process). For example, this [No-Code AI App](app-builder/index) is an easily shareable, AI-powered application using a no-code interface:

### Notebook demo {:#notebook-demo}
See the notebook version of this accelerator [here](claims.ipynb).
| insurance-claims |
---
title: Anti-Money Laundering (AML) Alert Scoring
description: Build a model that uses historical data, including customer and transactional information, to identify which alerts resulted in a Suspicious Activity Report (SAR).
---
# Anti-Money Laundering (AML) Alert Scoring {: #anti-money-laundering-aml-alert-scoring }
This use case builds a model that uses historical data, including customer and transactional information, to identify which alerts resulted in a Suspicious Activity Report (SAR). The model can then be used to assign a suspicious activity score to future alerts and improve the efficiency of an AML compliance program using rank ordering by score. It is captured below as a UI-based walkthrough. It is also available as a [Jupyter notebook](anti_money_laundering.ipynb) that you can download and execute.
Download the sample training dataset [here](https://s3.amazonaws.com/datarobot-use-case-datasets/DR_Demo_AML_Alert_train.csv).
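As a quick start, here is a minimal sketch, assuming the DataRobot Python client and placeholder credentials; the target column name (`SAR`) is also an assumption, so adjust it to match the dataset:

```python
import datarobot as dr

dr.Client(token="YOUR_API_TOKEN", endpoint="https://app.datarobot.com/api/v2")

url = "https://s3.amazonaws.com/datarobot-use-case-datasets/DR_Demo_AML_Alert_train.csv"
project = dr.Project.create(sourcedata=url, project_name="AML Alert Scoring")

# Run Quick Autopilot against the assumed target column and wait for it to finish.
project.set_target(target="SAR", mode=dr.AUTOPILOT_MODE.QUICK)
project.wait_for_autopilot()
```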
{% include 'includes/aml-1-include.md' %}
{% include 'includes/aml-2-include.md' %}
{% include 'includes/aml-3-include.md' %}
### No-Code AI Apps {: #no-code-ai-apps }
Consider building a custom application where stakeholders can interact with the predictions and record the outcomes of the investigation. Once the model is deployed, predictions can be consumed for use in the [decision process](#decision-process). For example, this [No-Code AI App](app-builder/index) is an easily shareable, AI-powered application using a no-code interface:


### Notebook demo {:#notebook-demo}
See the notebook version of this accelerator [here](anti_money_laundering.ipynb).
{% include 'includes/aml-4-include.md' %}
| money-launder |
---
title: Business accelerators
description: A catalog of UI-based, end-to-end walkthroughs that address common industry-specific problems.
---
# Business accelerators {: #business-accelerators }
This section provides access to a catalog of UI-based, end-to-end walkthroughs, based on best practices and patterns, that address common industry-specific problems.
Use case | Description
-------- | -----------
[Business application briefs](biz-app-briefs) | A variety of quick summary applications with an accompanying No-Code AI App to provide an overview of possible uses.
[Purchase card fraud detection](p-card-detect) | Helps organizations that employ purchase cards for procurement to monitor for fraud and misuse.
[Likelihood of a loan default](loan-default) | Helps minimize risk by predicting the likelihood that a borrower will not repay their loan.
[Late shipment predictions](late-ship) | Helps supply chain managers evaluate root cause and then implement short-term and long-term adjustments that prevent shipping delays.
[Reduce hospital readmission rates](readmit) | Helps to reduce the 30-Day readmissions rate by predicting at-risk patients.
[Triage insurance claim](insurance-claims) | Helps insurers assess claim complexity and severity as early as possible for optimized routing and handling.
[Fraudulent claim detection](fraud-claims) | Helps reduce the risk of fraudulent claims while increasing claim processing efficiency.
| index |
---
title: Fraudulent claim detection
description: Improve the accuracy in predicting which insurance claims are fraudulent.
---
# Fraudulent claim detection {: #fraudulent-claim-detection }
This page outlines the use case to improve the accuracy in predicting which insurance claims are fraudulent. It is captured below as a UI-based walkthrough. It is also available as a [Jupyter notebook](pred-fraud-v3.ipynb) that you can download and execute.
{% include 'includes/fraud-claims-include.md' %}
### No-Code AI Apps {: #no-code-ai-apps }
Consider building a custom application where stakeholders can interact with the predictions and record the outcomes of the investigation. Once the model is deployed, predictions can be consumed for use in the [decision process](#decision-process). For example, this [No-Code AI App](app-builder/index) is an easily shareable, AI-powered application using a no-code interface:


### Notebook demo {:#notebook-demo}
See the notebook version of this accelerator [here](pred-fraud-v3.ipynb).
| fraud-claims |
---
title: Business application briefs
description: A variety of quick summary applications with an accompanying No-Code AI App to provide an overview of possible uses.
---
# Business application briefs {: #business-application-briefs }
This section provides a variety of quick use case summaries, each with an accompanying [No-Code AI App](app-builder/index), as examples of possible uses for predictive models in various industries:
{% include 'includes/no-code-app-intro.md' %}
* [Parts failure predictions](#parts-failure-predictions)
* [Early loan payment predictions](#early-loan-payment-predictions)
* [Predictions for fantasy baseball](#predictions-for-fantasy-baseball)
## Parts failure predictions {: #parts-failure-predictions }
According to a study done by [Aberdeen Group](https://www.aberdeen.com/techpro-essentials/playing-russian-roulette-with-your-infrastructure-can-lead-to-big-downtime/){ target=_blank }, unplanned equipment failure can cost more than $260K an hour and can have associated health and safety risks. Existing best practices, such as scheduled preventative maintenance, can mitigate failure, but will not catch unusual, unexpected failures. Scheduled maintenance can also be dangerously conservative, resulting in excessive downtime and maintenance costs. A predictive model can signal your maintenance crew when an impending issue is likely to occur.
This proactive approach to maintenance (automating related processes) allows operators to:
1. Identify subtle or unknown issues with equipment operation in the collected sensor data.
2. Schedule maintenance when it is truly needed.
3. Be automatically notified to intervene when a sudden failure is imminent.
Leveraging collected sensor data not only saves your organization unplanned downtime, but also allows you to prevent unintended consequences of equipment failure.
A sample app:

Or, consider building a predictive model using a [Python notebook](part-fail.ipynb).
## Early loan payment predictions {: #early-loan-payment-predictions }
When a borrower takes out a 30-year mortgage, usually they won’t finish paying back the loan in exactly thirty years—it could be later or earlier, or the borrower may refinance. For regulatory purposes—and to manage liabilities—banks need to accurately forecast the effective duration of any given mortgage. Using DataRobot, mortgage loan traders can combine their practical experience with modeling insights to understand which mortgages are likely to be repaid early.
Between general economic data and individual mortgage records, there’s plenty of data available to predict early loan prepayment. The challenge lies in figuring out which features, in which combination, with which modeling technique, will yield the most accurate model. Furthermore, federal regulations require that models be fully transparent so that regulators can verify that they are non-discriminatory and robust.
A sample app:

## Predictions for fantasy baseball {: #predictions-for-fantasy-baseball }
Millions of people play fantasy baseball using leagues that are typically draft- or auction-based. Choosing a team based on your favorite players—or simply on last year's performance without any regard for regression to the mean—is likely to field a weaker team. Because baseball is one of the most "documented" of all sports (statistics-wise), you can derive a better estimate of each player's true talent level and their likely performance in the coming year using machine learning. This allows for better drafting and helps avoid overpaying for players coming off of "career" seasons.
When drafting players for fantasy baseball, you must make decisions based on the player's performance over their career to date, as well as variables like the effects of aging. Basing evaluation on personal interpretation of the player's performance is likely to cause you to overvalue a player's most recent performance. In other words, it's common to overvalue a player coming off a career year or undervalue a player coming off a bad year. The goal is to generate a better estimate of the player's value in the next year based on what he has done in prior years. If you build a machine learning model to predict a player's performance in the next year based on their previous performance, it will help you identify when over- or under-performance is a fluke, and when it is an indicator of that player’s future performance.
A sample app:

Or, consider building a predictive model using a [Python notebook](fantasy.ipynb).
| biz-app-briefs |
---
title: Purchase card fraud detection
description: Helps organizations that employ purchase cards for procurement monitor for fraud and misuse.
---
# Purchase card fraud detection {: #purchase-card-fraud-detection }
In this use case you will build a model that can review 100% of purchase card transactions and identify the riskiest for further investigation via manual inspection. In addition to automating much of the resource-intensive tasks of reviewing transactions, this solution can also provide high-level insights such as aggregating predictions at the organization level to identify problematic departments and agencies to target for audit or additional interventions.
Sample training data used in this use case:
* [`synth_training_fe.csv`](https://datarobot.box.com/shared/static/191i15wnpzshmmfatjedvxvlobhk5fs1.csv){ target=_blank }
[Click here](#data) to jump directly to the hands-on sections that begin with working with data. Otherwise, the following several paragraphs describe the business justification and problem framing for this use case.
## Background {: #background }
Many auditor’s offices and similar fraud shops rely on business rules and manual processes to manage their operations of thousands of purchase card transactions each week. For example, an office reviews transactions manually in an Excel spreadsheet, leading to many hours of review and missed instances of fraud. They need a way to simplify this process drastically while also ensuring that instances of fraud are detected. They also need a way to seamlessly fold each transaction’s risk score into a front-end decision application that will serve as the primary way to process their review backlog for a broad range of users.
Key use case takeaways:
**Strategy/challenge**: Organizations that employ purchase cards for procurement have difficulty monitoring for fraud and misuse, which can comprise 3% or more of all purchases. Much of the time spent by examiners is quite manual and involves sifting through mostly safe transactions looking for clear instances of fraud or applying rules-based approaches that miss out on risky activity.
**Model solution**: ML models can review 100% of transactions and identify the riskiest for further investigation. Risky transactions can be aggregated at the organization level to identify problematic departments and agencies to target for audit or additional interventions.
## Use case applicability {: #use-case-applicability }
The following table summarizes aspects of this use case:
Topic | Description
---- | ----
**Use case type** | Public Sector / Banking & Finance / Purchase Card Fraud Detection
**Target audience** | Auditor’s office or fraud investigation unit leaders, fraud investigators or examiners, data scientists
**Desired outcomes**| <ul><li>Identify additional fraud</li><li>Increase richness of fraud alerts</li><li>Provide enterprise-level visibility into risk</li></ul>
**Metrics/KPIs** | <ul><li>Current fraud rate</li><li>Percent of investigated transactions that end in fraud determination</li><li>Total cost of fraudulent transactions & estimated undetected fraud</li><li>Analyst hours spent reviewing fraudulent transactions</li></ul>
**Sample dataset** | [`synth_training_fe.csv`](https://datarobot.box.com/shared/static/191i15wnpzshmmfatjedvxvlobhk5fs1.csv)
The solution proposed requires the following high-level technical components:
* Extract, Transform, Load (ETL): Cleaning of purchase card data (feed established with bank or processing company, e.g., TSYS) and additional feature engineering.
* Data science: Modeling of fraud risk using AutoML, selection/downweighting of features, tuning of prediction threshold, deployment of model and monitoring via MLOps.
* Front-end app development: Embedding of data ingest and predictions into a front-end application (e.g., Streamlit).
## Solution value {: #solution-value }
The primary issues and corresponding opportunities that this use case addresses include:
Issue | Opportunity
----- | -----
Government accountability / trust | Reviewing 100% of procurement transactions to increase public trust in government spending.
Undetected fraudulent activity | Identifying 40%+ more risky transactions ($1M+ value, depending on organization size).
Staff productivity | Increasing personnel efficiency by manually reviewing only the riskiest transactions.
Organizational visibility | Providing high-level insight into areas of risk within the organization.
## Sample ROI calculation {: #sample-roi-calculation }
Calculating ROI for this use case can be broken down into two main components:
* Time saved by pre-screening transactions
* Detecting additional risky transactions
!!! note
As with any ROI or valuation exercise, the calculations are "ballpark" figures or ranges to help provide an understanding of the magnitude of the impact, rather than an exact number for financial accounting purposes. It is important to consider the calculation methodology and any uncertainty in the assumptions used as it applies to your use case.
### Time savings from pre-screening transactions {: #time-savings-from-prescreening-transactions }
Consider how much time can be saved by a model _automatically_ detecting True Negatives (correctly identified as "safe"), in contrast to an examiner manually reviewing transactions.
#### Input Variables
Variable | Value
-------- | -----
Model's True Negative + False Negative Rate <br /> This is the share of transactions that the model screens out as not needing review (False Positives and True Positives still require manual review, and so do not have a time savings component) | 95%
Number of transactions per year | 1M
Percent (%) of transactions manually reviewed | 25% (assumes the other 75% are not reviewed)
Average time spent on manual review (per transaction) | 2 minutes
Hourly wage (fully loaded FTE) | $30
#### Calculations
Variable | Formula | Value
-------- | ------- | -----
Transactions reviewed manually by examiner today | 1M * 25% | 250,000
Transactions pre-screened by model as _not_ needing review | 1M * 95% | 950,000
Transactions identified by model as needing manual review | 1M - 950,000 | 50,000
Net transactions no longer needing manual review | 250,000 - 50,000 | 200,000
Hours of transactions reviewed manually per year | 200,000 * (2 minutes / 60 minutes) | 6,667 hours
Cost savings per year | 6,667 * $30 | **$200,000**
### Calculating additional fraud detected annually {: #calculating-additional-fraud-detected-annually }
#### Input Variables
Variable | Value
-------- | -----
Number of transactions per year | 1M
Percent (%) of transactions manually reviewed | 25% (assumes the other 75% are not reviewed)
Average transaction amount | $300
Model True Positive rate | 2% (assume the model detects “risky” not necessarily fraud)
Model False Negative rate | 0.5%
Percent (%) of risky transactions that are actually fraud | 20%
#### Calculations
Variable | Formula | Value
-------- | ------- | -----
Number of transactions that are now reviewed by model that were not previously | 1M * (100%-25%) | 750,000
Number of transactions that are accurately identified as risky | 750k * 2% | 15,000
Number of risky transactions that are fraud | 15,000 * 20% | 3,000
Value ($) of newly identified fraud | 3,000 * $300 | $900,000
Number of transactions that are False Negatives (for risk of fraud) | 0.5% * 1M | 5,000
Number of False Negatives that would have been manually reviewed | 5,000 * 25% | 1,250
Number of False Negative transactions that are actually fraud | 1,250 * 20% | 250
Value ($) of missed fraud | 250 * $300 | $75,000
Net Value ($) | $900,000 - $75,000 | $825,000
#### _Total annual savings estimate: $1.025M_
!!! tip
Communicate the model’s value in a range to convey the degree of uncertainty based on assumptions taken. For the above example, you might convey an estimated range of $0.8M - $1.1M.
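For convenience, the arithmetic from the two tables above can be reproduced in a few lines, using the example inputs shown; substitute your own assumptions:

```python
# Time savings from pre-screening transactions
transactions = 1_000_000
reviewed_share = 0.25          # share of transactions manually reviewed today
auto_cleared_share = 0.95      # model's True Negative + False Negative rate
review_minutes, hourly_wage = 2, 30

reviewed_today = transactions * reviewed_share                       # 250,000
flagged_by_model = transactions * (1 - auto_cleared_share)           # 50,000
time_savings = (reviewed_today - flagged_by_model) * (review_minutes / 60) * hourly_wage  # $200,000

# Additional fraud detected annually
avg_amount, tp_rate, fn_rate, fraud_share = 300, 0.02, 0.005, 0.20

newly_reviewed = transactions * (1 - reviewed_share)                 # 750,000
new_fraud_value = newly_reviewed * tp_rate * fraud_share * avg_amount                    # $900,000
missed_fraud_value = transactions * fn_rate * reviewed_share * fraud_share * avg_amount  # $75,000

total = time_savings + (new_fraud_value - missed_fraud_value)
print(f"Total annual savings estimate: ${total:,.0f}")               # ~$1,025,000
```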
#### Considerations
There may be other areas of value or even potential costs to implementing this model.
* The model may find cases of fraud that were missed in the manual review by an examiner.
* There may be additional cost to reviewing False Positives and True Positives that would not otherwise have been reviewed before. That said, this value is typically dwarfed by the time savings from the number of transactions that no longer need review.
* To reduce the value lost from False Negatives, where the model misses fraud that an examiner would have found, a common strategy is to optimize your prediction threshold to reduce False Negatives so that these situations are less likely to occur. Prediction thresholding should closely follow the estimated cost of a False Negative versus a False Positive (in this case, the former is much more costly).
## Data {: #data }
The linked synthetic dataset illustrates a purchase card fraud detection program. Specifically, the model is detecting fraudulent transactions (*purchase card holders making non-approved/non-business related purchases*).
The unit of analysis in this dataset is one row per transaction. The dataset must contain transaction-level details, with itemization where available:
* If no child items present, one row per transaction.
* If child items present, one row for parent transaction and one row for each underlying item purchased with associated parent transaction features.
### Data preparation {: #data-preparation }
Consider the following when working with the data:
**Define the scope of analysis**: For initial model training, the amount of data needed depends on several factors, such as the rate at which transactions occur or the seasonal variability in purchasing and fraud trends. This example case uses 6 months of labeled transaction data (or approximately 300,000 transactions) to build the initial model.
**Define the target**: There are several options for setting the target, for example:
* `risky/not risky` (as labeled by an examiner in an audit function).
* `fraud/not fraud` (as recorded by actual case outcomes).
* The target can also be [multiclass/multilabel](multiclass), with transactions marked as `fraud`, `waste`, and/or `abuse`.
**Other data sources**: In some cases, other data sources can be joined in to allow for the creation of additional features. This example pulls in data from an employee resource management system as well as timecard data. Each data source must have a way to join back to the transaction level detail (e.g., Employee ID, Cardholder ID).
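For example, a short pandas sketch, with hypothetical file and column names, that joins employee and timecard data back to the transaction level:

```python
import pandas as pd

transactions = pd.read_csv("pcard_transactions.csv")    # one row per transaction or line item
employees = pd.read_csv("employee_directory.csv")       # Employee ID, role, tenure, ...
timecards = pd.read_csv("timecards.csv")                # Employee ID, PTO dates, ...

# Join each source back on the transaction-level key.
modeling_df = (
    transactions
    .merge(employees, on="Employee ID", how="left")
    .merge(timecards, on="Employee ID", how="left")
)
modeling_df.to_csv("pcard_modeling_dataset.csv", index=False)
```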
### Features and sample data {: #features-and-sample-data }
Most of the features listed below are transaction or item-level fields derived from an industry-standard TSYS (DEF) file format. These fields may also be accessible via bank reporting sources.
To apply this use case in your organization, your dataset should contain, minimally, the following features:
Target:
* `risky/not risky` (or an option as described above)
Required features:
* Transaction ID
* Account ID
* Transaction Date
* Posting Date
* Entity Name (akin to organization, department, or agency)
* Merchant Name
* Merchant Category Code (MCC)
* Credit Limit
* Single Transaction Limit
* Date Account Opened
* Transaction Amount
* Line Item Details
* Acquirer Reference Number
* Approval Code
Suggested engineered features:
* Is_split_transaction
* Account-Merchant Pair
* Entity-MCC pair
* Is_gift_card
* Is_holiday
* Is_high_risk_MCC
* Num_days_to_post
* Item Value Percent of Transaction
* Suspicious Transaction Amount (multiple of $5)
* Less than $2
* Near $2500 Limit
* Suspicious Transaction Amount (whole number)
* Suspicious Transaction Amount (ends in 595)
* Item Value Percent of Single Transaction Limit
* Item Value Percent of Account Limit
* Transaction Value Percent of Account Limit
* Average Transaction Value over last 180 days
* Item Value Percentage of Average Transaction Value
Other helpful features to include are:
* Merchant City
* Merchant ZIP
* Cardholder City
* Cardholder ZIP
* Employee ID
* Sales Tax
* Transaction Timestamp
* Employee PTO or Timecard Data
* Employee Tenure (in current role)
* Employee Tenure (in total)
* Hotel Folio Data
* Other common features
* Suspicious Transaction timing (Employee on PTO)
## Exploratory Data Analysis (EDA) {: #exploratory-data-analysis-eda }
* **Smart downsampling**: For large datasets with few labeled samples of fraud, use [Smart Downsampling](smart-ds) to reduce total dataset size by reducing the size of the majority class. (From the **Data** page, choose **Show advanced options > Smart Downsampling** and toggle on Downsample Data.)
* **Time aware**: For longer time spans, [time-aware modeling](time/index) could be necessary and/or beneficial.
!!! tip "Check for time dependence in your dataset"
You can create a year+month feature from transaction timestamps and run a quick modeling project to try to predict it. If the top model predicts that date feature well, the dataset has time dependence and it is worthwhile to leverage time-aware modeling.
* **Data types**: Your data may have transaction features encoded as numerics but they must be [transformed to categoricals](feature-transforms). For example, while Merchant Category Code (MCC) is a four-digit number used by credit card companies to classify businesses, there is not necessarily an ordered relationship to the codes (e.g., 1024 is not similar to 1025).
Binary features must have either a categorical variable type or, if numeric, have values of `0` or `1`. In the sample data, several binary checks may result from feature engineering, such as `is_holiday`, `is_gift_card`, `is_whole_num`, etc.
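A minimal preparation sketch covering these checks; the column names are assumptions based on the feature list above:

```python
import pandas as pd

df = pd.read_csv("synth_training_fe.csv")

# Treat MCC as categorical rather than numeric (1024 is not "close to" 1025).
df["Merchant Category Code"] = df["Merchant Category Code"].astype(str)

# Binary engineered checks should be categorical or strictly 0/1.
for flag in ["is_holiday", "is_gift_card", "is_whole_num"]:
    df[flag] = df[flag].astype(int)

# Optional: a year+month feature to probe for time dependence (see the tip above).
df["year_month"] = pd.to_datetime(df["Transaction Date"]).dt.to_period("M").astype(str)

df.to_csv("synth_training_fe_prepped.csv", index=False)
```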
## Modeling and insights {: #modeling-and-insights }
After cleaning the data, performing feature engineering, uploading the dataset to DataRobot (AI Catalog or direct upload), and performing the EDA checks above, modeling can begin. For rapid results/insights, [Quick Autopilot mode](model-ref#quick-autopilot) presents the best ratio of modeling approaches explored and time to results. Alternatively, use full Autopilot or Comprehensive modes to perform thorough model exploration tailored to the specific dataset and project type. Once the appropriate modeling mode has been selected from the dropdown, start modeling.
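A minimal sketch of that step with the DataRobot Python client; the target column name (`risky`) and downsampling rate are assumptions:

```python
import datarobot as dr

dr.Client(token="YOUR_API_TOKEN", endpoint="https://app.datarobot.com/api/v2")

project = dr.Project.create("synth_training_fe_prepped.csv", project_name="P-card fraud detection")

# Smart downsampling shrinks the majority (non-risky) class; then run Quick Autopilot.
options = dr.AdvancedOptions(smart_downsampled=True, majority_downsampling_rate=50)
project.set_target(
    target="risky",                      # assumed target column name
    mode=dr.AUTOPILOT_MODE.QUICK,
    advanced_options=options,
)
project.wait_for_autopilot()
```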
The following sections describe the insights available after a model is built.
### Model blueprint {: #model-blueprint }
The model [blueprint](blueprints), shown on the Leaderboard and sorted by a “survival of the fittest” scheme ranking by accuracy, shows the overall approach to model pipeline processing. The example below uses smart processing of raw data (e.g., text encoding, missing value imputation) and a robust algorithm based on a decision tree process to predict transaction riskiness. The resulting prediction is a fraud probability (0-100).

### Feature Impact {: #feature-impact }
[**Feature Impact**](feature-impact) shows, at a high level, which features are driving model decisions.

The **Feature Impact** chart above indicates:
* Merchant information (e.g., MCC and its textual description) tend to be impactful features that drive model predictions.
* Categorical and textual information tend to have more impact than numerical features.
The chart provides a clear indication of over-dependence on at least one feature—Merchant Category Code (MCC). To effectively downweight the dependence, consider [creating a feature list](feature-lists#create-feature-lists) with this feature excluded and/or blending top models. These steps can balance feature dependence with comparable model performance. For example, this use case creates an additional feature list that excludes MCC and adds an engineered feature based on MCCs recognized as high risk by SMEs.
Also, starting with a large number of engineered features may result in a **Feature Impact** plot that shows minimal amounts of reliance on many of the features. Retraining with reduced features may result in increased accuracy and will also reduce the computational demand of the model.
The final solution used a blended model created from combining the top model from each of these two modified feature lists. It achieved comparable accuracy to the MCC-dependent model but with a more balanced **Feature Impact** plot. Compare the plot below to the one above:

### Confusion Matrix {: #confusion-matrix }
Leverage the [**ROC Curve**](roc-curve-tab/index) to tune the prediction threshold based on, for example, the auditor’s office desired risk tolerance and capacity for review. The [**Confusion Matrix**](confusion-matrix) and [**Prediction Distribution**](pred-dist-graph) graph provide excellent tools for experimenting with threshold values and seeing the effects on False Positive and False Negative counts/percentages. Because the model marks transactions as risky and in need of further review, the preferred threshold prioritizes minimizing false negatives.
You can also use the ROC Curve tools to explain the tradeoff between optimization strategies. In this example, the solution mostly minimizes False Negatives (e.g., missed fraud) while slightly increasing the number of transactions needing review.

You can see in the example above that the model outputs a probability of risk.
* Anything above the set probability threshold marks the transaction as risky (or needs review) and vice versa.
* Most predictions have low probability of being risky (left, **Prediction Distribution** graph).
* The best performance evaluators are Sensitivity and Precision (right, **Confusion Matrix** chart).
* The default **Prediction Distribution** display threshold of 0.41 balances the False Positive and False Negative amounts (adjustable depending on risk tolerance).
### Prediction Explanations {: #prediction-explanations }
With each transaction risk score, DataRobot provides two associated _Risk Codes_ generated by [Prediction Explanations](pred-explain/index). These Risk Codes inform users which two features had the highest effect on that particular risk score and their relative magnitude. Inclusion of Prediction Explanations helps build trust by communicating the "why" of a prediction, which aids in confidence-checking model output and also identifying trends.
## Predict and deploy {: #predict-and-deploy }
Use the tools above (blueprints on the Leaderboard, Feature Impact results, Confusion/Payoff matrices) to determine the best blueprint for the data/use case.
Deploy the model that serves risk score predictions (and accompanying prediction explanations) for each transaction on a batch schedule to a database (e.g., Mongo) that your end application reads from.
Confirm the ETL and prediction scoring frequency with your stakeholders. Often the TSYS DEF file is provided on a daily basis and contains transactions from several days prior to posting date. Generally, daily scoring of the DEF is acceptable—the post-transaction review of purchases does not need to be executed in real time. As a point of reference, though, some cases can take up to 30 days post-purchase to review transactions.
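As a sketch of the daily scoring step, assuming the DataRobot Python client, a placeholder deployment ID, and hypothetical file names (a production pipeline would typically write to the application's database rather than a local file):

```python
import datarobot as dr

dr.Client(token="YOUR_API_TOKEN", endpoint="https://app.datarobot.com/api/v2")

# Score the daily DEF extract, returning risk scores plus two explanations (Risk Codes) per row.
job = dr.BatchPredictionJob.score(
    deployment="DEPLOYMENT_ID",
    intake_settings={"type": "localFile", "file": "def_daily_extract.csv"},
    output_settings={"type": "localFile", "path": "scored_transactions.csv"},
    max_explanations=2,
    passthrough_columns=["Transaction ID", "Account ID"],
)
job.wait_for_completion()
```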
A [no-code](app-builder/index) or Streamlit app can be useful for showing aggregate results of the model (e.g., risky transactions at an entity level). Consider building a custom application where stakeholders can interact with the predictions and record the outcomes of the investigation. A useful app will allow for intuitive and/or automated data ingestion and the review of individual transactions marked as risky, as well as organization- and entity-level aggregation.
## Monitoring and management {: #monitoring-and-management }
Fraudulent behavior is dynamic as new schemes replace ones that have been mitigated. It is crucial to capture ground truth from SMEs/auditors to track model accuracy and verify the effectiveness of the model. [Data drift](data-drift), as well as concept drift, can pose significant risks.
For fraud detection, the process of retraining a model may require additional batches of data manually annotated by auditors. Communicate this process clearly and early in the project setup phase. [Champion/challenger](challengers) analysis suits this use-case well and should be enabled.
For models trained with target data labeled as `risky` (as opposed to confirmed fraud), it could be useful in the future to explore modeling `confirmed fraud` as the amount of training data grows. The model threshold serves as a confidence knob that may increase across model iterations while maintaining low false negative rates. Moving to a model that predicts the actual outcome as opposed to risk of the outcome also addresses the potential difficulty when retraining with data primarily labeled as the actual outcome (collected from the end-user app).
| p-card-detect |
---
title: Late shipment predictions
description: Helps supply chain managers evaluate root cause and then implement short-term and long-term adjustments that prevent shipping delays.
---
# Late shipment predictions {: #late-shipment-prediction }
With the inception of one-day and same-day delivery, customer standards on punctuality and speed have risen to levels unlike ever before. While a delayed delivery will usually be only a nuisance to the individual consumer, demands for speed ultimately flow upstream into the supply chain, where retailers and manufacturers are constantly being pressed on time. For these organizations, on-time performance is a matter of millions of dollars of customer orders or contractual obligations. Unfortunately, with the unavoidable challenges that come with managing variability in the supply chain, even the most well-known logistics carriers saw a 6.9 percent average delay across shipments made by 100 e-commerce retailers who collectively delivered more than 500,000 packages in the first quarter of 2019.
Sample training data used in this use case, "Supply Chain Shipment Pricing Data":
* [`SCMS_Delivery_History_Dataset.csv`](https://www.kaggle.com/datasets/divyeshardeshana/supply-chain-shipment-pricing-data?select=SCMS_Delivery_History_Dataset.csv){ target=_blank }
[Click here](#data) to jump directly to the hands-on sections that begin with working with data. Otherwise, the following several paragraphs describe the business justification and problem framing for this use case.
## Business problem {: #business-problem }
A critical component of any supply chain network is to prevent parts shortages, especially when they occur at the last minute. Parts shortages not only lead to underutilized machines and transportation, but also cause a domino effect of late deliveries through the entire network. In addition, the discrepancies between the forecasted and actual number of parts that arrive on time prevent supply chain managers from optimizing their materials plans.
To mitigate the impact delays will have on the supply chain, manufacturers adopt approaches such as holding excess inventory, optimizing product designs for more standardization, and moving away from single-sourcing strategies. However, most of these approaches add up to unnecessary costs for parts, storage, and logistics.
In many cases, late shipments persist until supply chain managers can evaluate the root cause and then implement short-term and long-term adjustments that prevent them from occurring in the future. Unfortunately, supply chain managers have been unable to efficiently analyze historical data available in MRP systems because of the time and resources required.
## Solution value {: #solution-value }
AI helps supply chain managers reduce parts shortages by predicting the occurrence of late shipments, which in turn gives them time to intervene. By learning from past cases of late shipments and their associated features, AI applies these patterns to future shipments to predict the likelihood that those shipments will also be delayed. Unlike complex MRP systems, AI provides supply chain managers with the statistical reasons behind each late shipment in an intuitive but scientific way. For example, when AI notifies supply chain managers of a late shipment, it will also explain why, offering reasons such as the shipment’s vendor, mode of transportation, or country.
Using this information, supply chain managers can apply both short-term and long-term solutions to prevent late shipments. In the short term, based on their unique characteristics, shipment delays can be prevented by adjusting transportation or delivery routes. In the long term, supply chain managers can conduct aggregated root-cause analyses to discover and solve the systematic causes of delays. They can use this information to make strategic decisions, such as choosing vendors located in more accessible geographies or reorganizing shipment schedules and quantities.
## ROI estimation {: #roi-estimation }
The ROI for implementing this solution can be estimated by considering the following factors:
* For a manufacturing company facing production line stoppages, the cycle time of the production process can be used to estimate how much production loss relates to part shortages. For example, if the cycle time (time taken to complete one part) is 60 seconds and 15 minutes of production are lost to part shortages each day, then the daily production loss is equivalent to 15 products, which translates to the lost profit on 15 products per day. A similar calculation can be used to estimate the annual loss due to part shortages.
* For a logistics provider, predicting part shortages early can generate savings through reduced inventory. This can be roughly measured by comparing the parts stock maintained before and after implementing the AI solution; multiplying the difference in stock by the holding and inventory cost per unit gives the overall savings. Furthermore, when demand for parts is left unfulfilled because of part shortages, the opportunity cost of that unsatisfied demand can directly result in the loss of prospective business opportunities (see the sketch after this list for both calculations).
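As a rough illustration of these calculations, the following sketch plugs in the cycle-time example above along with a few hypothetical figures; the profit per unit, working days per year, stock reduction, and holding cost are assumptions for illustration only, not values from the dataset.

```java
public class RoiEstimate {
    public static void main(String[] args) {
        // Production-loss side (numbers from the example above plus assumptions).
        double cycleTimeSeconds = 60;       // time taken to complete one part
        double dailyStoppageMinutes = 15;   // production time lost to part shortages per day
        double profitPerUnit = 40.0;        // hypothetical profit per finished product
        double workingDaysPerYear = 250;    // hypothetical

        double unitsLostPerDay = dailyStoppageMinutes * 60 / cycleTimeSeconds;      // 15 units
        double annualProductionLoss = unitsLostPerDay * profitPerUnit * workingDaysPerYear;

        // Inventory side for a logistics provider (hypothetical figures).
        double stockReductionUnits = 500;   // reduction in safety stock after the AI solution
        double holdingCostPerUnit = 2.5;    // annual holding and inventory cost per unit
        double annualInventorySavings = stockReductionUnits * holdingCostPerUnit;

        System.out.printf("Annual production loss avoided: %.0f%n", annualProductionLoss);
        System.out.printf("Annual inventory savings: %.0f%n", annualInventorySavings);
    }
}
```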
## Data {: #data }
This accelerator uses a [publicly available dataset](https://www.kaggle.com/datasets/divyeshardeshana/supply-chain-shipment-pricing-data?select=SCMS_Delivery_History_Dataset.csv), provided by the President's Emergency Plan for AIDS Relief (PEPFAR), to represent how a manufacturing or logistics company can leverage AI models to improve decision-making. This dataset provides supply chain health commodity shipment and pricing data. Specifically, it identifies antiretroviral (ARV) and HIV lab shipments to supported countries. In addition, it provides the commodity pricing and associated supply chain expenses necessary to move the commodities to other countries for use.
### Features and sample data {: #features-and-sample-data }
The features in the dataset represent some of the factors that are important in predicting delays.
#### Target
The target variable:
* `Late_delivery`
This feature represents whether or not a shipment was delayed, using values such as `True`/`False` or `1`/`0`. This choice of target makes this a binary classification problem. The distribution of the target variable is imbalanced, with 11.4% of rows being 1 (late delivery) and 88.6% being 0 (on-time delivery).
#### Sample feature list
The following shows sample features for this use case:
Feature name | Data type | Description | Data source | Example
------------ | --------- | ----------- | ----------- | -------
Vendor | Categorical | Name of the vendor shipping the delivery | Purchase order | Ranbaxy, Sun Pharma, etc.
Item description | Text | Details of the part/item being shipped | Purchase order | 30mg HIV test kit, 600mg Lamivudine capsules
Line item quantity | Numeric | Quantity of the item ordered | Purchase order | 1000, 300, etc.
Line item value | Numeric | Unit price of the line item ordered | Purchase order | 0.39, 1.33
Manufacturing site | Categorical | Vendor's manufacturing site (the same vendor can ship parts from different sites) | Invoice | Sun Pharma, India
Product group | Categorical | Category of the product ordered | Purchase order | HRDT, ARV
Shipment mode | Categorical | Mode of transport for part delivery | Invoice | Air, Truck
Late delivery | Target (Binary) | Whether the delivery was late or on time | ERP system, Purchase order | 0 or 1
In addition to the features listed above, incorporate any other data your organization collects that might be relevant to delays; DataRobot can differentiate important from unimportant features if some of your selections do not improve modeling. These features are generally stored across proprietary data sources available in your organization's ERP systems.
### Data preparation {: #data-preparation }
The included dataset contains historical information on procurement transactions. Each row in the dataset is an individual order whose delivery needs to be predicted. Every order has a scheduled delivery date and an actual delivery date; the difference between these dates defines the target variable (`Late_delivery`). If the actual delivery date is later than the scheduled date, the target variable has a value of `1`; otherwise, it is `0`. Overall, the dataset contains roughly 10,320 rows and 26 features, including the target variable.
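The labeling logic itself is simple. The following is a minimal sketch, assuming hypothetical example dates (your column names may differ from the exact headers in `SCMS_Delivery_History_Dataset.csv`):

```java
import java.time.LocalDate;

public class LateDeliveryLabel {
    // Returns 1 when the actual delivery date is later than the scheduled date, else 0.
    static int lateDelivery(LocalDate scheduledDelivery, LocalDate actualDelivery) {
        return actualDelivery.isAfter(scheduledDelivery) ? 1 : 0;
    }

    public static void main(String[] args) {
        System.out.println(lateDelivery(LocalDate.parse("2015-06-01"),
                                        LocalDate.parse("2015-06-09"))); // 1 (late)
        System.out.println(lateDelivery(LocalDate.parse("2015-06-01"),
                                        LocalDate.parse("2015-05-28"))); // 0 (on time)
    }
}
```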
## Modeling and insights {: #modeling-and-insights }
DataRobot automates many parts of the modeling pipeline, including processing and partitioning the dataset, as described [here](model-data).
While this use case skips the modeling section and moves straight to model interpretation, it is worth noting that because the dataset is imbalanced, DataRobot automatically recommends [**LogLoss**](opt-metric#loglossweighted-logloss) as the optimization metric for identifying the most accurate model; LogLoss is an error metric that heavily penalizes confident but wrong predictions.
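For reference, for a binary target such as `Late_delivery`, LogLoss over `N` rows is defined as:

$$
\text{LogLoss} = -\frac{1}{N}\sum_{i=1}^{N}\Bigl[y_i\log(p_i) + (1-y_i)\log(1-p_i)\Bigr]
$$

where `y_i` is the actual label (1 for a late delivery) and `p_i` is the predicted probability of a late delivery. A confident prediction in the wrong direction contributes a very large term to the sum, which is why the metric rewards well-calibrated probabilities.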
For this dataset, DataRobot found the most accurate model to be the *Extreme Gradient Boosting Tree Classifier* with unsupervised learning features, built using the open-source XGBoost library.
The following sections describe the insights available after a model is built.
### Feature Impact {: #feature-impact }
To provide transparency on how the model works, DataRobot provides both global and local levels of model explanations. [**Feature Impact**](feature-impact) shows, at a high level, which features are driving model decisions (the relative importance of the features in the dataset in relation to the selected target variable).
From the visualization, you can see that the model identified Pack Price, Country, Vendor, Vendor INCO Term, and Line item Insurance as some of the most critical factors affecting delays in the parts shipments:

### Prediction Explanations {: #prediction-explanations }
DataRobot also provides [Prediction Explanations](pred-explain/index) to help understand the 10 key drivers for each prediction generated. This offers you the granularity you need to tailor your actions to the unique characteristics behind each part shortage.
For example, if a *particular country* is a top reason for a shipment delay, you can take action by reaching out to vendors in these countries and closely monitoring the shipment delivery across these routes.
Similarly, if there are *certain vendors* that are among the top reasons for delays, you can proactively reach out to these vendors and take corrective actions to avoid any delayed shipments that would affect the supply chain network. These insights help businesses make data-driven decisions to improve the supply chain process by incorporating new rules or alternative procurement sources.

### Word Cloud {: #word-cloud }
For text variables, such as the item description in the included dataset, use [Word Clouds](word-cloud) to discover the words or phrases that are highly associated with delayed shipments. Although text features are generally the most challenging and time-consuming to build models for, DataRobot automatically fits each text column as an individual classifier, preprocessing it with natural language processing (NLP) techniques (TF-IDF, n-grams, etc.). In this cloud, you can see that items described as nevirapine 10 mg are more likely to be delayed than other items.

### Evaluate accuracy {: #evaluate-accuracy }
To evaluate the performance of the model, DataRobot, by default, ran five-fold cross-validation; the resulting AUC score was roughly 0.82. The AUC score on the Holdout set (unseen data) was nearly equivalent, indicating that the model generalizes well and is not overfitting. AUC is a good evaluation metric here because it ranks the output (i.e., the probability of a delayed shipment) rather than comparing raw predicted values to actuals. The [Lift Chart](lift-chart), below, shows how the predicted values (blue line) compare to actual values (red line) when the data is sorted by predicted values. You can see that the model slightly under-predicts the orders that are most likely to be delayed, but overall it performs well. Furthermore, depending on how you ultimately frame the problem, you can review the [Confusion Matrix](roc-curve-tab/confusion-matrix) for the selected model and, if required, adjust the prediction threshold to optimize for precision and recall.

## Predict and deploy {: #predict-and-deploy }
After selecting a model, you can deploy it into your desired decision environment. _Decision environments_ are the ways in which the predictions generated by the model will be consumed by the appropriate stakeholders in your organization, and how these stakeholders will make decisions using the predictions to impact the overall process.
The predictions from this use case can **augment** the decisions of supply chain managers by helping them foresee upcoming delays in logistics. The model acts as an intelligent machine that, combined with the decisions of the managers, helps improve your entire supply chain network.
### Decision stakeholders {: #decision-stakeholders }
The following table lists potential decision stakeholders:
Stakeholder | Description
----------- | -----------
Decision Executors | Supply chain managers and procurement teams who are empowered with the information they need to ensure that the supply chain network is free from bottlenecks. These personnel have strong relationships with vendors and the ability to take corrective action using the model’s predictions.
Decision Managers | Executive stakeholders who manage large-scale partnerships with key vendors. Based on the overall results, these stakeholders can perform quarterly reviews of the health of their vendor relationships to make strategic decisions on long-term investments and business partnerships.
Decision Authors | Business analysts or data scientists who would build this decision environment. These analysts could be the engineers/analysts from the supply chain, engineering, or vendor development teams in the organization who usually work in collaboration with the supply chain managers and their teams.
### Model deployment {: #model-deployment }
The model can be deployed using the DataRobot Prediction API. The REST API endpoint returns predictions in near real time as new scoring data from incoming orders is received.
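As a minimal sketch of consuming that endpoint (not DataRobot's official client snippet), the following posts one order as CSV using Java 11's `HttpClient`. The hostname, deployment ID, API token, `DataRobot-Key` header, and feature columns are all placeholders; copy the exact URL and headers from your deployment's Prediction API snippet in DataRobot.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class LateShipmentScorer {
    public static void main(String[] args) throws Exception {
        // Placeholder values -- replace with the endpoint and keys for your deployment.
        String endpoint = "https://example.datarobot.com/predApi/v1.0/deployments/DEPLOYMENT_ID/predictions";
        String apiToken = "API_TOKEN";
        String datarobotKey = "DATAROBOT_KEY"; // required on the managed AI Platform only

        // One new order to score, sent as CSV (header row plus one record).
        String csvBody = "Vendor,Shipment mode,Line item quantity\nRanbaxy,Air,1000\n";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(endpoint))
                .header("Content-Type", "text/csv; charset=UTF-8")
                .header("Authorization", "Bearer " + apiToken)
                .header("DataRobot-Key", datarobotKey)
                .POST(HttpRequest.BodyPublishers.ofString(csvBody))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // JSON response containing the predictions
    }
}
```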
### No-Code AI Apps {: #no-code-ai-apps }
Once the model is deployed (in whatever way the organization decides), the predictions can be consumed in several ways. For example, a front-end application that acts as the supply chain's reporting tool can deliver new scoring data to the model, which then returns predictions and Prediction Explanations in real time for use in the [decision process](#decision-process). The [No-Code AI App](app-builder/index) shown below is an easily shareable, AI-powered application built with a no-code interface:

### Decision process {: #decision-process }
Based on the predictions and Prediction Explanations that identify potential bottlenecks, managers and executive stakeholders can reach out to and collaborate with the appropriate vendor teams in the supply chain network, using data-driven insights. They can make both long- and short-term decisions depending on the severity of the impact of shortages on the business.
## Monitoring and management {: #monitoring-and-management }
Tracking model health is one of the most critical components of proper model lifecycle management, similar to product lifecycle management. Use DataRobot's [MLOps](mlops/index) to deploy, monitor (for data drift and accuracy), and manage all models across the organization through a centralized platform.
### Implementation considerations {: #implementation-considerations }
One of the major risks in implementing this solution in the real world is adoption at the ground level. Having strong and transparent relationships with vendors is also critical in taking corrective action. The risk is that vendors may not be ready to adopt a data-driven strategy and trust the model results.
| late-ship |
---
title: Android integration
description: Learn how to use Java Scoring Code on Android with little or no modifications. Supported only for Android 8.0 (API 26) or later.
---
# Android integration {: #android-integration }
It is possible to use Java Scoring Code on Android with little or no modifications.
!!! note
Supported Android versions are 8.0 (API 26) or later.
## Using a single model {: #using-a-single-model }
Using a single model in an Android project is almost the same as using it in any Java project:
1. Copy the Scoring Code JAR file into the Android project in the directory `app/libs`.
2. Add the following lines to the `dependencies` section in `app/build.gradle`:
```
implementation fileTree(include: ['*.jar'], dir: 'libs')
annotationProcessor fileTree(include: ['*.jar'], dir: 'libs')
```
3. You can now use the model in the same way as the [Java API](quickstart-api#java-api-example).
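For example, the following is a minimal sketch of scoring one row once the JAR is on the classpath; the model ID and feature names are placeholders, and the linked Java API example remains the authoritative reference for method signatures.

```java
import com.datarobot.prediction.IClassificationPredictor;
import com.datarobot.prediction.Predictors;

import java.util.HashMap;
import java.util.Map;

public class ShipmentScorer {
    public static Map<String, Double> scoreRow() {
        // Look up the predictor by model ID (placeholder) using the app's class loader.
        IClassificationPredictor predictor = Predictors.getPredictor(
                "MODEL_ID", ShipmentScorer.class.getClassLoader());

        // One row of scoring data; feature names are placeholders for your model's features.
        Map<String, Object> row = new HashMap<>();
        row.put("Vendor", "Ranbaxy");
        row.put("Shipment mode", "Air");

        // Returns a map of class label to predicted probability.
        return predictor.score(row);
    }
}
```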
## More complex use cases {: #more-complex-use-cases }
To enable more complex functionality, you must process the Scoring Code JARs.
DataRobot provides a tool, `scoring-code-jar-tool`, that processes one or more Scoring Code JAR files to accomplish the goals described below.
`scoring-code-jar-tool` is distributed as a JAR file and can be obtained <a target="_blank" href="https://mvnrepository.com/artifact/com.datarobot/scoring-code-jar-tool">here</a>.
### Using multiple models {: #using-multiple-models }
It is not possible to use more than one Scoring Code JAR in the same Android project: each Scoring Code JAR contains the same dependencies, and Android does not allow multiple classes with the same fully qualified name.
To work around this, use `scoring-code-jar-tool` to merge multiple input JAR files into a single JAR file with duplicate classes removed.
For example:
`java -jar scoring-code-jar-tool.jar --output combined.jar model1.jar model2.jar`
### Dynamic loading of JARs {: #dynamic-loading-of-jars }
To load Scoring Code JARs dynamically, they must be compiled into Dalvik Executable (DEX) format.
`scoring-code-jar-tool` can compile to DEX using the `--dex` parameter.
For example:
`java -jar scoring-code-jar-tool.jar --output combined.jar --dex /home/user/Android/Sdk/build-tools/29.0.3/dx model1.jar model2.jar`
The `--dex` parameter requires the path to the `dx` tool which is a part of the Android SDK.
#### Java example {: #java-example }
In this example, a model with ID `5ebbeb5119916f739492a021` has been processed by `scoring-code-jar-tool` with the `--dex` argument to produce an output JAR called `model-dex.jar`.
For the sake of this example, the resulting JAR file has been added as an asset to the project.
Because it is not possible to get a filesystem path to an asset, the asset is copied to a location in the filesystem before it is loaded.
```java
import android.content.res.AssetManager;
import android.os.Bundle;
import androidx.appcompat.app.AppCompatActivity;
import com.datarobot.prediction.IClassificationPredictor;
import com.datarobot.prediction.Predictors;
import dalvik.system.DexClassLoader;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class MainActivity extends AppCompatActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        // Copy the DEX-compiled model JAR from the app assets to a real file,
        // because DexClassLoader requires a filesystem path.
        String filename = "model-dex.jar";
        File externalFile = new File(getExternalFilesDir(null), filename);
        try {
            copyAssetToFile(filename, externalFile);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }

        // Load the model classes dynamically and look up the predictor by model ID.
        DexClassLoader loader = new DexClassLoader(
                externalFile.getAbsolutePath(), "", null, MainActivity.class.getClassLoader());
        IClassificationPredictor classificationPredictor =
                Predictors.getPredictor("5ebbeb5119916f739492a021", loader);
    }

    private void copyAssetToFile(String assetName, File dest) throws IOException {
        AssetManager assetManager = getAssets();
        try (InputStream in = assetManager.open(assetName)) {
            try (OutputStream out = new FileOutputStream(dest)) {
                byte[] buffer = new byte[1024];
                int read;
                while ((read = in.read(buffer)) != -1) {
                    out.write(buffer, 0, read);
                }
            }
        }
    }
}
```
| android |
---
title: Apache Spark API for Scoring Code
description: Learn how to use the Spark API for Scoring Code, a library that integrates Scoring Code JARs into Spark clusters.
---
# Apache Spark API for Scoring Code {: #apache-spark-api-for-scoring-code }
The Spark API for Scoring Code library integrates DataRobot Scoring Code JARs into Spark clusters. It is available as a [PySpark API](#pyspark-api) and a [Spark Scala API](#spark-scala-api).
In previous versions, the Spark API for Scoring Code consisted of multiple libraries, each supporting a specific Spark version. Now, a single library supports all compatible Spark versions:
* Spark 2.4.1 or greater
* Spark 3.x
!!! important
Spark must be compiled for Scala 2.12.
For a list of the deprecated, Spark version-specific libraries, see the [Deprecated Spark libraries](#deprecated-spark-libraries) section.
## PySpark API {: #pyspark-api }
The PySpark API for Scoring Code is included in the [`datarobot-predict`](https://pypi.org/project/datarobot-predict/){ target=_blank } Python package, released on PyPI. The PyPI project description contains documentation and usage examples.
## Spark Scala API {: #spark-scala-api }
The Spark Scala API for Scoring Code is published on Maven as [`scoring-code-spark-api`](https://central.sonatype.com/artifact/com.datarobot/scoring-code-spark-api){ target=_blank }. For more information, see the [API reference documentation](https://javadoc.io/doc/com.datarobot/scoring-code-spark-api_3.0.0/latest/com/datarobot/prediction/spark30/Predictors$.html){ target=_blank }.
Before using the Spark API, you must add it to the Spark classpath. For `spark-shell`, use the `--packages` parameter to load the dependencies directly from Maven:
``` sh
spark-shell --conf "spark.driver.memory=2g" \
--packages com.datarobot:scoring-code-spark-api:VERSION \
--jars model.jar
```
### Score a CSV file {: #score-a-csv-file }
The following example illustrates how you can load a CSV file into a Spark DataFrame and score it:
``` scala
import com.datarobot.prediction.sparkapi.Predictors
val inputDf = spark.read.option("header", true).csv("input_data.csv")
val model = Predictors.getPredictor()
val output = model.transform(inputDf)
output.show()
```
### Load models at runtime {: #load-models-at-runtime }
The following examples illustrate how you can load a model's JAR file at runtime instead of using the spark-shell `--jars` parameter:
=== "From DataRobot"
Define the `PROJECT_ID`, the `MODEL_ID`, and your `API_TOKEN`.
``` scala
val model = Predictors.getPredictorFromServer(
"https://app.datarobot.com/projects/PROJECT_ID/models/MODEL_ID/blueprint","API_TOKEN")
```
=== "From HDFS filesystem"
Define the path to the model JAR file and the `MODEL_ID`.
``` scala
val model = Predictors.getPredictorFromHdfs("path/to/model.jar", spark, "MODEL_ID")
```
### Time series scoring {: #time-series-scoring }
The following examples illustrate how you can perform time series scoring with the `transform` method, just as you would with non-time series scoring. In addition, you can customize the time series parameters with the `TimeSeriesOptions` builder.
If you don't provide additional arguments for a time series model through the `TimeSeriesOptions` builder, the `transform` method returns forecast point predictions for an auto-detected forecast point:
``` scala
val model = Predictors.getPredictor()
val forecastPointPredictions = model.transform(timeSeriesDf)
```
To define a forecast point, you can use the `buildSingleForecastPointRequest()` builder method:
``` scala
import com.datarobot.prediction.TimeSeriesOptions
val tsOptions = new TimeSeriesOptions.Builder().buildSingleForecastPointRequest("2010-12-05")
val model = Predictors.getPredictor(modelId, tsOptions)
val output = model.transform(inputDf)
```
To return historical predictions, you can define a start date and end date through the `buildForecastDateRangeRequest()` builder method:
``` scala
val tsOptions = new TimeSeriesOptions.Builder().buildForecastDateRangeRequest("2010-12-05", "2011-01-02")
```
For a complete reference, see [TimeSeriesOptions javadoc](https://javadoc.io/doc/com.datarobot/datarobot-prediction/latest/com/datarobot/prediction/TimeSeriesOptions.Builder.html).
## Deprecated Spark libraries {: #deprecated-spark-libraries }
Support for Spark versions earlier than 2.4.1 or Spark compiled for Scala earlier than 2.12 is deprecated. If necessary, you can access deprecated libraries published on Maven Central; however, they will not receive any further updates.
The following libraries are deprecated:
| Name | Spark version | Scala version |
|------------------------------------------------------------------------------------------------------------------|---------------|---------------|
| [scoring-code-spark-api_1.6.0](https://central.sonatype.com/artifact/com.datarobot/scoring-code-spark-api_1.6.0) | 1.6.0 | 2.10 |
| [scoring-code-spark-api_2.4.3](https://central.sonatype.com/artifact/com.datarobot/scoring-code-spark-api_2.4.3) | 2.4.3 | 2.11 |
| [scoring-code-spark-api_3.0.0](https://central.sonatype.com/artifact/com.datarobot/scoring-code-spark-api_3.0.0) | 3.0.0 | 2.12 |
| sc-apache-spark |