---
title: Alteryx
description: Provides documentation for a tool that allows you to create projects and make predictions without leaving the Alteryx interface.
---
# Tools for Alteryx {: #alteryx }
DataRobot Tools allow you to create projects and make predictions without leaving the Alteryx interface.
1. Download the tools from <a target="_blank" href="https://s3.amazonaws.com/datarobot-public-external-connectors/DataRobotTools.yxi">this link</a>.
2. Once the download completes, double-click the file and follow the instructions to install.
When the installation finishes, the **DataRobot Automodel** and **DataRobot Predict** tools will be available under the Predictive tab in Alteryx.
DataRobot Tools support Alteryx versions 2018.1 and later.
See the following documents for more information:
* <a target="_blank" href="https://s3.amazonaws.com/datarobot-public-external-connectors/modelFactoryHelp.html">Help for DataRobot Automodel Tool.</a>
* <a target="_blank" href="https://s3.amazonaws.com/datarobot-public-external-connectors/predictHelp.html">Help for DataRobot Predict Tool.</a>
* <a target="_blank" href="https://s3.amazonaws.com/datarobot-public-external-connectors/CHANGELOG.html">Release Notes.</a>
---
title: How-tos
description: Step-by-step instructions to perform tasks within the DataRobot application as well as partners, cloud providers, and 3rd party vendors.
---
# How-tos {: #how-tos }
These sections provide step-by-step instructions to perform tasks within the DataRobot application as well as partners, cloud providers, and 3rd party vendors:
Topic | Describes...
----- | ------
[Alteryx](alteryx) | How to create projects and make predictions without leaving the Alteryx interface.
[AWS](aws/index) | How to integrate DataRobot with Amazon Web Services.
[Azure](azure/index) | How to integrate DataRobot with Azure cloud services.
[Google](google/index) | How to integrate DataRobot with Google Kubernetes and cloud platforms.
[Snowflake](snowflake/index) | How to integrate DataRobot with Snowflake's Data Cloud.
[Tableau extension URL](tableau) | How to modify the Tableau extension configuration to work with DataRobot.
[Android for Scoring Code](android) | How to use DataRobot Scoring Code on Android.
[Apache Spark API for Scoring Code](sc-apache-spark) | How to use the Spark API to integrate DataRobot Scoring Code JARs into Spark clusters.
[DataRobot Provider for Apache Airflow](apache-airflow) | How to use the DataRobot Provider for Apache Airflow to implement a basic [DAG (Directed Acyclic Graph)](https://airflow.apache.org/docs/apache-airflow/stable/core-concepts/dags.html) orchestrating an end-to-end DataRobot AI pipeline.
!!! note
The Microsoft Excel Add-In was deprecated in June 2022 and was removed from the product in the July 2022 release.
---
title: Tableau Extension URL
description: Provides documentation for changing the Tableau TREX configuration to work with DataRobot deployments.
---
# Tableau Extension URL {: #tableau-extension-url }
The DataRobot extensions for Tableau, downloadable from the <a target="_blank" href="https://extensiongallery.tableau.com/">Tableau Extensions Gallery</a>, are configured to work with DataRobot Cloud. If your organization runs DataRobot Enterprise or DataRobot EU Cloud, you must change the extension configuration to work with your deployment.
!!! note
All Tableau extensions must use the _HTTPS_ protocol, except in a [testing environment](#http-versus-https). Additionally, the server that hosts your extension must have a Certificate Authority (CA)-based certificate; self-signed or test-signed certificates are not allowed. The <a target="_blank" href="https://tableau.github.io/extensions-api/docs/trex_security">Tableau documentation</a> provides a complete list of implementation requirements. ({% include 'includes/github-sign-in.md' %})
## Extension configuration {: #extension-configuration }
The following steps change the TREX configuration to work with DataRobot Self-Managed AI Platform deployments.
1. **Determine the URL for your DataRobot deployment.** Connect to your DataRobot server and identify the URL used by your browser. For example, `https://10.0.15.65`, or `https://my-server-address`.
**Note:** If you are using the EU Cloud, the full URL to use is:
- Insights: `https://app.eu.datarobot.com/tableau/insights`
- What-If: `https://app.eu.datarobot.com/tableau/what-if`
2. **Download the DataRobot manifest file (TREX).** From the <a target="_blank" href="https://extensiongallery.tableau.com/">Tableau Extensions Gallery</a>, download the DataRobot manifest (`.trex`) file to your local machine.
3. **Open the TREX file in a text editor** such as _Notepad_ on a PC or _TextEdit_ on a Mac.
4. **Identify and update the server configuration.** The server configuration is found inside of an XML tag named `source-location`. In an unedited file, it looks like:
```xml
<source-location>
<url>https://app.datarobot.com/tableau/insights</url>
</source-location>
```
Replace the server part of the URL (`app.datarobot.com` in the example above) with your server URL. For example:
```xml
<source-location>
<url>https://my-on-prem-server-address/tableau/insights</url>
</source-location>
```
5. **Save your changes.** The saved `.trex` file can now be loaded.
### Load a modified configuration {: #load-a-modified-configuration }
The following steps load the new configuration file into Tableau.
1. In a Tableau workbook, open a dashboard sheet.
2. From the _Objects_ section, drag **Extension** to the dashboard.
3. In the _Choose an Extension_ dialog box click **My Extensions**, and navigate to the `.trex` file you just modified.

### Share the modified TREX file {: #share-the-modified-trex-file }
Once you've modified the Tableau/DataRobot configuration file to suit your environment, you can share the modified `.trex` file with your coworkers. The configuration will work for everyone in your organization who uses the same DataRobot instance.
### Additional Examples {: #additional-examples }
The following sections describe alternate configuration possibilities.
#### DataRobot EU Cloud {: #datarobot-eu-cloud }
```xml
<source-location>
<url>https://app.eu.datarobot.com/tableau/insights</url>
</source-location>
```
#### HTTP versus HTTPS {: #http-versus-https }
Tableau will not accept an extension source URL that starts with `http` (as opposed to `https`) unless that URL points to `localhost`. Note that use of `localhost` is normally reserved for developer test environments and is unlikely to be in use at your organization. If the URL where you normally access DataRobot starts with _http://_ and not _https://_, work with your IT team to provide an HTTPS endpoint.
#### Named addresses {: #named-addresses }
Named locations (or _DNS names_) will operate the same as the IP address examples above. For example, if you normally access DataRobot at something like `http://datarobot.mycompany.corp/`, change the URL as follows:
```xml
<source-location>
<url>http://datarobot.mycompany.corp/tableau/insights</url>
</source-location>
```
#### Subdirectories {: #subdirectories }
Some configurations access the DataRobot instance with a URL containing one or more subdirectories, for example:
* `http://10.0.15.65/datarobot`
* `http://10.0.15.65/apps/dr`
If your instance uses this approach, be sure to include the full path, subdirectories included, when you update the URL in the `.trex` file. For example, if your path includes the `apps/dr` folders, your modified `.trex` file should look like:
```xml
<source-location>
<url>http://10.0.15.65/apps/dr/tableau/insights</url>
</source-location>
```
---
title: Apache Airflow
description: How to use the DataRobot Provider for Apache Airflow to implement a basic DAG orchestrating an end-to-end DataRobot AI pipeline.
---
# DataRobot provider for Apache Airflow
The combined capabilities of [DataRobot MLOps](mlops/index) and [Apache Airflow](https://airflow.apache.org/docs/){ target=_blank } provide a reliable solution for retraining and redeploying your models. For example, you can retrain and redeploy your models on a schedule, on model performance degradation, or using a sensor that triggers the pipeline in the presence of new data. This quickstart guide on the DataRobot provider for Apache Airflow illustrates the setup and configuration process by implementing a basic [Apache Airflow DAG (Directed Acyclic Graph)](https://airflow.apache.org/docs/apache-airflow/stable/core-concepts/dags.html){ target=_blank } to orchestrate an end-to-end DataRobot AI pipeline. This pipeline includes creating a project, training models, deploying a model, scoring predictions, and returning target and feature drift data. In addition, this guide shows you how to import [example DAG files](https://github.com/datarobot/airflow-provider-datarobot/tree/main/datarobot_provider/example_dags){ target=_blank } from the `airflow-provider-datarobot` repository so that you can quickly implement a variety of DataRobot pipelines.
The DataRobot provider for Apache Airflow is a Python package built from [source code available in a public GitHub repository](https://github.com/datarobot/airflow-provider-datarobot){ target=_blank } and [published in PyPi (The Python Package Index)](https://pypi.org/project/airflow-provider-datarobot/){ target=_blank }. It is also [listed in the Astronomer Registry](https://registry.astronomer.io/providers/datarobot/versions/latest){ target=_blank }. For more information on using and developing provider packages, see the [Apache Airflow documentation](https://airflow.apache.org/docs/apache-airflow-providers/index.html){ target=_blank }. The integration uses [the DataRobot Python API Client](https://pypi.org/project/datarobot/){ target=_blank }, which communicates with DataRobot instances via REST API. For more information, see [the DataRobot Python package documentation](https://datarobot-public-api-client.readthedocs-hosted.com/en/latest-release/){ target=_blank }.
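Every operator and sensor in the provider authenticates through that client. For reference, here is a minimal sketch of the equivalent direct client connection (the provider handles this for you through the Airflow connection you create later in this guide; the endpoint and API key below are placeholders):
``` python
# Minimal sketch of the DataRobot Python client connection that the Airflow
# provider wraps; replace the placeholder endpoint and API key with your own.
import datarobot as dr

dr.Client(
    endpoint="https://app.datarobot.com/api/v2",  # your DataRobot API endpoint
    token="<your-api-key>",                       # created under Developer Tools
)
```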
## Install the prerequisites {: #install-the-prerequisites }
The DataRobot provider for Apache Airflow requires an environment with the following dependencies installed:
* [Apache Airflow](https://pypi.org/project/apache-airflow/){ target=_blank } >= 2.3
* [DataRobot Python API Client](https://pypi.org/project/datarobot/){ target=_blank } >= 3.2.0b1
Before you start, install the [Astronomer command line interface (CLI) tool](https://github.com/astronomer/astro-cli#readme){ target=_blank } to manage your local Airflow instance:
=== "MacOS"
First, install Docker Desktop for [MacOS](https://docs.docker.com/desktop/install/mac-install/){ target=_blank }.
Then, run the following command:
``` sh
brew install astro
```
=== "Linux"
First, install Docker Desktop for [Linux](https://docs.docker.com/desktop/install/linux-install/){ target=_blank }.
Then, run the following command:
``` sh
curl -sSL https://install.astronomer.io | sudo bash
```
=== "Windows"
First, install Docker Desktop for [Windows](https://docs.docker.com/desktop/install/windows-install/){ target=_blank }.
Then, see the [Astro CLI README](https://github.com/astronomer/astro-cli#windows){ target=_blank }.
Next, install [pyenv](https://github.com/pyenv/pyenv#simple-python-version-management-pyenv){ target=_blank } or another Python version manager.
## Initialize a local Airflow project {: #initialize-a-local-airflow-project }
After you complete the installation prerequisites, you can create a new directory and initialize a local Airflow project there with [AstroCLI](https://github.com/astronomer/astro-cli#get-started){ target=_blank }:
1. Create a new directory and navigate to it:
``` sh
mkdir airflow-provider-datarobot && cd airflow-provider-datarobot
```
2. Run the following command within the new directory, initializing a new project with the required files:
``` sh
astro dev init
```
3. Navigate to the `requirements.txt` file and add the following content:
``` txt
airflow-provider-datarobot
```
4. Run the following command to start a local Airflow instance in a Docker container:
``` sh
astro dev start
```
5. Once the installation is complete and the web server starts (after approximately one minute), you should be able to access Airflow at `http://localhost:8080/`.
6. Sign in to Airflow. The Airflow **DAGs** page appears.

## Load example DAGs into Airflow {: #load-example-dags-into-airflow }
The example DAGs _don't_ appear on the **DAGs** page by default. To make the DataRobot provider for Apache Airflow's example DAGs available:
1. Download the DAG files from the [airflow-provider-datarobot](https://github.com/datarobot/airflow-provider-datarobot/tree/main/datarobot_provider/example_dags){ target=_blank } repository.
2. Copy the [`datarobot_pipeline_dag.py` Airflow DAG](https://github.com/datarobot/airflow-provider-datarobot/blob/main/datarobot_provider/example_dags/datarobot_pipeline_dag.py){ target=_blank } (or the entire `datarobot_provider/example_dags` directory) to your project.
3. Wait a minute or two and refresh the page.
The example DAGs appear on the **DAGs** page, including the **datarobot_pipeline** DAG:

## Create a connection from Airflow to DataRobot {: #create-a-connection-from-airflow-to-datarobot }
The next step is to create a connection from Airflow to DataRobot:
1. Click **Admin > Connections** to [add an Airflow connection](https://airflow.apache.org/docs/apache-airflow/stable/howto/connection.html#creating-a-connection-with-the-ui){ target=_blank }.
2. On the **List Connection** page, click **+ Add a new record**.
3. In the **Add Connection** dialog box, configure the following fields:

Field | Description
----------------|-------------
Connection Id | `datarobot_default` (this name is used by default in all operators)
Connection Type | DataRobot
API Key | A DataRobot API token ([locate or create an API key in **Developer Tools**](api-key-mgmt#api-key-management))
DataRobot endpoint URL | `https://app.datarobot.com/api/v2` by default
4. Click **Test** to establish a test connection between Airflow and DataRobot.
5. When the connection test is successful, click **Save**.
## Configure the DataRobot pipeline DAG {: #configure-the-datarobot-pipeline-dag }
The [datarobot_pipeline Airflow DAG](https://github.com/datarobot/airflow-provider-datarobot/blob/main/datarobot_provider/example_dags/datarobot_pipeline_dag.py){ target=_blank } contains operators and sensors that automate the DataRobot pipeline steps. Each operator initiates a specific job, and each sensor waits for a predetermined action to complete:
Operator | Job
-------------------------------|-----------------------------------------------
CreateProjectOperator | Creates a DataRobot project and returns its ID
TrainModelsOperator | Triggers DataRobot Autopilot to train models
DeployModelOperator | Deploys a specified model and returns the deployment ID
DeployRecommendedModelOperator | Deploys a recommended model and returns the deployment ID
ScorePredictionsOperator | Scores predictions against the deployment and returns a batch prediction job ID
AutopilotCompleteSensor | Senses if Autopilot completed
ScoringCompleteSensor | Senses if batch scoring completed
GetTargetDriftOperator | Returns the target drift from a deployment
GetFeatureDriftOperator | Returns the feature drift from a deployment
!!! note
This example pipeline doesn't use every available operator or sensor; for more information, see the [Operators](https://github.com/datarobot/airflow-provider-datarobot/tree/main#operators){ target=_blank } and [Sensors](https://github.com/datarobot/airflow-provider-datarobot/tree/main#sensors){ target=_blank } documentation in the project `README`.
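To show how these pieces fit together, here is a heavily trimmed sketch of a DAG that wires up a few of the operators and sensors above. It is modeled on the example DAG, so the module paths, task parameters, and DAG ID are assumptions; treat `datarobot_pipeline_dag.py` in the repository as the source of truth.
``` python
# A trimmed sketch of a DataRobot pipeline DAG; see datarobot_pipeline_dag.py
# in the example_dags directory for the complete, maintained version.
from datetime import datetime

from airflow import DAG

from datarobot_provider.operators.datarobot import (
    CreateProjectOperator,
    TrainModelsOperator,
    DeployRecommendedModelOperator,
    ScorePredictionsOperator,
)
from datarobot_provider.sensors.datarobot import (
    AutopilotCompleteSensor,
    ScoringCompleteSensor,
)

with DAG(
    dag_id="datarobot_pipeline_sketch",
    start_date=datetime(2023, 1, 1),
    schedule_interval=None,  # trigger manually with a config JSON
) as dag:
    create_project = CreateProjectOperator(task_id="create_project")
    train_models = TrainModelsOperator(
        task_id="train_models", project_id=create_project.output
    )
    autopilot_complete = AutopilotCompleteSensor(
        task_id="autopilot_complete", project_id=create_project.output
    )
    deploy_model = DeployRecommendedModelOperator(
        task_id="deploy_recommended_model", project_id=create_project.output
    )
    score_predictions = ScorePredictionsOperator(
        task_id="score_predictions", deployment_id=deploy_model.output
    )
    scoring_complete = ScoringCompleteSensor(
        task_id="scoring_complete", job_id=score_predictions.output
    )

    # Autopilot must finish before the recommended model can be deployed.
    train_models >> autopilot_complete >> deploy_model
    score_predictions >> scoring_complete
```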
Each operator in the DataRobot pipeline requires specific parameters. You define these parameters in a configuration JSON file and provide the JSON when running the DAG.
``` json
{
    "training_data": "local-path-to-training-data-or-s3-presigned-url",
    "project_name": "Project created from Airflow",
    "autopilot_settings": {
        "target": "readmitted",
        "mode": "quick",
        "max_wait": 3600
    },
    "deployment_label": "Deployment created from Airflow",
    "score_settings": {}
}
```
The parameters from `autopilot_settings` are passed directly into the [`Project.set_target()`](https://datarobot-public-api-client.readthedocs-hosted.com/en/v2.28.0/autodoc/api_reference.html#datarobot.models.Project.set_target){ target=_blank } method; you can set any parameter available in this method through the configuration JSON file.
Values in the `training_data` and `score_settings` depend on the intake/output type. The parameters from `score_settings` are passed directly into the [`BatchPredictionJob.score()`](https://datarobot-public-api-client.readthedocs-hosted.com/en/v2.28.0/autodoc/api_reference.html#datarobot.models.BatchPredictionJob.score){ target=_blank } method; you can set any parameter available in this method through the configuration JSON file.
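For reference, here is a hedged sketch of the direct Python client calls that these settings map onto; the file paths match the local file example below, and the deployment ID placeholder is hypothetical (in the DAG it comes from the deployment step):
``` python
# Rough client-side equivalents: the autopilot_settings and score_settings
# keys in the config JSON become keyword arguments to these calls.
import datarobot as dr

project = dr.Project.create(
    sourcedata="include/Diabetes10k.csv",
    project_name="Project created from Airflow",
)
project.set_target(target="readmitted", mode="quick", max_wait=3600)

dr.BatchPredictionJob.score(
    deployment="<deployment-id>",  # provided by the deployment step in the DAG
    intake_settings={"type": "localFile", "file": "include/Diabetes_scoring_data.csv"},
    output_settings={"type": "localFile", "path": "include/Diabetes_predictions.csv"},
)
```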
For example, see the local file intake/output and Amazon AWS S3 intake/output JSON configuration samples below:
=== "Local file example"
**Define `training_data`**
For local file intake, you should provide the local path to the `training_data`:
``` json linenums="1" hl_lines="2"
{
    "training_data": "include/Diabetes10k.csv",
    "project_name": "Project created from Airflow",
    "autopilot_settings": {
        "target": "readmitted",
        "mode": "quick",
        "max_wait": 3600
    },
    "deployment_label": "Deployment created from Airflow",
    "score_settings": {}
}
```
**Define `score_settings`**
For the scoring `intake_settings` and `output_settings`, define the `type` and provide the local `path` to the intake and output data locations:
``` json linenums="1" hl_lines="11 12 13 15 16 17"
{
    "training_data": "include/Diabetes10k.csv",
    "project_name": "Project created from Airflow",
    "autopilot_settings": {
        "target": "readmitted",
        "mode": "quick",
        "max_wait": 3600
    },
    "deployment_label": "Deployment created from Airflow",
    "score_settings": {
        "intake_settings": {
            "type": "localFile",
            "file": "include/Diabetes_scoring_data.csv"
        },
        "output_settings": {
            "type": "localFile",
            "path": "include/Diabetes_predictions.csv"
        }
    }
}
```
!!! note
When using the Astro CLI tool to run Airflow, you can place local input files in the `include/` directory. This location is accessible to the Airflow application inside the Docker container.
=== "Amazon AWS S3 example"
**Define `training_data`**
For Amazon AWS S3 intake, you can generate a pre-signed URL for the training data file on S3:
1. In the S3 bucket, click the CSV file.
2. Click **Object Actions** at the top-right corner of the screen and click **Share with a pre-signed URL**.
3. Set the expiration time interval and click **Create presigned URL**. The URL is saved to your clipboard.
4. Paste the URL in the JSON configuration file as the `training_data` value:
``` json linenums="1" hl_lines="2"
{
    "training_data": "s3-presigned-url",
    "project_name": "Project created from Airflow",
    "autopilot_settings": {
        "target": "readmitted",
        "mode": "quick",
        "max_wait": 3600
    },
    "deployment_label": "Deployment created from Airflow",
    "datarobot_aws_credentials": "connection-id",
    "score_settings": {}
}
```
**Define `datarobot_aws_credentials` and `score_settings`**
For scoring data on Amazon AWS S3, you can add your DataRobot AWS credentials to Airflow:
1. Click **Admin > Connections** to [add an Airflow connection](https://airflow.apache.org/docs/apache-airflow/stable/howto/connection.html#creating-a-connection-with-the-ui){ target=_blank }.
2. On the **List Connection** page, click **+ Add a new record**.
3. In the **Connection Type** list, click **DataRobot AWS Credentials**.

4. Define a **Connection Id** and enter your Amazon AWS S3 credentials.
5. Click **Test** to establish a test connection between Airflow and Amazon AWS S3.
6. When the connection test is successful, click **Save**.
You return to the **List Connections** page, where you should copy the **Conn Id**.
You can now add the **Connection Id** / **Conn Id** value (represented by `connection-id` in this example) to the `datarobot_aws_credentials` field when you [run the DAG](#run-the-datarobot-pipeline-dag).
For the scoring `intake_settings` and `output_settings`, define the `type` and provide the `url` for the AWS S3 intake and output data locations:
``` json linenums="1" hl_lines="12 13 14 16 17 18"
{
    "training_data": "s3-presigned-url",
    "project_name": "Project created from Airflow",
    "autopilot_settings": {
        "target": "readmitted",
        "mode": "quick",
        "max_wait": 3600
    },
    "deployment_label": "Deployment created from Airflow",
    "datarobot_aws_credentials": "connection-id",
    "score_settings": {
        "intake_settings": {
            "type": "s3",
            "url": "s3://path/to/scoring-data/Diabetes10k.csv"
        },
        "output_settings": {
            "type": "s3",
            "url": "s3://path/to/results-dir/Diabetes10k_predictions.csv"
        }
    }
}
```
!!! note
Because this pipeline creates a deployment, the output of the deployment creation step provides the `deployment_id` required for scoring.
## Run the DataRobot pipeline DAG {: #run-the-datarobot-pipeline-dag }
After completing the setup steps above, you can run a DataRobot provider DAG in Airflow using the configuration JSON you assembled:
1. On the Airflow **DAGs** page, locate the DAG pipeline you want to run.

2. Click the run icon for that DAG and click **Trigger DAG w/ config**.

3. On the **DAG conf parameters** page, enter the JSON configuration data required by the DAG (in this example, the JSON you assembled in the previous step).
4. Select **Unpause DAG when triggered**, and then click **Trigger**. The DAG starts running:

!!! note
When running Airflow in a Docker container (e.g., using the Astro CLI tool), the predictions file is created inside the container. To make the predictions available on the host machine, specify an output location in the `include/` directory.
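If you prefer the command line, you can also trigger the DAG through the Astro CLI, which wraps the Airflow CLI; the config file path below is hypothetical, so adjust it to wherever you saved your configuration JSON:
``` sh
# Trigger the pipeline from the command line, passing the configuration JSON.
astro dev run dags trigger datarobot_pipeline \
    --conf "$(cat include/pipeline_config.json)"
```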
---
title: PCA and K-Means clustering
description: The impact of principal component analysis (PCA) on KMeans in DataRobot modeling
---
# PCA and K-Means clustering {: #pca-and-k-means-clustering }
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
**What is the impact of principal component analysis (PCA) on K-Means clustering?**
Hi team, a customer is asking how exactly a PCA > k-means is being used during modeling. I see that we create a CLUSTER_ID feature in the transformed dataset and I am assuming that is from the k-means. My question is, if we are creating this feature, why aren't we tracking it in, for example, feature impact?
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
Feature impact operates on the level of dataset features, not derived features. If we have one-hot encoding for categorical feature CAT1—we also calculate feature impact of just CAT1, not CAT1-Value1, CAT1-Value2,...
Permutation of original features would also produce permutation of KMeans results—so if the clusters matter to the model, their impact will be assigned to the original columns.
<span style="color:red;font-size: 1rem"> `Robot 3`</span>
Some blueprints use the one-hot-encoded cluster ID as features, and other blueprints use the cluster probabilities as features.
If you wish to assess the impact of the k-means step on the outcome of the model, delete the k-means branch in Composable ML and use the Leaderboard to assess how the model changed.
As Robot 2 says, feature impact operates on the RAW data and is inclusive of both the preprocessing AND the modeling.
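To make that point concrete, here's a toy scikit-learn analogue (not DataRobot's blueprint code): the PCA > k-means branch is just another step inside the pipeline, so permutation importance is still reported per original column.
``` python
# Toy illustration: permutation-based feature impact shuffles the ORIGINAL
# columns, so whatever value the PCA > k-means derived features add is
# attributed back to those columns.
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.pipeline import make_pipeline, make_union
from sklearn.preprocessing import FunctionTransformer

X, y = make_classification(n_samples=500, n_features=6, random_state=0)

# Derived-feature branch: PCA, then k-means (cluster distances stand in here
# for the CLUSTER_ID / cluster-probability features mentioned above).
cluster_branch = make_pipeline(
    PCA(n_components=3, random_state=0),
    KMeans(n_clusters=4, n_init=10, random_state=0),
)

model = make_pipeline(
    make_union(FunctionTransformer(), cluster_branch),  # raw columns + derived ones
    GradientBoostingClassifier(random_state=0),
)
model.fit(X, y)

# One importance value per ORIGINAL feature -- the k-means step has no row here.
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
print(result.importances_mean)
```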
---
title: Prediction Explanations on small data
description: How to generate Prediction Explanations in DataRobot when working with small datasets.
---
# Prediction Explanations on small data {: #prediction-explanations-on-small-data }
!!! warning
The described workaround is intended for users who are very familiar with the [partitioning methods](partitioning) used in DataRobot modeling. Be certain you understand the implications of the changes and their impact on resulting models.
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
**Can I get [Prediction Explanations](pred-explain/index) for a small dataset?**
For small datasets, specifically those with validation subsets less than 100 rows, we cannot run XEMP Prediction Explanations. (I assume that's true for SHAP also, but I haven't confirmed). Is there a common workaround for this? I was considering just doubling or tripling the dataset by creating duplicates, but not sure if others have used slicker approaches.
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
It’s not true for SHAP, actually. No minimum row count there. 🤠
I feel like I’ve seen workarounds described in `#cfds` or `#data-science` or somewhere... One thing you can do is adjust the partitioning ratios to ensure 100 rows land in Validation. There might be other tricks too.
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
Right, that second idea makes sense, but you'd need probably > 200 rows. The user has a dataset with 86 rows.
I just don't want to have to eighty-six their use case. 🥁
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
OK, no dice there. 🎲
I’d want to be really careful with duplicates, but this _MIGHT_ finesse the issues:
1. Train on your actual dataset, do “Training Predictions”, and carefully note the partitions for all rows.
2. Supplement the dataset with copied rows, and add a partition column such that all original rows go in the same partitions as before, and all copied rows go in the Validation fold. I guess you probably want to leave the holdout the same.
3. Start a new project, select User CV, and train the model. Probably do Training Predictions again and make sure the original rows kept the same prediction values.
4. You should be able to run XEMP now.
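A rough pandas sketch of steps 1 and 2 above (the file and column names are hypothetical):
``` python
# Duplicate the dataset and add a partition column so the copies land in
# Validation; original rows keep the partitions noted from the first project.
import pandas as pd

original = pd.read_csv("small_dataset.csv")           # e.g., the 86-row dataset
partitions = pd.read_csv("original_partitions.csv")   # row_id, partition (noted from Training Predictions)
original = original.assign(partition=partitions["partition"].values)

# Copy only the non-Holdout rows so the Holdout stays exactly the same.
copies = original[original["partition"] != "Holdout"].copy()
copies["partition"] = "Validation"

augmented = pd.concat([original, copies], ignore_index=True)
augmented.to_csv("augmented_dataset.csv", index=False)
```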
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
I think (fingers crossed) that this would result in the same trained model, but you will have faked out XEMP. However, Validation scores for the modified model would be highly suspect. XEMP explanations would probably be OK, as long as you ensure the copied data didn’t appreciably change the distributions of any features in the Validation set.
I think if you scrupulously kept the Holdout rows the same, and the Holdout scores match in the two models, that is a sign of success.
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
Right, so if I ran Autopilot again, it would do unreasonably well on that Validation set, but if I just train the same blueprint from the original Autopilot, that would be fine.
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
Yes. Autopilot would probably run a different sequence of blueprints because the Leaderboard order would be wacky and the winning blueprint would quite likely be different.
It almost goes without saying, but this is more suspect the earlier you do it in the model selection process. If you’re doing a deep dive on a model you’ve almost locked in on, that’s one thing, but if you’re still choosing among many options, it’s a different situation.
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
Brilliant, thank you!
---
title: Dynamic time warping (DTW)
description: Does dynamic time warping attempt to align the endpoint of series that may not be entirely overlapping?
---
# Dynamic time warping (DTW) {: #dynamic-time-warping-dtw }
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
**It is my understanding that dynamic time warping attempts to align the endpoint of series that may not be entirely overlapping.**
Consider my client's use case, which involves series of movie KPIs from upcoming releases. They get 10-20 weeks of KPIs leading up to a movie's opening weekend. Clearly many time series are not overlapping, but relatively they could be lined up (like 1 week from opening, 2 weeks from opening, etc.). They could do this in R/Python, but I was thinking time series clustering might be able to handle this.
What do I need to know—like series length limitations or minimal overlapping time periods, etc.? Is my understanding of dynamic time warping even correct?
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
Well, it would be more about the points in the middle generally, rather than the ends.
<span style="color:red;font-size: 1rem"> `Robot 3`</span>
For running [time series clustering](ts-clustering), you need:
* 10 or more series.
* If you want *K* clusters, you need at least *K* series with 20+ time steps. (So if you specify 3 clusters, at least three of your series need to be of length 20 or greater.)
* If you took the union of all your series, the union needs to collectively span at least 35 time steps.
<span style="color:red;font-size: 1rem"> `Robot 3`</span>
In DR, the process of DTW is handled during model building—it shouldn’t require any adjustment from the user. If it errors out, flag it for us so we can see why.
**Under the hood**
* When you press Start in clustering for time series, some k-means blueprints involve DTW (others involve a related technique called Soft-DTW) and then k-means is applied to cluster the series.
* The goal of DTW is to align the series. For example, `sin(x)`, `-sin(x)`, and `cos(x)` are all different functions, but follow the same pattern. DTW will (trivially) shift these functions so their peaks and valleys line up. Their distance would effectively be zero, after applying DTW.
* _But..._ DTW can do more than shifting; it can impute repeated values in the middle of a series, like Robot 2 mentioned.

(Image pulled from [here](https://rtavenar.github.io/blog/dtw.html){ target=_blank }. That site has lots of good moving images; I’m texting from the car so can’t get them to copy over.)
In the image, the left example is straight up Euclidean distance. You just take each time point, calculate the distance between the two series at that moment in time, then square and add them. That’s your Euclidean distance.
In DTW, the top series (I’ll call it `T`) is mapped to the bottom (`B`) in a more complicated way.
* We’re gonna create 2 new series: `T*` and `B*`, and calculate the Euclidean distance between those.
`T1` (the first observation in `T`) is mapped to `B1` through `B6`. This means, to “match up” `T` and `B`, `T1` is going to be copied 6 times. So, `T*1 = T1, T*2 = T1, … T*6 = T1`.
* That is, DTW takes `T` and stretches it out by making five copies of T1, so for that `region-of-time`, *T* and *B* are aligned. It’s kind of like shifting part of the series, but using “last value carried forward imputation” to do it. (I’m sure the video on the site will show it better than I’ve described, sorry.)
Then, `T*7 = T2, T*8 = T3` and so on.
So far, `B* = B` for like the first 25 steps or so. Fast forward to the right side of the valley. Since `B` is mapped to multiple places in `T`, we're gonna define `B*` here to take on multiple copies of `B`.
`B*25 = B25`
`B*26 = B25`
`...`
`B*30 = B25`
Then, `B*31 = B26, B*32 = B27`, and so on.
At the end, this should mean that `B*` and `T*` are the same length. Then, we calculate the Euclidean distance between `B*` and `T*`.
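If it helps to see the stretching as code, here's a tiny NumPy version of the classic DTW dynamic program (illustrative only, not DataRobot's implementation):
``` python
import numpy as np


def dtw_distance(t, b):
    """Classic DTW: a point may be matched to several points in the other series."""
    n, m = len(t), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (t[i - 1] - b[j - 1]) ** 2
            # Repeating t[i-1] or b[j-1] is the "copying"/stretching described above.
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return np.sqrt(cost[n, m])


x = np.linspace(0, 2 * np.pi, 50)
print(dtw_distance(np.sin(x), np.sin(x + 1)))              # small: DTW re-aligns the shift
print(np.sqrt(np.sum((np.sin(x) - np.sin(x + 1)) ** 2)))   # plain Euclidean distance is larger
```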
<span style="color:red;font-size: 1rem"> `Robot 3`</span>
So, Robot 1, in more detail than you wanted (and almost surely way sloppier!), the starting and ending points get aligned, but there are additional imputation operations inside that stretch the series.
<span style="color:red;font-size: 1rem"> `Robot 3`</span>
And in DataRobot all this happens under the hood. So, thank a developer 😉
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
Hey Robot 3, the client was very appreciative of this information, so thank you! They did ask if there was any documentation on the guardrails/constraints around time series clustering. Do we have them published somewhere?
<span style="color:red;font-size: 1rem"> `Robot 4`</span>
We have that information in the [documentation](ts-consider#clustering-considerations)!
---
title: Single- vs. multi-tenant SaaS
description: DataRobot supports both single-tenant and multi-tenant SaaS and here's what it means.
---
# Single- vs. multi-tenant SaaS {: #single-vs-multi-tenant-saas }
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
**What do we mean by Single-tenant and multi-tenant SaaS? Especially with respect to the DataRobot cloud?**
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
Single-tenant and multi-tenant generally refer to the architecture of a software-as-a-service (SaaS) application. In a single-tenant architecture, each customer has their own dedicated instance of the DataRobot application. This means that their DataRobot is completely isolated from other customers, and the customer has full control over their own instance of the software (it is self-managed). In our case, these deployment options fall in this category:
* Virtual Private Cloud (VPC), customer-managed
* AI Platform, DataRobot-managed
In a multi-tenant SaaS architecture, multiple customers share a single instance of the DataRobot application, running on a shared infrastructure. This means that the customers do not have their own dedicated instance of the software, and their data and operations are potentially stored and running alongside other customers, while still being isolated through various security controls. This is what our DataRobot Managed Cloud offers.
In a DataRobot context, multi-tenant SaaS is a single core DataRobot app (app.datarobot.com) running on a core set of instances/nodes. All customers use the same job queue and resource pool.
In single-tenant, we instead run a custom environment for each user & connect to them with a private connection. This means that resources are dedicated to a single customer and allows for more restriction of access AND more customizability.
<span style="color:red;font-size: 1rem"> `Robot 3`</span>
* Single-tenant = We manage a cloud install for one customer.
* Multi-tenant = We manage multiple customers on one instance—this is https://app.datarobot.com/
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
In a single-tenant environment, one customer's resource load is isolated from any other customer, which avoids someone's extremely large and resource-intensive job affecting others. That said, we isolate our workers, so even if a large job is running for one user, it doesn't affect other users. We also have worker limits to prevent one user from hogging all the workers.
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
Ah okay, I see...
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
Single-tenant's more rigid separation is a way to balance the benefits of on-prem (privacy, dedicated resources, etc.) and the benefits of cloud (don't have to upkeep your own servers/hardware, software updating and general maintenance is handled by DR, etc.).
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
Thank you very much Robot 2 (and 3)... I understand this concept much better now!
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
Glad I could help clarify it a bit! Note that I'm not directly involved in single-tenant development, so I don't have details on how we're implementing it, but this is accurate as to the general motivation to host single-tenant SaaS options alongside our multi-tenant environments.
<span style="color:red;font-size: 1rem"> `Robot 4`</span>
**Single tenant**: you rent an apartment. When you're not using it, nobody else is. You can leave your stuff there without being concerned that others will mess with it.
**Multi tenant**: you stay in a hotel room.
There is another analogy in [ELI5](eli5), too.
---
title: Normalizing for monotonicity
description: With DataRobot's monotonicity, to normalize or not to normalize, that is the question.
---
# Normalizing for monotonicity {: #normalizing-for-monotonicity }
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
**To normalize or not to normalize, that is the question**
Got some questions about monotonicity (I've been testing models heavily with such constraints). Would appreciate any answers or documentation that can help.
1. When we apply monotonic Increasing/Decreasing constraints to attributes, is DataRobot doing some kind of normalization (capping and flooring, binning, etc.)?
2. When we apply `try only monotonic` models, will it try GBM, XGBOOST, RF etc.?
Thanks for the help!
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
No normalization that I know of, and just XGBoost and GAM. You don't really need to normalize data for XGBoost models.
<span style="color:red;font-size: 1rem"> `Robot 3`</span>
For docs you can check out how to [configure feature constraints](feature-con#monotonicty) and then the [workflow to build them](monotonic). And here's an [ELI5](eli5) answer to "what does monotonic mean?"
---
title: Ordered categoricals
description: Does DataRobot recognize ordered categorical features, and how can you leverage the ordering?
---
# Ordered categoricals {: #ordered-categoricals }
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
Does DataRobot recognize ordered categoricals, like grade in the infamous lending club data?
This is a question from a customer:
> Can you tell your models that `A < B < C` so that it’s more regularized?
I feel like the answer is that to leverage the ordering you would use it as a numeric feature. Quite likely a boosting model is at the top, so it's just used as an ordered feature anyway.
If you just leave it as is, our models will figure it out.
When using a generalized linear model (GLM), you would want to leverage this information because you need fewer degrees of freedom in your model; however, I'm asking here to see if I missed some points.
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
We actually do order these variables for XGBoost models. The default is frequency ordering but you can also order lexically.
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
You mean ordinal encoding or directly in XGBoost?
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
Yeah the ordinal encoding orders the data.
<span style="color:red;font-size: 1rem"> `Robot 1`</span>

<span style="color:red;font-size: 1rem"> `Robot 2`</span>
Just change frequency to `lexical` and try it out.
<span style="color:red;font-size: 1rem"> `Robot 3`</span>

Build your own blueprint and select the columns—explicitly set them to `freq/lex`.
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
If you’re using a GLM, you can also manually encode the variables in an ordered way (outside DR). Use 3 columns:
* A: 0, 0, 1
* B: 0, 1, 1
* C: 1, 1, 1
Lexical works fine in a lot of cases; just do it for all the variables. You can use an `mpick` to choose different encodings for different columns.
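A small pandas sketch of that cumulative ("thermometer") encoding with hypothetical data; it's the same idea as the 0/1 pattern above, just with the columns listed in the other order:
``` python
# Encode an ordered categorical A < B < C as cumulative 0/1 indicator columns
# for use in a GLM outside DataRobot.
import pandas as pd

grade = pd.Series(["A", "C", "B", "A"], name="grade")
order = ["A", "B", "C"]
codes = grade.map({level: i for i, level in enumerate(order)})

# Column "ge_B" is 1 when the grade is B or higher, and so on.
encoded = pd.DataFrame(
    {f"ge_{level}": (codes >= i).astype(int) for i, level in enumerate(order)}
)
print(encoded)   # A -> 1,0,0   B -> 1,1,0   C -> 1,1,1
```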
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
Got it! Thanks a lot everyone!
---
title: Model integrity and security
description: What measures does the platform support to assure the integrity and security of AI models?
---
# Model integrity and security {: #model-integrity-and-security }
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
**What measures does the platform support to assure the integrity and security of AI models?**
For example, do we provide adversarial training, reducing the attack surface through security controls, model tampering detection, and model provenance assurance?
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
We have a variety of approaches:
1. While we don’t use adversarial training explicitly, we do make heavy use of tree-based models, such as XGBoost, which are very robust to outliers and adverse examples. These models do not extrapolate, and we fit them to the raw, unprocessed data. Furthermore, since XGBoost only uses the order of the data, rather than the raw values, large outliers do not impact its results, even if those outliers are off by many orders of magnitude. In our internal testing, we’ve found that XGBoost is very robust to mislabeled data as well. If your raw training data contains outliers and adverse examples, XGBoost will learn how to handle them.
2. All of our APIs are protected by API keys. We do not allow general access, even for predictions. This prevents unauthorized users from accessing anything about a DataRobot model.
3. We do not directly allow user access to model internals, which prevents model tampering. The only way to tamper with models is through point 1, and XGBoost is robust to adverse examples. (Note that rating table models and custom models do allow the user to specify the model, and should therefore be avoided in this case. Rating table models are fairly simple though, and for custom models, we retain the original source code for later review).
4. In MLOps, we provide a full lineage of model replacements and can tie each model back to the project that created it, including the training data, models, and tuning parameters.
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
Do not extrapolate?
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
That is a huge factor in preventing adverse attacks. Most of our models do not extrapolate.
Take a look at the materials on bias and fairness too. Assessing a model's bias is very closely related to protecting against adverse attacks. Here are the docs on [bias and fairness](b-and-f/index) functionality which include options from the settings when starting a project, model insights, and deployment monitoring.
---
title: Word Cloud repeats
description: Why would a term would show up multiple times in a word cloud?
---
# Word Cloud repeats {: #word-cloud-repeats }
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
**Why would a term show up multiple times in a word cloud?**
And for those occurrences, why would they have different coefficients?
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
Is the word cloud combining multiple text columns?
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
Ah, yes, that is definitely it, thank you!! Is there a way to see a word cloud on only one feature?
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
The simplest solution would be to use a feature list and train a model on just the one text feature you’re interested in.
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
^^ That’s exactly what I ended up doing. Thank you so much for the quick answer.
---
title: Target transform
description: How does transforming your target (log, ^2) help ML models and when should you use each?
---
# Target transform {: #target-transform }
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
**How does transforming your target (`log(target)`, `target^2`, etc.) help ML models and when should you use each?**
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
This is not going to be an ELI5 answer, but [here is a reason](https://www.codecademy.com/article/data-transformations-for-multiple-linear-regression){ target=_blank } you log transform your target.
TL;DR: When you run a linear regression, you try to approximate your response variable (e.g., target) by drawing a line through a bunch of points and you want the line to be as close to the response variable for those points as possible. Sometimes though, those points don’t follow a straight line. They might follow a curvy line, in which case a simple line through the points doesn’t approximate your target very well. In some of those scenarios, you can log transform your target to make the relationship between those points and your response variable more like a straight line. The following images from Code Academy show the bad:

And the good:

<span style="color:red;font-size: 1rem"> `Robot 3`</span>
This is specific to linear models that fit a straight line. For tree-based models like XGBoost, you don’t need to transform your target (or any other variable)!
<span style="color:red;font-size: 1rem"> `Robot 4`</span>
Yeah—log-transforming specifically was born out of trying to better meet the assumptions in linear regression (when they're violated). I have seen some cases where log-transformations can help from the predictive performance standpoint. (AKA when your target has a really long tail, log-transforming makes this tail smaller and this sometimes helps models understand the target better.)
<span style="color:red;font-size: 1rem"> `Robot 5`</span>
[Honey, I shrunk the target variable](https://florianwilhelm.info/2020/05/honey_i_shrunk_the_target_variable/){ target=_blank }.
<span style="color:red;font-size: 1rem"> `Robot 3`</span>
`Robot 4` you can also use a log link loss function (such as poisson loss) on both XGBoost and many linear regression solvers. I prefer that over the log transform, as the log transform biases the predicted mean, which makes your lift charts look funny on the scale of the original target.
But it really depends on the problem and what you’re trying to model!
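A quick numerical illustration of that bias: a back-transformed model of `log(y)` chases the geometric mean, which undershoots the arithmetic mean of a long-tailed target.
``` python
import numpy as np

rng = np.random.default_rng(0)
y = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)  # long-tailed target

print(y.mean())                   # arithmetic mean, ~1.65 -- what a log-link model targets
print(np.exp(np.log(y).mean()))   # exp(mean of log y), ~1.0 -- what the log transform recovers
```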
<span style="color:red;font-size: 1rem"> `Robot 4`</span>
Or use inverse hyperbolic sine amirite? 😂
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
Thanks all! Very helpful. `Robot 4` hyperbolic sine amirite is the name of my favorite metal band.
---
title: Defining redundant features
description: How does DataRobot define similarity in features and call them redundant?
---
# Defining redundant features {: #defining-redundant-features }
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
**What makes a feature redundant?**
The [docs](feature-impact#remove-redundant-features-automl) say:
> If two features change predictions in a similar way, DataRobot recognizes them as correlated and identifies the feature with lower feature impact as redundant
How do we quantify or measure "similar way"?
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
If two features are highly correlated, the prediction difference (prediction before feature shuffle minus prediction after feature shuffle) **of the two features should also be correlated**. The prediction difference can therefore be used to evaluate pairwise feature correlation. When two highly correlated features are identified, the feature with the lower feature impact is marked as the redundant feature.
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
Do we consider two features redundant when their prediction differences are the same, or within `-x%` and `+x%` of each other?
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
We look at the correlation coefficient between the prediction differences and if it's above a certain threshold, we call the less important one (according to the models' feature impact) redundant.
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
Specifically:
1. Calculate prediction difference before and after feature shuffle:
`(pred_diff[i] = pred_before[i] - pred_after[i])`
2. Calculate pairwise feature correlation (top 50 features, according to model feature impact) based on `pred_diff`.
3. Identify redundant features (high correlation based on our threshold) then test that removal does not affect accuracy significantly.
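A toy sketch of those three steps outside DataRobot (illustrative only; the feature names and near-duplicate column are made up):
``` python
# Shuffle each feature, record the per-row prediction difference, then inspect
# the correlations between those difference vectors.
import numpy as np
import pandas as pd
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X, y = make_regression(n_samples=500, n_features=5, n_informative=5, random_state=0)
X = pd.DataFrame(X, columns=[f"f{i}" for i in range(5)])
X["f5"] = X["f0"] * 0.95 + rng.normal(scale=0.1, size=len(X))  # near-duplicate of f0

model = GradientBoostingRegressor(random_state=0).fit(X, y)
pred_before = model.predict(X)

pred_diff = {}
for col in X.columns:
    shuffled = X.copy()
    shuffled[col] = rng.permutation(shuffled[col].values)
    pred_diff[col] = pred_before - model.predict(shuffled)

# If the difference vectors for f0 and f5 correlate strongly, the one with the
# lower feature impact becomes the redundancy candidate.
print(pd.DataFrame(pred_diff).corr().round(2))
```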
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
Thank you, Robot 2! Super helpful.
---
title: Import for Keras or TF
description: Use DataRobot custom models to import custom inference models.
---
# Import for Keras or TF {: #import-for-keras-or-tf }
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
**Is there a way to import .tf or .keras models?**
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
We don’t allow any form of model import (.pmml, .tf, .keras, .h5, .json etc.), except for [custom models](custom-inf-model). That said, you can use custom models to do _whatever you want_, including _importing whatever you wish_.
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
I have a customer trying to do Custom Inference. Can he use this or only .h5? I don't really understand the tradeoff of one version of the model objects versus another. I've only used .h5 and JSON. I think he was also curious if we have any support for importing JSON weights.
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
We do not support importing model files, except for custom models—he'll need to write a custom inference model to load the file and score data.
But yes, we support custom inference models. You can do literally whatever you want in custom inference models.
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
Thanks, sorry if I'm being dense / confused—so with Custom Inference he should be able to load the .pb, .tf, .keras files?
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
Yes. He will need to write his own Python code. So if he can write a Python script to load the .pb, .tf, .keras, or .whatever file and score data with it, he can make that script a custom inference model.
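For example, here's a rough sketch of what such a `custom.py` might look like for a saved Keras binary classifier. The hook names follow the custom model conventions, and the artifact file name and class labels are hypothetical, so check the custom inference model docs for the exact interface.
``` python
# custom.py -- load a saved Keras model and score incoming data.
import pandas as pd
from tensorflow import keras


def load_model(code_dir):
    # Load whatever artifact was shipped alongside this script (.keras, .h5, SavedModel, ...).
    return keras.models.load_model(f"{code_dir}/model.keras")


def score(data: pd.DataFrame, model, **kwargs) -> pd.DataFrame:
    # For a binary classifier, return one probability column per class label.
    proba = model.predict(data).ravel()
    return pd.DataFrame({"yes": proba, "no": 1 - proba})
```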
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
Ohhh of course :) now I understand. Duh, Robot 1. Thanks!
---
title: ACE score and row order
description: Does DataRobot's ACE score, a univariate measure of correlation, depend on row order?
---
# ACE score and row order {: #ace-score-and-row-order }
!!! faq "What is ACE?"
ACE scores (Alternating Conditional Expectations) are a univariate measure of correlation between the feature and the target. ACE scores detect non-linear relationships, but as they are univariate, they do not detect interaction effects.
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
**Does ACE score depend on the order of rows?**
Is there any sampling done in EDA2 when calculating the ACE score? The case: two datasets that differ only in the order of rows are run separately with the same OTV settings (same date range for partitions, same number of rows in each partition), and there is a visible difference in the ACE scores. Does the ACE score depend on the order of rows?
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
EDA1 sample will vary based on order of rows for sure. EDA2 starts with EDA1 and then removes rows that are in the holdout too, so project settings can also matter.
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
There are 8k rows and 70 features, fairly small datasets.
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
ACE doesn’t need a large sample: It could be 1k or even 100. If the dataset is less than 500MB, then all rows may be in the sample, but the order may be different.
---
title: Neural networks and tabular data
description: A compendium of reasons why you don't need neural networks with tabular data.
---
# Neural networks and tabular data {: #neural-networks-and-tabular-data }
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
**Can someone share research on why we don't need neural networks for tabular data?**
Hi, I am speaking with a quant team and explaining why you don't need neural networks for tabular data. I've said that "conventional" machine learning typically performs as well or better than neural networks, but can anyone point to research papers to support this point?
<span style="color:red;font-size: 1rem"> `Robot 2` and `Robot 3`</span>
Here are a few:
* [Tabular Data: Deep Learning Is Not All You Need](https://arxiv.org/pdf/2106.03253.pdf){ target=_blank }
* [Why do tree-based models still outperform deep learning on tabular data?](https://arxiv.org/abs/2207.08815){ target=_blank }
* [Deep Neural Networks and Tabular Data: A Survey](https://arxiv.org/abs/2110.01889){ target=_blank }
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
This is great. Thanks!
<span style="color:red;font-size: 1rem"> `Robot 3`</span>
Not done yet...
[This](https://medium.com/@tunguz/another-deceptive-nn-for-tabular-data-the-wild-unsubstantiated-claims-about-constrained-f9450e911c3f){ target=_blank } ("Another Deceptive NN for Tabular Data — The Wild, Unsubstantiated Claims about Constrained Monotonic Neural Networks") and also a series of medium posts by Bojan Tunguz, I just read these cause I'm not smart enough for actual papers `¯\_(ツ)_/¯ `. Also these:
* [Trouble with Hopular](https://medium.com/@tunguz/trouble-with-hopular-6649f22fa2d3){ target=_blank }
* [About Those Transformers for Tabular Data...](https://medium.com/@tunguz/about-those-transformers-for-tabular-data-116c13c36a5c){ target=_blank }
He puts out one of these once a month, basically he beats the neural nets with random forests or untuned GBMs most of the time.
<span style="color:red;font-size: 1rem"> `Robot 3`</span>
Lol deep learning on tabular data is. Also Robot 3, not smart enough? You could write any one of them. Point them at Bojan Tunguz on Twitter:
[XGBoost Is All You Need](https://twitter.com/tunguz/status/1509197350576672769?s=20){ target=_blank }
<span style="color:red;font-size: 1rem"> `Robot 3`</span>
Looking...
<span style="color:red;font-size: 1rem"> `Robot 4`</span>
[Here](https://twitter.com/tunguz/status/1578730907711655937?s=20){ target=_blank } he is again (this is the thread that spawned the blog posts above). Basically this guy has made a name for himself disproving basically every paper on neural nets for tabular data.
Internally, our own analytics show that gradient-boosted trees are the best model for 40% of projects, linear models win for 20% of projects, and Keras/deep learning models win for less than 5% of projects.
Basically, XGBoost is roughly 10x more useful than deep learning for tabular data.
If they're quants, they can be convinced with data!
<span style="color:red;font-size: 1rem"> `Robot 4`</span>
Robot 1, also we have at least 2 patents for our deep learning on tabular data methods. We spent 2 years building the state of the art here, which includes standard MLPs, our own patented residual architecture for tabular data, and tabular data "deep CTR models" such as Neural Factorization Machines and AutoInt.
Even with 2 years' worth of work and the best data science team I've worked with in my career, we still couldn't get to "5% of projects have a deep learning model as the best model".
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
You all are the best. Thanks!
---
title: NPS in DataRobot
description: Using DataRobot to implement an NPS (net promoter scores) solution.
---
# NPS in DataRobot {: #nps-in-datarobot }
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
**Has anyone implemented an NPS solution in DataRobot?**
Hello NLP team. I was wondering if anyone has implemented an NPS (net promoter scores) solution in DataRobot. I have a customer that wants to use a multilabel project that not only labels Good, Bad, Neutral, but also tags the cause of the bad/good review. Say for example someone responds with:
“I loved the product but the service was terrible.”
How can we use DataRobot to tell us that it contains both a good and a bad comment, with the bad comment assigned to “service” and the good one assigned to “product”?
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
Multilabel with classes like `good_product`, `bad_product`, `good_service`, etc.?
<span style="color:red;font-size: 1rem"> `Robot 3`</span>
I would use the raw 1-10 score as a target. A good model should be able to learn something like:
Target: 7
Text: “I loved the product but the service was terrible.”
coefficients:
* intercept: +5.0
* "loved the product": +4.0
* "service was terrible": -1.0
prediction: 5 + 4 - 1 = 7
<span style="color:red;font-size: 1rem"> `Robot 3`</span>
Don't aggregate the data to get a NPS and then try to label and bin. Just use the raw survey scores directly and look at how the words/phrases in the word cloud drive the score up and down. Multilabel (and multiclass) both feel like they are overcomplicating the problem—great for other things but you don't need it here!
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
“don’t aggregate the data to get a NPS and then try to label and bin”
Can you elaborate a bit more on this ^^ sentence
<span style="color:red;font-size: 1rem"> `Robot 3`</span>
So a "net promoter score" is an aggregate number. It doesn't exist in individual surveys. This is a [great article](https://en.wikipedia.org/wiki/Net_promoter_score){ target=_blank } on it.
Typically, a net promoter score survey has 2 questions:
1. On a scale of 1-10, how likely are you to recommend this product to a friend?
2. Free form text: Why?
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
Gotcha, I see what you mean.
<span style="color:red;font-size: 1rem"> `Robot 3`</span>
So let's say you get 100 surveys, and the distribution of scores is something like:
- 1: 1 respondent
- 2: 3 respondents
- 3: 5 respondents
- 4: 7 respondents
- 5: 15 respondents
- 6: 25 respondents
- 7: 15 respondents
- 8: 15 respondents
- 9: 10 respondents
- 10: 4 respondents
And the net promoter methodology bins these up:
- Detractors: 1-6
- Passives: 7-8
- Promoters: 9-10
<span style="color:red;font-size: 1rem"> `Robot 3`</span>
So in our case we have:
- Detractors: 1+3+5+7+15+25 = 56
- Passives: 15+15 = 30
- Promoters: 10+4 = 14
Converting to %'s we get:
- Detractors: 56%
- Passives: 30%
- Promoters: 14%
The net promoter score is `(Promoters %) - (Detractors %)`, or in this case 14 - 56.
So the NPS for this survey is -42, which is bad.
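For reference, here is the binning and arithmetic above as a short, self-contained sketch (plain Python, nothing DataRobot-specific):
```python
# The NPS arithmetic from the example above: bin the raw 1-10 scores,
# then NPS = (% promoters) - (% detractors).
score_counts = {1: 1, 2: 3, 3: 5, 4: 7, 5: 15, 6: 25, 7: 15, 8: 15, 9: 10, 10: 4}
total = sum(score_counts.values())

detractors = sum(n for score, n in score_counts.items() if score <= 6)  # 56
promoters = sum(n for score, n in score_counts.items() if score >= 9)   # 14

nps = 100 * (promoters - detractors) / total
print(nps)  # -42.0
```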
<span style="color:red;font-size: 1rem"> `Robot 3`</span>
Ok, now why is it bad? Well, you could hire a consultant to read the surveys and tell you, or you can use DataRobot to read them instead!
The concept of a net promoter score at the individual level doesn't apply—you can't look at one person and compute their NPS. NPS is a property of a group of users. At the user level you could bin people into multiclass "detractor", "passive", and "promoter", but you lose information, particularly in the detractor class.
I personally think a 6 is really different from a 1. A 1 hates your product, and a 6 is pretty much a passive.
So it's useful to build an individual-level model where the target is the direct, raw score from 1-10 and the predictor is the text of the response. And as I pointed out above, DataRobot's word cloud and coefficients will tell you which pieces of text increase the user's score and which pieces decrease it, adding up to a total predicted score for a user based on what they said.
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
You can also use text prediction explanations to look at individual reviews.
<span style="color:red;font-size: 1rem"> `Robot 3`</span>
Oh that’s right! That will give you word level positive/negative/neutral for each review.
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
Thanks Robot 2 and Robot 3! This is all great information. I’ll see what we can come up with, but I’d definitely like to leverage the [text prediction explanations](predex-text) for this one.
|
rr-nps-scores
|
---
title: Offset/exposure with Gamma distributions
description: With DataRobot's monotonicity, to normalize or not to normalize, that is the question.
---
# Offset/exposure with Gamma distributions {: #offset-exposure-with-gamma-distributions }
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
**How does DataRobot treat exposure and offset in model training with the target following a Gamma distribution?**
The target is total claim cost while `exposure = claim count`. So, in DataRobot, one can either set exposure equal to “claim count” _or_ set `offset = ln(“claim count”)`. Should I reasonably expect that both scenarios are mathematically equivalent?
Thank you!
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
Yes, they are mathematically equivalent. You either multiply by the exposure or add the `ln(exposure)`.
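If you want to verify that equivalence outside DataRobot, here is a minimal sketch using statsmodels (its GLM accepts either an `exposure` array or a precomputed `offset` for log-link families); the simulated claims data is made up for illustration.
```python
# Fit the same Gamma (log link) GLM twice: once with exposure = claim count,
# once with offset = ln(claim count). The fitted coefficients should match.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 2000
claim_count = rng.integers(1, 10, size=n).astype(float)
X = sm.add_constant(rng.normal(size=(n, 2)))
mean_cost = claim_count * np.exp(X @ np.array([0.5, 0.3, -0.2]))
total_cost = rng.gamma(shape=2.0, scale=mean_cost / 2.0)

gamma_log = sm.families.Gamma(link=sm.families.links.Log())

# Option 1: pass the exposure directly (statsmodels log-transforms it internally).
fit_exposure = sm.GLM(total_cost, X, family=gamma_log, exposure=claim_count).fit()

# Option 2: pass ln(exposure) as an offset.
fit_offset = sm.GLM(total_cost, X, family=gamma_log, offset=np.log(claim_count)).fit()

print(np.allclose(fit_exposure.params, fit_offset.params))  # True
```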
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
Thanks, that was my impression as well. However, I did an experiment, setting up projects using the two approaches with the same feature list. One project seems to overpredict the target, while the other underpredicts. If they are mathematically equal, what might have caused the discrepancy?
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
Odd. Are you using the same error metric in both cases?
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
Yes, both projects used the recommended metric—Gamma Deviance.
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
Can you manually compare predictions and actuals by downloading the validation or holdout set predictions?
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
Upon further checking, I see I used the wrong feature name (for the exposure feature) in the project with the exposure setting. After fixing that, predictions from both projects match (by downloading from the Predict tab).
I did notice, however, that the Lift Charts are different.
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
That is likely a difference in how we calculate offset vs. exposure for Lift. I would encourage making your own Lift Charts in a notebook. Then you could use any method you want for handling weights, offset, and exposure in the Lift Chart.
<span style="color:red;font-size: 1rem"> `Robot 3`</span>
We do have a great AI Accelerator for [customizing lift charts](custom-lift-chart).
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
Amazing. Thank you!
|
rr-offset-exposure-gamma
|
---
title: Intermittent target leakage
description: DataRobot's target leakage detection explained.
---
# Intermittent target leakage {: #intermittent-target-leakage }
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
**Why might target leakage show intermittently?**
Hi team! A student in the DataRobot for Data Scientists class created a ModelID Categorical Int feature from the standard class “fastiron 100k data.csv.zip” file, and it flagged as [Target Leakage](data-quality#target-leakage) on his first run under manual mode.
When he tried to do it again, the platform did not give the yellow triangle for target leakage but the Data Quality Assessment box did flag a target leakage feature.
His questions are:
1. Why is DataRobot showing the target leakage intermittently?
2. The original ModelID as a numeric int did not cause a target leakage flag, and when he included that parent feature along with the child feature (ModelID as categorical int), it also did not flag as Target Leakage—why is that?
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
At a quick glance, it sounds like the user created a new feature `ModelID (Categorical Int)` via a Var Type transform and then kicked off manual mode, in which the created feature received a calculated ACE importance score. The importance score passed our target leakage threshold, and therefore the Data Quality Assessment tagged the feature as potential leakage.
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
After looking at the project, I see that a feature list called "Informative Features - Leakage Removed" was not created, meaning the feature didn't pass the "high-risk" leakage threshold value and was therefore tagged as a "moderate-risk" leakage feature.
I found the `/eda/profile/` values from the Network Console for the project for the specific feature `ModelId (Categorical Int)`. The calculated ACE importance score (Gini Norm metric) for that created feature is about 0.8501:
`target_leakage_metadata: {importance: {impact: "moderate", reason: "importance score", value: 0.8501722785995578}}`
And... yeah, the hard-coded moderate-risk threshold value in the code is in fact 0.85.
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
You can let the user know that changing a Numeric feature to a Categorical var type can lead to potentially different univariate analysis results with regard to our Data page Importance score calculations. The Importance score just narrowly passed our moderate-risk Target Leakage threshold value. Hope that helps.
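A hypothetical, non-DataRobot illustration of that point (made-up data, not the ACE calculation itself): treating a high-cardinality ID as categorical lets each level memorize its own target mean, so a univariate importance measure can jump.
```python
# Compare a numeric vs. categorical univariate view of the same ID-like feature.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 5000
model_id = rng.integers(0, 200, size=n)            # high-cardinality ID
target = rng.normal(loc=model_id % 7, scale=1.0)   # target depends on the ID non-linearly

df = pd.DataFrame({"ModelID": model_id, "y": target})

# Numeric treatment: a single linear relationship, which barely exists here.
numeric_r2 = df["ModelID"].corr(df["y"]) ** 2

# Categorical treatment: each level gets its own mean, capturing the pattern.
level_means = df.groupby("ModelID")["y"].transform("mean")
categorical_r2 = np.corrcoef(level_means, df["y"])[0, 1] ** 2

print(round(numeric_r2, 3), round(categorical_r2, 3))  # roughly 0 vs. roughly 0.8
```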
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
Thanks for such detailed feedback!
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
Any time! If you have any further questions about Target Leakage, let the TREX (trust and explainability) team know (or ping me, since I helped work on it)!
|
rr-target-leakage
|
---
title: Robot-to-Robot
description: DataRobot employees ask and answer questions related to the platform and data science.
---
# Robot-to-Robot {: #robot-to-robot }
This section pulls back the curtain to reveal what DataRobot employees talk about in Slack. No surprises here—data science is still top of mind.
Use case | Description
-------- | -----------
[Model integrity and security](rr-model-integrity) | What measures does the platform support to assure the integrity and security of AI models?
[Single- vs. multi-tenant SaaS](rr-singlemulti-tenant) | What's the difference between the two deployment options with respect to DataRobot SaaS?
[ACE score and row order](rr-ace-score) | How is sampling done in EDA2 when calculating ACE scores?
[Redundant features explained](rr-redundant-features) | What makes a feature redundant?
[Import .tf or .keras?](rr-import-keras) | Have you looked at custom models?
[N-grams and prediction confidence](rr-n-gram-predictions) | What tools help understand n-gram predictions?
[GPU vs. CPU](rr-gpu-v-cpu) | What's the difference between GPUs and CPUs?
[Prediction Explanations on small data](rr-predex-small-data) | Can you get Prediction Explanations for a small dataset?
[Neural nets and tabular data](rr-nns-tabular-data) | Do you need neural networks for tabular data?
[PCA and K-Means clustering](rr-pca-kmeans) | What's the impact of PCA on KMeans?
[Default language change with Japanese](rr-language-nlp) | Why did the default language change when modeling Japanese text features?
[Let's talk target leakage](rr-target-leakage) | Why might target leakage show intermittently?
[Normalizing for monotonicity](rr-norm-for-mono) | Can you help me with monotonicity?
[An interesting way to use a payoff matrix](rr-payoff-matrix) | Consider justifying cost vs. identifying profit drivers.
[Target transforms](rr-target-transform) | How does transforming your target help ML models?
[Dynamic time warping](rr-dynamic-time) | What should I look out for with dynamic time warping?
[Multiple reduced feature lists](rr-reduced-feature-lists) | Can I have multiple Reduced Features lists for one project?
[NPS in DataRobot](rr-nps-scores) | Has anyone worked with net promoter scores (NPS)?
[Word Cloud repeats](rr-word-cloud) | Why would a term show up multiple times in a word cloud?
[Ordered categoricals](rr-ordered-cat) | Does DataRobot recognize ordered categoricals?
[Offset/exposure with Gamma distributions](rr-offset-exposure-gamma) | How does DataRobot treat exposure and offset in model training with the target following a Gamma distribution?
|
index
|
---
title: N-grams and prediction confidence
description: Which DataRobot tools help understand n-gram predictions?
---
# N-grams and prediction confidence {: #n-grams-and-prediction-confidence }
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
**Customer question: how do we know which words/n-grams increase confidence in the predictions?**
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
The [Word Cloud](word-cloud) is useful for this!
<span style="color:red;font-size: 1rem"> `Robot 3`</span>
You can also look at the [coefficients](coefficients) directly for any linear model with n-grams.
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
Finally we’ve got [Prediction Explanations](predex-text) for text features.
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
Thanks for the response. I love these and they make sense. I recommended the word cloud, but she indicated that it shows intensity, not confidence (to me they are highly related).
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
Our linear models are regularized, so low-confidence words will be dropped.
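As a hedged illustration of that point with scikit-learn (not the DataRobot blueprint itself, and on a tiny made-up corpus): with L1 regularization, n-grams that don't carry a reliable signal end up with coefficients of exactly zero, which effectively drops them.
```python
# Minimal sketch: an L1-regularized logistic regression on n-grams.
# N-grams whose coefficients shrink to exactly zero are effectively dropped.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "great service", "great product", "great support team", "really great overall", "great value",
    "terrible service", "terrible product", "terrible support team", "really terrible overall", "terrible value",
]
labels = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(penalty="l1", solver="liblinear", C=2.0),
)
model.fit(texts, labels)

vectorizer, clf = model.named_steps.values()
coefs = dict(zip(vectorizer.get_feature_names_out(), clf.coef_[0]))
kept = {term: round(weight, 2) for term, weight in coefs.items() if weight != 0}
dropped = len(coefs) - len(kept)
# Shared terms like "service" or "product" carry no class signal and get zero
# coefficients; typically only the strong signals ("great", "terrible") survive.
print(f"kept {kept}, dropped {dropped} n-grams")
```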
|
rr-n-gram-predictions
|
---
title: Alternate use for a payoff matrix
description: An interesting way to use DataRobot's payoff matrix, consider justifying cost vs. identifying profit drivers.
---
# Alternate use for a payoff matrix {: #alternate-use-for-a-payoff-matrix }
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
**How about justifying cost savings vs. identifying profit drivers?**
Hello Team. A client and I were putting together numbers for a [payoff matrix](profit-curve#compare-models-based-on-a-payoff-matrix) and had an alternative way to look at things. For them, the goal is to justify cost savings rather than identify profit drivers for the use case.
Example:
1. True Positive (benefit): The benefit from an order correctly predicted as canceled. The benefit is no/limited inventory cost. For example, if the item typically costs $100 to store, the cancellation means no additional cost (we put a 0 here). Benefit can also come from additional revenue generated through proactive outreach.
2. True Negative (benefit): The benefit from an order correctly predicted as not canceled. The additional benefit/cost is 0 because the customer simply does not cancel the item and continues with shipment ($1,000 profit on average, or -$100 inventory cost per order).
3. False Positive (cost): The cost of classifying an order as canceled when it did not cancel, which is the lost opportunity or business because the order is not fulfilled or is delayed (-$200).
4. False Negative (cost): The cost of classifying an order as not canceled when it actually cancels. We incur an inventory management cost here (-$100).
Just thought I'd share!
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
Nice!
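As a rough sketch of how those four values roll up into an expected payoff (the dollar amounts are the examples above; the confusion-matrix counts are hypothetical):
```python
# Expected payoff per order using the illustrative values above.
# The confusion-matrix counts are hypothetical, just to show the roll-up.
payoff = {"TP": 0, "TN": 0, "FP": -200, "FN": -100}
counts = {"TP": 120, "TN": 780, "FP": 40, "FN": 60}  # out of 1,000 scored orders

total_payoff = sum(payoff[outcome] * counts[outcome] for outcome in payoff)
per_order = total_payoff / sum(counts.values())
print(total_payoff, per_order)  # -14000 -14.0
```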
|
rr-payoff-matrix
|
---
title: GPU vs. CPU
description: DataRobot applies GPUs or CPUs depending on the task, as explained here.
---
# GPU vs. CPU {: #gpu-vs-cpu }
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
**How does CPU differ from GPU (in terms of training ML models)?**
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
Think about CPUs as a 4-lane highway with trucks delivering the computation, and GPUs as a 100-lane highway with little shopping carts. GPUs are great at parallelism, but only for less complex tasks.
Deep learning, specifically, benefits from that since it's mainly batches of matrix multiplication, and these can be parallelized very easily. So training a neural network on a GPU can be 10x faster than on a CPU. But other models don't get that benefit at all.
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
Thank you! This makes a lot more sense now.
<span style="color:red;font-size: 1rem"> `Robot 3`</span>
Wait, I've got a deep analogy coming!
**Let’s say that I am in a very large library, and my goal is to count all of the books.**
There’s a librarian. That librarian is super smart and knowledgeable about where books are, how they’re organized, how the library works, and all that. The librarian is the boss! The librarian is perfectly capable of counting the books on their own and they’ll probably be very good and organized about it.
But what if there was a big team of people who could count the books with the librarian? We don’t need these people to be library experts — it’s not like you have to be a librarian to count books — we just need people who can count accurately.
* If you have 3 people who count books, that speeds up your counting.
* If you have 10 people who count books, your counting gets even faster.
* If you have 100 people who count books… that’s awesome!
**A CPU is like a librarian.**
* Just like you need a librarian running a library, you need a CPU. A CPU can basically do any jobs that you need done.
* Just like a librarian could count all of the books on their own, a CPU can do math things like building machine learning models.
**A GPU is like a team of people counting books.**
* Just like counting books is something that can be done by many people without specific library expertise, a GPU makes it much easier to take a job, split it among many different units, and do math things like building machine learning models.
**A GPU can usually accomplish certain tasks much faster than a CPU can.**
* If you’re part of a team of people who are counting books, maybe the librarian assigns every person a shelf. You count the books on your shelf. At the exact same time, everyone else counts the books on their shelves. Then you all come together, add up your books, and get the total number of books in the library.
This process is called **parallelizing**, which is just a fancy word for “we split a big job into small chunks, and these small chunks can be done at the same time.” We say parallelizing because we’re doing these jobs “in parallel.” You count your books at the same time as your neighbor counts their books. (Jobs that can't be done in parallel are usually done sequentially, which means "one after another.")
**Let’s say you have 100 shelves in your library and it takes 1 minute to count all of the books on 1 shelf.**
* If the librarian was the only person counting, they couldn’t parallelize their work, because only one person can count one stack of books at one time. So your librarian will take 100 minutes to count all of the books.
* If you have a team of 100 people and they each count their shelves, then every single book is counted in 1 minute. (Now, it will take a little bit of time for the librarian to assign who gets what shelf. It’ll also take a little bit of time to add all of the numbers together from the 100 people. But those parts are relatively fast, so you’re still getting your whole library counted in, maybe, 2-3 minutes.)
Three minutes instead of 100 minutes—that’s way better! Again: GPUs can usually accomplish certain tasks much faster than a CPU can.
**There are some cases when a GPU probably _isn’t_ needed**
1. Let’s say you only have one shelf of books. Taking the time for the librarian to assign 100 people to different parts of the shelf, counting, then adding them up probably isn’t worth it. It might be faster for the librarian to just count all of the books.
2. If a data science job can’t be parallelized (split into small chunks where the small chunks can be done at the same time), then a GPU usually isn’t going to be helpful. Luckily for us, some very smart people have made the vast majority of data science tasks parallelizable.
Let’s look at a simple math example: calculating the average of a set of numbers.
**If you calculate the average with a CPU**, it’s kind of like using your librarian. Your CPU has to add up all of the numbers, then divide by the sample size.
**If you leverage a GPU to help calculate the average**, it’s kind of like using a full team. Your CPU splits the numbers up into small chunks, then each of your GPU workers (called cores) sums their chunk of numbers. Then, your CPU (librarian) will coordinate combining those numbers back together into the average.
If you’re calculating the average of a set of, say, a billion numbers, it will probably be much faster for your CPU to split that billion into chunks and have separate GPU workers do the addition, rather than doing all of it by itself.
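A minimal sketch of that split-sum-combine idea in code, using Python's multiprocessing as a stand-in for GPU cores (a real GPU implementation would use something like CUDA or CuPy):
```python
# "Split into chunks, sum each chunk in parallel, then combine."
from multiprocessing import Pool

import numpy as np


def chunk_sum(chunk: np.ndarray) -> float:
    return float(chunk.sum())


if __name__ == "__main__":
    numbers = np.random.default_rng(0).random(10_000_000)
    chunks = np.array_split(numbers, 8)          # the "librarian" assigns shelves
    with Pool(processes=8) as pool:              # 8 "counters" work at the same time
        partial_sums = pool.map(chunk_sum, chunks)
    average = sum(partial_sums) / len(numbers)   # the "librarian" combines the results
    print(average)
```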
Let’s look at a more complicated machine learning example: a random forest is basically a large number of decision trees. Let’s say you want to build a random forest with 100 decision trees.
* **If you build a random forest on a CPU**, it’s kind of like using your librarian to do the entire counting on their own. Your CPU basically has to build a first tree, then build a second tree, then build a third tree, and so on.
* **If you leverage a GPU to help build a random forest**, then it’s kind of like using a full team to count your books with the librarian coordinating everyone. One part of your GPU (called a core) will build the first tree. At the same time, another GPU core will build the second tree. This all happens simultaneously!
Here’s a good image from NVIDIA that helps to compare CPUs to GPUs.

Just like your librarian has to manage all sorts of things in the library (counting, organizing, staffing the front desk, replacing books), your CPU has a bunch of different jobs that it does. The green ALU boxes in the CPU image represent “arithmetic logic units.” These ALUs are used to do mathematical calculations. Your CPU can do some mathematical calculations on its own, just like your librarian can count books! But a lot of your CPU’s room is taken up by those other boxes, because your CPU is responsible for lots of other things, too. It’s not just for mathematical calculations.
Just like your team of counters are there to do one job (count books), your GPU is optimized to basically just do mathematical calculations. It’s way more powerful when it comes to doing math things.
**So, in short:**
* CPUs have many jobs to do. CPUs can do mathematical calculations on their own.
* GPUs are highly optimized to do mathematical calculations.
* If you have a job that relies on math (like counting or averaging or building a machine learning model), then a GPU can probably do some of the math much faster than a CPU can. This is because we can use the discoveries of very smart people to parallelize (split big jobs into small chunks).
* If you have a job that doesn’t rely on math or is very small, a GPU probably isn’t worth it.
<span style="color:red;font-size: 1rem"> `Robot 4`</span>
Speaking of an analogy, [here is a video](https://www.youtube.com/watch?v=GVsUOuSjvcg){ target=_blank } about, quite literally, an analogy. Specifically, analog CPUs (as opposed to digital). This video is very interesting, very well presented, and gives a full history of CPU and GPU usage with respect to AI, and why the next evolution could be analog computers. Well worth watching!
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
Ah, I was hoping for a Robot 3 analogy, they are always fantastic 🙂 Thanks all who shared!
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
A more simplified and general comparison:
CPUs are designed to coordinate AND calculate a bunch of math - they have a bunch of routing set up, and they're going to have drivers [or operating systems] built to make that pathing and organizing as easy as the simple calculations. Because they're designed to be a "brain" for a computer, they're built to do it ALL.
GPUs are designed to be specialized for, well, graphics, hence the name. To quickly render video and 3D graphics, you want a bunch of very simple calculations performed all at once - instead of having one "thing" [a CPU core] calculating the color for a 1920x1080 display [a total of 2,073,600 pixels], maybe you have 1920 "things" [GPU cores] dedicated to doing one line of pixels each, all running in parallel.
"Split this Hex code for this pixel's color into a separate R, G, and B value and send it to the screen's pixel matrix" is a much simpler task than, say, the "convert this video file into a series of frames, combine them with the current display frame of this other application, be prepared to interrupt this task to catch and respond to keyboard/mouse input, and keep this background process running the whole time..." tasks that a CPU might be doing. Because of this, a GPU can be slower and more limited than a CPU while still being useful, and it might have unique methods to complete its calculations so it can be specialized for X purpose [3d rendering takes more flexibility than "display to screen"]. Maybe it only knows very simple conversions or can't keep track of what it used to be doing - "history" isn't always useful for displaying graphics, especially if there's a CPU and a buffer [RAM] keeping track of history for you.
Since CPUs want to be usable for a lot of different things, there tend to be a lot of operating systems/drivers to translate between the higher-level code I might write and the machine's specific registers and routing. BUT since a GPU is made with the default assumption "this is going to make basic graphics data more scalable," GPUs often have more specialized machine functionality, and drivers can be much more limited in many cases. It might be harder to find a translator that can tell the GPU how to do the very specific thing that would be helpful in a specific use case, vs. the multiple helpful translators ready to explain to your CPU how to do what you need.
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
Wait, this analogy thing is fun. How about, if your CPU is a teacher, your GPU is a classroom full of elementary school students.
Sometimes it might be worth having the teacher explain to the class how to help with a group project… but it depends on the cost of the teacher having to figure out how to talk to each student in a way they'll understand and listen plus the energy the teacher now has to spend making sure they're doing what they're supposed to and getting the materials they need along the way. Meanwhile, your teacher came pre-trained and already knows how to do a bunch of more complicated tasks and organization!
If it's a project where a lot of unskilled but eager help can make things go faster, then it might be worth using a GPU. But before you can get the benefits, you need to make sure you know what languages each kid in the classroom speaks and what they already know how to do. Sometimes, it's just easier and more helpful to focus on making sure your teachers can do the tasks themselves before recruiting the kids.
|
rr-gpu-v-cpu
|
---
title: Default language change in Japanese
description: DataRobot's natural language processing heuristics improvements.
---
# Default language change in Japanese {: #default-language-change-in-japanese }
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
**Why did the default language change when modeling Japanese text features?**
Hi team, this is a question from a customer:
> When modeling with Japanese text features, the "language" used to be set to "english" by default. However, when I recently performed modeling using the same data, the setting was changed to "language=japanese". It has been basically set to "language=english" by default until now, but from now on, if I input Japanese, will it automatically be set to "language=japanese"?
I was able to reproduce this event with my data. The model created on July 19, 2022 had `language=english`, but when I created a model today with the same settings, it had `language=japanese`. Is this a setting that was updated when the default was changed from "Word N-Gram" to "Char N-Gram"?
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
Before, we showed "english" for every dataset, which was incorrect. Now, after the NLP Heuristics Improvements, we dynamically detect and set the dataset's language.
Additionally, we found that char-grams for Japanese datasets perform better than word-grams, so we switched to char-grams for better speed and accuracy. But to keep the Text AI Word Cloud insights in good shape, we also train one word-gram-based blueprint so you can inspect both char-gram and word-gram word clouds.
Let me know if you have more questions, happy to help!
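For intuition on the char-gram vs. word-gram point, here is a minimal scikit-learn sketch (not DataRobot's tokenizer; the two Japanese sentences are made up). Japanese has no whitespace between words, so a word-boundary tokenizer yields almost no usable tokens, while character n-grams still produce useful features.
```python
# Word-grams vs. char-grams on Japanese text.
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["この製品は素晴らしい", "サービスがひどかった"]

word_grams = TfidfVectorizer(analyzer="word", ngram_range=(1, 2))
char_grams = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))

# The word analyzer treats each whole sentence as a single token (no word
# boundaries), while char-grams break the text into many reusable features.
print(word_grams.fit(docs).get_feature_names_out())
print(char_grams.fit(docs).get_feature_names_out())
```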
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
Robot 2, thank you for the comment. I will tell the customer that NLP has improved and language is now properly set. I was also able to confirm that the word-gram based BP model was created as you mentioned. Thanks!
|
rr-language-nlp
|
---
title: Multiple DR Reduced Features lists
description: How to have multiple Reduced Features lists in one project.
---
# Multiple DR Reduced Features lists {: #multiple-reduced-feature-lists }
<span style="color:red;font-size: 1rem"> `Robot 1`</span>
**Can I have multiple DR Reduced Features lists for one project?**
Hi team, can I have multiple [DR Reduced Features](feature-lists#automatically-created-feature-lists) lists for one project? I would like to create a DR Reduced Features list based on a feature list after Autopilot completes, but I don’t see a DR Reduced Features list created when I retrain the model on a new feature list through “Configure Modeling Settings”.
<span style="color:red;font-size: 1rem"> `Robot 2`</span>
You can have a new reduced feature list but only if there is a new most accurate model. We don't base the recommendation stages on the specific Autopilot run. You can take the new best model and, under the **Deploy** tab, [prepare it for deployment](model-rec-process#prepare-a-model-for-deployment). This will run through all of the stages, including a reduced feature list stage. Note that not all feature lists can be reduced, so this stage might be skipped.
<span style="color:red;font-size: 1rem"> `Robot 3`</span>
Robot 1, you can manually make a reduced feature list from [Feature Impact](feature-impact#create-a-new-feature-list) for any model:
1. Run Feature Impact.
2. Hit "Create feature list".
3. Choose the number of features you want.
4. Check the “exclude redundant features” option (it only shows up if there _are_ redundant features).
5. Name the feature list.
6. Click to create!
|
rr-reduced-feature-lists
|
---
title: Glossary home
description: The Glossary provides brief definitions of terms relevant to the DataRobot platform.
---
# Glossary
The DataRobot glossary provides brief definitions of terms relevant to the DataRobot platform. These terms span all phases of machine learning, from data to deployment.
[All](#){ .md-button .selected data-type=all }
[AI Catalog](#){ .md-button data-type=ai-catalog }
[Bias and Fairness](#){ .md-button data-type=bias-and-fairness }
[Data Prep](#){ .md-button data-type=data-prep }
[Feature Discovery](#){ .md-button data-type=feature-discovery }
[MLOps](#){ .md-button data-type=mlops }
[Predictions](#){ .md-button data-type=predictions }
[Time-aware](#){ .md-button data-type=time-aware }
[Visual AI](#){ .md-button data-type=visual-ai }
## A
-----------
#### Accuracy Over Space {: #accuracy-over-space }
A model Leaderboard tab ([Evaluate > Accuracy Over Space](lai-insights)) and Location AI insight that provides a spatial residual mapping within an individual model.
#### Accuracy Over Time {: #accuracy-over-time }
A model Leaderboard tab ([Evaluate > Accuracy Over Time](aot)) that visualizes how predictions change over time.
#### ACE scores {: #ace-scores }
Also known as Alternating Conditional Expectations. A univariate measure of correlation between the feature and the target. ACE scores detect non-linear relationships, but as they are univariate, they do not detect interaction effects.
#### Actuals {: #actuals data-category=predictions }
(Predictions) Actual values for an ML model that let you track its prediction outcomes. To generate accuracy statistics for a deployed model, you compare the model's predictions to real-world actual values for the problem. Both the prediction dataset and the actuals dataset must contain association IDs, which let you match up corresponding rows in the datasets to gauge the model's accuracy.
#### Advanced Tuning {: #advanced-tuning }
The ability to manually set model parameters after the model build, supporting experimentation with parameter settings to improve model performance.
#### Aggregate image feature {: #aggregate-image-feature data-category=visual-ai }
(Visual AI) A set of image features where each individual element of that set is a constituent image feature. For example, the set of image features extracted from an image might include a set of features indicating:
1. The colors of the individual pixels in the image.
2. Where edges are present in the image.
3. Where faces are present in the image.
From the aggregate it may be possible to determine the impact of that feature on the output of a data analytics model and compare that impact to the impacts of the model's other features.
#### AI Catalog {: #ai-catalog data-category=ai-catalog }
A browsable and searchable collection of registered objects that contains definitions and relationships between various object types. Items stored in the catalog include data connections, data sources, and dataset metadata.
#### AIM {: #aim }
The second phase of Exploratory Data Analysis (i.e., EDA2), that determines feature importances based on cross-correlation with the target feature. That data determines the “informative features” used for modeling during Autopilot.
#### Alternating Conditional Expectations {: #alternating-conditional-expectations }
See [ACE scores](#ace-scores).
#### Anomaly detection {: #anomaly-detection }
A form of [unsupervised learning](#unsupervised-learning) used to detect anomalies in data. Anomaly detection, also referred to as outlier or novelty detection, can be useful with data having a low percentage of irregularities or large amounts of unlabeled data.
#### AnswerSet {: #answerset data-category=data-prep }
(Data Prep) The published result of your data prep steps. You can export the results of all of your data prep steps to an AnswerSet or you can create a [lens](#lens) to specify the set of steps to export as an AnswerSet.
#### APF {: #apf }
See [Automatic Project Flows](#automatic-project-flows-apf).
#### Apps {: #apps }
See [No-Code AI Apps](#no-code-ai-apps).
#### ARIMA (AutoRegressive Integrated Moving Average) {: #arima-autoregressive-integrated-moving-average }
A class of time series model that projects the future values of a series based entirely on the patterns of that series.
#### Association ID {: #association-id data-category=mlops }
(MLOps) An identifier that functions as a foreign key for your prediction dataset so you can later match up actual values (or "actuals") with the predicted values from the deployed model. An association ID is required for monitoring the accuracy of a deployed model.
#### AUC (Area Under the Curve) {: #auc-area-under-the-curve }
A common error metric for binary classification that considers all possible thresholds and summarizes performance in a single value on the ROC Curve. It works by optimizing the ability of a model to separate the 1s from the 0s. The larger the area under the curve, the more accurate the model.
#### Augmented Intelligence {: #augmented-intelligence }
DataRobot's enhanced approach to artificial intelligence, which expands current model building and deployment assistance practices. The DataRobot platform fully automates and governs the AI lifecycle from data ingest to model training and predictions to model-agnostic monitoring and governance. Guardrails ensure adherence to data science best practices when creating machine learning models and AI applications. Transparency across user personas and access to data wherever it resides avoids lock-in practices.
#### Automatic Project Flows (APF) {: #automatic-project-flows-apf data-category=data-prep }
(Data Prep) Functionality that allows you to operationalize curated data flows. Use APFs to schedule a sequence of data prep steps across projects, datasets, and [AnswerSets](#answerset). Then you manage runs using APF’s monitoring capabilities.
#### AutoML (Automated Machine Learning) {: #automl-automated-machine-learning }
A software system that automates many of the tasks involved in preparing a dataset for modeling and performing a model selection process to determine the performance of each with the goal of identifying the best performing model for a specific use case.
#### Autopilot (full Autopilot) {: #autopilot-full-autopilot }
The DataRobot "survival of the fittest" modeling mode that automatically selects the best predictive models for the specified target feature and runs them at ever-increasing sample sizes. In other words, it runs more models in the early stages on a small sample size and advances only the top models to the next stage. In full Autopilot, DataRobot runs models at 16% (by default) of total data and advances the top 16 models, then runs those at 32%. Taking the top 8 models from that run, DataRobot runs on 64% of the data (or 500MB of data, whichever is smaller). See also [Quick (Autopilot)](#quick) and [Comprehensive](#comprehensive).
#### AutoTS (Automated time series) {: #autots-automated-time-series }
A software system that automates all or most of the steps needed to build forecasting models, including featurization, model specification, model training, model selection, validation, and forecast generation.
#### Average baseline {: #average-baseline data-category=time-aware }
(Time-aware) The average of the target in the [Feature Derivation Window](#feature-derivation-window).
## B
-----------
#### Backtesting {: #backtesting data-category=time-aware }
(Time-aware) The time series equivalent of cross-validation. Unlike cross-validation, however, backtests allow you to select specific time periods or durations for your testing instead of random rows, creating “trials” for your data.
#### Base dataset {: #base-dataset data-category=data-prep }
(Data Prep) The data imported into a Data Prep project on which all actions are performed.
#### Baseline model {: #baseline-model data-category=time-aware }
(AutoML, Time-aware) Also known as a naive model. A simple model used as a comparison point to confirm that a generated ML model is learning with more accuracy than a basic non-ML model.
For example, generated ML models for a regression project should perform better than a baseline model that predicts the mean or median of the target. Generated ML models for a time series project should perform better than a baseline model that predicts the future using the most recent actuals (i.e., using today's actual value as tomorrow's prediction).
For time series projects, baseline models are used to calculate the [MASE metric](opt-metric#mase) (the ratio of the [MAE metric](opt-metric#maeweighted-mae) over the baseline model).
#### Batch predictions {: #batch-predictions data-category=predictions;mlops }
(MLOps, Predictions) A method of making predictions with large datasets, in which you pass input data and get predictions for each row; predictions are written to output files. Users can make batch predictions with MLOps via the **Predictions** interface or can use the Batch Prediction API for automating predictions. Schedule batch prediction jobs by specifying the prediction data source and destination and determining when the predictions will be run.
#### Bias Mitigation {: #bias-mitigation }
Augments blueprints with a pre- or post-processing task intended to reduce bias across classes in a protected feature. Bias Mitigation is also a model Leaderboard tab ([Bias and Fairness > Bias Mitigation](fairness-metrics#retrain-with-mitigation)) where you can apply mitigation techniques after Autopilot has finished.
#### Bias vs Accuracy {: #bias-vs-accuracy }
A Leaderboard tab that generates a chart to show the tradeoff between predictive accuracy and fairness, removing the need to manually note each model's accuracy score and fairness score for the protected features.
#### "Blind History" {: #blind-history data-category=time-aware }
(Time-aware) “Blind history" captures the gap created by the delay of access to recent data (e.g., “most recent” may always be one week old). It is defined as the period of time between the smaller of the values supplied in the Feature Derivation Window and the forecast point. A gap of zero means "use data up to, and including, today;" a gap of one means "use data starting from yesterday" and so on.
#### Blender {: #blender }
A model that potentially increases accuracy by combining the predictions of between two and eight models. DataRobot can be configured to automatically create blender models as part of Autopilot, based on the top three regular Leaderboard models (for AVG, GLM, and ENET blenders). You can also create blenders manually (aka ensemble models).
#### Blueprint {: #blueprint }
A graphical representation of the many steps involved in transforming input predictors and targets into a model. A blueprint represents the high-level end-to-end procedure for fitting the model, including any preprocessing steps, algorithms, and post-processing. Each box in a blueprint may represent multiple steps. You can view a graphical representation of a blueprint by clicking on a model on the Leaderboard. See also [user blueprints](#user-blueprints).
## C
-----------
#### "Can't operationalize" period {: #cant-operationalize-period data-category=time-aware }
(Time-aware) The "can't operationalize" period defines the gap of time immediately after the Forecast Point and extending to the beginning of the Forecast Window. It represents the time required for a model to be trained, deployed to production, and to start making predictions—the period of time that is too near-term to be useful. For example, predicting staffing needs for tomorrow may be too late to allow for taking action on that prediction.
#### Catalog {: #catalog }
See [AI Catalog](#ai-catalog).
#### Centroid {: #centroid }
The center of a cluster generated using [unsupervised learning](#unsupervised-learning). A centroid is the multi-dimensional average of a cluster, where the dimensions are observations (data points).
#### CFDS (Customer Facing Data Scientist) {: #cfds-customer-facing-data-scientist }
A DataRobot employee responsible for the technical success of user and potential users. They assist with tasks like structuring data science problems to complete integration of DataRobot. CFDS are passionate about ensuring user success.
#### Challenger models {: #challenger-models data-category=mlops }
(MLOps) Models that you can compare to a currently deployed model (the "champion" model) to continue model comparison post-deployment. Submit a challenger model to shadow a deployed model and replay predictions made against the champion to determine if there is a superior DataRobot model that would be a better fit.
#### Champion model {: #champion-model category=mlops;time-aware}
(Time-aware, MLOps) A model recommended by DataRobot—for a deployment (MLOps) or for segmented modeling.
In MLOps, you can replace the champion selected for a deployment yourself or you can set up [Automated Retraining](set-up-auto-retraining), where DataRobot compares challenger models with the champion model and replaces the champion model if a challenger outperforms the champion.
In the segmented modeling workflow, DataRobot builds a model for each segment. DataRobot recommends the best model for each segment—the segment champion. The segment champions roll up into a Combined Model. For each segment, you can select a different model as champion, which is then used in the Combined Model.
#### Channel {: #channel }
The connection between an output port of one module and an input port of another module. Data flows from one module's output port to another module's input port via a channel, represented visually by a line connecting the two.
#### Classification {: #classification }
A type of prediction problem that classifies values into discrete, final outcomes or classes. _Binary classification_ problems are those datasets in which what you are trying to predict can be one of two classes (for example, "yes" or "no"). _Multiclass classification_ is a classification problem that results in more than two outcomes (for example, "buy", "sell", or "hold"). _Unlimited multiclass_ is the ability to handle projects with a target feature containing an unlimited number of classes, with support for both a high threshold of individual classes and multiclass aggregation to support an unlimited number of classes above the threshold.
#### ClicktoPrep link {: #clicktoprep-link data-category=data-prep }
(Data Prep) A link to Data Prep components from business intelligence (BI) and data visualization tools (for example, Tableau®). ClicktoPrep links in a BI or visualization tool can take you to the last step in a Data Prep project, a Data Prep [Filtergram](#filtergram), or to a specific Data Prep project step. You can make changes to the Data Prep data, then republish and refresh the visualization or report in your BI or visualization tool.
#### Clustering {: #clustering }
A form of [unsupervised learning](#unsupervised-learning) used to group similar data and identify natural segments.
#### Coefficients {: #coefficients }
A model Leaderboard tab ([Describe > Coefficients](coefficients)) that provides a visual indicator of information that can help you refine and optimize your models.
#### Combined Model { #combined-model data-category=time-aware }
(Time-aware) The final model generated in the segmented modeling workflow. With segmented modeling, DataRobot builds a model for each segment and combines the segment champions into a single Combined Model that you can deploy.
#### Common event {: #common-event data-category=time-aware }
(Time-aware) A data point is a common event if it occurs in a majority of weeks in data (for example, regular business days and hours would be common, but an occasional weekend data point would be uncommon).
#### Compliance documentation {: #compliance-documentation }
Automated model development documentation that can be used for regulatory validation. The documentation provides comprehensive guidance on what constitutes effective model risk management.
#### Composable ML {: #composable-ml }
A code-centric feature, designed for data scientists, that allows applying custom preprocessing and modeling methods to create a blueprint for model training. Using built-in and [custom tasks](#custom-task), you can compose and then integrate the new blueprint with other DataRobot features to augment and improve machine learning pipelines.
#### Comprehensive {: #comprehensive }
A modeling mode that runs all Repository blueprints on the maximum Autopilot sample size to ensure more accuracy for models.
#### Computer vision {: #computer-vision data-category=visual-ai }
(Visual AI) Use of computer systems to analyze and interpret image data. Computer vision tools generally use models that incorporate principles of geometry to solve specific problems within the computer vision domain. For example, computer vision models may be trained to perform object recognition (recognizing instances of objects or object classes in images), identification (identifying an individual instance of an object in an image), detection (detecting specific types of objects or events in images), etc.
#### Computer vision tools/techniques {: #computer-vision-toolstechniques data-category=visual-ai }
(Visual AI) Tools—for example, models, systems—that perform image preprocessing, feature extraction, and detection/segmentation functions.
#### Confusion matrix {: #confusion-matrix }
A table that reports true versus predicted values. The name “confusion matrix” refers to the fact that the matrix makes it easy to see if the model is confusing two classes (consistently mislabeling one class as another class). The confusion matrix is available as part of DataRobot's ROC Curve, Eureqa, and Confusion Matrix for multiclass model visualizations.
#### Constraints {: #constraints }
A model Leaderboard tab ([Describe > Constraints](monotonic)) that allows you to review monotonically constrained features if feature constraints were configured in Advanced Options prior to modeling.
#### Automated Retraining {: #automated-retraining data-category=mlops }
(MLOps) Retraining strategies for MLOps that refresh production models based on a schedule or in response to an event (for example, a drop in accuracy or data drift). Automated Retraining also uses DataRobot’s AutoML to create and recommend new challenger models. When combined, these strategies maximize accuracy and enable timely predictions.
#### Credentials {: #credentials data-category=ai-catalog }
(AI Catalog) Information used to authenticate and authorize actions against data connections. The most common connection is through username and password, but alternate authentication methods include LDAP, Active Directory, and Kerberos.
#### Cross-Class Accuracy {: #cross-class-accuracy }
A model Leaderboard tab ([Bias and Fairness > Cross-Class Accuracy](cross-acc)) that helps to show why the model is biased and where in the training data it learned the bias. [Bias and Fairness settings](fairness-metrics) must be configured.
#### Cross-Class Data Disparity {: #cross-class-data-disparity }
A model Leaderboard tab ([Bias and Fairness > Cross-Class Data Disparity](cross-data)) that calculates, for each protected feature, evaluation metrics and ROC curve-related scores segmented by class. [Bias and Fairness settings](fairness-metrics) must be configured.
#### Cross-Validation {: #cross-validation }
Also known as CV. A type of validation partition that is run to test (validate) model performance. Using subsets ("folds") of the validation data, DataRobot creates one model per fold, with the data assigned to that fold used for validation and the rest of the data used for training. By default, DataRobot uses five-fold cross-validation and presents the mean of those five scores on the Leaderboard. See also [validation](#validation).
#### Custom inference models {: #custom-inference-models data-category=mlops }
(MLOps) User-created, pre-trained models uploaded as a collection of files via the Custom Model Workshop. Upload a model artifact to create, test, and deploy custom inference models to DataRobot's centralized deployment hub. An inference model can have a predefined input/output schema or it can be unstructured. To customize prior to model training, use [custom tasks](#custom-task).
#### Custom model workshop {: #custom-model-workshop data-category=mlops }
(MLOps) In the [Model Registry](#model-registry), a location where you can upload user-created, pre-trained models as a collection of files. You can use these model artifacts to create, test, and deploy custom inference models to DataRobot's centralized deployment hub.
#### Custom task {: #custom-task data-category=mlops }
A data transformation or ML algorithm, for example, XGBoost or One-hot encoding, that can be used as a step in an ML blueprint inside DataRobot and used for model training. Tasks are written in Python or R and are added via the Custom Model Workshop. Once saved, the task can be used when modifying a blueprint with [Composable ML](#composable-ml). To deploy a pre-trained model where re-training is not required, use [custom inference models](#custom-inference-models).
#### CV {: #cv }
See [Cross Validation](#cross-validation).
## D
-----------
#### Data drift {: #data-drift data-category=mlops }
(MLOps) The difference between values in new inference data used to generate predictions for models in production and the training data initially used to train the deployed model. Predictive models learn patterns in training data and use that information to predict target values for new data. When the training data and the production data change over time, causing the model to lose predictive power, the data surrounding the model is said to be drifting. Data drift can happen for a variety of reasons, including data quality issues, changes in feature composition, and even changes in the context of the target variable.
#### Data management {: #data-management }
The umbrella term related to loading, cleaning, transforming, and storing data within DataRobot. It also refers to the practices that companies follow when collecting, storing, using, and deleting data.
#### Data Prep {: #data-prep data-category=data-prep }
(Formerly Paxata) A DataRobot tool that lets you gather, explore, and prepare data from multiple sources for machine learning.
#### Data preparation {: #data-preparation }
The process of transforming raw data to the point where it can be run through machine learning algorithms to uncover insights or make predictions. Also called “data preprocessing.”
#### Data Prep library {: #data-prep-library data-category=data-prep }
(Data Prep) The Data Prep component (and page) where you add and manage datasets, including [AnswerSets](#answerset) that you publish from your [Data Prep projects](#data-prep-project). Select **Library** on the top left of the Data Prep window to access the library. In the Data Prep library, you can also export datasets, set them up for automation, add new versions, and create profiles for your datasets.
#### Data Prep project {: #data-prep-project data-category=data-prep }
(Data Prep) The Data Prep component (and page) that contains your projects. Select **Projects** on the top left of your Data Prep window to access all of your projects. The **Projects** page is where you access and manage your projects, as well as those of the other users of your Data Prep instance. You can create new projects on the **Projects** page or you can create projects by uploading datasets on the [**Library**](#data-prep-library) page.
#### Data Quality Handling Report {: #data-quality-handling-report }
A model Leaderboard tab ([Describe > Data Quality Handling Report](dq-report)) that analyzes the training data and provides the following information for each feature: feature name, variable type, row count, percentage, and data transformation information.
#### DataRobot User Model (DRUM) {: #datarobot-user-model-drum }
A tool that allows you to test Python, R, and Java custom models and tasks locally. The test allows you to verify that a custom model can successfully run and make predictions in DataRobot before uploading it.
#### DataRobot University (DRU) {: #datarobot-university-dru }
Provides practical data science education to solve business problems. <a target="_blank" href="https://university.datarobot.com/">DRU</a> offers guided learning, self-paced and instructor-led courses, and labs, as well as certification programs, across many topics and skill levels.
#### Dataset {: #dataset }
Data, a file or the content of a data source, at a particular point in time. A data source can produce multiple datasets; an AI Catalog dataset has exactly one data source. In [Data Prep](#data-prep), a dataset can be generated from multiple data sources. In [AI Catalog](#ai-catalog), a dataset is materialized data that is stored with a catalog version record. There may be multiple catalog version records associated with an entity, indicating that DataRobot has reloaded or refreshed the data. The older versions are stored to support existing projects; new projects use the most recent version. A dataset can be in one of two states:
* A "snapshotted" (or materialized) dataset is an immutable snapshot of data that has previously been retrieved and saved.
* A “remote” (or unmaterialized) dataset has been configured with a location from which data is retrieved on-demand (AI Catalog).
#### Data connection {: #data-connection }
A configured connection to a database—it has a name, a specified driver, and a JDBC URL. You can register data connections with DataRobot for ease of re-use. A data connection has one connector but can have many data sources.
#### Data source {: #data-source }
A configured connection to the backing data (the location of data within a given endpoint). A data source specifies, via SQL query or selected table and schema data, which data to extract from the data connection to use for modeling or predictions. Examples include the path to a file on HDFS, an object stored in S3, and the table and schema within a database. A data source has one data connection and one connector but can have many datasets. It is likely that the features and columns in a data source do not change over time, but that the rows within change as data is added or deleted.
#### Data stage {: #data-stage }
Intermediary storage that supports multipart upload of large datasets, reducing the chance of failure when working with large amounts of data. Upon upload, the dataset is uploaded in parts to the data stage, and once the dataset is whole and finalized, it is pushed to the AI Catalog or Batch Predictions. At any time after the first part is uploaded to the data stage, the system can instruct Batch Predictions to use the data from the data stage to fill in predictions.
#### Date/time partitioning {: #data-time-partitioning data-category=time-aware }
The only valid partitioning method for time-aware projects. With date/time, rows are assigned to [backtests](#backtesting) chronologically instead of, for example, randomly. Backtests are configurable, including number, start and end times, and sampling method.
#### Deep learning {: #deep-learning }
A set of algorithms that run data through several “layers” of neural network algorithms, each of which passes a simplified representation of the data to the next layer. Deep learning algorithms are essential to DataRobot's Visual AI capabilities, and their processing can be viewed from the Training Dashboard visualization.
#### Deployment inventory {: #deployments-inventory data-category=mlops }
(MLOps) The central hub for managing deployments. Located on the Deployments page, the inventory serves as a coordination point for stakeholders involved in operationalizing models. From the inventory, you can monitor deployed model performance and take action as necessary, managing all actively deployed models from a single point.
#### Detection/segmentation {: #detectionsegmentation data-category=visual-ai }
(Visual AI) A computer vision technique that involves the selection of a subset of the input image data for further processing (for example, one or more images within a set of images or regions within an image).
#### Downloads tab {: #downloads-tab }
A model Leaderboard tab ([Predict > Downloads](download)) where you can download model artifacts.
#### Downsampling {: #downsampling }
See [Smart downsampling](#smart-downsampling).
#### Driver {: #driver data-category=ai-catalog }
(AI Catalog) The software that allows the DataRobot application to interact with a database; each data connection is associated with one driver (created and installed by your administrator). The driver configuration saves the JAR file storage location in DataRobot and any additional dependency files associated with the driver. DataRobot supports JDBC drivers.
## E
-----------
#### EDA (Exploratory Data Analysis) {: #eda-exploratory-data-analysis }
The DataRobot approach to analyzing and summarizing the main characteristics of a dataset. Generally speaking, there are two stages of EDA:
* EDA1 provides summary statistics based on a sample of data. In EDA1, DataRobot counts, categorizes, and applies automatic feature transformations (where appropriate) to data.
* EDA2 is a recalculation of the statistics collected in EDA1, but using the entire dataset (excluding holdout). The results of this analysis are the criteria used for model building.
#### Ensemble models {: #ensemble-models }
See [blender](#blender).
#### Environment {: #environment }
A Docker container where a custom task runs.
#### ESDA {: #esda }
Exploratory Spatial Data Analysis (ESDA) is the exploratory data phase for Location AI. DataRobot provides a variety of tools for conducting ESDA within the DataRobot AutoML environment, including geometry map visualizations, categorical/numeric thematic maps, and smart aggregation of large geospatial datasets.
#### Eureqa {: #eureqa }
Model blueprints for Eureqa generalized additive models (Eureqa GAM), Eureqa regression, and Eureqa classification models. These blueprints use a proprietary Eureqa machine learning algorithm to construct models that balance predictive accuracy against complexity.
#### EWMA (Exponentially Weighted Moving Average) {: #ewma-exponentially-weighted-moving-average }
A moving average that places a greater weight and significance on the most recent data points, measuring trend direction over time. The "exponential" aspect indicates that the weighting factor of previous inputs decreases exponentially. This is important because otherwise a very recent value would have no more influence on the variance than an older value.
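As a point of reference (this is the standard textbook recurrence, not a DataRobot-specific formula), the weighting can be written as:

$$
\mathrm{EWMA}_t = \alpha x_t + (1 - \alpha)\,\mathrm{EWMA}_{t-1}, \qquad 0 < \alpha \le 1
$$

where $x_t$ is the current observation and larger values of $\alpha$ discount older observations more quickly.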
#### External stage {: #external-stage }
An external stage is a cloud location outside of the Snowflake environment used for loading and unloading data for Snowflake. The cloud location can be either Amazon S3 or Microsoft Azure storage.
## F
-----------
#### Fairness score {: #fairness-score data-category=bias-and-fairness }
(Bias and Fairness) A numerical computation of model fairness against the protected class, based on the underlying fairness metric.
#### Fairness Threshold {: #fairness-threshold data-category=bias-and-fairness }
(Bias and Fairness) The measure of whether a model performs within appropriate fairness bounds for each protected class. It does not affect the fairness score or performance of any protected class.
#### Fairness Value {: #fairness-value data-category=bias-and-fairness }
(Bias and Fairness) Fairness scores normalized against the most favorable protected class (i.e., the class with the highest fairness score).
#### Favorable Outcome {: #favorable-outcome data-category=bias-and-fairness }
(Bias and Fairness) A value of the target that is treated as the favorable outcome for the model. Predictions from a binary classification model can be categorized as being a favorable outcome (i.e., good/preferable) or an unfavorable outcome (i.e., bad/undesirable) for the protected class.
#### FDW {: #fdw data-category=time-aware }
See [Feature Derivation Window](#feature-derivation-window).
#### Feature {: #feature }
A column in a dataset, also called "variable" or "feature variable." The target feature is the name of the column in the dataset that you would like to predict.
#### Feature Derivation Window {: #feature-derivation-window data-category=time-aware }
(Time-aware) Also known as FDW. A rolling window of past values that models use to derive features for the modeling dataset. Considered relative to the [Forecast Point](#forecast-point), it defines how many recent values the model can use for forecasting.
#### Feature Discovery {: #feature-discovery data-type=feature-discovery }
A DataRobot capability that discovers and generates new features from multiple datasets, eliminating the need to perform manual feature engineering to consolidate multiple datasets into one. A relationship editor visualizes these relationships and the end product is additional, derived features that result from the created linkages.
#### Feature Effects {: #feature-effects }
A model Leaderboard tab ([Understand > Feature Effects](feature-effects)) that shows the effect of changes in the value of each feature on the model’s predictions. It displays a graph depicting how a model "understands" the relationship between each feature and the target, with the features sorted by [Feature Impact](#feature-impact).
#### Feature engineering {: #feature-engineering }
The generation of additional features in a dataset, which as a result, improve model accuracy and performance. Time series and Feature Discovery both rely on feature engineering as the basis of their functionality.
#### Feature extraction {: #feature-extraction data-category=visual-ai }
(Visual AI) Models that perform image preprocessing (or image feature extraction and image preprocessing) are also known as “image feature extraction models” or “image-specific models.”
#### Feature Extraction and Reduction (FEAR) {: #feature-extraction-and-reduction-fear data-category=time-aware }
(Time series) Feature generation for time series (e.g., lags, moving averages). It first extracts new features and then reduces the set of extracted features. See time series feature derivation.
#### Feature Impact {: #feature-impact }
A measurement that identifies which features in a dataset have the greatest effect on model decisions. In DataRobot, the measurement is reported as a visualization available from the Leaderboard.
#### Feature imputation {: #feature-imputation data-category=time-aware }
(Time series) A mechanism that uses forward filling to enable imputation for all features (target and others) when using the time series data prep tool. This results in a dataset with no missing values (with the possible exception of leading values at the start of each series where there is no value to forward fill).
#### Feature list {: #feature-list }
A subset of features from a dataset used to build models. During EDA2, DataRobot creates several lists, including all informative features, informative features excluding those with a leakage risk, a raw list of all original features, and a reduced list. Users can create project-specific lists as well.
#### Filtergram {: #filtergram data-category=data-prep }
(Data Prep) A Data Prep column tool that is both a filter to help you transform your data and a histogram to help you visualize your data. Filtergrams allow you to visualize your data before, during, and after every transformation.
#### Fitting {: #fitting }
See [model fitting](#model-fitting).
#### Forecast Distance {: #forecast-distance data-category=time-aware }
(Time-aware) A unique time step—a relative position—within the Forecast Window. A model outputs one row for each Forecast Distance.
#### Forecast Point {: #forecast-point data-category=time-aware }
(Time-aware) The point you are making a prediction from; a relative time "if it was now..."; DataRobot trains models using all potential forecast points in the training data. In production, it is typically the most recent time.
#### Forecast vs Actual {: #forecast-vs-actual }
A model Leaderboard tab ([Evaluate > Forecast vs Actual](fore-act)) commonly used in time series projects that allows you to compare how different predictions behave from different forecast points to different times in the future. Although similar to the [Accuracy Over Time](#accuracy-over-time) chart, which displays a single forecast at a time, the Forecast vs Actual chart shows multiple forecast distances in one view.
#### Forecast Window {: #forecast-window data-category=time-aware }
(Time-aware) Also known as FW. Beginning from the Forecast Point, defines the range (the Forecast Distance) of future predictions—"this is the range of time I care about." DataRobot then optimizes models for that range and ranks them on the Leaderboard on the average across that range.
#### Forecasting {: #forecasting data-category=time-aware }
(Time-aware) Predictions based on time, into the future; use inputs from recent rows to predict future values. Forecasting is a subset of predictions, using trends in observation to characterize expected outcomes or expected responses.
#### Frozen run {: #frozen-run }
A process that “freezes” parameter settings from a model’s early, small sample size run and reuses them when training on a larger sample, because parameter settings tuned on smaller samples tend to also perform well on larger samples of the same data.
#### FW {: #fw }
See [Forecast Window](#forecast-window).
## G
-----------
#### Governance lens {: #governance-lens data-category=mlops }
(MLOps) A filtered view of DataRobot's deployment inventory on the Deployments page, summarizing the social and operational aspects of a deployment. These include the deployment owner, how the model was built, the model's age, and the humility monitoring status.
#### GPU (graphics processing unit) {: #gpu-graphics-processing-unit }
A processor for computational tasks. GPUs are highly optimized for mathematical calculations and excel at parallelism, though only for relatively simple operations. Deep learning benefits in particular because it consists mainly of batches of matrix multiplications, which parallelize easily.
#### Grid search {: #grid-search }
An exhaustive search method used for hyperparameters.
## H
-----------
#### Holdout {: #holdout }
A subset of data that is unavailable to models during the training and validation process. Use the Holdout score for a final estimate of model performance only after you have selected your best model.
#### Humility {: #humility data-category=mlops }
(MLOps) A user-defined set of rules for deployments that allow models to be capable of recognizing, in real-time, when they make uncertain predictions or receive data they have not seen before. Unlike data drift, model humility does not deal with broad statistical properties over time—it is instead triggered for individual predictions, allowing you to set desired behaviors with rules that depend on different triggers.
## I
-----------
#### Image data {: #image-data data-category=visual-ai }
(Visual AI) A sequence of digital images (e.g., video), a set of digital images, a single digital image, and/or one or more portions of any of the foregoing. A digital image may include an organized set of picture elements (“pixels”) stored in a file. Any suitable format and type of digital image file may be used, including but not limited to raster formats (e.g., TIFF, JPEG, GIF, PNG, BMP, etc.), vector formats (e.g., CGM, SVG, etc.), compound formats (e.g., EPS, PDF, PostScript, etc.), and/or stereo formats (e.g., MPO, PNS, JPS).
#### Image preprocessing {: #image-preprocessing data-category=visual-ai }
(Visual AI) A computer vision technique. Some examples include image re-sampling, noise reduction, contrast enhancement, and scaling (e.g., generating a scale space representation). Extracted features may be:
* Low-level: raw pixels, pixel intensities, pixel colors, gradients, textures, color histograms, motion vectors, edges, lines, corners, ridges, etc.
* Mid-level: shapes, surfaces, volumes, etc.
* High-level: objects, scenes, events, etc.
#### Inference data {: #inference-data data-category=predictions }
(Predictions) Data that is scored by applying an algorithmic model built from a historical dataset in order to uncover practical insights. See also [Scoring data](#scoring-data).
#### In-sample predictions {: #in-sample-predictions data-category=predictions }
(Predictions) Predictions made on rows that the model was also trained on. This occurs when models are trained beyond the default training partition (i.e., into Validation and potentially Holdout); DataRobot uses 64% of the data for training by default. When models are trained with a sample size above 64%, DataRobot marks the _Validation_ score with an asterisk to indicate that some in-sample predictions were used for that score. If you train above 80%, the _Holdout_ score is also asterisked. Compare to [stacked](#stacked-predictions) (out-of-sample) predictions.
#### Irregular data {: #irregular-data data-category=time-aware }
(Time-aware) Data in which no consistent spacing and no time step is detected.
## K
-----------
#### KA {: #ka }
See [Known in advance features](#known-in-advance-features).
#### Known in advance features {: #known-in-advance-features data-category=time-aware }
(Time-aware) Also known as KA. A variable whose value you know in advance and that does not need to be lagged, such as holiday dates. Or, for example, you might know that a product will be on sale next week and so you can provide the pricing information in advance.
## L
-----------
#### Leaderboard {: #leaderboard }
The list of trained blueprints (models) for a project, ranked according to a project metric.
#### Leakage {: #leakage }
See [target leakage](#target-leakage).
#### Learning Curves {: #learning-curves }
A graph to help determine whether it is worthwhile to increase the size of a dataset. The Learning Curve graph illustrates, for the top-performing models, how model performance varies as the sample size changes.
#### Lens {: #lens data-category=data-prep }
(Data Prep) A Data Prep construct that lets you generate a snapshot of your dataset at a particular step in a project. You create a lens to identify the project steps to be published to the [AnswerSet](#answerset).
#### Lift Chart {: #lift-chart }
Depicts how well a model segments the target population and how capable it is of predicting the target to help visualize model effectiveness.
#### Linkage keys {: #linkage-keys data-category=feature-discovery }
(Feature Discovery) The features in the primary dataset used as keys to join and create relationships.
#### Location AI {: #location-ai }
DataRobot's support for geospatial analysis by natively ingesting common geospatial formats and recognizing coordinates, allowing [ESDA](#esda), and providing spatially-explicit modeling tasks and visualizations.
#### Log {: #log }
A model Leaderboard tab ([Describe > Log](log)) that displays the status of successful operations with green INFO tags, along with information about errors marked with red ERROR tags.
## M
-----------
#### Machine Learning Operations {: #machine-learning-operations }
See [MLOps](#mlops-maching-learning-operations).
#### Majority class {: #majority-class }
If you have a categorical variable (e.g., `true`/`false` or `cat`/`mouse`), the value that's more frequent is the majority class. For example, if a dataset has 80 rows of value `cat` and 20 rows of value `mouse`, then `cat` is the majority class.
#### Make Predictions tab {: #make-predictions-tab }
A model Leaderboard tab ([Predict > Make Predictions](predict)) that allows you to make predictions before deploying a model to a production environment.
#### Management agent {: #management-agent data-category=mlops }
(MLOps) A downloadable client included in the MLOps agent tarball (accessed via **Developer Tools**) that allows you to manage external models (i.e., those running outside of DataRobot MLOps). This tool provides a standard mechanism to automate model deployment to any type of infrastructure. The management agent sends periodic updates about deployment health and status via the API and reports them as MLOps events on the Service Health page.
#### Manual {: #manual }
A modeling mode that causes DataRobot to complete EDA2 and prepare data for modeling, but does not execute model building. Instead, users select specific models to build from the model Repository.
#### Materialized {: #materialized data-category=ai-catalog }
(AI Catalog) Materialized data is data that DataRobot has pulled from the data asset and is currently keeping a copy of in the catalog. See [snapshot](#snapshot).
#### Metadata {: #metadata data-category=ai-catalog }
(AI Catalog) Details of the data asset, such as creation and modification dates, number and types of features, snapshot status, and more.
#### Metric {: #metric }
See [optimization metric](#optimization-metric).
#### Minority class {: #minority-class }
If you have a categorical variable (e.g., `true`/`false` or `cat`/`mouse`), the value that's less frequent is the minority class. For example, if a dataset has 80 rows of value `cat` and 20 rows of value `mouse`, then `mouse` is the minority class.
#### MLOps (Machine Learning Operations) {: #mlops-maching-learning-operations data-category=mlops }
(MLOps) A scalable and governed means to rapidly deploy and manage ML applications in production environments.
#### MLOps agent {: #mlops-agent data-category=mlops }
(MLOps) One of two downloadable clients included in the MLOps agent tarball (accessed via **Developer Tools**) that allows you to monitor and manage external models (i.e., those running outside of DataRobot MLOps). See [monitoring agent](#monitoring-agent) and [management agent](#management-agent).
#### Models/modeling {: #modelsmodeling }
A trained ML pipeline, capable of scoring new data. Models—descriptive, predictive, prescriptive—form the basis of data analysis. Modeling extracts insights from data that you can then use to make better business decisions. Algorithmic models tell you which outcome is likely to hold true for your target variable based on your training data. They construct a representation of the relationships and tease out patterns between all the different features in your dataset that you can apply to similar data you collect in the future, allowing you to make decisions based on those patterns and relationships.
#### Model Comparison {: #model-comparison }
A Leaderboard tab that allows you to compare two models using different evaluation tools, helping identify the model that offers the highest business returns or candidates for blender models.
#### Model fitting {: #model-fitting }
A measure of how well a model generalizes to data similar to the data on which it was trained. A model that is well-fitted produces more accurate outcomes. A model that is overfitted matches the data too closely. A model that is underfitted doesn’t match closely enough.
#### Model Info {: #model-info }
A model Leaderboard tab ([Describe > Model Info](model-info)) that displays an overview for a given model, including model file size, prediction time, and sample size.
#### Model package {: #model-package }
(MLOps) Archived model artifacts with associated metadata stored in the Model Registry. Model packages can be created manually or automatically, for example, through the deployment of a custom model. You can deploy, share, and permanently archive model packages.
#### Model Registry {: #model-registry data-category=mlops }
(MLOps) An organizational hub for the variety of models used in DataRobot. Models are registered as deployment-ready model packages; the registry lists each package available for use. Each package functions the same way, regardless of the origin of its model. The Model Registry also contains the Custom Model Workshop where you can create and deploy custom models. Model packages can be created manually or automatically depending on the type of model.
#### Model scoring {: #model-scoring }
The process of applying an optimization metric to a partition of the data and assigning a numeric score that can be used to evaluate a model performance.
#### Modeling dataset {: #modeling-dataset data-category=time-aware }
(Time-aware) A transform of the original dataset that pre-shifts data to future values, generates lagged time series features, and computes time-series analysis metadata. Commonly referred to as feature derivation, it is used by time series but not OTV. See the time series feature engineering reference for a list of operators used and feature names created by the feature derivation process.
#### Modeling mode {: #modeling-mode }
A setting that controls the sample percentages of the training set that DataRobot uses to build models. DataRobot offers four modeling modes: Autopilot, Quick (the default), Manual, and Comprehensive.
#### Monitoring agent {: #monitoring-agent data-category=mlops }
(MLOps) A downloadable client included in the MLOps agent tarball (accessed via **Developer Tools**) that allows you to monitor external models (i.e., those running outside of DataRobot MLOps). With this functionality, predictions and information from these models can be reported as part of deployments. You can use this tool to monitor accuracy, data drift, prediction distribution, latency, and more, regardless of where the model is running.
#### Monotonic modeling {: #monotonic-modeling }
A method to force certain XGBoost models to learn only monotonic (always increasing or always decreasing) relationships between specific features and the target.
#### Multiclass {: #multiclass }
See [classification](#classification).
#### Multilabel {: #multilabel }
A classification task where each row in a dataset is associated with one, several, or zero labels. Common multilabel classification problems are text categorization (a movie is both "crime" and "drama") and image categorization (an image shows a house and a car).
#### Multimodal {: #multimodal }
A model type that supports multiple var types at the same time, in the same model.
#### Multiseries {: #multiseries data-category=time-aware }
(Time-aware) Datasets that contain multiple time series (for example, to forecast the sales of multiple stores) based on a common set of input features.
## N
-----------
#### Naive model {: #naive-model }
See [baseline model](#baseline-model).
#### No-Code AI Apps {: #no-code-ai-apps }
A no-code interface to create AI-powered applications that enable core DataRobot services without having to build models and evaluate their performance. Applications are easily shared and do not require consumers to own full DataRobot licenses in order to use them.
#### N-gram {: #n-gram }
A sequence of words, where N is the number of words. For example, "machine learning" is a 2-gram. Text features are divided into n-grams to prepare for natural language processing (NLP).
#### Notebook {: #notebook }
An interactive, computational environment that hosts code execution and rich media. DataRobot provides its own in-app environment to create, manage, and execute Jupyter-compatible hosted notebooks.
#### Nowcasting {: #nowcasting data-category=time-aware }
(Time-aware) A method of time series modeling that predicts the current value of a target based on past and present data. Technically, it is a forecast window in which the start and end times are 0 (now).
## O
-----------
#### Offset {: #offset }
Feature(s) that should be treated as a fixed component for modeling (coefficient of 1 in generalized linear models or gradient boosting machine models). Offsets are often used to incorporate pricing constraints or to boost existing models.
#### Optimization metric {: #optimization-metric }
An error metric used in DataRobot to determine how well a model predicts actual values. After you choose a target feature, DataRobot selects an optimization metric based on the modeling task.
#### OTV {: #otv data-category=time-aware }
(Time-aware) Also known as out-of-time validation. A method for modeling time-relevant data. With OTV you are not forecasting, as with [time series](#autots-automated-time-series). Instead, you are predicting the target value on each individual row.
#### Overfitting {: #overfitting }
A situation in which a model fits its training data too well and therefore loses its ability to perform accurately against unseen data. This happens when a model trains too long on the training data and learns (and models on) its "noise," making the model unable to generalize.
## P
-----------
#### Partition {: #partition }
The segments (splits) of the dataset, broken down to support model training and evaluation while maximizing accuracy. See also [training](#training-data), [validation](#validation), [cross-validation](#cross-validation), and [holdout](#holdout).
#### Per-Class Bias {: #per-class-bias }
A model Leaderboard tab ([Bias and Fairness > Per-Class Bias](per-class)) that helps to identify whether a model is biased, and if so, how much and whom it is biased toward or against. [Bias and Fairness settings](fairness-metrics) must be configured.
#### PID (project identifier) {: #pid-project-identifier }
An internal identifier used for uniquely identifying a project.
#### PII {: #pii }
Personally identifiable information, including name, pictures, home address, SSN or other identifying numbers, birthdate, and more. DataRobot automates the detection of specific types of personal data to provide a layer of protection against the inadvertent inclusion of this information in a dataset.
#### Portable prediction server (PPS) {: #portable-prediction-server-pps data-category=mlops }
(MLOps) A DataRobot execution environment for DataRobot model packages (`.mlpkg` files) distributed as a self-contained Docker image. It can be run disconnected from main installation environments.
#### Predicting {: #predicting data-category=time-aware }
(Time-aware) For non-time-series modeling. Use information in a row to determine the target for that row. Prediction uses explanatory variables to characterize expected outcomes or expected responses (e.g., a specific event in the future, gender, fraudulent transactions).
#### Prediction data {: #prediction-data data-category=mlops;predictions }
(MLOps, Predictions) Data that contains prediction requests and results from the model.
#### Prediction environment {: #prediction-environment data-category=mlops;predictions }
(MLOps, Predictions) An environment configured to manage deployment predictions on an external system, outside of DataRobot. Prediction environments allow you to configure deployment permissions and approval processes. Once configured, you can specify a prediction environment for use by DataRobot models running on the Portable Prediction Server and for remote models monitored by the MLOps monitoring agent.
#### Prediction Explanations {: #prediction-explanations }
A visualization that helps to illustrate what drives predictions on a row-by-row basis—they provide a quantitative indicator of the effect variables have on a model, answering why a given model made a certain prediction. It helps to understand why a model made a particular prediction so that you can then validate whether the prediction makes sense. See also [SHAP](#shap-shapley-values), [XEMP](#xemp-exemplar-based-explanations-of-model-predictions).
#### Prediction intervals {: #prediction-intervals }
Prediction intervals help DataRobot assess and describe the uncertainty in a single record prediction by including an upper and lower bound on a point estimate (e.g., a single prediction from a machine learning model). The prediction intervals provide a probable range of values that the target may fall into on future data points.
#### Prediction point {: #prediction-point }
The point in time when you made or will make a prediction. Plan your prediction point based on the production model (for example, “one month before renewal” or “loan application submission time”). Once defined, create that entry in the training data to help avoid lookahead bias. With [Feature Discovery](fd-overview), you define the prediction point to ensure the derived features only use data prior to that point.
#### Primary dataset {: #primary-dataset data-category=feature-discovery }
(Feature Discovery) The dataset used to start a project.
#### Primary features {: #primary-features data-category=feature-discovery }
(Feature Discovery) Features in the project’s primary dataset.
#### Project {: #project }
A referenceable item that includes a dataset, which is the source used for training, and any models built from the dataset. Projects can be created and accessed from the home page, the project control center, and the AI Catalog. They can be shared with users, groups, and an organization.
#### Protected class {: #protected-class data-category=bias-and-fairness }
(Bias and Fairness) One categorical value of the protected feature.
#### Protected feature {: #protected-feature data-category=bias-and-fairness }
(Bias and Fairness) The dataset column to measure fairness of model predictions against. Model fairness is calculated against the protected features from the dataset. Also known as “protected attribute.”
## Q
-----------
#### Quick (Autopilot) {: #quick-autopilot }
A shortened version of the full Autopilot modeling mode that runs models directly at 64%. With Quick, the 16% and 32% sample sizes are not executed. DataRobot selects models to run based on a variety of criteria, including target and performance metric, but as its name suggests, chooses only models with relatively short training runtimes to support quicker experimentation.
## R
-----------
#### Rating Table {: #rating-table }
A model Leaderboard tab ([Describe > Rating Table](rating-table)) where you can export the model's complete, validated parameters.
#### Real-time predictions {: #real-time-predictions data-category=predictions }
(Predictions) Method of making predictions when low latency is required. Use the Prediction API for real-time deployment predictions on a dedicated and/or a standalone prediction server.
#### Receiver Operating Characteristic Curve {: #receiver-operating-characteristic-curve }
See [ROC Curve](#roc-curve).
#### Regression {: #regression }
A type of prediction problem that predicts continuous values (for example, 1.7, 6, 9.8…).
#### Regular data {: #regular-data data-category=time-aware }
(Time-aware) Data is regular if rows in the dataset fall on an evenly spaced time grid (e.g., there’s one row for every hour across the entire dataset).
#### Relationships {: #relationships data-category=feature-discovery }
(Feature Discovery) Relationships between datasets. Each relationship involves a pair of datasets, and a join key from each dataset. A key comprises one or more columns of a dataset. The keys from both datasets are ordered, and must have the same number of columns. The combination of keys is used to determine how two datasets are joined.
#### Remote models {: #remote-models data-category=mlops }
(MLOps) Models running outside of DataRobot in external prediction environments, often monitored by the MLOps monitoring agent to report statistics back to DataRobot.
#### Repository {: #repository }
A library of modeling blueprints available for a selected project (based on the problem type). These models may be selected and built by DataRobot and also can be user-executed.
#### ROC Curve {: #roc-curve }
Also known as Receiver Operating Characteristic Curve. A visualization that helps to explore classification, performance, and statistics related to a selected model at any point on the probability scale. In DataRobot, the visualization is available from the Leaderboard.
#### Role {: #role data-category=ai-catalog }
(AI Catalog) Roles—Owner, Consumer, and Editor—describe the capabilities provided to each user for a given dataset. This supports scenarios in which the user who creates a data source or data connection is not the end user, or in which an asset has multiple end users.
## S
-----------
#### Sample size {: #sample-size }
The percentage of the total training data used to build models. The percentage is based on the selected modeling mode or can be user-selected.
#### Scoring {: #scoring data-category=predictions }
See [Model scoring](#model-scoring), [Scoring data](#scoring-data).
#### Scoring Code {: #scoring-code data-category=mlops;predictions }
(MLOps, Predictions) A method for using DataRobot models outside of the application. For select models, you can download an exportable JAR file from the Leaderboard containing Java code that can score data from the command line. Scoring Code JARs contain prediction calculation logic identical to the DataRobot API; the code generation mechanism tests each model for accuracy as part of the generation process.
#### Scoring data {: #scoring-data data-category=predictions }
(Predictions) Applying an algorithmic model built from a historical dataset to a new dataset in order to uncover practical insights. Common scoring methods are batch and real-time scoring. "Scored data" (also called "inference data") refers to the dataset being scored.
#### Seasonality {: #seasonality data-category=time-aware }
(Time-aware) Repeating highs and lows observed at different times of year, within a week, day, etc. Periodicity. For example, temperature is very seasonal (hot in the summer, cold in the winter, hot during the day, cold at night).
#### Secondary dataset {: #secondary-dataset data-category=feature-discovery }
(Feature Discovery) A dataset that is added to a project and part of a relationship with the primary dataset.
#### Secondary features {: #secondary-features data-category=feature-discovery }
(Feature Discovery) Features derived from a project’s secondary datasets.
#### Segmented analysis {: #segmented-analysis data-category=mlops }
(MLOps) A deployment utility that filters data drift and accuracy statistics into unique segment attributes and values. Useful for identifying operational issues with training and prediction request data.
#### Segmented modeling {: #segmented-modeling data-category=time-aware}
(Time-aware) A method of modeling multiseries projects by generating a model for each segment. DataRobot selects the best model for each segment (the segment champion) and includes the segment champions in a single Combined Model that you can deploy.
#### Semi-regular data {: #semi-regular-data data-category=time-aware }
(Time-aware) Data is semi-regular if most time steps are regular but there are some small gaps (e.g., business days, but no weekends).
#### Segment ID {: #segment-id data-category=time-aware }
(Time-aware) A column in a dataset used to group series into segments for a multiseries project. A segment ID is required for the segmented modeling workflow, where DataRobot builds a separate model for each segment. See also [Segmented modeling](ts-segmented).
#### Series ID {: #series-id data-category=time-aware }
(Time-aware) A column in a dataset used to divide a dataset into series for a multiseries project. The column contains labels indicating which series each row belongs to. See also [Multiseries modeling](multiseries).
#### Service health {: #service-health data-category=mlops }
(MLOps) A performance monitoring component for deployments that tracks metrics about a deployment’s ability to respond to prediction requests quickly and reliably. Useful for identifying bottlenecks and assessing prediction capacity.
#### SHAP (Shapley Values) {: #shap-shapley-values }
A fast, open-source methodology for computing Prediction Explanations for tree-based, deep learning, and linear-based models. SHAP estimates how much each feature contributes to a given prediction differing from the average. It is additive, making it easy to see how much top-N features contribute to a prediction. See also [Prediction Explanations](#prediction-explanations), [XEMP](#xemp-exemplar-based-explanations-of-model-predictions).
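For reference, the additive property (standard SHAP, not a DataRobot-specific formulation) means that a single prediction decomposes as:

$$
\hat{y} = \phi_0 + \sum_{i=1}^{M} \phi_i
$$

where $\phi_0$ is the average (base) prediction and each $\phi_i$ is the SHAP contribution of feature $i$.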
#### Smart downsampling {: #smart-downsampling }
A technique to reduce total dataset size by reducing the size of the majority class, enabling you to build models faster without sacrificing accuracy. When enabled, all analysis and model building is based on the new dataset size after smart downsampling.
#### Snapshot {: #snapshot data-category=ai-catalog }
(AI Catalog) A snapshot is an asset created from a data source. For example, with a database it represents either the entire database or a selection of (potentially joined) tables, taken at a particular point in time. It is taken from a live database but creates a static, read-only copy of data. DataRobot creates a snapshot of each data asset type, while allowing you to disable the snapshot when importing the data.
#### Speed vs Accuracy {: #speed-vs-accuracy }
A Leaderboard tab that generates an analysis plot to show the tradeoff between runtime and predictive accuracy and help you choose the best model with the lowest overhead.
#### Stability {: #stability }
A model Leaderboard tab ([Evaluate > Stability](stability)) that provides an at-a-glance summary of how well a model performs on different backtests. The backtesting information in this chart is the same as that available from the [Model Info](#model-info) tab.
#### Stacked predictions {: #stacked-predictions data-category=predictions }
(Predictions) A method for building multiple models on different subsets of the data. The prediction for any row is made using a model that excluded that data from training. In this way, each prediction is effectively an “out-of-sample” prediction. See an example in the [predictions documentation](data-partitioning#what-are-stacked-predictions). Compare to ["in-sample"](#in-sample-predictions) predictions.
#### Stationarity {: #stationarity data-category=time-aware }
(Time-aware) The mean of the series does not change over time. A stationary series does not have a trend or seasonal variation.
#### Supervised learning {: #supervised-learning }
Machine learning using labeled data, meaning that for each record, the dataset contains a known value for the target feature. By knowing the target during training, the model can "learn" how other features relate to the target and make predictions on new data.
## T
-----------
#### Target {: #target }
The name of the column in the dataset that you would like to predict.
#### Target leakage {: #target-leakage }
An outcome when using a feature whose value cannot be known at the time of prediction (for example, using the value for “churn reason” from the training dataset to predict whether a customer will churn). Including the feature in the model’s feature list would incorrectly influence the prediction and can lead to overly optimistic models.
#### Task {: #task }
An ML method, for example a data transformation such as one-hot encoding, or an estimation such as an XGBoost classifier, which is used to define a blueprint. There are hundreds of built-in tasks you can use, or you can define your own (custom) tasks.
#### Time series {: #time-series data-category=time-aware }
(Time-aware) A series of data points indexed in time order. Ordinarily a sequence of measurements taken at successive, equally spaced intervals.
#### Time series analysis {: #time-series-analysis data-category=time-aware }
(Time-aware) Methods for analyzing time series data in order to extract meaningful statistics and other characteristics of the data.
#### Time series forecasting {: #time-series-forecasting data-category=time-aware }
(Time-aware) The use of a model to predict future values based on previously observed values. In practice, a forecasting model may combine time series features with other data.
#### Time step {: #time-step data-category=time-aware }
(Time-aware) The detected median time delta between rows in the time series; DataRobot determines the time unit. The time step consists of a number and a time-delta unit, for example (15, “minutes”). If a step isn’t detected, the dataset is considered irregular and time series mode may be disabled.
#### Tracking agent {: #tracking-agent }
See [MLOps agent](#mlops-agent).
#### Training {: #training }
The process of building models on data in which the target is known.
#### Training Dashboard {: #training-dashboard }
A model Leaderboard tab ([Evaluate > Training dashboard](training-dash)) that provides, for each executed iteration, information about a model's training and test loss, accuracy, learning rate, and momentum to help you get a better understanding about what may have happened during model training.
#### Training data {: #training-data }
The portion (partition) of data used to build models. See also [validation](#validation), [cross-validation](#cross-validation), and [holdout](#holdout).
#### Transfer learning {: #transfer-learning data-category=visual-ai }
(Visual AI) A project training on one dataset, extracting information that may be useful, and applying that learning to another.
#### Trend {: #trend data-category=time-aware }
(Time-aware) An increase or decrease over time. Trends can be linear or non-linear and can show fluctuation. A series with a trend is not stationary.
#### Tuning {: #tuning }
A trial-and-error process by which you change some hyperparameters, run the algorithm on the data again, then compare performance to determine which set of hyperparameters results in the most accurate model. In DataRobot, this functionality is available from the Advanced Tuning tab.
## U
-----------
#### Unit of analysis {: #unit-of-analysis }
(Machine learning) The unit of observation at which you are making a prediction.
#### Unlimited multiclass {: #unlimited-multiclass }
See [classification](#classification).
#### Unmaterialized {: #unmaterialized data-category=ai-catalog }
(AI Catalog) Unmaterialized data is data that DataRobot samples for profile statistics, but does not keep. Instead, the catalog stores a pointer to the data and only pulls it upon user request at project start or when running batch predictions.
#### Unsupervised learning {: #unsupervised-learning }
The ability to infer patterns from a dataset without reference to known (labeled) outcomes and without a specified target. Types of unsupervised learning include anomaly detection, outlier detection, novelty detection, and clustering. With anomaly detection, DataRobot applies unsupervised learning to detect abnormalities in a dataset. With clustering, DataRobot uses unsupervised learning to discern natural groupings in the data.
#### User blueprint {: #user-blueprints }
A blueprint (and extra metadata) that has been created by a user and saved to the AI Catalog, where it can be both shared and further modified. This is not the same as a blueprint available from the Repository or via models on the Leaderboard, though both can be used as the basis for creation of a user blueprint. See also [blueprint](#blueprint).
## V
-----------
#### Validation {: #validation }
The validation (or testing) partition is a subsection of data that is withheld from training and used to evaluate a model’s performance. Since this data was not used to build the model, it can provide an unbiased estimate of a model’s accuracy. You often compare the results of validation when selecting a model. See also [cross-validation](#cross-validation).
#### Variable {: #variable }
See [feature](#feature).
#### Visual AI {: #visual-ai }
DataRobot's ability to combine supported image types, either alone or in combination with other supported feature types, to create models that use images as input. The feature also includes specialized insights (e.g., image embeddings, activation maps, neural network visualizer) to help visually assess model performance.
## W
-----------
#### Word Cloud {: #word-cloud}
A model Leaderboard tab ([Understand > Word Cloud](word-cloud)) that displays the most relevant words and short phrases in word cloud format.
#### Worker {: #worker }
The processing power behind the DataRobot platform, used for creating projects, training models, and making predictions. Each worker represents a portion of processing power allocated to a task. DataRobot uses different types of workers for different phases of the project workflow, including DSS workers (Dataset Service workers), EDA workers, secure modeling workers, and quick workers.
## X
-----------
#### XEMP (eXemplar-based Explanations of Model Predictions) {: #xemp-exemplar-based-explanations-of-model-predictions }
A methodology for computing Prediction Explanations that works for all models. See also [Prediction Explanations](#prediction-explanations), [SHAP](#shap-shapley-values).
## Z
-----------
#### Z Score {: #z-score data-category=bias-and-fairness }
(Bias and Fairness) A metric measuring whether a given class of the protected feature is “statistically significant” across the population.
|
index
|
---
title: Deploy and monitor Spark models with DataRobot MLOps
description: Deploy and monitor Spark models with DataRobot MLOps
---
# Deploy and monitor Spark models with DataRobot MLOps {: #deploy-and-monitor-spark-models-with-datarobot-mlops }
This page shows how to use DataRobot's Monitoring Agent (MLOps agent) to manage and monitor models from a central dashboard without deploying them within MLOps.
You will explore how to manage and monitor remote models—models that are not running within DataRobot MLOps—deployed on your own infrastructure. Common examples are serverless deployments (AWS Lambda, Azure Functions) or deployments on Spark clusters (Hadoop, Databricks, AWS EMR).

The sections below show how to deploy a DataRobot model on a Databricks cluster and monitor it with DataRobot MLOps in a central dashboard that covers all of your models, regardless of where they were developed or deployed. This approach works for any model that runs within a Spark cluster.
## Create a model {: #create-a-model }
In this section, you create a model with DataRobot AutoML using the [LendingClub dataset](https://s3.amazonaws.com/datarobot_public_datasets/10K_Lending_Club_Loans.csv), and then import it into your Databricks cluster.
!!! note
If you already have a regression model that runs in your Spark cluster, you can skip this step and proceed to [Install DataRobot's MLOps monitoring agent and library](#install-the-mlops-monitoring-agent-and-library).
1. To upload the training data to DataRobot, do either of the following:
* Click **Local File**, and then select the LendingClub dataset CSV file from your local filesystem.
* Click **URL** to open the **Import URL** dialog box and copy the LendingClub dataset URL above:

In the **Import URL** dialog box, paste the LendingClub dataset **URL** and click **Import from URL**:

2. Enter `loan_amnt` as your target (1) (what you want to predict) and click **Start** (2) to run Autopilot.

3. After Autopilot finishes, click **Models** (1) and select a model with the **SCORING CODE** label (2) at the top of the Leaderboard (3).

4. Under the model you selected, click **Predict** (1) and click **Downloads** (2) to access the **Scoring Code JAR** download.

!!! note
The ability to download Scoring Code for a model from the Leaderboard depends on the MLOps configuration for your organization.
5. Click **Download** (3) to start the JAR file download.
For more information, see the documentation on [Scoring Code](sc-overview).
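Optionally, you can sanity check the downloaded JAR locally before deploying it to Databricks. The snippet below is a sketch that assumes the standard Scoring Code command-line interface; the exact arguments can vary by Scoring Code version, so consult the Scoring Code documentation linked above for the flags that apply to your JAR.

```bash
# Hypothetical local smoke test of the downloaded Scoring Code JAR (flags may vary by version)
java -jar 5ed68d70455df33366ce0508.jar csv \
  --input=10K_Lending_Club_Loans.csv \
  --output=predictions.csv
```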
## Deploy a model {: #deploy-a-model }
To deploy the model, install the previously downloaded Scoring Code JAR file, along with DataRobot's Spark Wrapper, on the Databricks cluster as shown below.
1. Click **Clusters** to open the cluster settings.
2. Select the cluster to which you'd like to deploy the DataRobot model.
3. Click the **Libraries** tab.
4. Click **Install New**.
5. In the **Install Library** dialog box, with the **Library Source** set to **Upload** and the **Library Type** set to **JAR**, drag-and-drop the Scoring Code JAR file (e.g., `5ed68d70455df33366ce0508.jar`).
6. Click **Install**.

Once the install is complete, repeat the same steps and install DataRobot's Spark Wrapper, which you can [download here](https://github.com/datarobot-community/ai_engineering/blob/master/mlops/DRMLOps_Spark_examples/scoring-code-spark-api.jar){ target=_blank }, or pull the latest version of it directly from [Maven](https://mvnrepository.com/artifact/com.datarobot/scoring-code-spark-api_2.4.3){ target=_blank }.
## Install the MLOps monitoring agent and library {: #install-the-mlops-monitoring-agent-and-library }

Remote models do not directly communicate with DataRobot MLOps.
Instead, the communication is handled via DataRobot MLOps monitoring agents, which support many spooling mechanisms (e.g., flat files, AWS SQS, RabbitMQ).
These agents are typically deployed in the external environment where the model is running.
Libraries are available for all common programming languages to simplify communication with the DataRobot MLOps monitoring agent. The model is instructed to talk to the agent with the help of the MLOps library. The agent then collects all metrics from the model and relays them to the MLOps server and dashboards.
In this example, the runtime environment is Spark. Therefore, you will install the MLOps library to your Spark cluster (Databricks) in the same way you installed the model itself previously (in **Deploy a Model**). You will also install the MLOps monitoring agent in an Azure Kubernetes Service (AKS) cluster alongside RabbitMQ, which is used as your queuing system.
This process assumes that you are familiar with Azure Kubernetes Service and the Azure CLI. For more information, see [Microsoft's Quick Start Tutorial](https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough){ target=_blank }.
### Create an AKS cluster {: #create-an-aks-cluster }
1. If you don't have a running AKS cluster, create one, as shown below:
```bash
RESOURCE_GROUP=ai_success_eng
CLUSTER_NAME=AIEngineeringDemo
az aks create \
--resource-group $RESOURCE_GROUP \
--name $CLUSTER_NAME \
-s Standard_B2s \
--node-count 1 \
--generate-ssh-keys \
--service-principal XXXXXX \
--client-secret XXXX \
--enable-cluster-autoscaler \
--min-count 1 \
--max-count 2
```
2. Start the Kubernetes dashboard:
```bash
az aks browse --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME
```
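Later steps use `kubectl` against this cluster (for example, to copy the agent tarball into the RabbitMQ pod). If your kubeconfig does not yet point at the new cluster, a minimal sketch:

```bash
# Merge the new cluster's credentials into your kubeconfig and verify connectivity
az aks get-credentials --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME
kubectl get nodes
```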
### Install RabbitMQ {: #install-rabbitmq }
There are many ways to deploy applications; the most direct is via the Kubernetes dashboard (a rough CLI equivalent is sketched after these steps).
To install RabbitMQ:
1. Click **CREATE > CREATE AN APP** (1).
2. On the **CREATE AN APP** page (2), specify the following:

| | Field | Value |
|---|-------|-------|
| 3 | **App name** | e.g., `rabbitmqdemo` |
| 4 | **Container image** | e.g., `rabbitmq:latest` |
| 5 | **Number of pods** | e.g., `1` |
| 6 | **Service** | `External` |
| 7 | **Port** and **Target port** | `5672` and `5672` <br> `15672` and `15672` |
3. Click **DEPLOY** (8).
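If you prefer the command line to the dashboard, a rough CLI equivalent of the steps above (using the same example values from the table) is sketched below. Note that `kubectl expose` handles a single port, so exposing both 5672 and 15672 through one external Service requires a Service manifest instead.

```bash
# Create the RabbitMQ deployment (same image and name as in the dashboard example)
kubectl create deployment rabbitmqdemo --image=rabbitmq:latest

# Expose the AMQP port externally; add port 15672 via a Service manifest
# (or a second Service) if the management UI must also be reachable externally
kubectl expose deployment rabbitmqdemo --type=LoadBalancer --port=5672 --target-port=5672

# Alternatively, reach the management UI from your workstation without exposing it
kubectl port-forward deployment/rabbitmqdemo 15672:15672
```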
### Download the MLOps monitoring agent {: #download-the-mlops-monitoring-agent }
To download the MLOps monitoring agent directly from your DataRobot cluster:
1. In the upper-right corner of DataRobot, click your profile icon (or the default avatar).
2. Click **Developer Tools**.
3. Under **External Monitoring Agent**, click the download icon.

### Install the MLOps monitoring agent {: #install-the-mlops-monitoring-agent }
You can install the agent anywhere; however, for this process, you will install it alongside RabbitMQ.
1. Copy the monitoring agent tarball you downloaded in the previous section to the container where RabbitMQ is running. To do this, run the following command:
!!! note
You may need to replace the filename of the tarball in the example below.
```bash
kubectl cp datarobot-mlops-agent-6.1.0.tar.gz default/rabbitmq-649ccbd8cb-qjb4l:/opt
```
2. Click **Pods** (1) and click the container **Name** (2) to connect to the CLI of the container.

3. Click **Exec** (3) to open a shell inside the running container.

4. In the container's CLI, begin to configure the agent. Review the tarball name and, if necessary, update the filename in the following commands, and then run them:
```bash
# Adjust the archive and directory names below if your agent version differs
cd /opt && tar -xvzf datarobot-mlops-agent-6.1.0.tar.gz &&
cd mlops-agent-6.1.0/conf   # the extracted directory name may vary by version
```
5. In the directory, update the `mlops.agent.conf.yaml` configuration file to point to your DataRobot MLOps instance and message queue.
6. To update the configuration and run the agent, you must install Vim and Java with the following commands:
```bash
apt-get update &&
apt-get install -y vim &&
apt-get install -y default-jdk
```
7. In this example, you are using RabbitMQ and the DataRobot managed AI Platform solution, so you must configure the `mlopsURL` and `apiToken` (1) and the `channelConfigs` (2) as shown below; a rough text sketch of these fields also appears after these steps.

!!! note
You can obtain your `apiToken` from the [Developer Tools](api-key-mgmt#api-key-management).
8. Before starting the agent, you can enable the RabbitMQ Management UI and create a new user to monitor queues:
```bash
### Enable the RabbitMQ UI
rabbitmq-plugins enable rabbitmq_management &&
### Add a user via the CLI
rabbitmqctl add_user <username> <your password> &&
rabbitmqctl set_user_tags <username> administrator &&
rabbitmqctl set_permissions -p / <username> ".*" ".*" ".*"
```
9. Now that RabbitMQ is configured and the updated configuration is saved, switch to the `/bin` directory and start the agent:
```bash
cd ../bin &&
./start-agent.sh
```
10. Confirm that the agent is running correctly by checking its status:
```bash
./status-agent.sh
```
11. To ensure that everything is running as expected, check the logs located in the agent's `logs` directory.

### Install the MLOps library in a Spark cluster {: #install-the-mlops-library-in-a-spark-cluster }
First, <a href="https://github.com/datarobot-community/ai_engineering/blob/master/mlops/DRMLOps_Spark_examples/datarobot-mlops-6.2.0.jar" target="_blank">download the MLOps library</a>.
To install the library in a Spark cluster (Databricks):
1. Click **Clusters** to open the cluster settings.
2. Select the cluster to which you'd like to deploy the DataRobot model.
3. Click the **Libraries** tab.
4. In the **Install Library** dialog box, with the **Library Source** set to **Upload** and the **Library Type** set to **JAR**, drag-and-drop the MLOps JAR file (e.g., `MLOps.jar`).
5. Click **Install**.

## Run your model {: #run-your-model }
Now that all the prerequisites are in place, run your model to make predictions:
```scala
// Scala example (see also PySpark example in notebook references at the bottom)
// 1) Use local DataRobot Model for Scoring
import com.datarobot.prediction.spark.Predictors
// referencing model_id, which is the same as the generated filename of the JAR file
val DataRobotModel = com.datarobot.prediction.spark.Predictors.getPredictor("5ed68d70455df33366ce0508")
// 2) read the scoring data
val scoringDF = sql("select * from 10k_lending_club_loans_with_id_csv")
// 3) Score the data and save results to spark dataframe
val output = DataRobotModel.transform(scoringDF)
// 4) Review/consume scoring results
output.show(1,false)
```
To track the actual scoring time, wrap the scoring command in a timing function; the updated code looks like the following:
```scala
// to track the actual scoring time
def time[A](f: => A): Double = {
  val s = System.nanoTime
  val ret = f // force evaluation of the timed block
  val scoreTime = (System.nanoTime - s) / 1e6 * 0.001
  println("time: " + scoreTime + "s")
  return scoreTime
}

// 1) Use local DataRobot Model for Scoring
import com.datarobot.prediction.spark.Predictors
// referencing model_id, which is the same as the generated filename of the JAR file
val DataRobotModel = com.datarobot.prediction.spark.Predictors.getPredictor("5ed708a8fca6a1433abddbcb")

// 2) Read the scoring data
val scoringDF = sql("select * from 10k_lending_club_loans_with_id_csv")

// 3) Score the data inside the timing wrapper
val scoreTime = time {
  // Score the data and save results to a spark dataframe
  val scoring_output = DataRobotModel.transform(scoringDF)
  scoring_output.show(1, false)
  scoring_output.createOrReplaceTempView("scoring_output")
}
```
## Report usage to MLOps via monitoring agents {: #report-usage-to-mlops-via-monitoring-agents }
After using the model to predict the loan amount of an application, you can report the telemetry around these predictions to your DataRobot MLOps server and dashboards. To do this, use the commands in the following sections.
### Create an external deployment {: #create-an-external-deployment }
Before you can report scoring details, you must create an external deployment within DataRobot MLOps. This only has to be done once and can be done via the UI in DataRobot MLOps:
1. Click **Model Registry** (1), click **Model Packages** (2), and then click **New external model package** (3).

2. Specify a package name and description (1 and 2), upload the corresponding training data for drift tracking (3), and identify the model location (4), target (5), environment (6), and prediction type (7), then click **Create package** (8).

3. After creating the external model package, note the model ID in the URL, as shown below (blurred in the image for security purposes).

4. Click **Deployments** (1) and click **Create new deployment** (2).

Once the deployment is created, the **Deployments > Overview** page is shown.
5. On the **Overview** page, copy the deployment ID (from the URL).

Now that you have your model ID and deployment ID, you can report the predictions in the next section.
### Report prediction details {: #report-prediction-details }
To report prediction details to DataRobot, run the following code in your Spark environment. Make sure you update the input parameters.
```scala
import com.datarobot.mlops.spark.MLOpsSparkUtils
val channelConfig = "OUTPUT_TYPE=RABBITMQ;RABBITMQ_URL=amqp://<<RABBIT HOSTNAME>>:5672;RABBITMQ_QUEUE_NAME=mlopsQueue"
MLOpsSparkUtils.reportPredictions(
scoringDF, // spark dataframe with actual scoring data
"5ec3313XXXXXXXXX", // external DeploymentId
"5ec3313XXXXXXXXX", // external ModelId
channelConfig, // rabbitMQ config
  scoreTime, // actual scoring time
Array("PREDICTION"), //target column
"id" // AssociationId
)
```
### Report actuals {: #report-actuals }
When you get actual values, you can report them to track accuracy over time.
Report actuals using the function below:
```scala
import com.datarobot.mlops.spark.MLOpsSparkUtils
val actualsDF = spark.sql("select id as associationId, loan_amnt as actualValue, null as timestamp from actuals")
MLOpsSparkUtils.reportActuals(
actualsDF,
deploymentId,
  modelId,
channelConfig
)
```
Even though you deployed a model outside of DataRobot on a Spark cluster (Databricks), you can monitor it like any other model to track service health, data drift, and actuals in one central dashboard.

For complete sample notebooks with code snippets for Scala and PySpark, go to [the DataRobot Community GitHub](https://github.com/datarobot-community/ai_engineering/tree/master/mlops/DRMLOps_Spark_examples){ target=_blank }.
|
spark-deploy-and-monitor
|
---
title: Deploy and monitor DataRobot models in Azure Kubernetes Service
description: Deploy and monitor DataRobot models in Azure Kubernetes Service
---
# Deploy and monitor DataRobot models in Azure Kubernetes Service {: #deploy-and-monitor-datarobot-models-in-azure-kubernetes-service }
!!! info "Availability information"
The MLOps model package export feature is off by default. Contact your DataRobot representative or administrator for information on enabling this feature for DataRobot MLOps.
**Feature flag**: Enable MMM model package export
This page shows how to deploy machine learning models on Azure Kubernetes Services (AKS) to create production scoring pipelines with DataRobot's MLOps Portable Prediction Server (PPS).

DataRobot Automated Machine Learning provides a dedicated prediction server as a low-latency, synchronous REST API suitable for real-time predictions. The DataRobot MLOps PPS extends this functionality to serve ML models in container images, giving you portability and control over your ML model deployment architecture.
A containerized PPS is well-suited to deployment in a Kubernetes cluster, allowing you to take advantage of this deployment architecture's auto-scaling and high availability. The combination of PPS and Kubernetes is ideal for volatile, irregular workloads such as those you can find in IoT use cases.
## Create a model {: #create-a-model }
The examples on this page use the [public LendingClub dataset](https://s3.amazonaws.com/datarobot_public_datasets/10K_Lending_Club_Loans.csv) to predict the loan amount for each application.
1. To upload the training data to DataRobot, do either of the following:
* Click **Local File**, and then select the LendingClub dataset CSV file from your local file system.
* Click **URL** to open the **Import URL** dialog box and copy the LendingClub dataset URL above:

In the **Import URL** dialog box, paste the LendingClub dataset **URL** and click **Import from URL**:

2. Enter `loan_amt` as your target (what you want to predict) (1) and click **Start** (2) to run Autopilot.

3. After Autopilot finishes, click **Models** and select a model at the top of the Leaderboard.
4. Under the model you selected, click **Predict > Deploy** to access the **Get model package** download.
5. Click **Download .mlpkg** to start the `.mlpkg` file download.
!!! note
For more information, see the documentation on the [Portable Prediction Server](portable-pps).
## Create an image with the model package {: #create-an-image-with-the-model-package }
After you [obtain the PPS base image](portable-pps#obtain-the-pps-docker-image), create a new version of it by creating a Dockerfile with the content below:
``` bash
FROM datarobot/datarobot-portable-prediction-api:<TAG> AS build
COPY . /opt/ml/model
```
!!! note
For more information on how to structure this Docker command, see the [Docker build](https://docs.docker.com/engine/reference/builder/) documentation.
For the `COPY` command to work, you must have the `.mlpkg` file in the same directory as the Dockerfile. After creating your Dockerfile, run the command below to create a new image that includes the model:
``` bash
docker build . --tag regressionmodel:latest
```
## Create an Azure Container Registry {: #create-an-azure-container-registry }
Before deploying your image to AKS, push it to a container registry such as the Azure Container Registry (ACR) for deployment:
1. In the Azure Portal, click **Create a resource > Containers**, then click **Container Registry**.
2. On the **Create container registry** blade, enter the following:
| Field | Description |
|-------|-------------|
| **Registry name**| Enter a suitable name |
| **Subscription**| Select your subscription |
| **Resource group**| Use your existing resource group |
| **Location**| Select your region |
| **Admin user**| Enable |
| **SKU**| Standard |
3. Click **Create**.
4. Navigate to your newly-generated registry and select **Access Keys**.
5. Copy the admin password to authenticate with Docker and push the `.mlpkg` image to this registry.

## Push the model image to ACR {: #push-the-model-image-to-acr }
To push your new image to Azure Container Registry (ACR), log in with the following command (replace `<DOCKER_USERNAME>` with your previously-selected repository name):
``` bash
docker login <DOCKER_USERNAME>.azurecr.io
```
The password is the administrator password you created with the Azure Container Registry (ACR).
Once logged in, make sure your Docker image is correctly tagged, and then push it to the repo with the following command (replace `<DOCKER_USERNAME>` with your previously selected repository name):
``` bash
docker tag regressionmodel <DOCKER_USERNAME>.azurecr.io/regressionmodel
docker push <DOCKER_USERNAME>.azurecr.io/regressionmodel
```
## Create an AKS cluster {: #create-an-aks-cluster }
This section assumes you are familiar with AKS and Azure's Command Line Interface (CLI).
!!! note
For more information on AKS, see [Microsoft's Quickstart tutorial](https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough){ target=_blank }.
1. If you don't have a running AKS cluster, create one:
``` bash
RESOURCE_GROUP=ai_success_eng
CLUSTER_NAME=AIEngineeringDemo
az aks create \
--resource-group $RESOURCE_GROUP \
--name $CLUSTER_NAME \
-s Standard_B2s \
--node-count 1 \
--generate-ssh-keys \
--service-principal XXXXXX \
--client-secret XXXX \
--enable-cluster-autoscaler \
--min-count 1 \
--max-count 2
```
2. Create a secret Docker registry so that AKS can pull images from the private repository. In the command below, replace the following with your actual credentials:
* `<SECRET_NAME>`
* `<YOUR_REPOSITORY_NAME>`
* `<DOCKER_USERNAME>`
* `<YOUR_SECRET_ADMIN_PW>`
``` bash
kubectl create secret docker-registry <SECRET_NAME> --docker-server=<YOUR_REPOSITORY_NAME>.azurecr.io --docker-username=<DOCKER_USERNAME> --docker-password=<YOUR_SECRET_ADMIN_PW>
```
3. Deploy your Portable Prediction Server image. There are many ways to deploy applications, but the easiest method is via the Kubernetes dashboard. Start the Kubernetes dashboard with the following command:
``` bash
az aks browse --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME
```
## Deploy a model to AKS {: #deploy-a-model-to-aks }
To install and deploy the PPS containing your model:
1. Click **CREATE > CREATE AN APP**.
2. On the **CREATE AN APP** page, specify the following:

| Field | Value |
|-------|-------|
| **App name** | e.g., `portablepredictionserver` |
| **Container image** | e.g., `aisuccesseng.azurecr.io/regressionmodel:latest` |
| **Number of pods** | e.g., `1` |
| **Service** | `External` |
| **Port**, **Target port**, and **Protocol** | `8080`, `8080`, and `TCP` |
| **Image pull secret** | previously created |
| **CPU requirement (cores)** | `1` |
| **Memory requirement (MiB)** | `250` |
3. Click **Deploy**—it may take several minutes to deploy.
## Score predictions with Postman {: #score-predictions-with-postman }
To test the model, download the [DataRobot PPS Examples](https://github.com/datarobot-community/ai_engineering/blob/master/mlops/DRMLOps_PortablePredictionServer_examples/DR%20MLOps%20Portable%20Prediction%20Server%20Public.postman_collection.json){ target=_blank } (a [Postman collection](https://www.postman.com/collection/){ target=_blank }) and update the hostname from `localhost` to the external IP address assigned to your service. You can find the IP address in the **Services** tab on your Kubernetes dashboard:

To make a prediction, execute the make predictions request:

## Configure autoscaling and high availability {: #configure-autoscaling-and-high-availability }
Kubernetes supports horizontal pod autoscaling to adjust the number of pods in a deployment depending on CPU utilization or other selected metrics. The Metrics Server provides resource utilization to Kubernetes and is automatically deployed in AKS clusters.
In the previous sections, you deployed one pod for your service and defined only the minimum requirement for CPU and memory resources.
To use the autoscaler, you must define CPU requests and utilization limits.
By default, the Portable Prediction Server spins up one worker, which means it can handle only one HTTP request simultaneously. The number of workers you can run, and thus the number of HTTP requests that it can handle simultaneously, is tied to the number of CPU cores available for the container.
Because you set the minimum CPU requirement to `1`, you can now set the limit to `2` in the `patchSpec.yaml` file:
``` yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: portablepredictionserver
spec:
selector:
matchLabels:
app: portablepredictionserver
replicas: 1
template:
metadata:
labels:
app: portablepredictionserver
spec:
containers:
- name: portablepredictionserver
image: aisuccesseng.azurecr.io/regressionmodel:latest
ports:
- containerPort: 8080
resources:
requests:
cpu: 1000m
limits:
cpu: 2000m
imagePullSecrets:
- name: aisuccesseng
```
Then run the following command:
``` bash
kubectl patch deployment portablepredictionserver --patch "$(cat patchSpec.yaml)"
```
Alternatively, you can update the deployment directly in the Kubernetes dashboard by editing the JSON as shown below and clicking **UPDATE**.

Now that the CPU limits are defined, you can configure autoscaling with the following command:
``` bash
kubectl autoscale deployment portablepredictionserver --cpu-percent=50 --min=1 --max=10
```
This enables Kubernetes to autoscale the number of pods in the `portablepredictionserver` deployment. If the average CPU utilization across all pods exceeds 50% of their requested usage, the autoscaler increases the pods from a minimum of one instance up to ten instances.
To run a load test, download the sample JMeter test plan below and update the URLs and authentication. Run it with the following command:
``` bash
jmeter -n -t LoadTesting.jmx -l results.csv
```
The output will look similar to the following example:

## Report usage to DataRobot MLOps via monitoring agents {: #report-usage-to-datarobot-mlops-via-monitoring-agents }
After deploying your model to AKS, you can monitor it, along with all of your other models, in one central dashboard by reporting the telemetry for these predictions to your DataRobot MLOps server and dashboards.
1. Navigate to the **Model Registry > Model Packages > Add New Package** and follow the instructions in the [documentation](reg-create#register-external-model-packages).

2. Select **Add new external model package** and specify a package name and description (1 and 2), upload the corresponding training data for drift tracking (3), and identify the model location (4), target (5), environment (6), and prediction type (7), then click **Create package** (8).

3. After creating the external model package, note the model ID in the URL as shown below (blurred in the image for security purposes).

4. While still on the **Model Registry** page and within the expanded new package, select the **Deployments** tab and click **Create new deployment**.

The deployment page loads prefilled with information from the model package you created.
5. Complete any missing information for the deployment and click **Create deployment**.
6. Navigate to **Deployments > Overview** and copy the deployment ID (from the URL).

Now that you have your model ID and deployment ID, you can report predictions as described in the next section.
### Report prediction details {: #report-prediction-details }
To report prediction details to DataRobot, you need to provide a few environment variables to your Portable Prediction Server container.
Update the deployment directly in the Kubernetes dashboard by editing the JSON and then clicking **UPDATE**:
``` json
"env": [
{
"name": "PORTABLE_PREDICTION_API_WORKERS_NUMBER",
"value": "2"
},
{
"name": "PORTABLE_PREDICTION_API_MONITORING_ACTIVE",
"value": "True"
},
{
"name": "PORTABLE_PREDICTION_API_MONITORING_SETTINGS",
"value": "output_type=output_dir;path=/tmp;max_files=50;file_max_size=10240000;model_id=<modelId>;deployment_id=<deployment_id>"
},
{
"name": "MONITORING_AGENT",
"value": "False"
},
{
"name": "MONITORING_AGENT_DATAROBOT_APP_URL",
"value": "https://app.datarobot.com/"
},
{
"name": "MONITORING_AGENT_DATAROBOT_APP_TOKEN",
"value": "<YOUR TOKEN>"
}
]
```

Even though you deployed a model outside of DataRobot on a Kubernetes cluster (AKS), you can monitor it like any other model and track service health and data drift in one central dashboard (see below).

|
aks-deploy-and-monitor
|
---
title: Run Batch Prediction jobs from Azure Blob Storage
description: Run Batch Prediction jobs from Azure Blob Storage
---
# Run Batch Prediction jobs from Azure Blob Storage {: #run-batch-prediction-jobs-from-azure-blob-storage }
The DataRobot Batch Prediction API allows users to take in large datasets and score them against deployed models running on a Prediction Server. The API also provides flexible options for file intake and output.
This page shows how you can set up a Batch Prediction job—using the [DataRobot Python Client package](https://datarobot-public-api-client.readthedocs-hosted.com/en/v2.24.0/){ target=_blank } to call the Batch Prediction API—that will score files from Azure Blob storage and write the results back to Azure Blob storage. This method also works for Azure Data Lake Storage Gen2 accounts because the underlying storage is the same.
All the code snippets on this page are part of a [Jupyter Notebook that you can download](https://github.com/datarobot-community/ai_engineering/blob/master/batch_predictions/azure_batch_prediction.ipynb){ target=_blank } to get started.
## Prerequisites {: #prerequisites }
To run this code, you will need the following:
* Python 2.7 or 3.4+
* The DataRobot Python Package (2.21.0+) ([pypi](https://pypi.org/project/datarobot/){ target=_blank })([conda](https://anaconda.org/conda-forge/datarobot){ target=_blank })
* A DataRobot deployment
* An Azure storage account
* An Azure storage container
* A Scoring dataset (to use for scoring with your DataRobot deployment in the storage container)
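The snippets that follow assume the DataRobot Python client is installed and connected to your DataRobot instance. A minimal connection sketch (the endpoint and API token values are placeholders for your own) looks like this:
``` python
import datarobot as dr

# Connect to DataRobot; replace the endpoint and token with your own values
dr.Client(
    endpoint="https://app.datarobot.com/api/v2",
    token="YOUR_DATAROBOT_API_TOKEN",
)
```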
## Create stored credentials in DataRobot {: #create-stored-credentials-in-datarobot }
The Batch Prediction job requires credentials to read and write to Azure Blob storage, including the name of the Azure storage account and an access key.
To obtain these credentials:
1. In the Azure portal for the Azure Blob Storage account, click **Access keys**.

2. Click **Show keys** to reveal the values of your access keys. You can use either of the keys shown (**key1** or **key2**).

3. Use the following code to create a new credential object within DataRobot, used by the Batch Prediction job to connect to your Azure storage account.
``` python
AZURE_STORAGE_ACCOUNT = "YOUR AZURE STORAGE ACCOUNT NAME"
AZURE_STORAGE_ACCESS_KEY = "AZURE STORAGE ACCOUNT ACCESS KEY"
DR_CREDENTIAL_NAME = f"Azure_{AZURE_STORAGE_ACCOUNT}"
# Create an Azure-specific Credential
# The connection string is also found below the access key in Azure if you want to copy that directly.
credential = dr.Credential.create_azure(
name=DR_CREDENTIAL_NAME,
azure_connection_string=f"DefaultEndpointsProtocol=https;AccountName={AZURE_STORAGE_ACCOUNT};AccountKey={AZURE_STORAGE_ACCESS_KEY};"
)
# Use this code to look up the ID of the credential object created.
credential_id = None
for cred in dr.Credential.list():
if cred.name == DR_CREDENTIAL_NAME:
credential_id = cred.credential_id
break
print(credential_id)
```
## Set up and run a Batch Prediction job {: #set-up-and-run-a-batch-prediction-job }
After creating a credential object, you can set up the Batch Prediction job.
Set the intake settings and output settings to the **azure** type.
Provide both attributes with the URL to the files in Blob storage that you want to read from and write to (the output file does not need to exist already) and the ID of the credential object that you previously set up.
The code below creates and runs the Batch Prediction job and, when finished, provides the status of the job.
This code also demonstrates how to configure the job to return both Prediction Explanations and passthrough columns for the scoring data.
``` python
DEPLOYMENT_ID = 'YOUR DEPLOYMENT ID'
AZURE_STORAGE_ACCOUNT = "YOUR AZURE STORAGE ACCOUNT NAME"
AZURE_STORAGE_CONTAINER = "YOUR AZURE STORAGE ACCOUNT CONTAINER"
AZURE_INPUT_SCORING_FILE = "YOUR INPUT SCORING FILE NAME"
AZURE_OUTPUT_RESULTS_FILE = "YOUR OUTPUT RESULTS FILE NAME"
# Set up your batch prediction job
# Input: Azure Blob Storage
# Output: Azure Blob Storage
job = dr.BatchPredictionJob.score(
deployment=DEPLOYMENT_ID,
intake_settings={
'type': 'azure',
'url': f"https://{AZURE_STORAGE_ACCOUNT}.blob.core.windows.net/{AZURE_STORAGE_CONTAINER}/{AZURE_INPUT_SCORING_FILE}",
"credential_id": credential_id
},
output_settings={
'type': 'azure',
'url': "https://{AZURE_STORAGE_ACCOUNT}.blob.core.windows.net/{AZURE_STORAGE_CONTAINER}/{AZURE_OUTPUT_RESULTS_FILE}",
"credential_id": credential_id
},
    # Return Prediction Explanations with the results (remove if not required)
    max_explanations=5,
    # Return these passthrough columns from the scoring data (remove if not required)
    passthrough_columns=['column1','column2']
)
job.wait_for_completion()
job.get_status()
```
When the job is complete, the output file is displayed in your Blob storage container. You now have a Batch Prediction job that can read and write from Azure Blob Storage via the DataRobot Python client package and the Batch Prediction API.
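If you also want to pull the scored results back down for inspection, a short sketch using the `azure-storage-blob` package might look like the following; it reuses the connection string format and placeholder names defined above, and the local output filename is arbitrary.
``` python
from azure.storage.blob import BlobServiceClient

# Reuse the same connection string format used for the DataRobot credential above
connection_string = (
    f"DefaultEndpointsProtocol=https;"
    f"AccountName={AZURE_STORAGE_ACCOUNT};"
    f"AccountKey={AZURE_STORAGE_ACCESS_KEY};"
)

service = BlobServiceClient.from_connection_string(connection_string)
blob = service.get_blob_client(
    container=AZURE_STORAGE_CONTAINER,
    blob=AZURE_OUTPUT_RESULTS_FILE,
)

# Download the results file written by the Batch Prediction job
with open("scored_results.csv", "wb") as f:
    f.write(blob.download_blob().readall())
```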
## More information {: #more-information }
* [Community Github example code](https://github.com/datarobot-community/ai_engineering/blob/master/batch_predictions/azure_batch_prediction.ipynb){ target=_blank }
* [DataRobot Python Client Batch Prediction Methods](https://datarobot-public-api-client.readthedocs-hosted.com/en/v2.24.0/autodoc/api_reference.html#batch-predictions){ target=_blank }
|
azure-blob-storage-batch-pred
|
---
title: Azure
description: Integrate DataRobot with Azure cloud services.
---
# Azure {: #azure }
The sections within describe techniques for integrating Azure cloud services with DataRobot:
Topic | Describes...
----- | ------
[Run Batch Prediction jobs from Azure Blob Storage](azure-blob-storage-batch-pred) | Running Batch Prediction jobs from Azure Blob Storage with the Batch Prediction API.
[Deploy and monitor DataRobot models in Azure Kubernetes Service](aks-deploy-and-monitor) | Deploying and monitoring DataRobot models on Azure Kubernetes Services (AKS) to create production scoring pipelines with the Portable Prediction Server (PPS).
[Deploy and monitor Spark models](spark-deploy-and-monitor) | Deploying and monitoring Spark models in DataRobot MLOps with the monitoring agent.
[Deploy and monitor ML.NET models](mlnet-deploy-and-monitor) | Deploying and monitoring ML.NET models in DataRobot MLOps with the monitoring agent.
[Use Scoring Code with Azure ML](sc-azureml) | Importing Scoring Code models to Azure ML to make prediction requests with Azure.
|
index
|
---
title: Deploy and monitor ML.NET models with DataRobot MLOps
description: Deploy and monitor ML.NET models with DataRobot MLOps
---
# Deploy and monitor ML.NET models with DataRobot MLOps {: #deploy-and-monitor-ml-net-models-with-datarobot-mlops }
This page explores how models built with ML.NET can be deployed and monitored with DataRobot MLOps. ML.NET is an open-source machine learning framework created by Microsoft for the .NET developer platform. To learn more, see [What is ML.NET?](https://dotnet.microsoft.com/learn/ml-dotnet/what-is-mldotnet){ target=_blank }.
The examples on this page use the [public LendingClub dataset](https://s3.amazonaws.com/datarobot_public_datasets/10K_Lending_Club_Loans.csv).
You want to predict the likelihood of a loan applicant defaulting; in machine learning, this is referred to as a "binary classification problem." You can solve this with DataRobot AutoML, but in this case, you will create the model with ML.NET, deploy it into production, and monitor it with DataRobot MLOps. DataRobot MLOps allows you to monitor all of your models in one central dashboard, regardless of the source or programming language.
Before deploying a model to DataRobot MLOps, you must create a new ML.NET model, and then create an ML.NET environment for DataRobot MLOps.
!!! note
This DataRobot MLOps ML.NET environment only has to be created once, and if you only require support for binary classification and regression models, you can skip this step and use the existing "DataRobot ML.NET Drop-In" environment from the [DataRobot Community GitHub](https://github.com/datarobot-community/ai_engineering/tree/master/mlops/DRMLOps_MLNET_environment){ target=_blank }.
## Prerequisites {: #prerequisites }
To start building .NET apps, download and install the .NET software development kit (SDK). To install the SDK, follow the steps outlined in the [.NET intro](https://dotnet.microsoft.com/learn/dotnet/hello-world-tutorial/intro){ target=_blank }.
1. Once you've installed the .NET SDK, open a new terminal and run the following command:
``` bash
dotnet
```
2. If the previous command runs without error, you can proceed with the next step and install the ML.NET framework with the following command:
``` bash
dotnet tool install -g mlnet
```
### Create the ML.NET model {: #create-the-mlnet-model }
If the installation of the ML.NET framework is successful, create the ML.NET model:
``` bash
mkdir DefaultMLApp
cd DefaultMLApp
dotnet new console -o consumeModelApp
mlnet classification --dataset "10K_Lending_Club_Loans.csv" --label-col "is_bad" --train-time 1000
```
### Evaluate your model {: #evaluate-your-model }
After the ML.NET CLI selects the best model, it displays the experiment results—a summary of the exploration process—including how many models were explored in the given training time.

While the ML.NET CLI generates code for the highest performing model, it also displays up to five models with the highest accuracy found during the given exploration time. It displays several evaluation metrics for those top models, including AUC, AUPRC, and F1-score.
### Test your model {: #test-your-model }
The ML.NET command-line interface (CLI) generates the machine learning model and adds the .NET apps and libraries needed to train and consume the model. The files created include the following:
* A .NET console app (`SampleBinaryClassification.ConsoleApp`), which contains `ModelBuilder.cs` (builds and trains the model) and `Program.cs` (runs the model).
* A .NET Standard class library (`SampleBinaryClassification.Model`), which contains `ModelInput.cs` and `ModelOutput.cs` (the input and output classes for model training and consumption) and `MLModel.zip` (a generated serialized ML model).
To test the model, run the console app (`SampleBinaryClassification.ConsoleApp`), which predicts the likelihood of default for a single applicant:
``` bash
cd SampleClassification/SampleClassification.ConsoleApp
dotnet run
```
## Create a DataRobot MLOps environment package {: #create-a-datarobot-mlops-environment-package }
While DataRobot provides many environment templates out of the box (including R, Python, Java, PyTorch, etc.), this section shows how to create your own runtime environment, from start to finish, using ML.NET.
To make an easy-to-use, reusable environment, follow the guidelines below:
* Your environment package must include a Dockerfile to install dependencies and an executable `start_server.sh` script to start the model server.
* Your custom models require a simple webserver to make predictions. The model server script can be co-located within the model package or separated into an environment package; keeping it in a separate environment package allows you to reuse it for multiple models that use the same programming language.
* A Dockerfile that copies all code and the `start_server.sh` script to `/opt/code/`. You can download the code for [DataRobot's MLOps environment package](https://github.com/datarobot-community/ai_engineering/tree/master/mlops/DRMLOps_MLNET_model%20samples){ target=_blank } from the DataRobot Community GitHub.
* The web server must be listening on port `8080` and implement the following routes:
* `GET /{URL_PREFIX}/`: Checks if the model's server is running.
* `POST /{URL_PREFIX}/predict/`: Makes predictions. The `{URL_PREFIX}` is passed as an environment variable to the container and must be handled by your webserver accordingly. The data itself is expected as a multipart form request.
**Request format**:
Binary Classification | Regression
----------------------|-----------
 | 
**Response format**:
Binary Classification | Regression
----------------------|-----------
`{"predictions":[{"True": 0.0, "False": 1.0}]}` | `{"predictions": [12.3]}`
DataRobot MLOps runs extensive tests before deploying a custom model to ensure reliability; therefore, it is essential that your webserver handles missing values and returns results in the expected response format as outlined above.
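To sanity-check these routes while developing locally, you might use a small client such as the Python sketch below. This is only a sketch with assumptions: the URL prefix is a placeholder for whatever `{URL_PREFIX}` is set to, and the multipart field name (`X`) is an assumption; match the field name your webserver actually reads.
``` python
import requests

URL_PREFIX = "URL_PREFIX"  # placeholder; must match the {URL_PREFIX} environment variable
BASE_URL = f"http://localhost:8080/{URL_PREFIX}"

# Health check: should return HTTP 200 when the model server is running
print(requests.get(f"{BASE_URL}/").status_code)

# Prediction request: the scoring data is sent as a multipart form upload.
# The field name "X" is an assumption; use whatever name your webserver expects.
with open("10K_Lending_Club_Loans.csv", "rb") as scoring_file:
    response = requests.post(f"{BASE_URL}/predict/", files={"X": scoring_file})

# Expected shape, for example: {"predictions": [{"True": 0.0, "False": 1.0}, ...]}
print(response.json())
```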
As previously mentioned, you need to use port `8080` so that DataRobot can correctly identify the webserver. Therefore, in `appsettings.json`, specify port `8080` for the `Kestrel` web server, as shown below:
``` json
{
"Kestrel": {
"EndPoints": {
"Http": {
"Url": "http://0.0.0.0:8080"
}
}
}
}
```
Initialize the model code (`mlContext`, `mlModel`, and `predEngine`) in the `Startup.cs` class. This allows .NET to recognize file changes whenever you create a new model package.
``` csharp
// Initialize MLContext
MLContext ctx = new MLContext();
// Load the model
DataViewSchema modelInputSchema;
ITransformer mlModel = ctx.Model.Load(modelPath, out modelInputSchema);
// Create a prediction engine & pass it to our controller
predictionEngine = ctx.Model.CreatePredictionEngine<ModelInput,ModelOutput>(mlModel);
```
The `start_server.sh` shell script is responsible for starting the model server in the container. If you packaged the model and server together, you only need the compiled version, and the shell script runs `dotnet consumeModelApp.dll`. Since you have the model code and the server environment code separated for reusability, recompile from the source at container startup, as in the command below:
``` bash
#!/bin/sh
export DOTNET_CLI_HOME="/tmp/DOTNET_CLI_HOME"
export DOTNET_CLI_TELEMETRY_OPTOUT="1"
export TMPDIR=/tmp/NuGetScratch/
mkdir -p ${TMPDIR}
rm -rf obj/ bin/
dotnet clean
dotnet build
dotnet run
# to ensure Docker container keeps running
tail -f /dev/null
```
Before uploading your custom environment to DataRobot MLOps, compress your custom environment code to a tarball, as shown below:
``` bash
tar -czvf mlnetenvironment.tar.gz -C DRMLOps_MLNET_environment/ .
```
## Upload the DataRobot MLOps environment package {: #upload-the-datarobot-mlops-environment-package }
To upload the new MLOps ML.NET environment, refer to the instructions on [creating a new custom model environment](custom-environments) (see the screenshot below).

## Upload and test the ML.NET model in DataRobot MLOps {: #upload-and-test-the-mlnet-model-in-datarobot-mlops }
Once the environment is created, create a new custom model entity and upload the actual model (`MLModel.zip` and `ModelInput.cs`).
To upload the new ML.NET model, refer to the instructions on [creating a new custom inference model](custom-inf-model).

After creating the environment and model within DataRobot MLOps, upload test data to confirm it works as expected (as shown in the following screenshot).

During this phase, DataRobot runs a test to determine how the model handles missing values and whether or not the internal webserver adheres to the response format.
## Make predictions with the new ML.NET model {: #make-predictions-with-the-new-mlnet-model }
Once the tests are complete, deploy the custom model using the settings below:

When this step is complete, you can make predictions with your new custom model as with any other DataRobot model (see also [Postman collection](https://github.com/datarobot-community/ai_engineering/blob/master/mlops/DRMLOps_MLNET_model%20samples/DR%20MLNET%20Custom%20Models.postman_collection.json){ target=_blank }):

Even though you built the model outside of DataRobot with ML.NET, you can use it like any other DataRobot model and track service health and data drift in one central dashboard, shown below:

|
mlnet-deploy-and-monitor
|
---
title: Use Scoring Code with Azure ML
description: Import Scoring Code models to Azure ML to make prediction requests using Azure.
---
# Use Scoring Code with Azure ML {: #use-scoring-code-with-azure-ml }
You must complete the following before importing Scoring Code models to Azure ML:
* Install the <a target="_blank" href="https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest">Azure CLI client</a> to configure your service to the terminal.
* Install the <a target="_blank" href="https://docs.microsoft.com/en-us/azure/machine-learning/service/reference-azure-machine-learning-cli">Azure Machine Learning CLI extension</a>.
To import a Scoring Code model to Azure ML:
1. Log in to Azure with the login command:
``` bash
az login
```
2. If you have not yet created a resource group, you can create one using <a target="_blank" href="https://docs.microsoft.com/en-us/cli/azure/group?view=azure-cli-latest#az-group-create">this command</a>:
``` bash
az group create --location --name [--subscription] [--tags]
```
For example:
``` bash
az group create --location westus2 --name myresourcegroup
```
3. If you do not have an existing container registry that you want to use for storing custom Docker images, you must create one. If you want to use a DataRobot Docker image instead of building your own, you do not need to create a container registry. Instead, skip ahead to step 6.
Create a container with <a target="_blank" href="https://docs.microsoft.com/en-us/cli/azure/acr?view=azure-cli-latest#az-acr-create">the following command</a>:
``` bash
az acr create --name --resource-group --sku {Basic | Classic | Premium | Standard}
    [--admin-enabled {false | true}] [--default-action {Allow | Deny}] [--location]
    [--subscription] [--tags] [--workspace]
```
For example:
``` bash
az acr create --name mycontainerregistry --resource-group myresourcegroup --sku Basic
```
4. Set up admin access using <a target="_blank" href="https://docs.microsoft.com/en-us/cli/azure/acr?view=azure-cli-latest#az-acr-update">the following commands</a>:
``` bash
az acr update --name --admin-enabled {false | true}
```
For example:
``` bash
az acr update --name mycontainerregistry --admin-enabled true
```
And print the <a target="_blank" href="https://docs.microsoft.com/en-us/cli/azure/acr/credential?view=azure-cli-latest#az-acr-credential-show">registry credentials</a>:
``` bash
az acr credential show --name
```
For example:
``` bash
az acr credential show --name mycontainerregistry
```
Returns:
``` json
{
    "passwords": [
        {
            "name": "password",
            "value": "<password>"
        },
        {
            "name": "password2",
            "value": "<password>"
        }
    ],
    "username": "mycontainerregistry"
}
```
5. Upload a custom Docker image that <a target="_blank" href="https://docs.microsoft.com/en-us/cli/azure/acr?view=azure-cli-latest#az-acr-build">runs Java</a>:
``` bash
az acr build --registry [--auth-mode {Default | None}] [--build-arg] [--file] [--image]
    [--no-format] [--no-logs] [--no-push] [--no-wait] [--platform] [--resource-group]
    [--secret-build-arg] [--subscription] [--target] [--timeout] []
```
For example:
``` bash
az acr build --registry mycontainerregistry --image myImage:1 --resource-group myresourcegroup --file Dockerfile .
```
The following is an example of a custom Docker image. Reference the <a target="_blank" href="https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-deploy-custom-docker-image#build-a-custom-base-image">Microsoft documentation</a> to read more about building an image.
```dockerfile
FROM ubuntu:16.04
ARG CONDA_VERSION=4.5.12
ARG PYTHON_VERSION=3.6
ENV LANG=C.UTF-8 LC_ALL=C.UTF-8
ENV PATH /opt/miniconda/bin:$PATH
RUN apt-get update --fix-missing && \
    apt-get install -y wget bzip2 && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
RUN wget --quiet https://repo.anaconda.com/miniconda/Miniconda3-${CONDA_VERSION}-Linux-x86_64.sh -O ~/miniconda.sh && \
    /bin/bash ~/miniconda.sh -b -p /opt/miniconda && \
    rm ~/miniconda.sh && \
    /opt/miniconda/bin/conda clean -tipsy
RUN conda install -y conda=${CONDA_VERSION} python=${PYTHON_VERSION} && \
    conda clean -aqy && \
    rm -rf /opt/miniconda/pkgs && \
    find / -type d -name __pycache__ -prune -exec rm -rf {} \;
RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get install software-properties-common -y && \
    add-apt-repository ppa:openjdk-r/ppa -y && \
    apt-get update -q && \
    apt-get install -y openjdk-11-jdk && \
    apt-get clean
```
6. If you have not already created a workspace, <a target="_blank" href="https://docs.microsoft.com/en-us/cli/azure/ext/azure-cli-ml/ml/workspace?view=azure-cli-latest#ext-azure-cli-ml-az-ml-workspace-create">use the following command to create one</a>. Otherwise, skip to step 7.
``` bash
az ml workspace create --workspace-name [--application-insights] [--container-registry]
    [--exist-ok] [--friendly-name] [--keyvault] [--location] [--resource-group] [--sku]
    [--storage-account] [--yes]
```
For example:
``` bash
az ml workspace create --workspace-name myworkspace --resource-group myresourcegroup
```
7. <a target="_blank" href="https://docs.microsoft.com/en-us/cli/azure/ext/azure-cli-ml/ml/model?view=azure-cli-latest#ext-azure-cli-ml-az-ml-model-register">Register your Scoring Code model</a> to the Azure model storage.
!!! note
Make sure you have exported your Scoring Code JAR file from DataRobot before proceeding. You can download the JAR file from the [Leaderboard](sc-download-leaderboard) or from a [deployment](sc-download-deployment).
``` bash
az ml model register --name [--asset-path] [--cc] [--description] [--experiment-name]
    [--gb] [--gc] [--model-framework] [--model-framework-version] [--model-path]
    [--output-metadata-file] [--path] [--property] [--resource-group] [--run-id]
    [--run-metadata-file] [--sample-input-dataset-id] [--sample-output-dataset-id]
    [--tag] [--workspace-name] [-v]
```
For example, to register a model named `codegenmodel`:
``` bash
az ml model register --name codegenmodel --model-path 5cd071deef881f011a334c2f.jar --resource-group myresourcegroup --workspace-name myworkspace
```
8. Prepare two configs and a <a target="_blank" href="https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-deploy-existing-model#entry-script">Python entry script</a> that will execute the prediction.
Below are some examples of configs with a Python entry script.
* `deploymentconfig.json`:
``` json
{
    "computeType": "aci",
    "containerResourceRequirements": {
        "cpu": 0.5,
        "memoryInGB": 1.0
    },
    "authEnabled": true,
    "sslEnabled": false,
    "appInsightsEnabled": false
}
```
* `inferenceconfig.json` (if you are <em>not</em> using a DataRobot Docker image):
``` json
{
    "entryScript": "score.py",
    "runtime": "python",
    "enableGpu": false,
    "baseImage": "<container-registry-name>.azurecr.io/<Docker-image-name>",
    "baseImageRegistry": {
        "address": "<container-registry-name>.azurecr.io",
        "password": "<password from step 4>",
        "username": "<container-registry-name>"
    }
}
```
* `inferenceconfig.json` (if you <em>are</em> using a DataRobot Docker image):
``` json
{
    "entryScript": "score.py",
    "runtime": "python",
    "enableGpu": false,
    "baseImage": "datarobot/scoring-inference-code-azure:latest",
    "baseImageRegistry": {
        "address": "registry.hub.docker.com"
    }
}
```
* `score.py`:
``` python
import os
import subprocess
import tempfile
import json
from azureml.core import Model

# Called when the deployed service starts
def init():
    pass

# Handle requests to the service
def run(data):
    try:
        result_csv = ''
        data = json.loads(data)
        # Access your model registered in step 7
        model_path = Model.get_model_path('codegenmodel')
        with tempfile.NamedTemporaryFile() as output_file:
            p = subprocess.run(['java', '-jar', model_path, 'csv', '--input=-',
                                '--output={}'.format(output_file.name)],
                               input=bytearray(data['csv'].encode('utf-8')),
                               stdout=subprocess.PIPE)
            with open(output_file.name) as result_file:
                result_csv = result_file.read()
        # Return prediction
        return result_csv
    except Exception as e:
        error = str(e)
        return error
```
9. <a target="_blank" href="https://docs.microsoft.com/en-us/cli/azure/ext/azure-cli-ml/ml/model?view=azure-cli-latest#ext-azure-cli-ml-az-ml-model-deploy">Create a new prediction endpoint</a>:
``` bash
az ml model deploy --name [--ae] [--ai] [--ar] [--as] [--at] [--autoscale-max-replicas]
    [--autoscale-min-replicas] [--base-image] [--base-image-registry] [--cc] [--cf]
    [--collect-model-data] [--compute-target] [--compute-type] [--cuda-version] [--dc]
    [--description] [--dn] [--ds] [--ed] [--eg] [--entry-script] [--environment-name]
    [--environment-version] [--failure-threshold] [--gb] [--gc] [--ic] [--id] [--kp]
    [--ks] [--lo] [--max-request-wait-time] [--model] [--model-metadata-file] [--namespace]
    [--no-wait] [--nr] [--overwrite] [--path] [--period-seconds] [--pi] [--po] [--property]
    [--replica-max-concurrent-requests] [--resource-group] [--rt] [--sc] [--scoring-timeout-ms]
    [--sd] [--se] [--sk] [--sp] [--st] [--tag] [--timeout-seconds] [--token-auth-enabled]
    [--workspace-name] [-v]
```
For example, to create a new endpoint with the name `myservice`:
``` bash
az ml model deploy --name myservice --model codegenmodel:1 --compute-target akscomputetarget --ic inferenceconfig.json --dc deploymentconfig.json --resource-group myresourcegroup --workspace-name myworkspace
```
10. <a target="_blank" href="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-setup-authentication">Get a token</a> to make prediction requests:
``` bash
az ml service get-keys --name [--path] [--resource-group] [--workspace-name] [-v]
```
For example:
``` bash
az ml service get-keys --name myservice --resource-group myresourcegroup --workspace-name myworkspace
```
This command returns a JSON response:
``` json
{
    "primaryKey": "<key>",
    "secondaryKey": "<key>"
}
```
You can now make prediction requests using Azure.
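For example, a prediction request from Python might look like the sketch below. This is only a sketch: the scoring URI and the scoring CSV filename are placeholders for your service's endpoint and data, and the payload shape matches what the `score.py` entry script above expects (a JSON object with the raw CSV content under a `csv` key).
``` python
import requests

SCORING_URI = "http://<your-service-scoring-uri>/score"   # placeholder for your endpoint
PRIMARY_KEY = "<key returned by az ml service get-keys>"

# score.py expects a JSON payload with the raw CSV content under "csv"
with open("scoring_data.csv") as f:
    payload = {"csv": f.read()}

response = requests.post(
    SCORING_URI,
    json=payload,
    headers={"Authorization": f"Bearer {PRIMARY_KEY}"},
)

# score.py returns the Scoring Code output CSV as a string
print(response.text)
```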
|
sc-azureml
|
---
title: Deploy and monitor models on GCP
description: Deploy and monitor DataRobot models on Google Cloud Platform (GCP).
---
# Deploy and monitor models on GCP {: #deploy-and-monitor-models-on-gcp }
!!! info "Availability information"
The MLOps model package export feature used in this procedure is off by default. Contact your DataRobot representative or administrator for information on enabling it.
**Feature flag**: Enable MMM model package export
The following describes the process of deploying a DataRobot model on the Google Cloud Platform (GCP) using the Google Kubernetes Engine (GKE).
## Overview {: #overview }
[DataRobot MLOps](mlops/index) provides a central hub to deploy, monitor, manage, and govern all your models in production. With MLOps, you aren't limited to serving DataRobot models on the dedicated scalable prediction servers inside the DataRobot cluster. You can also deploy DataRobot models into Kubernetes (K8s) clusters while maintaining the advantages of DataRobot's model monitoring capabilities.
This exportable DataRobot model is called a [_Portable Prediction Server_](portable-pps) (PPS) and is similar to Docker containers in the flexibility and portability it provides. A PPS is based on Docker containers and contains a DataRobot model with embedded monitoring agents. Using this approach, a DataRobot model is made available via a scalable deployment environment for usage, and associated data can be tracked in the centralized DataRobot MLOps dashboard with all of its monitoring and governance advantages.
Unifying the portability of DataRobot model Docker images with the scalability of a K8s platform results in a powerful ML solution ready for production usage.
### Prerequisites {: #prerequisites }
You must complete the following steps before creating the main configuration.
1. Install Google Cloud SDK appropriate for your operating system (see [Google's documentation](https://cloud.google.com/sdk/docs/quickstarts){ target=_blank }).
2. Run the following at a command prompt:
`gcloud init`
You will be asked to choose an existing project or to create a new one, and also to select the compute zone. For example:

3. Install the Kubernetes command-line tool:
`gcloud components install kubectl`
The output will be similar to:

## Procedure {: #procedure }
The following sections, each a step in the process, describe the procedure for deploying and monitoring DataRobot models on the GCP platform via a PPS. The examples use the [Kaggle housing prices](https://www.kaggle.com/c/home-data-for-ml-course/data){ target=_blank } dataset.
### Download a model package {: #download-a-model-package }
Build models using the housing prices dataset. Once Autopilot finishes, you can create and download the MLOps model package. To do this, navigate to the **Models** tab to select a model and click **Predict > Deploy**. In the MLOps Package section, select **Generate & Download**.

DataRobot generates a model package (.mlpkg file) containing all the necessary information about the model.
### Create a Docker container image {: #create-a-docker-container-image }
To create a Docker container image with the MLOps package:
1. After the model package download (started in the previous step) completes, download the [PPS base image](portable-pps#obtain-the-pps-docker-image).
2. Once you have the PPS base image, use the following Dockerfile to generate an image that includes the DataRobot model package:
!!! note
To copy the `.mlpkg` file into the Docker image, make sure the Dockerfile and the `.mlpkg` file are in the same folder.
``` bash
FROM datarobot/datarobot-portable-prediction-api:<TAG>
COPY <MLPKG_FILE_NAME>.mlpkg /opt/ml/model
```
3. Set the `PROJECT_ID` environment variable to your Google Cloud project ID (the project ID you defined during the Google Cloud SDK installation). The `PROJECT_ID` associates the container image with your project's Container Registry:
`export PROJECT_ID=ai-XXXXXX-XXXXXX`
4. Build and tag the Docker image. For example:
`docker build -t gcr.io/${PROJECT_ID}/house-regression-model:v1 .`
5. Run the `docker images` command to verify that the build was successful:

The generated image contains the DataRobot model and the monitoring agent used to transfer the service and model health metrics back to the DataRobot MLOps platform.
### Run Docker locally {: #run-docker-locally }
While technically an optional step, best practice advises always testing your image locally to save time and network bandwidth.
To run locally:
1. Run your Docker container image:
`docker run --rm --name house-regression -p 8080:8080 -it gcr.io/${PROJECT_ID}/house-regression-model:v1`
2. Score the data locally to test if the model works as expected:
`curl -X POST http://localhost:8080/predictions -H "Content-Type: text/csv" --data-binary @/Users/X.X/community/docker/kaggle_house_test_dataset.csv`
!!! note
Update the path to the `kaggle_house_test_dataset.csv` dataset to match the path locally on your workstation.
### Push Docker image to the Container Registry {: #push-docker-image-to-the-container-registry }
Once you have tested and validated the container image locally, upload it to a registry so that your Google Kubernetes Engine (GKE) cluster can download and run it.
1. Configure the Docker command-line tool to authenticate to Container Registry:
`gcloud auth configure-docker`

2. Push the Docker image [you built](#create-a-docker-container-image) to the Container Registry:
`docker push gcr.io/${PROJECT_ID}/house-regression-model:v1`
!!! note
Pushing to the Container Registry may result in the `storage.buckets.create` permission issue. If you receive this error, contact the administrator of your GCP account.
### Create the GKE cluster {: #create-the-GKE-cluster }
After storing the Docker image in the Container Registry, you next create a GKE cluster, as follows:
1. Set your project ID and Compute Engine zone options for the `gcloud` tool:
`gcloud config set project $PROJECT_ID`
`gcloud config set compute/zone europe-west1-b`
2. Create the cluster:
`gcloud container clusters create house-regression-cluster`
This command finishes as follows:

3. After the command completes, run the following command to see the cluster worker instances:
`gcloud compute instances list`
The output is similar to:

!!! note
Pushing to the Container Registry may result in the `gcloud.container.clusters.create` permission issue. If you receive this error, contact the administrator of your GCP account.
### Deploy the Docker image to GKE {: #deploy-the-docker-image-to-gke }
To deploy your image to GKE:
1. Create a Kubernetes deployment for your Docker image:
`kubectl create deployment house-regression-app --image=gcr.io/${PROJECT_ID}/house-regression-model:v1`
2. Set the baseline number of deployment replicas to 3 (i.e., the deployment will always have 3 running pods).
`kubectl scale deployment house-regression-app --replicas=3`
3. K8s provides the ability to manage resources in a flexible, automatic manner. For example, you can create a HorizontalPodAutoscaler resource for your deployment:
`kubectl autoscale deployment house-regression-app --cpu-percent=80 --min=1 --max=5`
4. Run the following command to check that the pods you created are all operational and in a running state (e.g., you may see up to 5 running pods, as requested in the previous autoscale step):
`kubectl get pods`
The output is similar to:

#### Expose your model
The default service type in GKE is called *ClusterIP*, where the service gets an IP address reachable only from inside the cluster. To expose a Kubernetes service outside of the cluster, you must create a service of type `LoadBalancer`. This type of service spawns an External Load Balancer IP for a set of pods, reachable via the internet.
1. Use the `kubectl expose` command to generate a Kubernetes service for the **house-regression-app** deployment:
`kubectl expose deployment house-regression-app --name=house-regression-app-service --type=LoadBalancer --port 80 --target-port 8080`
Where:
* **--port** is the port number configured on the Load Balancer
* **--target-port** is the port number that the **house-regression-app** container is listening on.
2. Run the following command to view service details:
`kubectl get service`
The output is similar to:

3. Copy the `EXTERNAL-IP` address from the service details.
4. Score your model using the `EXTERNAL-IP` address.
`curl -X POST http://XX.XX.XX.XX/predictions -H "Content-Type: text/csv" --data-binary @/Users/X.X/community/docker/kaggle_house_test_dataset.csv`
!!! note
Update the IP address placeholder above with the `EXTERNAL-IP` address you copied and update the path to the `kaggle_house_test_dataset.csv` dataset to match the path locally on your workstation.
!!! note
The cluster is open to all incoming requests at this point. See the [Google documentation](https://cloud.google.com/kubernetes-engine/docs/concepts/access-control){ target=_blank } to apply more fine-grained role-based access control (RBAC).
## Create an external deployment {: #create-an-external-deployment }
To create an external deployment in MLOps:
1. Navigate to the **Model Registry > Model Packages > Add New Package** and follow the instructions in the [documentation](reg-create#register-external-model-packages).

Click **Add new external model package**.
2. Make a note of the *MLOps model ID* found in the URL. You will use this when [linking PPS and MLops](#link-pps-on-k8s-to-mlops).
While still on the **Model Registry** page and within the expanded new package, select the **Deployments** tab and click **Create new deployment**.

The deployment page loads prefilled with information from the model package you created.
3. Make a note of the *MLOps deployment ID* (earlier, you copied the model ID). You will use this when [linking PPS and MLOps](#link-pps-on-k8s-to-mlops).
### Link PPS on K8s to MLOps {: #link-pps-on-k8s-to-mlops }
Finally, update the K8s deployment configuration with the PPS and monitoring agent configuration. Add the following environment variables into the K8s Deployment configuration (see the complete configuration file [here](#K8s-configuration-files)):
```
PORTABLE_PREDICTION_API_WORKERS_NUMBER=2
PORTABLE_PREDICTION_API_MONITORING_ACTIVE=True
PORTABLE_PREDICTION_API_MONITORING_SETTINGS=output_type=output_dir;path=/tmp;max_files=50;file_max_size=10240000;model_id=<mlops_model_id>;deployment_id=<mlops_deployment_id>
MONITORING_AGENT=True
MONITORING_AGENT_DATAROBOT_APP_URL=https://app.datarobot.com/
MONITORING_AGENT_DATAROBOT_APP_TOKEN=<your token>
```
!!! note
You can obtain the `MONITORING_AGENT_DATAROBOT_APP_TOKEN` from the [Developer Tools](api-key-mgmt#api-key-management).
### Deploy new Docker image (optional) {: #deploy-new-docker-image-optional }
To upgrade the deployed Docker image, simply:
1. Create a new version of your Docker image:
`docker build -t gcr.io/${PROJECT_ID}/house-regression-model:v2 .`
2. Push the new image to the Container Registry:
`docker push gcr.io/${PROJECT_ID}/house-regression-model:v2`
3. Apply a rolling update to the existing deployment with an image update:
`kubectl set image deployment/house-regression-app house-regression-model=gcr.io/${PROJECT_ID}/house-regression-model:v2`
4. Watch the pods running the v1 image terminate, and new pods running the v2 image spin up:
`kubectl get pods`
### Clean up {: #clean-up }
To finish, clean up the GCP resources that you created for the Portable Prediction Server (PPS) deployment:
1. Delete the service:
`kubectl delete service house-regression-app-service`
2. Delete the cluster:
`gcloud container clusters delete house-regression-cluster`
## K8s configuration files {: #K8s-configuration-files }
The following sections provide deployment and service configuration files for reference.
### Deployment configuration file {: #deployment-configuration }
``` yaml
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "2"
creationTimestamp: "2020-07-08T12:47:27Z"
generation: 8
labels:
app: house-regression-app
name: house-regression-app
namespace: default
resourceVersion: "14171"
selfLink: /apis/apps/v1/namespaces/default/deployments/house-regression-app
uid: 2de869fc-c119-11ea-8156-42010a840053
spec:
progressDeadlineSeconds: 600
replicas: 5
revisionHistoryLimit: 10
selector:
matchLabels:
app: house-regression-app
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: house-regression-app
spec:
containers:
- env:
- name: PORTABLE_PREDICTION_API_WORKERS_NUMBER
value: "2"
- name: PORTABLE_PREDICTION_API_MONITORING_ACTIVE
value: "True"
- name: PORTABLE_PREDICTION_API_MONITORING_SETTINGS
value: output_type=output_dir;path=/tmp;max_files=50;file_max_size=10240000;model_id=<your_mlops_model_id>;deployment_id=<your_mlops_deployment_id>
- name: MONITORING_AGENT
value: "True"
- name: MONITORING_AGENT_DATAROBOT_APP_URL
value: https://app.datarobot.com/
- name: MONITORING_AGENT_DATAROBOT_APP_TOKEN
value: <your_datarobot_api_token>
image: gcr.io/${PROJECT_ID}/house-regression-model:v1
imagePullPolicy: IfNotPresent
name: house-regression-model
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
availableReplicas: 5
conditions:
- lastTransitionTime: "2020-07-08T12:47:27Z"
lastUpdateTime: "2020-07-08T13:40:47Z"
message: ReplicaSet "house-regression-app-855b44f748" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
- lastTransitionTime: "2020-07-08T13:41:39Z"
lastUpdateTime: "2020-07-08T13:41:39Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
observedGeneration: 8
readyReplicas: 5
replicas: 5
updatedReplicas: 5
```
#### Service configuration file {: #service-configuration }
``` yaml
apiVersion: v1
kind: Service
metadata:
creationTimestamp: "2020-07-08T12:58:13Z"
labels:
app: house-regression-app
name: house-regression-app-service
namespace: default
resourceVersion: "5055"
selfLink: /api/v1/namespaces/default/services/house-regression-app-service
uid: aeb836cd-c11a-11ea-8156-42010a840053
spec:
clusterIP: 10.31.242.132
externalTrafficPolicy: Cluster
ports:
- nodePort: 30654
port: 80
protocol: TCP
targetPort: 8080
selector:
app: house-regression-app
sessionAffinity: None
type: LoadBalancer
status:
loadBalancer:
ingress:
- ip: XX.XX.XXX.XXX
```
|
google-cloud-platform
|
---
title: Deploy the MLOps agent on GKE
description: Deploy the MLOps agent on GKE to monitor DataRobot models.
---
# Deploy the MLOps agent on GKE {: #deploy-the-mlops-agent-on-gke }
The following steps describe how to deploy the MLOps agent on Google Kubernetes Engine (GKE) with Pub/Sub as a spooler. This allows you to monitor a custom Python model developed outside of DataRobot. The custom model is scored on the local machine and sends its statistics to Google Cloud Platform (GCP) [Pub/Sub](https://cloud.google.com/pubsub#section-5){ target=_blank }. Finally, the agent (deployed on GKE) consumes this data and sends it back to the DataRobot MLOps dashboard.
## Overview {: #overview }
[DataRobot MLOps](mlops/index) offers the ability to monitor all your ML models (trained in DataRobot or outside) in a centralized dashboard with the DataRobot [MLOps agent](mlops-agent/index). The agent, a Java utility running in parallel with the deployed model, can monitor models developed in Java, Python, and R programming languages.
The MLOps agent communicates with the model via a spooler (i.e., file system, GCP Pub/Sub, AWS SQS, or RabbitMQ) and sends model statistics back to the MLOps dashboard. These can include the number of scored records, number of features, scoring time, data drift, and more. You can embed the agent into a Docker image and deploy it on a Kubernetes cluster for scalability and robustness.
## Prerequisites {: #prerequisites }
You must complete the following steps before creating the main configuration.
1. Install the Google Cloud SDK [specific to your operating system](https://cloud.google.com/sdk/docs/install-sdk){ target=_blank }.
2. Run the following at a command prompt:
`gcloud init`
You will be asked to choose an existing project or create a new one, as well as to select the compute zone.
3. Install the Kubernetes command-line tool:
`gcloud components install kubectl`
4. Retrieve your Google Cloud service account credentials to call Google Cloud APIs. If you don’t have a [default service account](https://cloud.google.com/iam/docs/service-accounts#default){ target=_blank }, you can create it by following this [procedure](https://cloud.google.com/docs/authentication/production#create_service_account){ target=_blank }.
5. Once credentials are in place, download the JSON file that contains them. Later, when it is time to pass your credentials to the application that will call Google Cloud APIs, you can use one of these methods:
* Via the GOOGLE_APPLICATION_CREDENTIALS [environment variable](https://cloud.google.com/docs/authentication/production#passing_variable){ target=_blank }.
* With [code](https://cloud.google.com/docs/authentication/production#passing_code){ target=_blank }.
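For reference, a minimal sketch of both approaches in Python is shown below; the key file path is a placeholder, and the Pub/Sub client is used only as an example of a Google Cloud API client (it requires the `google-cloud-pubsub` package).
``` python
import os

from google.cloud import pubsub_v1
from google.oauth2 import service_account

# Option 1: point the environment variable at the downloaded JSON key file
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/path/to/service-account.json"
publisher = pubsub_v1.PublisherClient()  # picks up the credentials automatically

# Option 2: load the credentials explicitly in code
credentials = service_account.Credentials.from_service_account_file(
    "/path/to/service-account.json"
)
publisher = pubsub_v1.PublisherClient(credentials=credentials)
```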
## Procedure {: #procedure }
The following sections, each a step in the process, describe the procedure for deploying the MLOps agent on GKE with the Pub/Sub.
### Create an external deployment {: #create-an-external-deployment }
First, [create an external deployment](deploy-external-model). You will use the resulting model ID and deployment ID to configure communications with the agent (described in the instructions for [running Docker locally](#run-docker-locally)).
### Create a Pub/Sub topic and subscription {: #create-a-pub-sub-topic-and-subscription }
Second, create a Pub/Sub topic and subscription:
1. Go to your Google Cloud console Pub/Sub service and create a topic (i.e., a named resource where publishers can send messages).

2. Create a subscription—a named resource representing the stream of messages from a single, specific topic, to be delivered to the subscribing application. Use the Pub/Sub topic from the previous step and set **Delivery type** to **Pull**. This provides a Subscription ID.
Additionally, you can configure message retention duration and other parameters.
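If you prefer the command line to the console steps above, a minimal `gcloud` sketch creates the same resources (the topic and subscription names are placeholders, and the acknowledgement deadline is an assumption):
```bash
# Create the topic that the MLOps library publishes to (placeholder name)
gcloud pubsub topics create mlops-agent-topic

# Create a pull subscription on that topic for the agent to consume
gcloud pubsub subscriptions create mlops-agent-subscription \
    --topic=mlops-agent-topic \
    --ack-deadline=60
```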
### Embed MLOps agent in Docker {: #embed-mlops-agent-in-docker }
To create a Docker image that embeds the agent:
1. Create the working directory on the machine where you will prepare the necessary files.
2. Create a directory named **conf**.
3. Download and unzip the tarball file with the MLOps agent from [**Developer Tools**](api-key-mgmt#mlops-agent-tarball).

4. Copy the `mlops.log4j2.properties` file from `<unzipped directory>/conf` to your `<working directory>/conf`.
5. Copy the file `mlops.agent.conf.yaml` to the working directory. Provide the following parameters (the example uses defaults for all other parameters):
Parameter | Definition
--------- | ----------
`mlopsUrl` | Installation URL for Self-Managed AI Platform; `app.datarobot.com` for managed AI Platform
`apiToken` | [DataRobot key](api-key-mgmt#api-key-management)
`projectId`| GCP ProjectId
`topicName` | Created in the [Pub/Sub section](#create-a-pub-sub-topic-and-subscription)
For example:
```yaml
mlopsUrl: "MLOPS-URL"
apiToken: "YOUR-DR-API-TOKEN"
channelConfigs:
  - type: "PUBSUB_SPOOL"
    details: {name: "pubsub", projectId: "YOUR-GOOGLE-PROJECT-ID", topicName: "YOUR-PUBSUB-TOPIC-ID-DEFINED-AT-STEP-2"}
```
6. Copy the `<unzipped directory>/lib/mlops-agent-X.X.X.jar` file to your working directory.
7. In the working directory, create the Dockerfile using the following content:
```dockerfile
FROM openjdk:8
ENV AGENT_BASE_LOC=/opt/datarobot/ma
ENV AGENT_LOG_PROPERTIES=mlops.log4j2.properties
ENV AGENT_CONF_LOC=$AGENT_BASE_LOC/conf/mlops.agent.conf.yaml
COPY mlops-agent-*.jar ${AGENT_BASE_LOC}/mlops-agent.jar
COPY conf $AGENT_BASE_LOC/conf
COPY entrypoint.sh /
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]
```
8. Create `entrypoint.sh` with the following content:
```shell
#!/bin/sh
echo "######## STARTING MLOPS-AGENT ########"
echo
exec java -Dlog.file=$AGENT_BASE_LOC/logs/mlops.agent.log -Dlog4j.configurationFile=file:$AGENT_BASE_LOC/conf/$AGENT_LOG_PROPERTIES -cp $AGENT_BASE_LOC/mlops-agent.jar com.datarobot.mlops.agent.Agent --config $AGENT_CONF_LOC
```
9. Create the Docker image, ensuring you include the period (`.`) at the end of the Docker build command.
```bash
export PROJECT_ID=ai-XXXXXXX-111111
docker build -t gcr.io/${PROJECT_ID}/monitoring-agents:v1 .
```
10. Run the `docker images` command to verify a successful build.

### Run Docker locally {: #run-docker-locally }
!!! note
While technically an optional step, best practice advises always testing your image locally to save time and network bandwidth.
The monitoring agent tarball includes the necessary library (along with Java and R libraries) for sending statistics from the custom Python model back to MLOps. You can find the libraries in the `lib` directory.
To run locally:
1. Install the `DataRobot_MLOps` library for Python:
`pip install datarobot_mlops_package-<VERSION>/lib/datarobot_mlops-<VERSION>-py2.py3-none-any.whl`
2. Run your Docker container image.
!!! note
You will need the JSON file with credentials that you downloaded in the [prerequisites](#prerequisites) (the step that describes downloading Google Cloud account credentials).
```bash
docker run -it --rm --name ma -v /path-to-your-directory/mlops.agent.conf.yaml:/opt/datarobot/ma/conf/mlops.agent.conf.yaml -v /path-to-your-directory/your-google-application-credentials.json:/opt/datarobot/ma/conf/gac.json -e GOOGLE_APPLICATION_CREDENTIALS="/opt/datarobot/ma/conf/gac.json" gcr.io/${PROJECT_ID}/monitoring-agents:v1
```

The following is an example of the Python code in which your model is scored:
```python
import time
import pandas as pd
from datarobot.mlops.mlops import MLOps
DEPLOYMENT_ID = "EXTERNAL-DEPLOYMENT-ID-DEFINED-AT-STEP-1"
MODEL_ID = "EXTERNAL-MODEL-ID-DEFINED-AT-STEP-1"
PROJECT_ID = "YOUR-GOOGLE-PROJECT-ID"
TOPIC_ID = "YOUR-PUBSUB-TOPIC-ID-DEFINED-AT-STEP-2"
# MLOPS: initialize the MLOps instance
mlops = MLOps() \
.set_deployment_id(DEPLOYMENT_ID) \
.set_model_id(MODEL_ID) \
.set_pubsub_spooler(PROJECT_ID, TOPIC_ID) \
.init()
# Read your custom model pickle file (model has been trained outside DataRobot)
model = pd.read_pickle('custom_model.pickle')
# Read scoring data
features_df_scoring = pd.read_csv('features.csv')
# Get predictions
start_time = time.time()
predictions = model.predict_proba(features_df_scoring)
predictions = predictions.tolist()
num_predictions = len(predictions)
end_time = time.time()
# MLOPS: report the number of predictions in the request and the execution time
mlops.report_deployment_stats(num_predictions, end_time - start_time)
# MLOPS: report the features and predictions
mlops.report_predictions_data(features_df=features_df_scoring, predictions=predictions)
# MLOPS: release MLOps resources when finished
mlops.shutdown()
```
3. Set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable:
`export GOOGLE_APPLICATION_CREDENTIALS="<your-google-application-credentials.json>"`
4. Score your data locally to test whether the model works as expected. You will then be able to see a new record in the [monitoring agent log](agent-event-log):
`python score-your-model.py`
The statistics in the MLOps dashboard are updated as well:

### Push Docker image to the Container Registry {: #push-docker-image-to-the-container-registry }
After you have tested and validated the container image locally, upload it to a registry so that your Google Kubernetes Engine (GKE) cluster can download and run it.
1. Configure the Docker command-line tool to authenticate to Container Registry:
`gcloud auth configure-docker`

2. Push the Docker image [you built](#embed-mlops-agent-in-docker) to the Container Registry:
`docker push gcr.io/${PROJECT_ID}/monitoring-agents:v1`
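Optionally, you can confirm the image is available in the registry before moving on (a sketch; the repository path matches the image pushed above):
```bash
# List images in the project's registry and the tags for the agent image
gcloud container images list --repository=gcr.io/${PROJECT_ID}
gcloud container images list-tags gcr.io/${PROJECT_ID}/monitoring-agents
```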
### Create the GKE cluster {: #create-the-GKE-cluster }
After storing the Docker image in the Container Registry, you next create a GKE cluster, as follows:
1. Set your project ID and Compute Engine zone options for the `gcloud` tool:
`gcloud config set project $PROJECT_ID`
`gcloud config set compute/zone europe-west1-b`
2. Create a cluster.
!!! note
This example, for simplicity, creates a private cluster with unrestricted access to the public endpoint. For security, be sure to restrict access to the control plane for your production environment. Find detailed information about configuring different GKE private clusters [here](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#using-gcloud-config){ target=_blank }.
```bash
gcloud container clusters create monitoring-agents-cluster \
--network default \
--create-subnetwork name=my-subnet-0 \
--no-enable-master-authorized-networks \
--enable-ip-alias \
--enable-private-nodes \
--master-ipv4-cidr 172.16.0.32/28 \
--no-enable-basic-auth \
--no-issue-client-certificate
```
Where:
Parameter | Result
--------- | -------
`--create-subnetwork name=my-subnet-0` | Causes GKE to automatically create a subnet named `my-subnet-0`.
`--no-enable-master-authorized-networks` | Disables authorized networks for the cluster.
`--enable-ip-alias` | Makes the cluster VPC-native.
`--enable-private-nodes` | Indicates that the cluster's nodes do not have external IP addresses.
`--master-ipv4-cidr 172.16.0.32/28` | Specifies an internal address range for the control plane. This setting is permanent for this cluster.
`--no-enable-basic-auth` | Disables basic auth for the cluster.
`--no-issue-client-certificate` | Disables issuing a client certificate.
3. Run the following command to see the cluster worker instances:
`gcloud compute instances list`

### Create a cloud router {: #create-a-cloud-router }
The MLOps agent running on a GKE private cluster needs access to the DataRobot MLOps service. To do this, you must give the private nodes outbound access to the internet, which you can achieve using a NAT cloud router ([Google documentation here](https://cloud.google.com/nat/docs/gke-example#gcloud_4){ target=_blank }).
1. Create a cloud router:
```bash
gcloud compute routers create nat-router \
--network default \
--region europe-west1
```
2. Add configuration to the router.
```bash
gcloud compute routers nats create nat-config \
--router-region europe-west1 \
--router nat-router \
--nat-all-subnet-ip-ranges \
--auto-allocate-nat-external-ips
```
#### Create K8s ConfigMaps {: #create-k8s-configmaps }
With the cloud router configured, you can now create K8s ConfigMaps to contain the MLOps agent configuration and Google credentials. You will need the downloaded JSON credentials file created during the [prerequisites](#prerequisites) stage.
!!! note
Use K8s Secrets to save your configuration files for production usage.
Use the following code to create ConfigMaps:
```bash
kubectl create configmap ma-configmap --from-file=mlops.agent.conf.yaml=your-path/mlops.agent.conf.yaml
kubectl create configmap gac-configmap --from-file=gac.json=your-google-application-credentials.json
```
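If you follow the note above and use K8s Secrets for production, the equivalent commands are sketched below (you would then mount them in the Deployment with `secret:` volumes instead of `configMap:` volumes):
```bash
# Store the agent configuration and Google credentials as Secrets rather than ConfigMaps
kubectl create secret generic ma-secret \
    --from-file=mlops.agent.conf.yaml=your-path/mlops.agent.conf.yaml
kubectl create secret generic gac-secret \
    --from-file=gac.json=your-google-application-credentials.json
```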
#### Create the K8s Deployment {: #create-the-k8s-deployment }
To create the deployment, create the `ma-deployment.yaml` file with the following content:
!!! note
This example uses three always-running replicas; for autoscaling, use `kubectl autoscale deployment`.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ma-deployment
  labels:
    app: ma
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ma
  template:
    metadata:
      labels:
        app: ma
    spec:
      containers:
      - name: ma
        image: gcr.io/${PROJECT_ID}/monitoring-agents:v1
        volumeMounts:
        - name: agent-conf-volume
          mountPath: /opt/datarobot/ma/conf/mlops.agent.conf.yaml
          subPath: mlops.agent.conf.yaml
        - name: gac-conf-volume
          mountPath: /opt/datarobot/ma/conf/gac.json
          subPath: gac.json
        env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /opt/datarobot/ma/conf/gac.json
        ports:
        - containerPort: 80
      volumes:
      - name: agent-conf-volume
        configMap:
          items:
          - key: mlops.agent.conf.yaml
            path: mlops.agent.conf.yaml
          name: ma-configmap
      - name: gac-conf-volume
        configMap:
          items:
          - key: gac.json
            path: gac.json
          name: gac-configmap
```
Next, create the deployment with the following command:
`kubectl apply -f ma-deployment.yaml`
Finally, check the running pods:
`kubectl get pods`

### Score the model {: #score-the-model }
Score your local model and verify the output.
1. Score your local model:
`python score-your-model.py`
2. Check the GKE Pod log; it shows that one record has been sent to DataRobot.

3. Check the Pub/Sub log.

4. Check the DataRobot MLOps dashboard.

### Clean up {: #clean-up }
1. Delete the NAT in the cloud router:
`gcloud compute routers nats delete nat-config --router=nat-router --router-region=europe-west1`
2. Delete the cloud router:
`gcloud compute routers delete nat-router --region=europe-west1`
3. Delete the cluster:
`gcloud container clusters delete monitoring-agents-cluster`
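Optionally, you can also remove the agent image pushed to the Container Registry earlier (a sketch):
```bash
# Delete the agent image and its tags from the registry
gcloud container images delete gcr.io/${PROJECT_ID}/monitoring-agents:v1 --force-delete-tags
```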
|
mlops-agent-with-gke
|
---
title: Google
description: Deploying and monitoring DataRobot models on Google Cloud Platform (GCP) and Google Kubernetes Engine (GKE).
---
# Google {: #google }
The following sections describe techniques for integrating the Google Cloud Engine with DataRobot:
Topic | Describes...
----- | ------
[Deploy and monitor models on GCP](google-cloud-platform) | Deploying and monitoring DataRobot models on the Google Cloud Platform (GCP).
[Deploy the MLOps agent on GKE](mlops-agent-with-gke) | Deploying the MLOps agent on Google Kubernetes Engine (GKE) to monitor models.
|
index
|
---
title: Real-time predictions
description: Use the DataRobot Prediction API for real-time predictions.
---
# Real-time predictions {: #real-time-predictions }
Once data is ingested, DataRobot provides several options for scoring model data. The most tightly integrated and feature-rich scoring method is using the Prediction API. The API can be leveraged and scaled horizontally to support both real-time scoring requests and batch scoring. A single API request can be sent with a data payload of one or more records, and many requests can be sent concurrently. DataRobot keeps track of data coming in for scoring requests and compares it to training data used to build a model as well. Using model management, technical performance statistics around the API endpoint are delivered along with data drift and model drift (associated with the health of the model itself). See the [DataRobot Prediction API](dr-predapi) documentation for more information.
The information required to construct a successful API request can be collected from several places within DataRobot, although the quickest way to capture all values required is from the [**Predictions > Prediction API**](code-py#integration) tab. A sample Python script with relevant fields completed is available in the tab.
The API endpoint accepts CSV and JSON data. The *Content-Type* header value must be set appropriately for the type of data being sent (`text/csv` or `application/json`); the API always responds with JSON.
The following provides several examples of scoring data from Snowflake via a client request, from either a local or a standing-server environment.
## Use the Prediction API {: #use-the-prediction-api }
Note the values in the following integration script:
```python
API_KEY = 'YOUR API KEY'
DEPLOYMENT_ID = 'YOUR DEPLOYMENT ID'
DATAROBOT_KEY = 'YOUR DR KEY'
```
The fields in the integration script are as follows:
=== "SaaS"
Field | Description
----- | -----------
`USERNAME` | Apply privileges associated with the named user.
`API_KEY` | Authenticate web requests to the DataRobot API and Prediction API.
`DEPLOYMENT_ID` | Specify the unique ID of the DataRobot deployment; this value sits in front of the model.
`DATAROBOT_KEY` <br>(Managed AI Platform users only) | Supply an additional engine access key. Self-Managed AI Platform installations can remove this option.
`Content-Type` header | Identify the type of input data being sent, either CSV or JSON.
`URL` | Identify the hostname for scoring data. Typically, this is a load balancer in front of one or more prediction engines.
=== "Self-Managed"
Field | Description
----- | -----------
`USERNAME` | Apply privileges associated with the named user.
`API_KEY` | Authenticate web requests to the DataRobot API and Prediction API.
`DEPLOYMENT_ID` | Specify the unique ID of the DataRobot deployment; this value sits in front of the model.
`Content-Type` header | Identify the type of input data being sent, either CSV or JSON.
`URL` | Identify the hostname for scoring data. Typically, this is a load balancer in front of one or more prediction engines.
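Assembled into a raw request, these fields map to a call such as the following sketch (the host, deployment ID, keys, and file name are placeholders; Self-Managed installations omit the `DataRobot-Key` header):
```bash
# Minimal Prediction API request with a CSV payload
curl -X POST "https://app.datarobot.com/predApi/v1.0/deployments/YOUR-DEPLOYMENT-ID/predictions" \
    -H "Content-Type: text/csv; charset=UTF-8" \
    -H "Authorization: Bearer YOUR-API-KEY" \
    -H "DataRobot-Key: YOUR-DR-KEY" \
    --data-binary @scoring_data.csv
```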
The following script snippet shows how to extract data from Snowflake via the Python connector and send it to DataRobot for scoring. It creates a single thread with a single request. Maximizing speed involves creating parallel request threads with appropriately sized data payloads to handle input of any size.
Consider this basic example of creating a scoring request and working with results.
```python
import snowflake.connector
import datetime
import sys
import pandas as pd
import requests
from pandas.io.json import json_normalize
# snowflake parameters
SNOW_ACCOUNT = 'dp12345.us-east-1'
SNOW_USER = 'your user'
SNOW_PASS = 'your pass'
SNOW_DB = 'TITANIC'
SNOW_SCHEMA = 'PUBLIC'
# create a connection
ctx = snowflake.connector.connect(
user=SNOW_USER,
password=SNOW_PASS,
account=SNOW_ACCOUNT,
database=SNOW_DB,
schema=SNOW_SCHEMA,
protocol='https',
application='DATAROBOT',
)
# create a cursor
cur = ctx.cursor()
# execute sql
sql = "select passengerid, pclass, name, sex, age, sibsp, parch, fare, cabin, embarked " \
+ " from titanic.public.passengers"
cur.execute(sql)
# fetch results into dataframe
df = cur.fetch_pandas_all()
```

Fields are all capitalized in accordance with ANSI standard SQL. Because DataRobot is case-sensitive to feature names, ensure that the fields in DataRobot match the data provided. Depending on the model-building workflow used, this may mean that database extractions via SQL require aliasing of the columns to match model-feature case. At this point, the data is in a Python script. Any pre-processing that occurred outside of DataRobot before model building can now be applied to the scoring. Once the data is ready, you can begin model scoring.
```python
# datarobot parameters
API_KEY = 'YOUR API KEY'
USERNAME = 'mike.t@datarobot.com'
DEPLOYMENT_ID = 'YOUR DEPLOYMENT ID'
DATAROBOT_KEY = 'YOUR DR KEY'
# replace with the load balancer for your prediction instance(s)
DR_PREDICTION_HOST = 'https://app.datarobot.com'
# replace app.datarobot.com with the application host of your cluster if installed locally
# build the Prediction API URL for the deployment
url = '{}/predApi/v1.0/deployments/{}/predictions'.format(DR_PREDICTION_HOST, DEPLOYMENT_ID)
headers = {
'Content-Type': 'text/csv; charset=UTF-8',
'Datarobot-Key': DATAROBOT_KEY,
'Authorization': 'Bearer {}'.format(API_KEY)
}
predictions_response = requests.post(
url,
data=df.to_csv(index=False).encode("utf-8"),
headers=headers,
params={'passthroughColumns' : 'PASSENGERID'}
)
if predictions_response.status_code != 200:
print("error {status_code}: {content}".format(status_code=predictions_response.status_code, content=predictions_response.content))
sys.exit(-1)
# first 3 records json structure
predictions_response.json()['data'][0:3]
```

The above is a basic, straightforward call with little error handling; it is intended only as an example. The request includes a parameter value to request the logical or business key for the data being returned, along with the labels and scores. Note that `df.to_csv(index=False)` is required to remove the index column from the output, and `.encode("utf-8")` is required to convert the Unicode string to UTF-8 bytes (charset specified in `Content-Type`).
The API always returns records in JSON format. Snowflake is flexible when working with JSON, allowing you to simply load the response into the database.
```python
df_response = pd.DataFrame.from_dict(predictions_response.json())
df_response.head()
```

The following code creates a table and inserts the raw JSON. Note that it only does so with an abbreviated set of five records. For the sake of this demonstration, the records are being inserted one at a time via the Python Snowflake connector. This is ***not*** a best practice and is provided for demonstration only; when doing this yourself, make sure Snowflake instead ingests data via flat files and stage objects.
```python
ctx.cursor().execute('create or replace table passenger_scored_json(json_rec variant)')
df_head = df_response.head()
# this is not the proper way to insert data into snowflake, but is used for quick demo convenience.
# snowflake ingest should be done via snowflake stage objects.
for _ind_, row in df_head.iterrows():
escaped = str(row['data']).replace("'", "''")
ctx.cursor().execute("insert into passenger_scored_json select parse_json('{rec}')".format(rec=escaped))
print(row['data'])
```
Use Snowflake's native JSON functions to parse and flatten the data. The code below retrieves all scores towards the positive class label `1` for survival from the binary classification model.
```sql
select json_rec:passthroughValues.PASSENGERID::int as passengerid
, json_rec:prediction::int as prediction
, json_rec:predictionThreshold::numeric(10,9) as prediction_threshold
, f.value:label as prediction_label
, f.value:value as prediction_score
from titanic.public.passenger_scored_json
, table(flatten(json_rec:predictionValues)) f
where f.value:label = 1;
```

You can compare a raw score returned by the model against the threshold. In this example, passenger 892's chance of survival (11.69%) was less than the 50% threshold. As a result, the prediction towards the positive class survival label `1` was 0 (i.e., non-survival).
The original response in Python can be flattened within Python as well.
```python
df_results = json_normalize(data=predictions_response.json()['data'], record_path='predictionValues',
meta = [['passthroughValues', 'PASSENGERID'], 'prediction', 'predictionThreshold'])
df_results = df_results[df_results['label'] == 1]
df_results.rename(columns={"passthroughValues.PASSENGERID": "PASSENGERID"}, inplace=True)
df_results.head()
```

The above dataframe can be written to one or more CSVs or provided as compressed files in a Snowflake stage environment for ingestion into the database.
|
sf-client-scoring
|
---
title: Snowflake
description: Guides around integrating DataRobot with Snowflake
---
# Snowflake {: #snowflake }
The articles in this section progress from data ingest to creating projects and machine learning models based on historic training sets, and finally to scoring new data through deployed models via several deployment methodologies:
Topic | Describes...
----- | ------
[Data ingest](sf-project-creation) | Retrieving data for Snowflake project creation.
[Real-time predictions](sf-client-scoring) | Using the API to score Snowflake data.
[Server-side model scoring](sf-server-scoring) | Using the API to integrate your deployment with Snowflake to feed data to your model for predictions and to write those predictions back to your Snowflake Database.
[External functions and streams](sf-function-streams) | Using external API call functions to create a Snowflake scoring pipeline.
[Generate Snowflake UDF Scoring Code](snowflake-sc) | Using the DataRobot Scoring Code JAR as a user-defined function (UDF) on Snowflake.
|
index
|
---
title: Snowflake external functions and streams
description: Use external API call functions to create a Snowflake scoring pipeline.
---
# Snowflake external functions and streams {: #snowflake-external-functions-and-streams }
With Snowflake, you can call out to [external APIs](https://docs.snowflake.com/en/sql-reference/external-functions-introduction.html){ target=_blank } from user-defined functions (UDFs). Using a Snowflake scoring pipeline allows you to take advantage of these external API functions—leveraging Snowflake streams and tasks to create a streaming micro-batch ingestion flow that incorporates a DataRobot-hosted model.
There are several requirements and considerations when exploring this approach:
* Any API must be fronted by the trusted cloud native API service (in the case of AWS, the AWS API Gateway).
* There are [scaling, concurrency, and reliability](https://docs.snowflake.com/en/sql-reference/external-functions-implementation.html){ target=_blank } considerations.
* Max payload size for synchronous requests is 10MB for the API gateway and 6MB for Lambda (other cloud providers have different limitations).
When deciding how to score your models, consider these questions. How does the total infrastructure react when scoring 10 rows vs. 10,000 rows vs. 10 million rows? What kind of load is sent when a small 2-node cluster is vertically scaled to a large 8-node cluster or when it is scaled horizontally to 2 or 3 instances? What happens if a request times out or a resource is unavailable?
Alternatives for executing large batch scoring jobs on Snowflake simply and efficiently are described in the [client-request](sf-client-scoring) and [server-side](sf-server-scoring) scoring examples. Generally speaking, this type of scoring is best done as part of ETL or ELT pipelines. Low-volume streaming ingest using internal Snowflake streaming is a suitable application for leveraging external functions with a UDF.
The following demonstrates an ETL pipeline using Snowpipe, Streams, and Tasks within Snowflake. The example scores records through a DataRobot-hosted model using Kaggle's [Titanic dataset](https://www.kaggle.com/c/titanic){ target=_blank }. It ingests data via a streaming pipeline with objects in an STG schema, scores it against the model, and then loads it to the `PUBLIC` schema presentation layer.
## Technologies used {: #technologies-used }
The example uses the following technologies:
**Snowflake**:
- _Storage Integration_
- _Stage_
- _Snowpipe_
- _Streams_
- _Tasks_, _Tables_, and _External Function UDF_ objects (to assemble a streaming scoring pipeline for data as it is ingested)
**AWS**:
- _Lambda_, as a serverless compute service that acts as the intermediary between Snowflake and DataRobot (which is currently a requirement for using an external function).
- _API Gateway_, to provide an endpoint to front the Lambda function.
- _IAM_ policies to grant roles and privileges to necessary components.
- Incoming data, which is placed in an *S3* object store bucket.
- An *SQS* queue.
**DataRobot**:
- The model was built and deployed on the AutoML platform and is available for scoring requests via the DataRobot Prediction API. In this case, the model is served on horizontally scalable DataRobot cluster member hardware, dedicated solely to serving these requests.
## External UDF architecture {: #external-udf-architecture }
The following illustrates the Snowflake external API UDF architecture:

Although a native UDF in Snowflake is written in JavaScript, the external function is executed remotely and can be coded in any language the remote infrastructure supports. It is then coupled with an API integration in Snowflake to expose it as an external UDF. This integration sends the payload to be operated on to an API proxy service (an AWS API Gateway in this case). The Gateway then satisfies this request through the remote service behind it—a microservice backed by a container or by a Lambda piece of code.
## Create the remote service (AWS Lambda) {: #create-the-remote-service-aws-lambda}
Hosting DataRobot models inside AWS Lambda takes advantage of AWS scalability features. For examples, see:
* [Using DataRobot Prime with AWS Lambda](prime-lambda)
* [Using Scoring Code with AWS Lambda](sc-lambda)
* [Exporting a model outside of DataRobot as a Docker container](aks-deploy-and-monitor)
This section provides an example of treating the gateway as a proxy for a complete passthrough and sending the scoring request to a DataRobot-hosted prediction engine. Note that in this approach, scalability also includes horizontally scaling prediction engines on the DataRobot cluster.
See the articles mentioned above for additional Lambda-creation workflows to gain familiarity with the environment and process. Create a Lambda named `proxy_titanic` with a Python 3.7 runtime environment. Leverage an existing IAM role or create a new one with default execution permissions.

Connecting to the DataRobot cluster requires some sensitive information:
* The load balancing hostname in front of the DataRobot Prediction Engine (DPE) cluster.
* The user's API token.
* The deployment for the model to be scored.
* The DataRobot key (managed AI Platform users only).
These values can be stored in the Lambda Environment variables section.

Lambda layers let you build Lambda code on top of libraries and separate that code from the delivery package. You don't have to separate the libraries, although using layers simplifies the process of bringing in necessary packages and maintaining code. This example requires the `requests` and `pandas` libraries, which are not part of the base Amazon Linux image, and must be added via a layer (by creating a virtual environment). In this example, the environment used is an Amazon Linux EC2 box. Instructions to install Python 3 on Amazon Linux are [here](https://aws.amazon.com/premiumsupport/knowledge-center/ec2-linux-python3-boto3/){ target=_blank }.
Create a ZIP file for a layer as follows:
```bash
python3 -m venv my_app/env
source ~/my_app/env/bin/activate
pip install requests
pip install pandas
deactivate
```
Per the [Amazon documentation](https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html), this must be placed in the `python` or `site-packages` directory and is expanded under `/opt`.
```bash
cd ~/my_app/env
mkdir -p python/lib/python3.7/site-packages
cp -r lib/python3.7/site-packages/* python/lib/python3.7/site-packages/.
zip -r9 ~/layer.zip python
```
Copy the `layer.zip` file to a location on S3; this is required if the Lambda layer is > 10MB.
```bash
aws s3 cp layer.zip s3://datarobot-bucket/layers/layer.zip
```
Navigate to the **Lambda service > Layers > Create Layer** tab. Provide a name and a link to the file in S3; note that this is the Object URL of the uploaded ZIP. It is recommended, but not necessary, to set compatible environments, which makes the layer more easily accessible in a dropdown menu when adding it to a Lambda. Save the layer and note its Amazon Resource Name (ARN).

Navigate back to the Lambda and click **Layers** under the Lambda title; add a layer and provide the ARN from the previous step.
Navigate back to the Lambda code. The following Python code will:
1. Accept a payload from Snowflake.
2. Pass the payload to DataRobot's Prediction API for scoring.
3. Return a Snowflake-compatible response.
```python
import os
import json
#from pandas.io.json import json_normalize
import requests
import pandas as pd
import csv
def lambda_handler(event, context):
    # set default status to OK, no DR API error
    status_code = 200
    dr_error = ""

    # The return value will contain an array of arrays (one inner array per input row).
    array_of_rows_to_return = [ ]

    try:
        # obtain secure environment variables to reach out to DataRobot API
        DR_DPE_HOST = os.environ['dr_dpe_host']
        DR_USER = os.environ['dr_user']
        DR_TOKEN = os.environ['dr_token']
        DR_DEPLOYMENT = os.environ['dr_deployment']
        DR_KEY = os.environ['dr_key']

        # retrieve body containing input rows
        event_body = event["body"]

        # retrieve payload from body
        payload = json.loads(event_body)

        # retrieve row data from payload
        payload_data = payload["data"]

        # map list of lists to expected inputs
        cols = ['row', 'NAME', 'SEX', 'PCLASS', 'FARE', 'CABIN', 'SIBSP', 'EMBARKED', 'PARCH', 'AGE']
        df = pd.DataFrame(payload_data, columns=cols)
        print("record count is: " + str(len(df.index)))

        # assemble and send scoring request
        headers = {'Content-Type': 'text/csv; charset=UTF-8', 'Accept': 'text/csv', 'datarobot-key': DR_KEY}
        response = requests.post(DR_DPE_HOST + '/predApi/v1.0/deployments/%s/predictions' % (DR_DEPLOYMENT),
                                 auth=(DR_USER, DR_TOKEN), data=df.to_csv(), headers=headers)

        # bail if anything other than a successful response occurred
        if response.status_code != 200:
            dr_error = str(response.status_code) + " - " + str(response.content)
            print("dr_error: " + dr_error)
            raise

        array_of_rows_to_return = []
        row = 0
        wrapper = csv.reader(response.text.strip().split('\n'))
        header = next(wrapper)
        idx = header.index('SURVIVED_1_PREDICTION')
        for record in wrapper:
            array_of_rows_to_return.append([row, record[idx]])
            row += 1

        # send data back in required snowflake format
        json_compatible_string_to_return = json.dumps({"data" : array_of_rows_to_return})

    except Exception as err:
        # 400 implies some type of error.
        status_code = 400
        # Tell caller what this function could not handle.
        json_compatible_string_to_return = 'failed'
        # if the API call failed, update the error message with what happened
        if len(dr_error) > 0:
            print("error")
            json_compatible_string_to_return = 'failed; DataRobot API call request error: ' + dr_error

    # Return the return value and HTTP status code.
    return {
        'statusCode': status_code,
        'body': json_compatible_string_to_return
    }
```
[Lambda code for this example is available in GitHub](https://github.com/datarobot-community/ai_engineering/tree/master/databases/snowflake/external_function){ target=_blank }. You can configure a test event to make sure the Lambda acts as expected. A DataRobot payload can be represented for this model with a few JSON records in the following format:
```json
{
  "body": "{\"data\": [[0, \"test one\", \"male\", 3, 7.8292, null, 0, \"Q\", 0, 34.5], [1, \"test two\", \"female\", 3, 7, null, 1, \"S\", 0, 47]]}"
}
```
Once this event is created, select it from the test dropdown and click **Test**. The test returns a 200-level success response with a JSON-encapsulated list of lists, containing the 0-based row number and the returned model value. In this case, that model value is a score towards the positive class of label `1` (e.g., Titanic passenger survivability from a binary classifier model).

You can set additional Lambda configuration under **Basic Settings**. Lambda serverless costs are based on RAM "used seconds" duration; the more RAM allowed, the more virtual CPU is allocated. More RAM lets the Lambda handle larger input payloads and process them more quickly. Note that this Lambda defers the heavier work to DataRobot; it only needs to accommodate the data movement. If a Lambda exits prematurely due to exceeding resources, these values may need to be edited. The timeout default is 3 seconds; if DataRobot takes longer than that to respond for the micro-batch of records the Lambda is responsible for, the Lambda times out and shuts down. DataRobot tested and recommends the following values: 256 MB and a 10-second timeout. Actual usage for each executed Lambda can be found in the associated CloudWatch logs, available under the **Monitoring** tab of the Lambda.
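The same settings can also be applied from the AWS CLI; a sketch using the recommended values and the Lambda name created earlier:
```bash
# Apply the recommended 256 MB memory allocation and 10-second timeout
aws lambda update-function-configuration \
    --function-name proxy_titanic \
    --memory-size 256 \
    --timeout 10
```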
## Configure the proxy service {: #configure-the-proxy-service }
The following creates the AWS API Gateway proxy service.
### Create IAM role {: #create-iam-role }
For a Snowflake-owned IAM user to be granted permission, you must create a role that the user can then assume within the AWS account. In the console, navigate to **IAM > Roles > Create role**. When asked to **Select type of trusted entity**, choose **Another AWS account** and fill in the **Account ID** box with the AWS Account ID for the currently logged-in account. This can be found in the ARN of other roles, from the **My Account** menu or from various other places. A Snowflake external ID for the account is applied later.
Proceed through the next screens and save this role as `snowflake_external_function_role`. Note the role's Amazon Resource Name (ARN).
### Create API Gateway entry {: #create-api-gateway-entry }
Navigate to the API Gateway service console and click **Create API**. Choose to build a REST API and select the REST protocol. Select **Create a New API**, give it a friendly, readable name, and click **Create API**. On the next screen, choose **Actions > Create Resource**. Set the resource name and path to score.

Next, choose **Actions > Create Method**. In the dropdown menu under the endpoint, choose **POST**. Select the checkbox next to **Use Lambda Proxy Integration**, select the previously created Lambda, and save.

Lastly, choose **Actions > Deploy API**. You must create a stage, such as *test*, and then click **Deploy** once you complete the form.
!!! note
The **Invoke URL** field on the subsequent editor page will later be used in creating an integration with Snowflake.
### Secure the API Gateway endpoint {: #secure-the-api-gateway-endpoint }
Navigate back to the **Resources** of the created API (in the left menu above **Stages**). Click on **POST** under the endpoint to bring up the **Method Execution**. Click the **Method Request**, toggle the Authorization dropdown to **AWS_IAM**, and then click the checkmark to save. Navigate back to the **Method Execution** and note the ARN within the **Method Request**.
Navigate to **Resource Policy** in the left menu. Add a policy that is populated with the AWS account number and the name of the previously created IAM role above (described in the [Snowflake documentation](https://docs.snowflake.com/en/sql-reference/external-functions-creating-aws.html#secure-your-aws-api-gateway-proxy-service-endpoint){ target=_blank }).

### Create an API Integration object in Snowflake {: #create-an-api-integration-object-in-snowflake }
The API Integration object will map Snowflake to the AWS Account role. Provide the role ARN and set the allowed prefixes to include the Invoke URL from the stage referenced above (a privilege level of *accountadmin* is required to create an API Integration).
```sql
use role accountadmin;
create or replace api integration titanic_external_api_integration
api_provider=aws_api_gateway
api_aws_role_arn='arn:aws:iam::123456789012:role/snowflake_external_function_role'
api_allowed_prefixes=('https://76abcdefg.execute-api.us-east-1.amazonaws.com/test/')
enabled=true;
```
Describe the integration:
```sql
describe integration titanic_external_api_integration;
```
Copy out the values for `API_AWS_IAM_USER_ARN` and `API_AWS_EXTERNAL_ID`.
## Configure the Snowflake-to-IAM role trust relationship {: #configure-the-snowflake-to-IAM-role-trust-relationship }
Navigate back to the **AWS IAM service > Roles**, and then to the `snowflake_external_function_role` role.
At the bottom of the **Summary** page, choose the **Trust relationships** tab and click the **Edit trust relationship** button. This opens a policy document to edit. As per the [Snowflake documentation](https://docs.snowflake.com/en/sql-reference/external-functions-creating-aws.html#set-up-the-trust-relationship-s-between-snowflake-and-the-new-iam-role){ target=_blank }, edit the **Principal attribute AWS key** by replacing the existing value with the `API_AWS_IAM_USER_ARN` from Snowflake. Next to the `sts:AssumeRole` action, there will be a Condition key with an empty value between curly braces. Inside the braces, paste the following, replacing the `API_AWS_EXTERNAL_ID` with the value from Snowflake:
```json
"StringEquals": { "sts:ExternalId": "API_AWS_EXTERNAL_ID" }
```
Click **Update Trust Policy** to save out of this screen.
## Create the external function {: #create-the-external-function }
You can now create the external function inside Snowflake. It will reference the trusted endpoint to invoke via the previously built API integration. Be sure to match the expected parameter value in the function definition to the function the Lambda is expecting.
```sql
create or replace external function
udf_titanic_score(name string, sex string, pclass int, fare numeric(10,5),
cabin string, sibsp int, embarked string, parch int, age numeric(5,2))
returns variant
api_integration = titanic_external_api_integration
as 'https://76abcdefg.execute-api.us-east-1.amazonaws.com/test/score';
```
The function is now ready for use.
## Call the external function {: #call-the-external-function }
You can call the function as expected. This code scores 100,000 Titanic passenger records:
```sql
select passengerid
, udf_titanic_score(name, sex, pclass, fare, cabin, sibsp, embarked, parch, age) as score
from passengers_100k;
```

In the above prediction, Passenger 7254024 has an 84.4% chance of Titanic survivability.
### External function performance considerations {: #external-function-performance-considerations }
Some observations:
* Full received payloads *in this case* contained ~1860 records. Payloads were roughly 0.029 MB in size (perhaps Snowflake is limiting them to 0.03 MB).
* Whether scoring from an extra small-, small-, or medium-sized compute warehouse on Snowflake, the Lambda concurrency CloudWatch metrics dashboard always showed a concurrent execution peak of 8. Overall, this represents a rather gentle load on scoring infrastructure.
* Performance should be satisfactory whether the model is run in the Lambda itself or offset to a DataRobot Prediction Engine. Note that for larger batch jobs and maximum throughput, other methods are still more efficient with time and resources.
* Testing against an r4.xlarge Dedicated Prediction Engine on DataRobot produced a rate of roughly 13,800 records for this particular dataset and model.
* Snowflake determines payload size and concurrency based on a [number of factors](https://docs.snowflake.com/en/sql-reference/external-functions-implementation.html#concurrency){ target=_blank }. A controllable payload ceiling can be specified with a `MAX_BATCH_ROWS` value during external function creation. Future options may allow greater control over payload size, concurrency, and scaling with warehouse upsizing.
## Streaming ingest with streams and tasks {: #streaming-ingest-with-streams-and-tasks }
There are multiple options to bring data into Snowflake using streaming. One option is to use Snowflake's native periodic data-loading capabilities with Snowpipe. By using Snowflake streams and tasks, you can handle new records upon arrival without an external driving ETL/ELT.
### Ingest pipeline architecture {: #ingest-pipeline-architecture }
The following illustrates this ingest architecture:

### Create staging and presentation tables {: #create-staging-and-presentation-tables }
You must create tables to hold the newly arrived records loaded from Snowpipe and to hold the processed and scored records for reporting. In this example, a raw passengers table is created in an `STG` schema and a scored passengers table is presented in the `PUBLIC` schema.
```sql
create or replace TABLE TITANIC.STG.PASSENGERS (
PASSENGERID int,
PCLASS int,
NAME VARCHAR(100),
SEX VARCHAR(10),
AGE NUMBER(5,2),
SIBSP int,
PARCH int,
TICKET VARCHAR(30),
FARE NUMBER(10,5),
CABIN VARCHAR(25),
EMBARKED VARCHAR(5)
);
create or replace TABLE TITANIC.PUBLIC.PASSENGERS_SCORED (
PASSENGERID int,
PCLASS int,
NAME VARCHAR(100),
SEX VARCHAR(10),
AGE NUMBER(5,2),
SIBSP int,
PARCH int,
TICKET VARCHAR(30),
FARE NUMBER(10,5),
CABIN VARCHAR(25),
EMBARKED VARCHAR(5),
SURVIVAL_SCORE NUMBER(11,10)
);
```
### Create the Snowpipe {: #create-the-snowpipe }
Snowflake needs to be connected to an external stage object. Use the [Snowflake documentation](https://docs.snowflake.com/en/user-guide/data-load-s3-config.html#option-1-configuring-a-snowflake-storage-integration){ target=_blank } to set up a storage integration with AWS and IAM.
```sql
use role accountadmin;
--note a replace will break all existing associated stage objects!
create or replace storage integration SNOWPIPE_INTEGRATION
type = EXTERNAL_STAGE
STORAGE_PROVIDER = S3
STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::123456789:role/snowflake_lc_role'
enabled = true
STORAGE_ALLOWED_LOCATIONS = ('s3://bucket');
```
Once the integration is available, you can use it to create a stage that maps to S3 and uses the integration to apply security.
```sql
CREATE or replace STAGE titanic.stg.snowpipe_passengers
URL = 's3://bucket/snowpipe/input/passengers'
storage_integration = SNOWPIPE_INTEGRATION;
```
Lastly, create the Snowpipe to map this stage to a table. A file format is created for it below as well.
```sql
CREATE OR REPLACE FILE FORMAT TITANIC.STG.DEFAULT_CSV TYPE = 'CSV' COMPRESSION = 'AUTO' FIELD_DELIMITER = ','
RECORD_DELIMITER = '\n' SKIP_HEADER = 1 FIELD_OPTIONALLY_ENCLOSED_BY = '\042' TRIM_SPACE = FALSE
ERROR_ON_COLUMN_COUNT_MISMATCH = TRUE ESCAPE = 'NONE' ESCAPE_UNENCLOSED_FIELD = '\134' DATE_FORMAT = 'AUTO'
TIMESTAMP_FORMAT = 'AUTO' NULL_IF = ('');
create or replace pipe titanic.stg.snowpipe auto_ingest=true as
copy into titanic.stg.passengers
from @titanic.stg.snowpipe_passengers
file_format = TITANIC.STG.DEFAULT_CSV;
```
### Automate the Snowpipe loading {: #automate-the-snowpipe-loading}
Snowflake provides options for loading new data as it arrives. This example applies option 1 (described in the [Snowflake documentation](https://docs.snowflake.com/en/user-guide/data-load-snowpipe-auto-s3.html#step-1-create-a-stage-if-needed){ target=_blank }) to use a Snowflake SQS queue directly. Note that step 4 to [create new file event notifications](https://docs.snowflake.com/en/user-guide/data-load-snowpipe-auto-s3.html#step-4-configure-event-notifications){ target=_blank } is required.
To enable Snowpipe, navigate to the S3 bucket, click the **Properties tab > Events** tile, and then click **Add notification**. Create a notification to add a message to the specified SQS queue retrieved from the Snowflake pipe for every new file arrival.

The pipe is now ready to accept and load data.
### Create the stream {: #create-the-stream }
You can create two types of stream objects in Snowflake—standard and append-only. Standard stream objects capture any type of change to a table; append-only stream objects capture inserted rows. Use the former for general Change Data Capture (CDC) processing. Use the latter (used in this example) for simple new row ingest processing.
In the append-only approach, think of the stream as a table that contains only records that are new since the last time any data was selected from it. Once a DML query that sources a stream is made, the rows returned are considered consumed and the stream becomes empty. In programming terms, this is similar to a queue.
```sql
create or replace stream TITANIC.STG.new_passengers_stream
on table TITANIC.STG.PASSENGERS append_only=true;
```
### Create the task {: #create-the-task }
A task is a step or series of cascading steps that can be constructed to perform an ELT operation. Tasks can be scheduled—similar to cron jobs—and set to run by days, times, or periodic intervals.
The following basic task scores the Titanic passengers through the UDF and loads the scored data to the presentation layer. It checks to see if new records exist in the stream every 5 minutes; if records are found, the task runs. The task is created in a suspended state; enable the task by resuming it. Note that many timing options are available for scheduling based on days, times, or periods.
```sql
CREATE or replace TASK TITANIC.STG.score_passengers_task
WAREHOUSE = COMPUTE_WH
SCHEDULE = '5 minute'
WHEN
SYSTEM$STREAM_HAS_DATA('TITANIC.STG.NEW_PASSENGERS_STREAM')
AS
INSERT INTO TITANIC.PUBLIC.PASSENGERS_SCORED
select passengerid, pclass, name, sex, age, sibsp, parch, ticket, fare, cabin, embarked,
udf_titanic_score(name, sex, pclass, fare, cabin, sibsp, embarked, parch, age) as score
from TITANIC.STG.new_passengers_stream;
ALTER TASK score_passengers_task RESUME;
```
## Ingest and scoring pipeline complete {: #ingest-and-scoring-pipeline-complete }
The end-to-end pipeline is now complete. To run it, copy a `PASSENGERS.csv` file (available in [GitHub](https://github.com/datarobot-community/ai_engineering/tree/master/databases/snowflake/external_function){ target=_blank }) into the watched bucket. The file prefix results in the data being ingested into a staging schema, scored through a DataRobot model, and then loaded into the presentation schema—all without any external ETL tooling.
```bash
aws s3 cp PASSENGERS.csv s3://bucket/snowpipe/input/passengers/PASSENGERS.csv
```
|
sf-function-streams
|
---
title: Data ingest and project creation
description: Retrieve data for Snowflake project creation.
---
# Data ingest and project creation {: #data-ingest-and-project-creation }
To create a project in DataRobot, you first need to ingest a training dataset. This dataset may or may not go through data engineering or feature engineering processes before being used for modeling.
!!! note
The usage examples provided are not exclusive to Snowflake and can be applied in part or in whole to other databases.
At a high level, there are two approaches for getting this data into DataRobot:
* **PUSH**
Send data to DataRobot and create a project with it. Examples include dragging a supportable file type into the GUI or leveraging the DataRobot API.
* **PULL**
Create a project by pulling data from somewhere, such as the URL to a dataset, or via a database connection.
Both examples are demonstrated below. Using the well-known [Kaggle Titanic Survival dataset](https://www.kaggle.com/c/titanic/data){ target=_blank }, a single tabular dataset is created with one new feature-engineered column, specifically:
```
total_family_size = sibsp + parch + 1
```
## PUSH: DataRobot Modeling API {: #push-datarobot-modeling-api }
You can interact with DataRobot [via the UI](import-to-dr) or programmatically through a REST API.
The API is wrapped by an available [R SDK](https://cran.r-project.org/web/packages/datarobot/index.html){ target=_blank } or [Python SDK](https://pypi.org/project/datarobot/){ target=_blank }, which simplifies calls and workflows with common multistep and asynchronous processes.
The process below leverages Python 3 and the DataRobot Python SDK package for project creation.
The data used in these examples is obtained via the [Snowflake Connector for Python](https://docs.snowflake.net/manuals/user-guide/python-connector.html){ target=_blank }. For easier data manipulation, the Pandas-compatible driver installation option is used to accommodate feature engineering using dataframes.
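Both client packages install from PyPI; a one-line sketch, with the `pandas` extra providing the Pandas-compatible fetch support mentioned above:
```bash
pip install datarobot "snowflake-connector-python[pandas]"
```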
First, import the necessary libraries and credentials; for convenience, they have been hardcoded into the script in this example.
```python
import snowflake.connector
import datetime
import datarobot as dr
import pandas as pd
# snowflake parameters
SNOW_ACCOUNT = 'my_creds.SNOW_ACCOUNT'
SNOW_USER = 'my_creds.SNOW_USER'
SNOW_PASS = 'my_creds.SNOW_PASS'
SNOW_DB = 'TITANIC'
SNOW_SCHEMA = 'PUBLIC'
# datarobot parameters
DR_API_TOKEN = 'YOUR API TOKEN'
# replace app.datarobot.com with application host of your cluster if installed locally
DR_MODELING_ENDPOINT = 'https://app.datarobot.com/api/v2'
DR_HEADERS = {'Content-Type': 'application/json', 'Authorization': 'token %s' % DR_API_TOKEN}
```
Below, the training dataset is loaded into the table `TITANIC.PUBLIC.PASSENGERS_TRAINING` and then retrieved and brought into a Pandas dataframe.
```python
# create a connection
ctx = snowflake.connector.connect(
user=SNOW_USER,
password=SNOW_PASS,
account=SNOW_ACCOUNT,
database=SNOW_DB,
schema=SNOW_SCHEMA,
protocol='https',
application='DATAROBOT',
)
# create a cursor
cur = ctx.cursor()
# execute sql
sql = "select * from titanic.public.passengers_training"
cur.execute(sql)
# fetch results into dataframe
df = cur.fetch_pandas_all()
df.head()
```

You can then perform feature engineering within Python (in this case, using the Pandas library).
!!! note
Feature names are uppercase because Snowflake follows the ANSI standard SQL convention of capitalizing column names and treating them as case-insensitive unless quoted.
```python
# feature engineering a new column for total family size
df['TOTAL_FAMILY_SIZE'] = df['SIBSP'] + df['PARCH'] + 1
df.head()
```

The data is then submitted to DataRobot to start a new modeling project.
```python
# create a connection to datarobot
dr.Client(token=DR_API_TOKEN, endpoint=DR_MODELING_ENDPOINT)
# create project
now = datetime.datetime.now().strftime('%Y-%m-%dT%H:%M')
project_name = 'Titanic_Survival_{}'.format(now)
proj = dr.Project.create(sourcedata=df,
project_name=project_name)
# further work with project via the python API, or work in GUI (link to project printed below)
print(DR_MODELING_ENDPOINT[:-6] + 'projects/{}'.format(proj.id))
```
You can interact further with the project using the SDK.
## PULL: Snowflake JDBC SQL {: #pull-snowflake-jdbc-sql }
Snowflake is cloud-native and publicly available by default. The DataRobot platform supports the installation of JDBC drivers to establish database connectivity. To connect from a locally hosted database, you must open firewall ports to provide DataRobot access and whitelist its IP addresses for incoming traffic. You might need to take additional similar steps if a service like [AWS PrivateLink](https://aws.amazon.com/privatelink/){ target=_blank } is leveraged in front of a Snowflake instance.
!!! note
If your database is protected by a network policy that only allows connections from specific IP addresses, contact [DataRobot Support](intake-options#source-ip-addresses-for-whitelisting) for a list of addresses that an administrator must add to your network policy (whitelist).
The DataRobot managed AI Platform has a JDBC driver installed and available; customers with a Self-Managed AI Platform installation must add the driver (contact DataRobot Support for assistance, if needed).
You can establish a JDBC connection and initiate a project from the source via SQL using the [AI Catalog](catalog). Use the JDBC connection to set up objects connected to Snowflake.
1. [Create a Snowflake data connection](data-conn#create-a-new-connection) in DataRobot.
The JDBC driver connection string's format can be found in the [Snowflake documentation](https://docs.snowflake.net/manuals/user-guide/jdbc-configure.html#jdbc-driver-connection-string){ target=_blank }. In this example, the database is named `titanic`; if any parameters are left unspecified, the defaults associated with the Snowflake account login are used.

2. With the data connection created, you can now import assets into the AI Catalog. In the **AI Catalog**, click **Add to Catalog > Existing Data Connection**, choose the newly created connection, and respond to the credentials prompt. If you previously connected to the database, DataRobot provides the option to select from your saved credentials.
3. When connectivity is established, DataRobot displays metadata of accessible objects. For this example, feature engineering of the new column will be done in SQL.
4. Choose [**SQL query**](catalog#use-a-sql-query) rather than the object browsing option. The SQL to extract and create the new feature can be written and tested here.

Note the [**Create Snapshot**](catalog#create-a-snapshot) option. When checked, DataRobot extracts the data and materializes the dataset in the catalog (and adds a *Snapshot* label to the catalog listing). The snapshot can be shared among users and used to create projects, but it will not update from the database again unless you create a new snapshot for the dataset to refresh the materialized data with the latest dataset. Alternatively, you can add a *Dynamic* dataset. When dynamic, each subsequent use results in DataRobot re-executing the query against the database, pulling the latest data. When registration completes successfully, the dataset is published and available for use.
Some additional considerations with this method:
* The SQL for a dynamic dataset cannot be edited.
* You might want to order the data because it can affect training dataset partitions from one project to the next.
* A best practice for a dynamic dataset is to list each column of interest rather than using "*" for all columns.
* For time series datasets, you need to order the data by grouping it and sorting on the time element.
* You may want to connect to views with underlying logic that can be changed during project iterations. If implemented, that workflow may make it difficult to associate a project instance to a particular view logic at the time of extract.
[Code for this example is available in GitHub](https://github.com/datarobot-community/ai_engineering/tree/master/databases/snowflake/project_creation/snowflake_api_push.ipynb){ target=_blank }.
See [Real-time predictions](sf-client-scoring) to learn about model scoring techniques using DataRobot and Snowflake.
|
sf-project-creation
|
---
title: Server-side model scoring
description: Use the DataRobot Batch Prediction API for server-side model scoring.
---
# Server-side scoring {: #server-side-scoring }
The following describes advanced scoring options with Snowflake as a cloud-native database, leveraging the DataRobot Batch Prediction API from the UI or directly. The UI approach is good for ad-hoc use cases and smaller table sandbox scoring jobs. As complexity grows, the API offers flexibility to run more complex, multistep pipeline jobs. As data volumes grow, using S3 as an intermediate layer is one option for keeping strict control over resource usage and optimizing for cost efficiency.
* DataRobot UI: Table scoring (JDBC supported by the API "behind the scenes")
* DataRobot API: Query-as-source (JDBC, API)
* DataRobot API: S3 Scoring with pre- or post-SQL (S3, API)
Each option has trade-offs between simplicity and performance to meet business requirements. Following is a brief overview of the Batch Prediction API and prerequisites universal to all scoring approaches.
## Batch Prediction API {: #batch-prediction-api }
The Batch Prediction API allows a dataset of any size to be sent to DataRobot for scoring. This data is sliced up into individual HTTP requests and sent in parallel threads to saturate the [Dedicated Prediction Servers (DPSs)](intake-options) available to maximize scoring throughput. Source data and target data can be local files, S3/object storage, or JDBC data connections, and can be mixed and matched as well.
See the [batch prediction documentation](batch-prediction-api/index) for additional information.
## Considerations {: #considerations }
!!! note
Access and privileges for deployments vs. projects may differ. For example, an account may have the ability to <a href="https://www.datarobot.com/wiki/scoring/" target="_blank">score</a> a model, but not be able to see the project or data that went into creating it. As a best practice, associate production workflows with a service account instead of a specific employee to abstract employees from your production scoring pipeline.
* DataRobot Self-Managed AI Platform users may already have connectivity between their Snowflake account and their DataRobot environment. If additional network access is required, your infrastructure teams can fully control network connectivity.
* DataRobot managed AI Platform users who want DataRobot to access their Snowflake instance may require additional infrastructure configuration; contact [DataRobot support](mailto:support@datarobot.com) for assistance. Snowflake is, by default, publicly accessible. Customers may have set up easy/vanity local DNS entries (customer.snowflakecomputing.com) which DataRobot cannot resolve, or be leveraging <a href="https://docs.snowflake.com/en/user-guide/admin-security-privatelink.html" target="_blank">AWS PrivateLink</a> with the option to block public IPs.
* The Snowflake write-back account requires `CREATE TABLE`, `INSERT`, and `UPDATE` privileges, depending on use case and workflow. Additionally, the JDBC driver requires the `CREATE STAGE` privilege to perform [faster stage bulk inserts vs. regular array binding inserts](https://docs.snowflake.com/en/user-guide/jdbc-using.html#batch-updates){ target=_blank }. This creates a temporary stage object that can be used for the duration of the JDBC session. A sketch of granting these privileges follows this list.
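For example, the write-back grants might look like the following when executed through the Snowflake Python connector. This is a sketch only; the role, database, and schema names are assumptions, and your DBA may prefer to run the equivalent `GRANT` statements directly in a Snowflake worksheet.

```python
import snowflake.connector
import my_creds

# connect with a role that is allowed to manage grants (role name is illustrative)
ctx = snowflake.connector.connect(
    user=my_creds.SNOW_USER,
    password=my_creds.SNOW_PASS,
    account=my_creds.SNOW_ACCOUNT,
    role='SECURITYADMIN',
)
cur = ctx.cursor()

# grant the privileges required for write-back and stage-based bulk inserts
# (DATAROBOT_WRITEBACK role and TITANIC.PUBLIC schema are illustrative names)
for stmt in [
    "GRANT CREATE TABLE ON SCHEMA TITANIC.PUBLIC TO ROLE DATAROBOT_WRITEBACK",
    "GRANT CREATE STAGE ON SCHEMA TITANIC.PUBLIC TO ROLE DATAROBOT_WRITEBACK",
    "GRANT INSERT, UPDATE ON ALL TABLES IN SCHEMA TITANIC.PUBLIC TO ROLE DATAROBOT_WRITEBACK",
]:
    cur.execute(stmt)
ctx.close()
```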
## DataRobot UI {: #datarobot-ui}
### Table scoring {: #table-scoring}
You can configure quick and simple batch scoring jobs directly within the DataRobot application. Jobs can be run ad-hoc or can be scheduled. Generally speaking, this scoring approach is the best option for use cases that only require scoring reasonably small tables. It enables you to score data and write the results back to the database, for example, to a sandbox/analysis area.
See the documentation on [Snowflake prediction job examples](pred-job-examples-snowflake) for detailed workflows on setting up batch prediction job definitions for Snowflake using either a JDBC connector with Snowflake as an external data source or the Snowflake adapter with an external stage.
<!--
This example uses the [Kaggle Titanic Survival dataset](https://www.kaggle.com/c/titanic/data){ target=_blank } with a binary classifier, predicting either `1` or `0`, with the positive class label `1` indicating survival.
1. Navigate to a deployed model. There you find the Snowflake tile available under the [**Integrations**](code-py#integration) tab for the deployment.

2. Choose the **Snowflake** tile to open the **Snowflake Integration > Source** window to access a browsable list of AI Catalog items. Select your source data and click **Next**.
!!! note
In this case, the test dataset has been uploaded to the `titanic.public.passengers` table, which has a dynamic AI Catalog item associated with it. Because it is _dynamic_, the data has not been snapshotted and stored in Snowflake; the dataset is pulled from the database at run-time. See [Snowflake data ingest](sf-project-creation) for an example of creating an AI Catalog item.

3. After entering credentials, the **Prediction options** window opens. Set the number of Prediction Explanations (three in this example) and the destination features and click **Next**.
!!! note
As a best practice, the surrogate key rather than the entire record has been added as a passthrough to the result table, as the scoring results can be joined back to the original data in the database on `PASSENGERID`. Not including the full record is more network efficient; however, in the event of no key being present, additional values or the entire record are necessary to understand input data associated with the values produced by the deployed model.

4. In the **Destinations** window, specify the destination. DataRobot quotes the schema and table names in the `CREATE TABLE` SQL, making them case sensitive; as a best practice, specify the values in ALL CAPS to create ANSI-standard, case-insensitive objects. To create the table, click **Next**.

!!! note
There is no AI Catalog item for this example, as the catalog is typically for datasets that you would create a project from, although in the case of this scoring job, a table is being written to. DataRobot is creating the table (`community_passengers`) in the database to write to. The interface requires that the JDBC URL be specified. An example URL is as follows, with the working/default database being specified as part of the connection string:
`jdbc:snowflake://dp12345.us-east-1.snowflakecomputing.com/?db=titanic`
5. The **Schedule** window allows you to name the job and set a regular running schedule (the scheduling option is not used in this example; the job runs on demand). For more information, see the documentation on [scheduling Batch Prediction jobs](job-scheduling#schedule-batch-prediction-jobs).

The job can be edited or can be run on demand using the **Run now** button.

6. Once set to run, the job is added to the Batch Prediction API queue. One batch job is run at a time, with results streaming to Snowflake until all records are scored through the pipeline. Completed results are shown below.

-->
## DataRobot API {: #datarobot-api }

DataRobot's Batch Prediction API can also be used programmatically. Benefits of using the API over the UI include:
* The request code can be inserted into any production pipeline to sit between pre- and post-scoring steps.
* The code can be triggered by an existing scheduler or in response to events as they take place.
* It is not necessary to create an AI Catalog entry—the API will accept a table, view, or query.
### Query as source {: #query-as-source }
Consider the following when working with the API:
* Batch prediction jobs must be initialized and then added to a job queue.
* Jobs from local files do not begin until data is uploaded.
* For a Snowflake-to-Snowflake job, both ends of the pipeline must be set with Snowflake source and target.
Additional details about the job (deployment, prediction host, columns to be passed through) can be specified as well (see the [Batch Prediction API](batch-prediction-api/index) documentation for a full list of available options).
Below is an example of how DataRobot's Batch Prediction API can be used to score Snowflake data via a basic JDBC connection.
```python
import pandas as pd
import requests
import time
from pandas.io.json import json_normalize
import json
import my_creds
# datarobot parameters
API_KEY = my_creds.API_KEY
USERNAME = my_creds.USERNAME
DEPLOYMENT_ID = my_creds.DEPLOYMENT_ID
DATAROBOT_KEY = my_creds.DATAROBOT_KEY
# replace with the load balancer for your prediction instance(s)
DR_PREDICTION_HOST = my_creds.DR_PREDICTION_HOST
DR_APP_HOST = 'https://app.datarobot.com'
DR_MODELING_HEADERS = {'Content-Type': 'application/json', 'Authorization': 'token %s' % API_KEY}
headers = {'Content-Type': 'text/plain; charset=UTF-8', 'datarobot-key': DATAROBOT_KEY}
url = '{dr_prediction_host}/predApi/v1.0/deployments/{deployment_id}/'\
'predictions'.format(dr_prediction_host=DR_PREDICTION_HOST, deployment_id=DEPLOYMENT_ID)
# snowflake parameters
SNOW_USER = my_creds.SNOW_USER
SNOW_PASS = my_creds.SNOW_PASS
```
You can leverage an existing data connection to connect to a database (see the [data ingest page](sf-project-creation) for an example using the UI). In the example below, the data connection uses a name lookup.
```python
"""
get a data connection by name, return None if not found
"""
def dr_get_data_connection(name):
data_connection_id = None
response = requests.get(
DR_APP_HOST + '/api/v2/externalDataStores/',
headers=DR_MODELING_HEADERS,
)
if response.status_code == 200:
df = pd.io.json.json_normalize(response.json()['data'])[['id', 'canonicalName']]
if df[df['canonicalName'] == name]['id'].size > 0:
data_connection_id = df[df['canonicalName'] == name]['id'].iloc[0]
else:
print('Request failed; http error {code}: {content}'.format(code=response.status_code, content=response.content))
return data_connection_id
data_connection_id = dr_get_data_connection('snow_3_12_0_titanic')
```
A Batch Prediction job needs credentials specified; Snowflake user credentials can be saved securely to the server to run the job. Note that DataRobot privileges are established via the DataRobot API token at the header level of the request or session; the account associated with that token "owns" the created prediction job and must be able to access the deployed model. You can create or look up credentials for the database with the following code snippets.
```python
# get a saved credential set, return None if not found
def dr_get_catalog_credentials(name, cred_type):
if cred_type not in ['basic', 's3']:
print('credentials type must be: basic, s3 - value passed was {ct}'.format(ct=cred_type))
return None
credentials_id = None
response = requests.get(
DR_APP_HOST + '/api/v2/credentials/',
headers=DR_MODELING_HEADERS,
)
if response.status_code == 200:
df = pd.io.json.json_normalize(response.json()['data'])[['credentialId', 'name', 'credentialType']]
if df[(df['name'] == name) & (df['credentialType'] == cred_type)]['credentialId'].size > 0:
credentials_id = df[(df['name'] == name) & (df['credentialType'] == cred_type)]['credentialId'].iloc[0]
else:
print('Request failed; http error {code}: {content}'.format(code=response.status_code, content=response.content))
return credentials_id
# create credentials set
def dr_create_catalog_credentials(name, cred_type, user, password, token=None):
if cred_type not in ['basic', 's3']:
print('credentials type must be: basic, s3 - value passed was {ct}'.format(ct=cred_type))
return None
if cred_type == 'basic':
json = {
"credentialType": cred_type,
"user": user,
"password": password,
"name": name
}
elif cred_type == 's3' and token != None:
json = {
"credentialType": cred_type,
"awsAccessKeyId": user,
"awsSecretAccessKey": password,
"awsSessionToken": token,
"name": name
}
elif cred_type == 's3' and token == None:
json = {
"credentialType": cred_type,
"awsAccessKeyId": user,
"awsSecretAccessKey": password,
"name": name
}
response = requests.post(
url = DR_APP_HOST + '/api/v2/credentials/',
headers=DR_MODELING_HEADERS,
json=json
)
if response.status_code == 201:
return response.json()['credentialId']
else:
print('Request failed; http error {code}: {content}'.format(code=response.status_code, content=response.content))
# get or create a credential set
def dr_get_or_create_catalog_credentials(name, cred_type, user, password, token=None):
cred_id = dr_get_catalog_credentials(name, cred_type)
if cred_id == None:
        return dr_create_catalog_credentials(name, cred_type, user, password, token=token)
else:
return cred_id
credentials_id = dr_get_or_create_catalog_credentials('snow_community_credentials',
'basic', my_creds.SNOW_USER, my_creds.SNOW_PASS)
```
Create a session for submitting the job; once defined, the job is submitted and slotted to run asynchronously. DataRobot returns an HTTP 202 status code upon successful submission, and you can retrieve the current state of the job by querying the API.
```python
session = requests.Session()
session.headers = {
'Authorization': 'Bearer {}'.format(API_KEY)
}
```
A table to hold the results is created in Snowflake with the following SQL statement:
```sql
create or replace TABLE PASSENGERS_SCORED_BATCH_API (
SURVIVED_1_PREDICTION NUMBER(10,9),
SURVIVED_0_PREDICTION NUMBER(10,9),
SURVIVED_PREDICTION NUMBER(38,0),
THRESHOLD NUMBER(6,5),
POSITIVE_CLASS NUMBER(38,0),
PASSENGERID NUMBER(38,0)
);
```
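If you prefer to create this table programmatically rather than in a Snowflake worksheet, the DDL above can be executed through the Snowflake Python connector. This is a minimal sketch; `SNOW_ACCOUNT` is assumed to exist in the same `my_creds` module used above, and the database and schema names are illustrative.

```python
import snowflake.connector
import my_creds

# connection values are placeholders; adjust to your environment
ctx = snowflake.connector.connect(
    user=my_creds.SNOW_USER,
    password=my_creds.SNOW_PASS,
    account=my_creds.SNOW_ACCOUNT,
    database='TITANIC',
    schema='PUBLIC',
)
# create the results table defined above
ctx.cursor().execute("""
create or replace TABLE PASSENGERS_SCORED_BATCH_API (
    SURVIVED_1_PREDICTION NUMBER(10,9),
    SURVIVED_0_PREDICTION NUMBER(10,9),
    SURVIVED_PREDICTION NUMBER(38,0),
    THRESHOLD NUMBER(6,5),
    POSITIVE_CLASS NUMBER(38,0),
    PASSENGERID NUMBER(38,0)
)
""")
ctx.close()
```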
The job specifies the following parameters:
Name | Description
---------- | -----------
Source | `Snowflake JDBC`
Source data | Query results (simple select from passengers)
Source fetch size | `100,000` (max fetch data size)
Job concurrency | 4 prediction core threads requested
Passthrough columns | Keep the surrogate key `PASSENGERID`
Target table | `PUBLIC.PASSENGERS_SCORED_BATCH_API`
`statementType` | _insert_ (data will be inserted into the table)
```python
job_details = {
"deploymentId": DEPLOYMENT_ID,
"numConcurrent": 4,
"passthroughColumns": ["PASSENGERID"],
"includeProbabilities": True,
"predictionInstance" : {
"hostName": DR_PREDICTION_HOST,
"sslEnabled": false,
"apiKey": API_KEY,
"datarobotKey": DATAROBOT_KEY,
},
"intakeSettings": {
"type": "jdbc",
"fetchSize": 100000,
"dataStoreId": data_connection_id,
"credentialId": credentials_id,
#"table": "PASSENGERS_500K",
#"schema": "PUBLIC",
"query": "select * from PASSENGERS"
},
'outputSettings': {
"type": "jdbc",
"table": "PASSENGERS_SCORED_BATCH_API",
"schema": "PUBLIC",
"statementType": "insert",
"dataStoreId": data_connection_id,
"credentialId": credentials_id
}
}
```
Upon successful job submission, the DataRobot response provides a link to check job state and details.
```python
response = session.post(
DR_APP_HOST + '/api/v2/batchPredictions',
json=job_details
)
```
The job may or may not be in the queue, depending on whether other jobs are in front of it. Once launched, it proceeds to initialization and then runs through stages until aborted or completed. You can create a loop to repetitively check the state of the asynchronous job and hold control of a process until the job completes with an `ABORTED` or `COMPLETED` status.
```python
if response.status_code == 202:
job = response.json()
print('queued batch job: {}'.format(job['links']['self']))
while job['status'] == 'INITIALIZING':
time.sleep(3)
response = session.get(job['links']['self'])
response.raise_for_status()
job = response.json()
print('completed INITIALIZING')
if job['status'] == 'RUNNING':
while job['status'] == 'RUNNING':
time.sleep(3)
response = session.get(job['links']['self'])
response.raise_for_status()
job = response.json()
print('completed RUNNING')
print('status is now {status}'.format(status=job['status']))
if job['status'] != 'COMPLETED':
for i in job['logs']:
print(i)
else:
print('Job submission failed; http error {code}: {content}'.format(code=response.status_code, content=response.content))
```
[Code for this exercise is available in GitHub](https://github.com/datarobot-community/ai_engineering/tree/master/databases/snowflake/batch_prediction_api/snow_batch_prediction_api_query.ipynb){ target=_blank }.
### S3 scoring with pre/post SQL (new records) {: #s3-scoring-with-prepost-sql-new-records }
This example highlights using an S3 pipeline between Snowflake sources and targets. Pre- or post-processing in SQL is not required by the approach itself, although this example uses both.
This example shows:
* Scoring only new or changed records, based on pre-scoring retrieval of the last successful scoring run's timestamp.
* A post-scoring process that populates a target table and updates the successful ETL run history.
In the example, data is loaded into an STG schema on Snowflake that exists to support an ETL/ELT pipeline. The scored data is then merged into the target presentation table in the `PUBLIC` schema via a bulk update; bulk updates are used because individual update statements are very slow on Snowflake and other analytic databases compared to traditional row-store operational databases.
The target presentation table contains a single field for reporting purposes from the scored results table (the `SURVIVAL` field). Using S3 allows stage objects to be used for data extract and load, and running these as discrete operations separate from scoring can minimize the time an ETL compute warehouse is up and running during pipeline operations.
Considerations that may result in S3 being part of a scoring pipeline include:
* Leveraging Snowflake's native design to write to S3 (and possibly shred the data into multiple files).
* Using the native bulk insert capability.
* Currently, Snowflake compute warehouses bill a minimum of 60 seconds when a cluster spins up, and then per second after that. The prior methods (above) stream data out and in via JDBC and keep a cluster active throughout the scoring process. Separating extract, scoring, and ingest into discrete steps may reduce the time the compute warehouse is actually running, which can result in cost reductions.
* S3 inputs and scored sets could easily be used to create a point-in-time archive of data.

In this example, a simple `ETL_HISTORY` table shows the scoring job history. The name of the job is `pass_scoring`, and the last three times it ran were March 3rd, 7th, and 11th.

The next job scores any records created or changed at or after the last run timestamp, but before the current job run timestamp. Upon successful completion of the job, a new record is placed into this table.

Consider three example rows from the 500k records in this table:
* Row 1 in this example will not be scored; it has not changed since the prior successful ETL run on the 11th.
* Row 2 will be re-scored, as it was updated on the 20th.
* Row 3 will be scored for the first time, as it was newly created on the 19th.
Following are initial imports and various environment variables for DataRobot, Snowflake, and AWS S3:
```python
import pandas as pd
import requests
import time
from pandas.io.json import json_normalize
import snowflake.connector
import my_creds
# datarobot parameters
API_KEY = my_creds.API_KEY
USERNAME = my_creds.USERNAME
DEPLOYMENT_ID = my_creds.DEPLOYMENT_ID
DATAROBOT_KEY = my_creds.DATAROBOT_KEY
# replace with the load balancer for your prediction instance(s)
DR_PREDICTION_HOST = my_creds.DR_PREDICTION_HOST
DR_APP_HOST = 'https://app.datarobot.com'
DR_MODELING_HEADERS = {'Content-Type': 'application/json', 'Authorization': 'token %s' % API_KEY}
# snowflake parameters
SNOW_ACCOUNT = my_creds.SNOW_ACCOUNT
SNOW_USER = my_creds.SNOW_USER
SNOW_PASS = my_creds.SNOW_PASS
SNOW_DB = 'TITANIC'
SNOW_SCHEMA = 'PUBLIC'
# ETL parameters
JOB_NAME = 'pass_scoring'
```
Similar to the previous example, you must specify credentials to leverage S3. You can create, save, or look up credentials for S3 access with the following code snippets. The account must have privileges to access the same area that the Snowflake Stage object is using to read/write data from (see Snowflake's [Creating the Stage](https://docs.snowflake.com/en/sql-reference/sql/create-stage.html){ target=_blank } article for more information).
```python
# get a saved credential set, return None if not found
def dr_get_catalog_credentials(name, cred_type):
if cred_type not in ['basic', 's3']:
print('credentials type must be: basic, s3 - value passed was {ct}'.format(ct=cred_type))
return None
credentials_id = None
response = requests.get(
DR_APP_HOST + '/api/v2/credentials/',
headers=DR_MODELING_HEADERS,
)
if response.status_code == 200:
df = pd.io.json.json_normalize(response.json()['data'])[['credentialId', 'name', 'credentialType']]
if df[(df['name'] == name) & (df['credentialType'] == cred_type)]['credentialId'].size > 0:
credentials_id = df[(df['name'] == name) & (df['credentialType'] == cred_type)]['credentialId'].iloc[0]
else:
print('Request failed; http error {code}: {content}'.format(code=response.status_code, content=response.content))
return credentials_id
# create credentials set
def dr_create_catalog_credentials(name, cred_type, user, password, token=None):
if cred_type not in ['basic', 's3']:
print('credentials type must be: basic, s3 - value passed was {ct}'.format(ct=cred_type))
return None
if cred_type == 'basic':
json = {
"credentialType": cred_type,
"user": user,
"password": password,
"name": name
}
elif cred_type == 's3' and token != None:
json = {
"credentialType": cred_type,
"awsAccessKeyId": user,
"awsSecretAccessKey": password,
"awsSessionToken": token,
"name": name
}
elif cred_type == 's3' and token == None:
json = {
"credentialType": cred_type,
"awsAccessKeyId": user,
"awsSecretAccessKey": password,
"name": name
}
response = requests.post(
url = DR_APP_HOST + '/api/v2/credentials/',
headers=DR_MODELING_HEADERS,
json=json
)
if response.status_code == 201:
return response.json()['credentialId']
else:
print('Request failed; http error {code}: {content}'.format(code=response.status_code, content=response.content))
# get or create a credential set
def dr_get_or_create_catalog_credentials(name, cred_type, user, password, token=None):
cred_id = dr_get_catalog_credentials(name, cred_type)
if cred_id == None:
        return dr_create_catalog_credentials(name, cred_type, user, password, token=token)
else:
return cred_id
# the 's3' credential type takes an AWS access key ID and secret access key;
# the my_creds attribute names below are placeholders for your own AWS credentials
credentials_id = dr_get_or_create_catalog_credentials('s3_community',
                's3', my_creds.AWS_ACCESS_KEY_ID, my_creds.AWS_SECRET_ACCESS_KEY)
```
Next, create a connection to Snowflake and use the last successful run time and current time to create the bounds for determining which newly created or recently updated rows must be scored:
```python
# create a connection
ctx = snowflake.connector.connect(
user=SNOW_USER,
password=SNOW_PASS,
account=SNOW_ACCOUNT,
database=SNOW_DB,
schema=SNOW_SCHEMA,
protocol='https',
application='DATAROBOT',
)
# create a cursor
cur = ctx.cursor()
# execute sql to get start/end timestamps to use
sql = "select last_ts_scored_through, current_timestamp::TIMESTAMP_NTZ cur_ts " \
"from etl_history " \
"where job_nm = '{job}' " \
"order by last_ts_scored_through desc " \
"limit 1 ".format(job=JOB_NAME)
cur.execute(sql)
# fetch results into dataframe
df = cur.fetch_pandas_all()
start_ts = df['LAST_TS_SCORED_THROUGH'][0]
end_ts = df['CUR_TS'][0]
```
Dump the data out to S3.
```python
# execute sql to dump data into a single file in S3 stage bucket
# AWS single file snowflake limit 5 GB
sql = "COPY INTO @S3_SUPPORT/titanic/community/" + JOB_NAME + ".csv " \
"from " \
"( " \
" select passengerid, pclass, name, sex, age, sibsp, parch, ticket, fare, cabin, embarked " \
" from passengers_500k_ts " \
" where nvl(updt_ts, crt_ts) >= '{start}' " \
" and nvl(updt_ts, crt_ts) < '{end}' " \
") " \
"file_format = (format_name='default_csv' compression='none') header=true overwrite=true single=true;".format(start=start_ts, end=end_ts)
cur.execute(sql)
```
Next, create a session to perform the DataRobot Batch Prediction API scoring job submission and monitor its progress.
```python
session = requests.Session()
session.headers = {
'Authorization': 'Bearer {}'.format(API_KEY)
}
```
The job is defined to take the file dump from Snowflake as input and then create a file with `_scored` appended in the same S3 path. The example specifies a concurrency of `4` prediction cores with passthrough of the surrogate key `PASSENGERID` to be joined on later.
```python
INPUT_FILE = 's3://'+ my_creds.S3_BUCKET + '/titanic/community/' + JOB_NAME + '.csv'
OUTPUT_FILE = 's3://'+ my_creds.S3_BUCKET + '/titanic/community/' + JOB_NAME + '_scored.csv'
job_details = {
'deploymentId': DEPLOYMENT_ID,
'passthroughColumns': ['PASSENGERID'],
'numConcurrent': 4,
"predictionInstance" : {
"hostName": DR_PREDICTION_HOST,
"datarobotKey": DATAROBOT_KEY
},
'intakeSettings': {
'type': 's3',
'url': INPUT_FILE,
'credentialId': credentials_id
},
'outputSettings': {
'type': 's3',
'url': OUTPUT_FILE,
'credentialId': credentials_id
}
}
```
Submit the job for processing and retrieve a URL for monitoring.
```python
response = session.post(
DR_APP_HOST + '/api/v2/batchPredictions',
json=job_details
)
```
Hold control until the job completes.
```python
if response.status_code == 202:
job = response.json()
print('queued batch job: {}'.format(job['links']['self']))
while job['status'] == 'INITIALIZING':
time.sleep(3)
response = session.get(job['links']['self'])
response.raise_for_status()
job = response.json()
print('completed INITIALIZING')
if job['status'] == 'RUNNING':
while job['status'] == 'RUNNING':
time.sleep(3)
response = session.get(job['links']['self'])
response.raise_for_status()
job = response.json()
print('completed RUNNING')
print('status is now {status}'.format(status=job['status']))
if job['status'] != 'COMPLETED':
for log in job['logs']:
print(log)
else:
print('Job submission failed; http error {code}: {content}'.format(code=response.status_code, content=response.content))
```
Upon completion, load the prediction results into the STG schema table `PASSENGERS_SCORED_BATCH_API` via a truncate and bulk load operation.
```python
# multi-statement executions
# https://docs.snowflake.com/en/user-guide/python-connector-api.html#execute_string
# truncate and load STG schema table with scored results
sql = "truncate titanic.stg.PASSENGERS_SCORED_BATCH_API; " \
" copy into titanic.stg.PASSENGERS_SCORED_BATCH_API from @S3_SUPPORT/titanic/community/" + JOB_NAME + "_scored.csv" \
" FILE_FORMAT = 'DEFAULT_CSV' ON_ERROR = 'ABORT_STATEMENT' PURGE = FALSE;"
ctx.execute_string(sql)
```
Finally, create a transaction to update the presentation table with the latest survival scores for the positive class label `1` (survival). The ETL history is updated upon successful completion of all tasks.
```python
# update target presentation table and ETL history table in transaction
sql = \
"begin; " \
"update titanic.public.passengers_500k_ts trg " \
"set trg.survival = src.survived_1_prediction " \
"from titanic.stg.PASSENGERS_SCORED_BATCH_API src " \
"where src.passengerid = trg.passengerid; " \
"insert into etl_history values ('{job}', '{run_through_ts}'); " \
"commit; ".format(job=JOB_NAME, run_through_ts=end_ts)
ctx.execute_string(sql)
```
Rows 2 and 3 are updated with new survival scores as expected.

ETL history is updated and subsequent runs are now based on the (most recent) successful timestamp.

[Code for this example is available in GitHub](https://github.com/datarobot-community/ai_engineering/tree/master/databases/snowflake/batch_prediction_api/snow_batch_prediction_api_query_s3.ipynb){ target=_blank }.
Enhancements to consider:
* Add error handling, scoring or otherwise, that suits your workflow and toolset.
* Incorporate serverless technology, like AWS Lambda, into scoring workflows to kick off a job based on an event, like S3 object creation (see the sketch following this list).
* As data volumes grow, consider the following: Snowflake single-statement dumps and ingests seem to perform best at around 8 threads per cluster node; e.g., [a 2-node Small will not ingest a single file any faster than a 1-node XSmall instance](https://www.doyouevendata.com/2018/12/21/how-to-load-data-into-snowflake-snowflake-data-load-best-practices/){target=_blank }. An XSmall would likely perform best with 8 or more file shreds.
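Building on the serverless enhancement above, the following is a minimal sketch of an AWS Lambda handler that submits a Batch Prediction job whenever a new object lands in S3. The environment variable names, bucket layout, and output prefix are assumptions for illustration; error handling and job monitoring are omitted.

```python
import json
import os
import urllib.request

# Assumed environment variables; set these on the Lambda function configuration.
DR_APP_HOST = 'https://app.datarobot.com'
API_KEY = os.environ['DATAROBOT_API_KEY']
DEPLOYMENT_ID = os.environ['DATAROBOT_DEPLOYMENT_ID']
CREDENTIALS_ID = os.environ['DATAROBOT_S3_CREDENTIALS_ID']


def lambda_handler(event, context):
    # S3 put events carry the bucket and key of the object that triggered the function
    record = event['Records'][0]['s3']
    bucket = record['bucket']['name']
    key = record['object']['key']
    job_details = {
        'deploymentId': DEPLOYMENT_ID,
        'intakeSettings': {
            'type': 's3',
            'url': 's3://{}/{}'.format(bucket, key),
            'credentialId': CREDENTIALS_ID,
        },
        'outputSettings': {
            # write to a separate prefix so the scored file does not re-trigger this function
            'type': 's3',
            'url': 's3://{}/scored/{}'.format(bucket, key),
            'credentialId': CREDENTIALS_ID,
        },
    }
    request = urllib.request.Request(
        DR_APP_HOST + '/api/v2/batchPredictions',
        data=json.dumps(job_details).encode('utf-8'),
        headers={'Content-Type': 'application/json',
                 'Authorization': 'Bearer {}'.format(API_KEY)},
        method='POST',
    )
    with urllib.request.urlopen(request) as response:
        return {'statusCode': response.status, 'body': response.read().decode('utf-8')}
```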
|
sf-server-scoring
|
---
title: Generate Snowflake UDF Scoring Code
description: Use the DataRobot Scoring Code JAR as a user-defined function (UDF) on Snowflake.
---
# Generate Snowflake UDF Scoring Code {: #generate-snowflake-udf-scoring-code }
Scoring Code makes it easy for you to perform predictions with DataRobot models anywhere you want by exporting a model to a Java JAR file. Snowflake user-defined functions (UDFs) allow you to execute arbitrary Java code on Snowflake. DataRobot provides an API for Scoring Code so that you can use it as a UDF on Snowflake without writing any additional code.
To download and execute scoring code with Snowflake UDFs, you must meet the following prerequisites:
* Prepare a DataRobot model that supports Scoring Code for deployment and [create a model package](reg-create) with it.
* Register your Snowflake prediction environment to use with the model.
## Access Scoring Code {: #access-scoring-code }
When you have your model package and prediction environment prepared in DataRobot, you can deploy the model in order to access the Scoring Code for use with Snowflake.
1. Navigate to the **Model Registry** and select your model package. On the **Deployments** tab, select **Create new deployment**.

2. Complete the fields and [configure the deployment](deploy-external-model#deploy-an-external-model-package) as desired. To generate Snowflake UDF Scoring Code, specify your Snowflake prediction environment under the **Inference** header.

3. Once fully configured, select **Create deployment** at the top of the screen.
4. After deploying the model, access the deployment from the inventory and navigate to the **Predictions > Portable Predictions** tab. This is where DataRobot hosts the Scoring Code for the model.

5. Optionally, toggle on **Include prediction explanations** to include prediction explanations with the prediction results, then click **Download**. The Scoring Code JAR file appears in your browser bar.

6. When the Scoring Code download completes, copy the installation script and update it with your Snowflake warehouse, database, schema, and the path to the Scoring Code JAR file, then execute the script.

??? tip "Copy and paste (for regression)"
```
-- Replace with the warehouse to use
USE WAREHOUSE my_warehouse;
-- Replace with the database to use
USE DATABASE my_database;
-- Replace with the schema to use
CREATE SCHEMA IF NOT EXISTS scoring_code_udf_schema;
USE SCHEMA scoring_code_udf_schema;
-- Update this path to match the Scoring Code JAR location
PUT 'file:///path/to/downloaded_scoring_code.jar' '@~/jars/' AUTO_COMPRESS=FALSE;
-- Create the UDF
CREATE OR REPLACE FUNCTION datarobot_udf(RowValue OBJECT)
RETURNS FLOAT
LANGUAGE JAVA
IMPORTS=('@~/jars/downloaded_scoring_code.jar')
HANDLER='com.datarobot.prediction.simple.RegressionPredictor.score';
```
??? tip "Copy and paste (for classification)"
```
-- Replace with the warehouse to use
USE WAREHOUSE my_warehouse;
-- Replace with the database to use
USE DATABASE my_database;
-- Replace with the schema to use
CREATE SCHEMA IF NOT EXISTS scoring_code_udf_schema;
USE SCHEMA scoring_code_udf_schema;
-- Update this path to match the Scoring Code JAR location
PUT 'file:///path/to/downloaded_scoring_code.jar' '@~/jars/' AUTO_COMPRESS=FALSE;
-- Create the UDF
CREATE OR REPLACE FUNCTION datarobot_udf(RowValue OBJECT)
RETURNS OBJECT
LANGUAGE JAVA
IMPORTS=('@~/jars/downloaded_scoring_code.jar')
HANDLER='com.datarobot.prediction.simple.ClassificationPredictor.score';
```
The script uploads the JAR file to a Snowflake stage and creates a UDF for making predictions with Scoring Code.
!!! note
To run these scripts, you *must* use the SnowSQL command provided in the next step (7). You can't execute these scripts in Snowflake's UI.
7. Execute the script in SnowSQL by providing your credentials and the script location:

??? tip "Copy and paste"
```
snowsql --accountname $ACCOUNT_NAME --username $USERNAME --filename $SCRIPT_PATH
```
8. With your UDF successfully created, you can now use Snowflake to score data. Use the UDF in any manner supported by SQL.

??? tip "Copy and paste"
```
/*
Scoring your data
The Scoring Code UDF accepts rows of data as objects. The OBJECT_CONSTRUCT_KEEP_NULL method can be used to turn a table row into an object.
*/
-- Scoring without specifying columns. Data can contain nulls
SELECT my_datarobot_model(OBJECT_CONSTRUCT_KEEP_NULL(*)) FROM source_table;
```
9. After scoring data, you can [upload actuals via the Settings tab](accuracy-settings) in order to enable accuracy monitoring and more.

## Feature considerations {: #feature-considerations }
Consider the following when using Scoring Code as a UDF on Snowflake.
* Keras models cannot be executed in Snowflake.
* Time series Scoring Code is not supported for Snowflake.
* Scoring Code JARs created prior to the release of the Snowflake Scoring Code feature cannot be run in Snowflake.
|
snowflake-sc
|
---
title: Large batch scoring
description: Use the DataRobot Batch Prediction API to mix and match source and target data sources.
---
# DataRobot and Snowflake: Large batch scoring and object storage {: #datarobot-and-snowflake-large-batch-scoring-and-object-storage}
With DataRobot's Batch Prediction API, you can construct jobs that mix and match scoring data sources and scored data destinations across local files, JDBC databases, and cloud object storage options such as AWS S3, Azure Blob, and Google GCS. For examples of leveraging the Batch Prediction API via the UI, as well as raw HTTP endpoint requests to create batch scoring jobs, see:
* [Server-side scoring](sf-server-scoring) with Snowflake
* [Batch Prediction API](batch-prediction-api/index) documentation
* [Python SDK library functions](https://datarobot-public-api-client.readthedocs-hosted.com/en/v2.24.0/entitiesbatch_predictions.html){ target=_blank }
The critical path in a scoring pipeline is typically the amount of resources available to actually run a deployed machine learning model. Although you can extract data from a database quickly, scoring throughput is limited to available scoring compute. Inserts to shredded columnar cloud databases (e.g., Snowflake, Synapse) are also most efficient when done with native object storage bulk load operations, such as [`COPY INTO`](https://docs.snowflake.com/en/sql-reference/sql/copy-into-table.html){ target=_blank } when using a Snowflake Stage. An added benefit, particularly in Snowflake, is that warehouse billing can be limited to running just a bulk load vs. a continual set of JDBC inserts during a job. This reduces warehouse running time and thus warehouse compute costs. Snowflake and Synapse adapters can leverage bulk extract and load operations to object storage, as well as object storage scoring pipelines.
## Snowflake adapter integration {: #snowflake-adapter-integration }
The examples provided below leverage some of the credential management Batch Prediction API helper code presented in the [server-side scoring](sf-server-scoring) example.
Rather than using the Python SDK (which may be preferred for simplicity), this section demonstrates how to use the raw API with minimal dependencies. As scoring datasets grow larger, the object storage approach described here can be expected to reduce both the end-to-end scoring time and the database write time.
Since the Snowflake adapter type leverages object storage as an intermediary, batch jobs require two sets of credentials: one for Snowflake and one for the storage layer, like S3. Also, similar to jobs, adapters can be mixed and matched.
## Snowflake JDBC to DataRobot to S3 stage to Snowflake {: #snowflake-jdbc-to-datarobot-to-s3-stage-to-snowflake }

This first example leverages the prior existing JDBC adapter intake type as well as the Snowflake adapter output type, which uses the Snowflake stage object on bulk load. Only job details are specified below; see the full code on [GitHub](https://github.com/datarobot-community/ai_engineering/blob/master/databases/snowflake/snowflake_adapter/Snowflake_Adapter_Type.ipynb){ target=_blank }. The job explicitly provides all values, although many have defaults that could be used without specification. The job scores Titanic passengers with the survival model, specifying the input table by name.
```python
job_details = {
"deploymentId": DEPLOYMENT_ID,
"numConcurrent": 16,
"passthroughColumns": ["PASSENGERID"],
"includeProbabilities": True,
"predictionInstance" : {
"hostName": DR_PREDICTION_HOST,
"datarobotKey": DATAROBOT_KEY
},
"intakeSettings": {
"type": "jdbc",
"dataStoreId": data_connection_id,
"credentialId": snow_credentials_id,
"table": "PASSENGERS_6M",
"schema": "PUBLIC",
},
'outputSettings': {
"type": "snowflake",
"externalStage": "S3_SUPPORT",
"dataStoreId": data_connection_id,
"credentialId": snow_credentials_id,
"table": "PASSENGERS_SCORED_BATCH_API",
"schema": "PUBLIC",
"cloudStorageType": "s3",
"cloudStorageCredentialId": s3_credentials_id,
"statementType": "insert"
}
}
```
## Snowflake to S3 stage to DataRobot to S3 stage to Snowflake {: #snowflake-to-s3-stage-to-datarobot-to-s3-stage-to-snowflake }

This second example uses the Snowflake adapter for both intake and output operations, with data dumped to an object stage, scored through an S3 pipeline, and loaded in bulk back from stage. **This is the recommended flow for performance and cost**.
* The stage pipeline (from S3 to S3) will keep a constant flow of scoring requests against Dedicated Prediction Engine (DPE) scoring resources and will fully saturate their compute.
* No matter how long the scoring component takes, the Snowflake compute resources only need to run for the duration of the initial extract and, once all data is scored, for a single final bulk load of the scored data. This maximizes the efficiency of the load, which is beneficial for the costs of running all Snowflake compute resources.
The job in this example is similar to the first one; to illustrate the option, a SQL query is used as input rather than a source table name.
```python
job_details = {
"deploymentId": DEPLOYMENT_ID,
"numConcurrent": 16,
"chunkSize": "dynamic",
"passthroughColumns": ["PASSENGERID"],
"includeProbabilities": True,
"predictionInstance" : {
"hostName": DR_PREDICTION_HOST,
"datarobotKey": DATAROBOT_KEY
},
"intakeSettings": {
"type": "snowflake",
"externalStage": "S3_SUPPORT",
"dataStoreId": data_connection_id,
"credentialId": snow_credentials_id,
"query": "select * from PASSENGERS_6m",
"cloudStorageType": "s3",
"cloudStorageCredentialId": s3_credentials_id
},
'outputSettings': {
"type": "snowflake",
"externalStage": "S3_SUPPORT",
"dataStoreId": data_connection_id,
"credentialId": snow_credentials_id,
"table": "PASSENGERS_SCORED_BATCH_API",
"schema": "PUBLIC",
"cloudStorageType": "s3",
"cloudStorageCredentialId": s3_credentials_id,
"statementType": "insert"
}
}
```
[Code for this exercise is available in GitHub](https://github.com/datarobot-community/ai_engineering/blob/master/databases/snowflake/snowflake_adapter/Snowflake_Adapter_Type.ipynb){ target=_blank}.
If running large scoring jobs with Snowflake or Azure Synapse, it's best to take advantage of the related adapter. Using one of these adapters for both intake and output ensures the scoring pipelines scale as data volumes increase in size.
|
sf-large-batch
|
---
title: Ingest data with AWS Athena
description: Ingest AWS Athena and Parquet data for machine learning.
---
# Ingest data with AWS Athena {: #ingest-data-with-aws-athena }
Multiple big data formats now offer different approaches to compressing large amounts of data for storage and analytics; some of these formats include ORC, Parquet, and Avro. Using and querying these datasets can present some challenges. This section shows one way for DataRobot to ingest data in Apache Parquet format at rest in AWS S3. Similar techniques can be applied in other cloud environments.
## Parquet overview {: #parquet-overview }
[Parquet](https://en.wikipedia.org/wiki/Apache_Parquet){ target=_blank } is an open source columnar data storage format. It is often assumed that Parquet's primary value is compression; however, compression adds a CPU cost to both compress and decompress the data, and there is no speed advantage when all of the data is needed. Snowflake addresses this in an article [showing several approaches to load 10TB of benchmark data](https://community.snowflake.com/s/article/How-to-Load-Terabytes-Into-Snowflake-Speeds-Feeds-and-Techniques){ target=_blank }.

Snowflake demonstrates the load of full data records to be far higher for a simple CSV format. So what is the advantage of Parquet?
Columnar data storage offers little to no advantage when you are interested in a full record. The more columns requested, the more work must be done to read and uncompress them. This is why the full data exercise displayed above shows such a high performance for basic CSV files. However, selecting a subset of the data is where columnar really shines. If there are 50 columns of data in a loan dataset and the only one of interest is the `loan_id`, reading a CSV file will require reading 100% of the data file. However, reading a parquet file requires reading only 1 of 50 columns. Assume for simplicity's sake that all of the columns take up exactly the same space—this would translate into needing to read only 2% of the data.
You can make further read reductions by partitioning the data. To do so, create a path structure based on data values for a field. The SQL engine `WHERE` clause is applied to the folder path structure to decide whether a Parquet file inside it needs to be read. For example, you could partition and store daily files in a structure of `YYYY/MM/DD` for a loans datasource:
`loans/2020/1/1/jan1file.parquet`
`loans/2020/1/2/jan2file.parquet`
`loans/2020/1/3/jan3file.parquet`
The "hive style" of this would include the field name in the directory (partition):
`loans/year=2020/month=1/day=1/jan1file.parquet`
`loans/year=2020/month=1/day=2/jan2file.parquet`
`loans/year=2020/month=1/day=3/jan3file.parquet`
If the original program was only interested in the `loan_id`, and specifically those `loan_id` values from January 2, 2020 (one of the three daily partitions above), then the 2% read would be reduced further still. Evenly distributed, this would reduce the read and decompress operation down to just 0.67% of the data, resulting in a faster read, a faster return of the data, and a lower bill for the resources required to retrieve the data.
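As a concrete illustration of the read savings, the sketch below writes a small hive-partitioned Parquet dataset and then reads back a single column from a single partition. It assumes pandas with the pyarrow engine installed; the path and column names are placeholders.

```python
import pandas as pd

# a small illustrative dataset with the partitioning columns included
df = pd.DataFrame({
    'loan_id': [1, 2, 3],
    'loan_amnt': [10000, 12000, 9000],
    'year': [2020, 2020, 2020],
    'month': [1, 1, 2],
})

# write one directory per year/month value: loans_parquet/year=2020/month=1/...
df.to_parquet('loans_parquet', engine='pyarrow', partition_cols=['year', 'month'])

# read back only loan_id for January 2020; other columns and partitions are skipped
jan_ids = pd.read_parquet(
    'loans_parquet',
    engine='pyarrow',
    columns=['loan_id'],
    filters=[('year', '=', 2020), ('month', '=', 1)],
)
print(jan_ids)
```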
## Data for project creation {: #data-for-project-creation }
Find the data used in this page's examples [on GitHub](https://github.com/datarobot-community/ai_engineering/blob/master/databases/athena/athena_demo_file_create.ipynb){ target=_blank }. The dataset uses 20,000 records of loan data from LendingClub and was uploaded to S3 using the [AWS Command Line Interface](https://aws.amazon.com/cli/){ target=_blank }.
```
aws --profile support s3 ls s3://engineering/athena --recursive | awk '{print $4}'
athena/
athena/loan_credit/20k_loans_credit_data.csv
athena/loan_history/year=2015/1/1020de8e664e4584836c3ec603c06786.parquet
athena/loan_history/year=2015/1/448262ad616e4c28b2fbd376284ae203.parquet
athena/loan_history/year=2015/2/5e956232d0d241558028fc893a90627b.parquet
athena/loan_history/year=2015/2/bd7153e175d7432eb5521608aca4fbbc.parquet
athena/loan_history/year=2016/1/d0220d318f8d4cfd9333526a8a1b6054.parquet
athena/loan_history/year=2016/1/de8da11ba02a4092a556ad93938c579b.parquet
athena/loan_history/year=2016/2/b961272f61544701b9780b2da84015d9.parquet
athena/loan_history/year=2016/2/ef93ffa9790c42978fa016ace8e4084d.parquet
```
`20k_loans_credit_data.csv` contains credit scores and histories for every loan. Loans are partitioned by year and month, with only the year in hive-style format, to demonstrate the steps for working with either format within AWS Athena. Multiple Parquet files are represented within the `YYYY/MM` structure, potentially representing different days a loan was created. All `.parquet` files contain loan application and repayment data. This data is in a bucket in the `AWS Region US-East-1` (N. Virginia).
## AWS Athena {: #aws-athena }
AWS Athena is a managed AWS service that provides serverless ANSI SQL access to S3 objects. It uses Apache Presto and can read the following file formats:
* CSV
* TSV
* JSON
* ORC
* Apache Parquet
* Avro
Athena also supports compressed data in Snappy, Zlib, LZO, and GZIP formats, and it charges on a pay-per-query model based on the amount of data read. AWS also provides [an article](https://aws.amazon.com/blogs/big-data/analyzing-data-in-s3-using-amazon-athena/){ target=_blank } on using Athena against both regular text files and Parquet, describing the amount of data read, time taken, and cost spent for a query:

## AWS Glue {: #aws-glue }
AWS Glue is an Extract, Transform, Load (ETL) tool that supports the workflow captured in this example; however, no ETL jobs are constructed or scheduled here. Instead, Glue is used to discover the files and structure in the hosted S3 bucket and to apply a high-level schema to the contents, so that Athena understands how to read and query them. Glue stores contents in a hive-like meta store. Hive DDL can be written explicitly, but this example assumes a large number of potentially different files and leverages Glue's [crawler](https://docs.aws.amazon.com/glue/latest/dg/add-crawler.html){ target=_blank } to discover schemas and define tables.
In AWS Glue, you create a crawler and point it at an S3 bucket. The crawler is set to output its data into an [AWS Glue Data Catalog](https://docs.aws.amazon.com/glue/latest/dg/populate-data-catalog.html){ target=_blank }, which Athena then leverages. The Glue job should be created in the same region as the AWS S3 bucket (`US-East-1` for the example on this page). This process is outlined below.
1. Click **Add crawler** in the AWS Glue service in the AWS console to add a crawler job.

2. Name the crawler.

3. Choose the **Data stores** type and specify the bucket of interest.


4. Choose or create an Identity and Access Management (IAM) role for the crawler. Note that managing IAM is out of scope for this example, but you can reference AWS documentation for more information about IAM privileges.

5. Set the frequency to run on demand, or update as necessary to meet requirements.

6. The crawler-discovered metadata needs to write to a destination. Choose a new or existing database to serve as a catalog.

Crawler creation can now be completed. A prompt asks if the on-demand crawler should be run now; choose **Yes**. In this example, you can see the crawler has discovered and created two tables for the paths: `loan_credit` and `loan_history`.

The log shows the created tables as well as partitions for the `loan_history`.

The year partition was left in Hive format while the month was not, to show what happens if this methodology is not applied. Glue assigns it a generic name.
7. Navigate to tables and open `loan_history`.

8. Choose to edit the schema and click on the column name to rename the secondary partition to month and save.

This table is now available for querying in Athena.
## Create a project in DataRobot {: #create-a-project-in-datarobot }
This section outlines four methods for starting a project with data queried through Athena. All programmatic methods will use the Python SDK and some helper functions as defined in these [DataRobot Community GitHub examples](https://github.com/datarobot-community/ai_engineering/blob/master/databases/athena/athena_demo_create_project.ipynb){ target=_blank }.
The four methods of providing data are:
* [JDBC Driver](#jdbc-driver)
* [Retrieve SQL results locally](#retrieve-sql-results-locally)
* [Retrieve S3 CSV results locally](#retrieve-s3-csv-results-locally)
* [AWS S3 bucket with a signed URL](#aws-s3-bucket-with-a-signed-url)
### JDBC Driver {: #jdbc-driver }
You can install JDBC drivers and use them with DataRobot to ingest data (contact Support for installation assistance; driver installation is not addressed in this workflow). As of DataRobot 6.0 for the managed AI Platform offering, version 2.0 of the JDBC driver is available; specifically, 2.0.5 is installed and available on the cloud.
A catalog item dataset can be constructed by navigating to **AI Catalog** **>** **Add New Data** and then selecting **Data Connection** **> Add a new data connection**.
For the purposes of this example, the Athena JDBC driver connection is set up to explicitly specify the address. `Awsregion` and `S3OutputLocation` (required) are also specified. Once configured, query results write to this location as a CSV file.

Authentication takes place with an `AWSAccessKey` and `AWSSecretAccessKey` for user and password on the last step. As AWS users often have access to many services, including the ability to spin up many resources, a best practice is to create an IAM account with permissions limited to querying Athena and accessing S3, and use that account here.
After creating the data connection, select it in the **Add New Data from Data Connection** modal and use it to create an item and project.
### Retrieve SQL results locally {: #retrieve-sql-results-locally }
The snippet below sends a query to retrieve the first 100 records of loan history from the sample dataset. The results are provided back in a dictionary after paginating through the result set from Athena and loading it to local memory. You can then load the results to a dataframe, manipulate it to engineer new features, and push it into DataRobot to create a new project. The `s3_out` variable is a required parameter for Athena, which is where Athena writes CSV query results. This file is used in subsequent examples.
```python
athena_client = session.client('athena')
database = 'community_athena_demo_db'
s3_out = 's3://engineering/athena/output/'
query = "select * from loan_history limit 100"
query_results = fetchall_athena_sql(query, athena_client, database, s3_out)
# Convert to dataframe to view and manipulate
df = pd.DataFrame(query_results)
df.head(2)
proj = dr.Project.create(sourcedata=df,
project_name='athena load query')
# Continue work with this project via the DataRobot python package, or work in GUI using the link to the project printed below
print(DR_APP_ENDPOINT[:-7] + 'projects/{}'.format(proj.id))
```
DataRobot only recommends this method for smaller-sized datasets; it may be both easier and faster to simply download the data as a file rather than spool it back in paginated query results. The method uses a pandas dataframe for convenience and ease of potential data manipulation and feature engineering; it is not required for working with the data or creating a DataRobot project. Additionally, note that the machine that this code runs on requires adequate memory to work with a pandas dataframe for the size of the dataset being used in this example.
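The `fetchall_athena_sql` helper is defined in the linked GitHub notebook. For orientation, the following is a simplified sketch of what such a helper can look like with boto3; it is not the exact community implementation, and production code should add error handling and backoff.

```python
import time

def fetchall_athena_sql(query, athena_client, database, s3_output):
    """Run a query in Athena, wait for completion, and return rows as a list of dicts."""
    execution = athena_client.start_query_execution(
        QueryString=query,
        QueryExecutionContext={'Database': database},
        ResultConfiguration={'OutputLocation': s3_output},
    )
    execution_id = execution['QueryExecutionId']
    # poll until the query finishes
    while True:
        status = athena_client.get_query_execution(QueryExecutionId=execution_id)
        state = status['QueryExecution']['Status']['State']
        if state in ('SUCCEEDED', 'FAILED', 'CANCELLED'):
            break
        time.sleep(2)
    if state != 'SUCCEEDED':
        raise RuntimeError('Athena query finished with state {}'.format(state))
    # page through the results; the first row of the first page holds the column headers
    rows, columns = [], None
    paginator = athena_client.get_paginator('get_query_results')
    for page in paginator.paginate(QueryExecutionId=execution_id):
        data = page['ResultSet']['Rows']
        if columns is None:
            columns = [cell.get('VarCharValue') for cell in data[0]['Data']]
            data = data[1:]
        for row in data:
            rows.append({col: cell.get('VarCharValue')
                         for col, cell in zip(columns, row['Data'])})
    return rows
```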
### Retrieve S3 CSV results locally {: #retrieve-s3-csv-results-locally }
The snippet below shows a more complicated query than the method above. It pulls all loans and joins CSV credit history data with Parquet loan history data. Upon completion, the S3 results file itself is downloaded to a local Python environment. From here, additional processing can be performed or the file can be pushed directly to DataRobot for a new project as shown in the snippet.
```python
athena_client = session.client('athena')
s3_client = session.client('s3')
database = 'community_athena_demo_db'
s3_out_bucket = 'engineering'
s3_out_path = 'athena/output/'
s3_out = 's3://' + s3_out_bucket + '/' + s3_out_path
import os
local_path = os.getcwd()  # local working directory for the downloaded results file
query = "select lh.loan_id, " \
"lh.loan_amnt, lh.term, lh.int_rate, lh.installment, lh.grade, lh.sub_grade, " \
"lh.emp_title, lh.emp_length, lh.home_ownership, lh.annual_inc, lh.verification_status, " \
"lh.pymnt_plan, lh.purpose, lh.title, lh.zip_code, lh.addr_state, lh.dti, " \
"lh.installment / (lh.annual_inc / 12) as mnthly_paymt_to_income_ratio, " \
"lh.is_bad, " \
"lc.delinq_2yrs, lc.earliest_cr_line, lc.inq_last_6mths, lc.mths_since_last_delinq, lc.mths_since_last_record, " \
"lc.open_acc, lc.pub_rec, lc.revol_bal, lc.revol_util, lc.total_acc, lc.mths_since_last_major_derog " \
"from community_athena_demo_db.loan_credit lc " \
"join community_athena_demo_db.loan_history lh on lc.loan_id = lh.loan_id"
s3_file = fetch_athena_file(query, athena_client, database, s3_out, s3_client, local_path)
# get results file from S3
s3_client.download_file(s3_out_bucket, s3_out_path + s3_file, local_path + '/' + s3_file)
proj = dr.Project.create(local_path + '/' + s3_file,
project_name='athena load file')
# further work with project via the python API, or work in GUI (link to project printed below)
print(DR_APP_ENDPOINT[:-7] + 'projects/{}'.format(proj.id))
```
### AWS S3 bucket with a signed URL {: #aws-s3-bucket-with-a-signed-url}
Another method for creating a project in DataRobot is to ingest data from S3 using URL ingest.
There are several ways this can be done based on the data, environment, and configuration used.
This example leverages a private dataset on the cloud and creates a Signed URL for use in DataRobot.
| Dataset | DataRobot Environment | Approach | Description |
| -------- | ----------- | -------- | --------------------- |
| Public | Local install, Cloud | Public | If a dataset is in a public bucket, the direct HTTP link to the file object can be ingested. |
| Private | Local install | Global IAM role | You can install DataRobot with an IAM role granted to the DataRobot service account that has its own access privileges to S3. Any URL passed in that the DataRobot service account can see can be used to ingest data. |
| Private | Local install | IAM impersonation | You can implement finer-grained security control by having DataRobot assume the role and S3 privileges of a user. This requires LDAP authentication and LDAP fields containing S3 information be made available. |
| Private | Local install, Cloud | Signed S3 URL | AWS users can create a signed URL to an S3 object, providing a temporary link that expires after a specified amount of time. |
The snippet below builds on the work presented in the method to [retrieve S3 CSV results locally](#retrieve-s3-csv-results-locally).
Rather than download the file to a local environment, you can use AWS credentials to sign the URL for temporary usage.
The response variable contains a link to the results file, with an authentication string good for 3600 seconds.
Anyone with the entire string URL will be able to access the file for the duration requested.
In this way, rather than downloading the results locally, a DataRobot project can be initiated by referencing the URL value.
```python
response = s3_client.generate_presigned_url('get_object',
Params={'Bucket': s3_out_bucket,
'Key': s3_out_path + s3_file},
ExpiresIn=3600)
proj = dr.Project.create(response,
project_name='athena signed url')
# further work with project via the python API, or work in GUI (link to project printed below)
print(DR_APP_ENDPOINT[:-7] + 'projects/{}'.format(proj.id))
```
Helper functions and full code are available in the [DataRobot Community GitHub repo](https://github.com/datarobot-community/ai_engineering/tree/master/databases/athena){ target=_blank }.
## Wrap-up {: #wrap-up }
After using any of the methods detailed above, your data should be ingested in DataRobot. AWS Athena and Apache Presto have enabled SQL against varied data sources to produce results that can be used for data ingestion. Similar approaches can be used to work with this type of input data in Azure and Google cloud services as well.
|
ingest-athena
|
---
title: Path-based routing to PPS
description: Path-based routing to Portable Prediction Servers on AWS.
---
# Path-based routing to PPSs on AWS {: #path-based-routing-to-ppss-on-aws }
Using DataRobot MLOps, users can deploy DataRobot models into their own Kubernetes clusters—managed or Self-Managed AI Platform—using Portable Prediction Servers (PPSs). A PPS is a Docker container that contains a DataRobot model with a monitoring agent, and can be deployed using container orchestration tools such as Kubernetes. Then you can use the [monitoring](monitor/index) and [governance](governance/index) capabilities of MLOps.
When deploying multiple PPSs in the same Kubernetes cluster, you often want to have a single IP address as the entry point to all of the PPSs. A typical approach to this is path-based routing, which can be achieved using different Kubernetes Ingress Controllers. Some of the existing approaches to this include [Traefik](https://github.com/traefik/traefik){ target=_blank }, [HAProxy](https://haproxy-ingress.github.io){ target=_blank }, and [NGINX](https://www.nginx.com/products/nginx/kubernetes-ingress-controller){ target=_blank }.
The following sections describe how to use the NGINX Ingress controller for path-based routing to PPSs deployed on Amazon EKS.
## Before you start {: #before-you-start }
There are some prerequisites to interacting with AWS and the underlying services. If any (or all) of these tools are already installed and configured for you, you can skip the corresponding steps. See [Getting started with Amazon EKS – eksctl](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html){ target=_blank } for detailed instructions.
### Install necessary tools {: #install-necessary-tools }
1. Install the AWS CLI, version 2.
2. Configure your AWS CLI credentials.
3. Install eksctl.
4. Install and configure kubectl (CLI for Kubernetes clusters).
5. Check that you successfully installed the tools.

### Set up PPS containers {: #set-up-pps-containers }
This procedure assumes that you have created and locally tested PPS containers for DataRobot AutoML models and pushed them to Amazon Elastic Container Registry (ECR). See [Deploy models on AWS EKS](deploy-dr-models-on-aws) for instructions.
This walkthrough is based on two PPSs created with the models of linear regression and image classification use cases, using a Kaggle [housing prices dataset](https://www.kaggle.com/c/home-data-for-ml-course/data) and [Food 101 dataset](https://www.kaggle.com/dansbecker/food-101), respectively.
The first PPS (housing prices) contains an eXtreme Gradient Boosted Trees Regressor (Gamma Loss) model. The second PPS (image binary classification: hot dog / not hot dog) contains a SqueezeNet Image Pretrained Featurizer + Keras Slim Residual Neural Network Classifier using Training Schedule model.
The latter model was trained using DataRobot Visual AI functionality.
## Create an Amazon EKS cluster {: #create-an-amazon-eks-cluster }
With the [Docker images stored in ECR](deploy-dr-models-on-aws#push-the-docker-image-to-amazon-ecr), you can spin up an Amazon EKS cluster. The EKS cluster needs a VPC with either of the following:
* Two public subnets and two private subnets
* Three public subnets
Amazon EKS requires subnets in at least two Availability Zones. A VPC with public and private subnets is recommended so that Kubernetes can create public load balancers in the public subnets to control traffic to the pods that run on nodes in private subnets.
1. Optionally, create or choose two public and two private subnets in your VPC. Make sure that “Auto-assign public IPv4 address” is enabled for the public subnets.
!!! note
The **eksctl** tool creates all necessary subnets behind the scenes if you don’t provide the corresponding `--vpc-private-subnets` and `--vpc-public-subnets` parameters.
2. Create the cluster.
```bash
eksctl create cluster \
--name multi-app \
--vpc-private-subnets=subnet-XXXXXXX,subnet-XXXXXXX \
--vpc-public-subnets=subnet-XXXXXXX,subnet-XXXXXXX \
--nodegroup-name standard-workers \
--node-type t3.medium \
--nodes 2 \
--nodes-min 1 \
--nodes-max 3 \
--ssh-access \
--ssh-public-key my-public-key.pub \
--managed
```
**Notes**
* Usage of the `--managed` parameter enables [Amazon EKS-managed nodegroups](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html){ target=_blank }. This feature automates the provisioning and lifecycle management of nodes (EC2 instances) for Amazon EKS Kubernetes clusters. You can provision optimized groups of nodes for your clusters, and EKS keeps the nodes up-to-date with the latest Kubernetes and host OS versions. The **eksctl** tool makes it possible to choose the specific size and instance type family via command line flags or config files.
* Although `--ssh-public-key` is optional, it is highly recommended that you specify it when you create your node group with a cluster. This option enables SSH access to the nodes in your managed node group. Enabling SSH access allows you to connect to your instances and gather diagnostic information if there are issues. You cannot enable remote access after the node group is created.
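If you don't already have a key pair to pass to `--ssh-public-key`, you can generate one locally before creating the cluster; the key type and size below are only an illustration:
```bash
# Generate a key pair; pass the resulting .pub file to --ssh-public-key
ssh-keygen -t rsa -b 2048 -f my-public-key -N ""
```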
Cluster provisioning usually takes between 10 and 15 minutes and results in the following:

3. When your cluster is ready, test that your kubectl configuration is correct:
```bash
kubectl get svc
```

## Deploy the NGINX Ingress controller {: #deploy-the-nginx-ingress-controller }
AWS Elastic Load Balancing supports three types of load balancers: Application Load Balancers (ALB), Network Load Balancers (NLB), and Classic Load Balancers (CLB). See [Elastic Load Balancing features](https://aws.amazon.com/elasticloadbalancing/features/?nc=sn&loc=2){ target=_blank } for details.
The NGINX Ingress controller uses NLB on AWS. NLB is best suited for load balancing of TCP, UDP, and TLS traffic when extreme performance is required. Operating at the connection level ([Layer 4 of the OSI model](https://en.wikipedia.org/wiki/OSI_model#Layer_4:_Transport_Layer){ target=_blank }), NLB routes traffic to targets within Amazon VPC and is capable of handling millions of requests per second while maintaining ultra-low latencies. NLB is also optimized to handle sudden and volatile traffic patterns.
Deploy the NGINX Ingress controller (this manifest file also launches the NLB):
```
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/aws/deploy.yaml
```
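You can optionally confirm that the controller pods are running and that the NLB-backed service has been provisioned in the `ingress-nginx` namespace, for example:
```bash
kubectl get pods -n ingress-nginx
kubectl get svc -n ingress-nginx
```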
## Create and deploy services to Kubernetes {: #create-and-deploy-services-to-kubernetes }
1. Create a Kubernetes namespace:
```
kubectl create namespace aws-tlb-namespace
```
2. Save the following contents to a `yaml` file on your local machine (in this case, `house-regression-deployment.yaml`), replacing the values for your project, for example:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: house-regression-deployment
namespace: aws-tlb-namespace
labels:
app: house-regression-app
spec:
replicas: 3
selector:
matchLabels:
app: house-regression-app
template:
metadata:
labels:
app: house-regression-app
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
containers:
- name: house-regression-model
image: <your_image_in_ECR>
ports:
- containerPort: 80
```
3. Save the following contents to a `yaml` file on your local machine (in this case, `house-regression-service.yaml`), replacing the values for your project, for example:
```yaml
apiVersion: v1
kind: Service
metadata:
name: house-regression-service
namespace: aws-tlb-namespace
labels:
app: house-regression-app
spec:
selector:
app: house-regression-app
ports:
- protocol: TCP
port: 8080
targetPort: 8080
type: NodePort
```
4. Create a Kubernetes service and deployment:
```
kubectl apply -f house-regression-deployment.yaml
kubectl apply -f house-regression-service.yaml
```
5. Save the following contents to a `yaml` file on your local machine (in this case, `hot-dog-deployment.yaml`), replacing the values for your project, for example:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: hot-dog-deployment
namespace: aws-tlb-namespace
labels:
app: hot-dog-app
spec:
replicas: 3
selector:
matchLabels:
app: hot-dog-app
template:
metadata:
labels:
app: hot-dog-app
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
containers:
- name: hot-dog-model
image: <your_image_in_ECR>
ports:
- containerPort: 80
```
6. Save the following contents to a `yaml` file on your local machine (in this case, `hot-dog-service.yaml`), replacing the values for your project, for example:
```yaml
apiVersion: v1
kind: Service
metadata:
name: hot-dog-service
namespace: aws-tlb-namespace
labels:
app: hot-dog-app
spec:
selector:
app: hot-dog-app
ports:
- protocol: TCP
port: 8080
targetPort: 8080
type: NodePort
```
7. Create a Kubernetes service and deployment:
```
kubectl apply -f hot-dog-deployment.yaml
kubectl apply -f hot-dog-service.yaml
```
8. View all resources that exist in the namespace:
```
kubectl get all -n aws-tlb-namespace
```

## Create and deploy Ingress resource for path-based routing {: #create-and-deploy-ingress-resource-for-path-based-routing }
1. Save the following contents to a `yaml` file on your local machine (in this case, `nginx-redirect-ingress.yaml`), replacing the values for your project, for example:
```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: nginx-redirect-ingress
namespace: aws-tlb-namespace
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$2
labels:
app: nginx-redirect-ingress
spec:
rules:
- http:
paths:
- path: /house-regression(/|$)(.*)
backend:
serviceName: house-regression-service
servicePort: 8080
- path: /hot-dog(/|$)(.*)
backend:
serviceName: hot-dog-service
servicePort: 8080
```
!!! note
The `nginx.ingress.kubernetes.io/rewrite-target` annotation rewrites the URL before forwarding the request to the backend pods. As a result, the paths */house-regression/some-house-path* and */hot-dog/some-dog-path* transform to */some-house-path* and */some-dog-path*, respectively.
2. Create Ingress for path-based routing:
```
kubectl apply -f nginx-redirect-ingress.yaml
```
3. Verify that Ingress has been successfully created:
```
kubectl get ingress/nginx-redirect-ingress -n aws-tlb-namespace
```

4. Optionally, use the following if you want to access the detailed output about this ingress:
```
kubectl describe ingress/nginx-redirect-ingress -n aws-tlb-namespace
```

Note the value of `Address` in the output for the next two scoring requests.
5. Score the *house-regression* model:
```
curl -X POST http://<ADDRESS>/house-regression/predictions -H "Content-Type: text/csv" --data-binary @kaggle_house_test_dataset_10.csv
```

6. Score the *hot-dog* model:
```
curl -X POST http://<ADDRESS>/hot-dog/predictions -H "Content-Type: text/csv; charset=UTF-8" --data-binary @for_pred.csv
```
!!! note
`for_pred.csv` is a CSV file containing a single column with a header; the value in that column is a Base64-encoded image.
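For illustration, such a file could be created from a local image on the command line; the `image` column header and file name below are assumptions, so use the feature name the model expects:
```bash
# Hypothetical example: build a one-column CSV whose single value is a Base64-encoded image
echo "image" > for_pred.csv
base64 hot-dog.jpg | tr -d '\n' >> for_pred.csv
```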
Original photo for prediction (downloaded from [here](https://unsplash.com/photos/8pK37xtN4bo){ target=_blank }):

The image is predicted to be a hot dog:

Another photo for prediction (downloaded from [here](https://unsplash.com/photos/rKX8mC89CtY){ target=_blank }):

The image is predicted not to be a hot dog:

## Clean up {: #clean-up }
1. Remove the sample services, deployments, pods, and namespaces:
```
kubectl delete namespace aws-tlb-namespace
kubectl delete namespace ingress-nginx
```
2. Delete the cluster:
```
eksctl delete cluster --name multi-app
```

## Wrap-up {: #wrap-up }
Deploying multiple Kubernetes services behind a single IP address minimizes the number of load balancers needed and simplifies application maintenance. Kubernetes Ingress Controllers make this possible.
This walkthrough described how to set up path-based routing to multiple Portable Prediction Servers (PPSs) deployed on the Amazon EKS platform, implemented via the NGINX Ingress Controller.
|
path-based-routing-to-pps-on-aws
|
---
title: Score Snowflake data on AWS EMR Spark
description: Scoring Snowflake data using DataRobot Scoring Code on AWS EMR Spark.
---
# Score Snowflake data on AWS EMR Spark {: #score-snowflake-data-on-aws-emr-spark }
DataRobot provides exportable Scoring Code that you can use to score millions of records on Spark. This topic shows how to do so with Snowflake as the data source and target. The steps can be used as a template you can modify to create Spark scoring jobs with different sources and targets.
## About the technologies {: #about-the-technologies }
Click a tab to learn about the technologies discussed in this topic.
=== "AWS EMR and Apache Spark"
Apache Spark is an open source cluster computing framework considered to be in the "Big Data" family of technologies. Spark is used for large volumes of data in structured or semi-structured forms—in streaming or batch modes. Spark does not have its own persistent storage layer. It relies on file systems like HDFS, object storage like AWS S3, and JDBC interfaces for data.
Popular Spark platforms include Databricks and AWS Elastic Map Reduce (EMR). The example in this topic shows how to score using EMR Spark. This is a Spark cluster that can be spun up for work as needed and shut down when work is completed.
=== "AWS S3"
S3 is the object storage service of AWS. It is used in this example to store and retrieve the job's database query dynamically. S3 can also be written to as a job output target. In addition, cluster log files are written to S3.
=== "AWS Secrets Manager"
Hardcoding credentials may be acceptable during development or for ad-hoc jobs, although as a best practice it is ideal, even in development, to store these values in a secure fashion. This is a requirement for safely protecting them in production scoring jobs. The Secrets Manager service allows only trusted users or roles to access securely stored secret information.
=== "AWS Command Line Interface (CLI)"
For brevity and ease of use, the AWS CLI is used throughout this article to perform command line operations for several AWS-related activities. These activities could also be performed manually via the GUI. See [AWS Command Line Interface Documentation](https://docs.aws.amazon.com/cli/index.html){ target=_blank } for more information on configuring the CLI.
=== "Snowflake Database"
Snowflake is a cloud-based database platform designed for data warehouse and analytic workloads. It allows for easy scale-up and scale-out capabilities for working on large data volume use cases and is available as a service across all major cloud platforms. For the scoring example in this topic, Snowflake is the source and target, although both can be swapped for other databases or storage platforms for Spark scoring jobs.
=== "DataRobot Scoring Code"
You can quickly and easily deploy models in DataRobot for API hosting within the platform. In some cases, rather than bringing the data to the model in the API, it can be beneficial to bring the model to the data, for example, for very large scoring jobs. The example that follows scores three million Titanic passengers for survival probability from an enlarged [Kaggle dataset](https://www.kaggle.com/c/titanic){ target=_blank }. Although not typically an amount that would warrant considering using Spark over the API, here it serves as a good technical demonstration.
You can export models from DataRobot in Java or Python as a rules-based approximation with DataRobot RuleFit models. A second export option is Scoring Code, which provides source code and a compiled Java binary JAR that holds the exact model chosen.
=== "Programming Languages"
Structured Query Language (SQL) is used for the database, Scala for Spark. Python/PySpark can also be leveraged for running jobs on Spark.
## Architecture {: #architecture }

## Development Environment {: #development-environment }
AWS EMR includes a Zeppelin Notebook service, which allows for interactive development of Spark code. To set up a development environment, first create an EMR cluster. You can do this via GUI options on AWS; the defaults are acceptable. Be sure to choose the **Spark** option. Note that the advanced settings allow for more granular choices of software installation.

After the cluster is created, the AWS CLI export button on the cluster's Summary tab provides a CLI script that recreates the instance; this script can be saved and edited for future use. An example follows:
```
aws emr create-cluster \
--applications Name=Spark Name=Zeppelin \
--configurations '[{"Classification":"spark","Properties":{}}]' \
--service-role EMR_DefaultRole \
--enable-debugging \
--release-label emr-5.30.0 \
--log-uri 's3n://mybucket/emr_logs/' \
--name 'Dev' \
--scale-down-behavior TERMINATE_AT_TASK_COMPLETION \
--region us-east-1 \
--tags "owner=doyouevendata" "environment=development" "cost_center=comm" \
--ec2-attributes '{"KeyName":"creds","InstanceProfile":"EMR_EC2_DefaultRole","SubnetId":"subnet-0e12345","EmrManagedSlaveSecurityGroup":"sg-123456","EmrManagedMasterSecurityGroup":"sg-01234567"}' \
--instance-groups '[{"InstanceCount":1,"EbsConfiguration":{"EbsBlockDeviceConfigs":[{"VolumeSpecification":{"SizeInGB":32,"VolumeType":"gp2"},"VolumesPerInstance":2}]},"InstanceGroupType":"MASTER","InstanceType":"m5.xlarge","Name":"Master Instance Group"},{"InstanceCount":2,"EbsConfiguration":{"EbsBlockDeviceConfigs":[{"VolumeSpecification":{"SizeInGB":32,"VolumeType":"gp2"},"VolumesPerInstance":2}]},"InstanceGroupType":"CORE","InstanceType":"m5.xlarge","Name":"Core Instance Group"}]'
```
You can find connectivity details about the cluster in the GUI. Log on to the server to provide additional configuration items. You can access a terminal via SSH; this requires a public-facing IP or DNS address, and that the VPC inbound ruleset applied to the EC2 cluster master instance allows incoming connections over SSH port 22. If connectivity is refused because the machine is unreachable, add source IP/subnets to the security group.
```
ssh -i ~/creds.pem hadoop@ec2-54-121-207-147.compute-1.amazonaws.com
```
Several packages are used to support database connectivity and model scoring. These JARs can be loaded to cluster nodes when the cluster is created to have them available in the environment. They can also be compiled into JARs for job submission, or they can be downloaded from a repository at run time. This example uses the last option.
The AWS environment used in this article is based on AWS EMR 5.30 with Spark 2.4 and Scala 2.11. Some changes may be necessary to follow along as new versions of referenced environments and packages are released. In addition to those already provided by AWS, two Snowflake and two DataRobot packages are used:
* [spark-snowflake](https://mvnrepository.com/artifact/net.snowflake/spark-snowflake){ target=_blank }
* [snowflake-jdbc](https://mvnrepository.com/artifact/net.snowflake/snowflake-jdbc){ target=_blank }
* [scoring-code-spark-api](https://mvnrepository.com/artifact/com.datarobot/scoring-code-spark-api){ target=_blank }
* [datarobot-prediction](https://mvnrepository.com/artifact/com.datarobot/datarobot-prediction){ target=_blank }
To leverage these packages in the Zeppelin notebook environment, edit the `zeppelin-env` file to add the packages when the interpreter is invoked. Edit this file on the master node.
```
sudo vi /usr/lib/zeppelin/conf/zeppelin-env.sh
```
Edit the `export SPARK_SUBMIT_OPTIONS` line at the bottom of the file and add the packages flag to the string value.
```
--packages net.snowflake:snowflake-jdbc:3.12.5,net.snowflake:spark-snowflake_2.11:2.7.1-spark_2.4,com.datarobot:scoring-code-spark-api_2.4.3:0.0.19,com.datarobot:datarobot-prediction:2.1.4
```
If you make further edits while working in Zeppelin, you'll need to restart the interpreter within the Zeppelin environment for the edits to take effect.
You can now establish an SSH tunnel to access the remote Zeppelin server from a local browser. The following command forwards port 8890 on the master node to the local machine. Without using a public DNS entry, additional proxy configuration may be required. This statement uses "Option 1" (an SSH tunnel with local port forwarding) from the AWS documentation; a proxy configuration for the second option, as well as additional ports and services, can be found [here](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-web-interfaces.html){ target=_blank }.
```
ssh -i ~/creds.pem -L 8890:ec2-54-121-207-147.compute-1.amazonaws.com:8890 hadoop@ec2-54-121-207-147.compute-1.amazonaws.com -Nv
```
Navigating to port 8890 on the local machine now brings up the Zeppelin instance, where a new note can be created with the packages defined in the environment shell script available to it.

Several helper tools are provided on GitHub to aid in quickly and programmatically performing this process (and others described in this article) via the AWS CLI from a local machine.
* `env_config.sh` contains AWS environment variables, such as profile (if used), tags, VPCs, security groups, and other elements used in specifying a cluster.
* `snow_bootstrap.sh` is an optional file used to perform tasks on the EMR cluster nodes after they are allocated, but before applications like Spark are installed.
* `create_dev_cluster.sh` uses the above to create a cluster and provides connectivity strings. It takes no arguments.
## Create Secrets {: #create-secrets }
You can code credentials into variables during development, although this topic demonstrates how to create a production EMR job with auto-termination upon completion. It is a good practice to store secret values such as database usernames and passwords in a trusted environment. In this case, the IAM Role applied to the EC2 instances has been granted the privilege to interact with the [AWS Secrets Manager](https://aws.amazon.com/secrets-manager/){ target=_blank } service.
The simplest form of a secret contains a string reference name and a string of values to store. Creating one in the AWS GUI is straightforward: the console guides you through creating a secret whose provided keys and values are stored as a JSON string. Some helper files are available to do this with the CLI.
`secrets.properties` is a JSON document of key-value pairs to store.
Example contents:
```json
{
"dr_host":"https://app.datarobot.com",
"dr_token":"N1234567890",
"dr_project":"5ec1234567890",
"dr_model":"5ec123456789012345",
"db_host":"dp12345.us-east-1.snowflakecomputing.com",
"db_user":"snowuser",
"db_pass":"snow_password",
"db_db":"TITANIC",
"db_schema":"PUBLIC",
"db_query_file":"s3://bucket/ybspark/snow.query",
"db_output_table":"PASSENGERS_SCORED",
"s3_output_loc":"s3a://bucket/ybspark/output/",
"output_type":"s3"
}
```
`create_secrets.sh` is a script that uses the CLI to create (or update) the secret name specified within the script, using the properties file.
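As a rough sketch of what such a CLI call can look like (not necessarily the exact contents of `create_secrets.sh`), assuming the `snow/titanic` secret name used later in the Scala code:
```bash
# Create the secret from the properties file; use put-secret-value to update it later
aws secretsmanager create-secret \
    --name snow/titanic \
    --secret-string file://secrets.properties
```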
## Source SQL Query {: #source-sql-query }
Rather than embedding a SQL extract statement in the code, the query can be provided dynamically at runtime. It is not necessarily a secret and, given its potential length and complexity, it fits better as a simple file in S3. One of the secrets, the `db_query_file` entry, points to this location. The contents of this file on S3 (`s3://bucket/ybspark/snow.query`) is simply a SQL statement against a table with three million passenger records:
```
select * from passengers_3m
```
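For example, the query file can be staged to the S3 location referenced by the `db_query_file` secret with the AWS CLI:
```bash
aws s3 cp snow.query s3://bucket/ybspark/snow.query
```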
## Spark Code (Scala) {: #spark-code-scala }
With the supporting components in place, you can begin writing the code for the model scoring pipeline. It can be run on a spark-shell instance directly on the machine, using the `run_spark-shell.sh` and `spark_env.sh` helpers to include the necessary packages. This interactive session can assist with quick debugging, but it only uses the master node and is not a convenient environment for iterative code development. The Zeppelin notebook is a friendlier environment and runs the code in yarn-cluster mode, leveraging the multiple worker nodes available. The code below can be copied, or the note can simply be imported from `snowflake_scala_note.json` in the GitHub repo for this project.
### Import package dependencies {: #import-package-dependencies }
```scala
import org.apache.spark.sql.functions.{col}
import org.apache.spark.sql.{DataFrame, Dataset, SparkSession}
import org.apache.spark.sql.SaveMode
import java.time.LocalDateTime
import com.amazonaws.regions.Regions
import com.amazonaws.services.secretsmanager.AWSSecretsManagerClientBuilder
import com.amazonaws.services.secretsmanager.model.GetSecretValueRequest
import org.json4s.{DefaultFormats, MappingException}
import org.json4s.jackson.JsonMethods._
import com.datarobot.prediction.spark.Predictors.{getPredictorFromServer, getPredictor}
```
### Create helper functions to simplify process {: #create-helper-functions-to-simplify-process }
```scala
/* get secret string from secrets manager */
def getSecret(secretName: String): (String) = {
val region = Regions.US_EAST_1
val client = AWSSecretsManagerClientBuilder.standard()
.withRegion(region)
.build()
val getSecretValueRequest = new GetSecretValueRequest()
.withSecretId(secretName)
val res = client.getSecretValue(getSecretValueRequest)
val secret = res.getSecretString
return secret
}
/* get secret value from secrets string once provided key */
def getSecretKeyValue(jsonString: String, keyString: String): (String) = {
implicit val formats = DefaultFormats
val parsedJson = parse(jsonString)
val keyValue = (parsedJson \ keyString).extract[String]
return keyValue
}
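/* The code in the following sections also calls a printMsg logging helper defined in the
   repo's full note; a minimal stand-in (an assumption, not the repo's exact implementation)
   that prefixes each message with a timestamp could be: */
def printMsg(msg: String): Unit = {
  println(LocalDateTime.now().toString + " " + msg)
}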
/* run sql and extract sql into spark dataframe */
def snowflakedf(defaultOptions: Map[String, String], sql: String) = {
val spark = SparkSession.builder.getOrCreate()
spark.read
.format("net.snowflake.spark.snowflake")
.options(defaultOptions)
.option("query", sql)
.load()
}
```
### Retrieve and parse secrets {: #retrieve-and-parse-secrets }
Next, retrieve and parse the secrets data stored in AWS to support the scoring job.
```scala
val SECRET_NAME = "snow/titanic"
printMsg("db_log: " + "START")
printMsg("db_log: " + "Creating SparkSession...")
val spark = SparkSession.builder.appName("Score2main").getOrCreate();
printMsg("db_log: " + "Obtaining secrets...")
val secret = getSecret(SECRET_NAME)
printMsg("db_log: " + "Parsing secrets...")
val dr_host = getSecretKeyValue(secret, "dr_host")
val dr_project = getSecretKeyValue(secret, "dr_project")
val dr_model = getSecretKeyValue(secret, "dr_model")
val dr_token = getSecretKeyValue(secret, "dr_token")
val db_host = getSecretKeyValue(secret, "db_host")
val db_db = getSecretKeyValue(secret, "db_db")
val db_schema = getSecretKeyValue(secret, "db_schema")
val db_user = getSecretKeyValue(secret, "db_user")
val db_pass = getSecretKeyValue(secret, "db_pass")
val db_query_file = getSecretKeyValue(secret, "db_query_file")
val output_type = getSecretKeyValue(secret, "output_type")
```
### Read query into a variable {: #read-query-into-a-variable }
Next, read the query hosted on S3 and specified in `db_query_file` into a variable.
```scala
printMsg("db_log: " + "Retrieving db query...")
val df_query = spark.read.text(db_query_file)
val query = df_query.select(col("value")).first.getString(0)
```
### Retrieve the Scoring Code {: #retrieve-the-scoring-code }
Next, retrieve the Scoring Code for the model from DataRobot. Although this can be done from a local JAR, the code here retrieves it from DataRobot on the fly. This model can be easily swapped out for another by changing the `dr_model` value referenced in the secrets.
```scala
printMsg("db_log: " + "Loading Model...")
val spark_compatible_model = getPredictorFromServer(host=dr_host, projectId=dr_project, modelId=dr_model, token=dr_token)
```
### Run the SQL {: #run-the-sql }
Now, run the SQL retrieved against the database and bring it into a Spark dataframe.
```scala
printMsg("db_log: " + "Extracting data from database...")
val defaultOptions = Map(
"sfURL" -> db_host,
"sfAccount" -> db_host.split('.')(0),
"sfUser" -> db_user,
"sfPassword" -> db_pass,
"sfDatabase" -> db_db,
"sfSchema" -> db_schema
)
val df = snowflakedf(defaultOptions, query)
```

### Score the dataframe {: #score-the-dataframe }
The example below scores the dataframe through the retrieved DataRobot model. It creates a subset of the output containing just the identifying column (Passenger ID) and the probability towards the positive class label 1, i.e., the probability of survival for the passenger.
```scala
printMsg("db_log: " + "Scoring Model...")
val result_df = spark_compatible_model.transform(df)
val subset_df = result_df.select("PASSENGERID", "target_1_PREDICTION")
subset_df.cache()
```

### Write the results {: #write-the-results }
The value `output_type` dictates whether the scored data is written back to a table in the database or a location in S3.
```scala
if(output_type == "s3") {
val s3_output_loc = getSecretKeyValue(secret, "s3_output_loc")
printMsg("db_log: " + "Writing to S3...")
subset_df.write.format("csv").option("header","true").mode("Overwrite").save(s3_output_loc)
}
else if(output_type == "table") {
val db_output_table = getSecretKeyValue(secret, "db_output_table")
subset_df.write
.format("net.snowflake.spark.snowflake")
.options(defaultOptions)
.option("dbtable", db_output_table)
.mode(SaveMode.Overwrite)
.save()
}
else {
printMsg("db_log: " + "Results not written to S3 or database; output_type value must be either 's3' or 'table'.")
}
printMsg("db_log: " + "Written record count - " + subset_df.count())
printMsg("db_log: " + "FINISH")
```
This approach works well for development and manual or ad-hoc scoring needs. You can terminate the EMR cluster when all work is complete. AWS EMR can also be used to run production jobs on a schedule.
## Productionalize the Pipeline {: #productionalize-the-pipeline }
A production job can be created to run this pipeline at regular intervals. The process of creating an EMR instance is similar; however, the instance is configured to run job steps after it comes online. After the steps are completed, the cluster is automatically terminated.
The Scala code, however, cannot be run as a scripted step; it must be compiled into a JAR for submission. The open source build tool [sbt](https://www.scala-sbt.org/){ target=_blank } is used for compiling Scala and Java code. In the repo, sbt is installed using commands in the `snow_bootstrap.sh` script. Note that sbt is only required for development to compile the JAR and can be removed from any production job run. Although the code does not need to be developed on the actual EMR master node, the master node is a good development environment because that is where the code will ultimately run. The main files of interest in the project are:
`snowscore/build.sbt`
`snowscore/create_jar_package.sh`
`snowscore/spark_env.sh`
`snowscore/run_spark-submit.sh`
`snowscore/src/main/scala/com/comm_demo/SnowScore.scala`
* `build.sbt` contains the previously referenced packages for Snowflake and DataRobot, and includes two additional packages for working with AWS resources.
* `create_jar_package.sh`, `spark_env.sh`, and `run_spark-submit.sh` are helper scripts. The first runs a clean package build of the project, and the latter two allow the built package JAR to be submitted to the Spark cluster from the command line.
* `SnowScore.scala` contains the same code referenced above, arranged in a main class that is called when submitted to the cluster for execution.
Run `create_jar_package.sh`, which calls `sbt clean` and `sbt package`, to create the output package JAR ready for job submission: `target/scala-2.11/score_2.11-0.1.0-SNAPSHOT.jar`.

The JAR can be submitted with the `run_spark-submit.sh` script; however, to use it in a self-terminating cluster it needs to be hosted on S3.
In this example, it has been copied over to `s3://bucket/ybspark/score_2.11-0.1.0-SNAPSHOT.jar`.
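For example, the compiled JAR can be copied to S3 with the AWS CLI:
```bash
aws s3 cp target/scala-2.11/score_2.11-0.1.0-SNAPSHOT.jar s3://bucket/ybspark/
```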
If you are working on a development EMR instance, the instance can be terminated after the JAR has been copied to S3.
Lastly, the `run_emr_prod_job.sh` script can be run to create an EMR job using the AWS CLI: it creates an EMR instance, runs a bootstrap script, installs the necessary applications, and executes a step that calls the main class of the S3-hosted package JAR. The `--steps` argument in the script defines the spark-submit job on the cluster, which supplies the `--packages` dependencies, the snapshot JAR, and the main class at runtime. When the step completes, the EMR instance self-terminates.
The production job creation is now complete. This may be run by various triggers or scheduling tools. By updating the `snow.query` file hosted on S3, the input can be modified; in addition, the output targets of tables in the database or object storage on S3 can also be modified. Different machine learning models from DataRobot can easily be swapped out as well, with no additional compilation or coding required.
### Performance and cost considerations {: #performance-and-cost-considerations }
Consider this example as a reference: a MacBook with an i5-7360U CPU @ 2.30GHz running a local (default option) Scoring Code CSV job scored at a rate of 5,660 rec/s. Using m5.xlarge (4 vCPU, 16GB RAM) instances for the MASTER and CORE EMR nodes, a few tests ranging from 3 million to 28 million passenger records scored at 12,000–22,000 rec/s *per CORE node*.
There is startup time required to construct an EMR cluster; this varies and takes 7+ minutes. There is additional overhead in simply running a Spark job. Scoring 418 records on a 2-CORE node system through the entire pipeline took 512 seconds total. However, scoring 28 million on a 4-CORE node system took 671 seconds total. [Pricing](https://aws.amazon.com/emr/pricing/){ target=_blank }, another consideration, is based on instance hours for EC2 compute and EMR services.
Examining the scoring pipeline job alone, as coded and without any tweaks, shows how Spark on EMR scales: scoring 28 million records took 694 seconds on 2 CORE nodes versus 308 seconds on 4 CORE nodes.
AWS EMR cost calculations can be a bit challenging due to the way costs are measured with normalized instance hours and when the clock starts ticking for cluster payment. A GitHub project is available to estimate approximate costs for resources over a given time period or for a specific cluster ID. This project can be found on [GitHub](https://github.com/marko-bast/aws-emr-cost-calculator){ target=_blank }.
An estimate for the 28 million passenger scoring job with 4 CORE servers follows:
```
$ ./aws-emr-cost-calculator cluster --cluster_id=j-1D4QGJXOAAAAA
CORE.EC2 : 0.16
CORE.EMR : 0.04
MASTER.EC2 : 0.04
MASTER.EMR : 0.01
TOTAL : 0.25
```
Because scoring pipelines may contain additional pre- and post-processing steps, it is best to use this tool with various cluster options to determine the cost-versus-performance tradeoff for each scoring pipeline on a use-case-by-use-case basis.
Related code for this article can be found on [DataRobot Community GitHub](https://github.com/datarobot-community/dr_spark_examples/tree/master/scala/snowscore){ target=_blank }.
|
score-snowflake-aws-emr-spark
|
---
title: AWS
description: Integrate DataRobot with Amazon Web Services.
---
# AWS {: #aws }
The sections described below provide techniques for integrating Amazon Web Services with DataRobot.
Topic | Describes...
----- | ------
[Import data from AWS S3](import-from-aws-s3) | Importing data from AWS S3 to AI Catalog and creating an ML project.
[Deploy models on EKS](deploy-dr-models-on-aws) | Deploying and monitor DataRobot models on AWS Elastic Kubernetes Service (EKS) clusters.
[Path-based routing to PPS on AWS](path-based-routing-to-pps-on-aws) | Using a single IP address for all Portable Prediction Servers through path-based routing.
[Score Snowflake data on AWS EMR Spark](score-snowflake-aws-emr-spark) | Scoring Snowflake data via DataRobot models on AWS Elastic Map Reduce (EMR) Spark.
[Ingest data with AWS Athena](ingest-athena) | Ingesting AWS Athena and Parquet data for machine learning.
## Lambda {: #lambda }
Topic | Describes...
----- | ------
[Serverless MLOps agents](monitor-serverless-mlops-agents) | Monitoring external models with serverless MLOps agents running on AWS Lambda.
[AWS Lambda reporting to MLOps](aws-lambda-reporting-to-mlops) | AWS Lambda serverless reporting of actuals to DataRobot MLOps.
[Use DataRobot Prime models with AWS Lambda](prime-lambda) | Using DataRobot Prime models with AWS Lambda.
[Use Scoring Code with AWS Lambda](sc-lambda) | Making predictions using Scoring Code deployed on AWS Lambda.
## SageMaker {: #sagemaker }
Topic | Describes...
----- | ------
[Deploy models on Sagemaker](sagemaker-deploy) | Deploying on SageMaker and monitoring with MLOps agents.
[Monitor SageMaker models in MLOps](sagemaker-monitor) | Monitoring a SageMaker model that has been deployed to AWS for real-time scoring in DataRobot MLOps.
[Use Scoring Code with AWS SageMaker](sc-sagemaker) | Making predictions using Scoring Code deployed on AWS SageMaker.
|
index
|
---
title: Deploy models on AWS EKS
description: Deploy and monitor DataRobot models on AWS Elastic Kubernetes Service (EKS).
---
# Deploy models on AWS EKS {: #deploy-models-on-aws-eks }
With DataRobot MLOps, you can deploy DataRobot models into your own AWS Elastic Kubernetes Service (EKS) clusters and still have the advantages of using the [monitoring](monitor/index) and [governance](governance/index) capabilities of MLOps.
These exportable DataRobot models are known as [Portable Prediction Servers (PPSs)](glossary/index#portable-prediction-server-pps). The models are embedded into Docker containers, providing flexibility and portability, and making them suitable for container orchestration tools such as Kubernetes.
The following steps show how to deploy a DataRobot model on EKS.
## Before you start {: #before-you-start }
!!! info "Availability information"
To deploy a DataRobot model on EKS, you need to export a model package which requires the "Enable MMM model package export" flag. Contact your DataRobot representative for more information about enabling this feature.
Before deploying to Amazon EKS, you need to create an EKS cluster. There are two approaches to spin up the cluster:
* Using the **eksctl** tool (CLI for Amazon EKS). This is the simplest and fastest way to create an EKS cluster.
* Using AWS Management Console. This method provides more fine-grained tuning (for example, IAM role and VPC creation).
This topic shows how to install using the **eksctl** tool. See [Getting started with Amazon EKS – eksctl](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html){ target=_blank } for detailed instructions.
If any of the tools are already installed and configured, you can skip the corresponding steps.
### Install and configure AWS and EKS {: #install-and-configure-aws-and-eks }
Follow the steps described below to install and configure Amazon Web Services CLI, EKS, and Kubernetes CLI.
1. Install the AWS CLI, version 2.
`aws --version`
2. Configure your AWS CLI credentials.
3. Install **eksctl**.
`eksctl version`
4. Install and configure **kubectl** (CLI for Kubernetes clusters).
`kubectl version --short --client`
## Deploy models to EKS {: #deploy-models-to-eks }
Deploying DataRobot models on a Kubernetes infrastructure consists of three main activities:
* Preparing and pushing the Docker container with the MLOps package to the container registry
* Creating the external deployment in DataRobot
* Creating the Kubernetes cluster
### Configure and run the PPS Docker image {: #configure-and-run-the-pps-docker-image }
To complete the following steps, you need to first generate your model and [create an MLOps model package](reg-create).
DataRobot provides a UI to help you configure the Portable Prediction Server and create the Docker image. Follow these steps:
1. [Configure the Portable Prediction Server (PPS)](portable-pps#configure-the-portable-prediction-server).
2. [Obtain the PPS Docker image](portable-pps#obtain-the-pps-docker-image).
3. [Load the image to Docker](portable-pps#load-the-image-to-docker).
4. [Download the model package](portable-pps#deployment-download).
5. [Run your Docker image](portable-pps#running-modes).
6. [Monitor your model](portable-pps#monitoring).
7. [Create an external deployment](deploy-external-model). When you create the deployment, make note of the MLOps model ID and the MLOps deployment ID. You’re going to need these IDs when you deploy the MLOps package to Kubernetes.
### Push the Docker image to Amazon ECR {: #push-the-docker-image-to-amazon-ecr }
You need to upload the container image to Amazon Elastic Container Registry (ECR) so that your Amazon EKS cluster can download and run it.
1. Configure the Docker CLI tool to authenticate to Elastic Container Registry:
```bash
aws ecr get-login-password --region us-east-1 | docker login --username XXX --password-stdin 00000000000.xxx.ecr.us-east-1.amazonaws.com
```
2. Push the Docker image you just built to ECR:
```bash
docker push 00000000000.xxx.ecr.us-east-1.amazonaws.com/house-regression-model:latest
```
### Create an Amazon EKS cluster {: #create-an-amazon-eks-cluster }
Now that the Docker image is stored in ECR and the external deployment is created, you can spin up an Amazon EKS cluster. The EKS cluster needs a VPC with either of the following:
* Two public subnets and two private subnets
* Three public subnets
Amazon EKS requires subnets in at least two Availability Zones. A VPC with public and private subnets is recommended so that Kubernetes can create public load balancers in the public subnets to control traffic to the pods that run on nodes in private subnets.
To create the Amazon EKS cluster:
1. Optionally, create or choose two public and two private subnets in your VPC. Make sure that “Auto-assign public IPv4 address” is enabled for the public subnets.
!!! note
The **eksctl** tool creates all necessary subnets behind the scenes if you don’t provide the corresponding `--vpc-private-subnets` and `--vpc-public-subnets` parameters.
2. Create the cluster:
```bash
eksctl create cluster \
--name house-regression \
--vpc-private-subnets=subnet-xxxxxxx,subnet-xxxxxxx \
--vpc-public-subnets=subnet-xxxxxxx,subnet-xxxxxxx \
--ssh-access \
--ssh-public-key my-public-key.pub \
--managed
```
**Notes**
* Usage of the `--managed` parameter enables [Amazon EKS-managed nodegroups](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html){ target=_blank }. This feature automates the provisioning and lifecycle management of nodes (EC2 instances) for Amazon EKS Kubernetes clusters. You can provision optimized groups of nodes for your clusters, and EKS keeps the nodes up-to-date with the latest Kubernetes and host OS versions. The **eksctl** tool makes it possible to choose the specific size and instance type family via command line flags or config files.
* Although `--ssh-public-key` is optional, it is highly recommended that you specify it when you create your node group with a cluster. This option enables SSH access to the nodes in your managed node group. Enabling SSH access allows you to connect to your instances and gather diagnostic information if there are issues. You cannot enable remote access after the node group is created.
Cluster provisioning usually takes between 10 and 15 minutes and results in the following:

3. When your cluster is ready, test that your kubectl configuration is correct:
```bash
kubectl get svc
```

### Deploy the MLOps package to Kubernetes {: #deploy-the-mlops-package-to-kubernetes }
To deploy the MLOps package to Kubernetes:
1. Create a Kubernetes namespace, for example:
```bash
kubectl create namespace house-regression-namespace
```
2. Save the following contents to a `yaml` file on your local machine (in this case, `house-regression-service.yaml`), replacing the values for your project. Provide the values of **image**, **DataRobot API token**, **model ID**, and **deployment ID**. (You should have saved the IDs when you [created the external deployment in MLOps](deploy-external-model).)
```yaml
apiVersion: v1
kind: Service
metadata:
name: house-regression-service
namespace: house-regression-namespace
labels:
app: house-regression-app
spec:
selector:
app: house-regression-app
ports:
- protocol: TCP
port: 80
targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: house-regression-deployment
namespace: house-regression-namespace
labels:
app: house-regression-app
spec:
replicas: 3
selector:
matchLabels:
app: house-regression-app
template:
metadata:
labels:
app: house-regression-app
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/arch
operator: In
values:
- amd64
containers:
- name: house-regression-model
image: <your_aws_account_endpoint>.ecr.us-east-1.amazonaws.com/house-regression-model:latest
env:
- name: PORTABLE_PREDICTION_API_WORKERS_NUMBER
value: "2"
- name: PORTABLE_PREDICTION_API_MONITORING_ACTIVE
value: "True"
- name: PORTABLE_PREDICTION_API_MONITORING_SETTINGS
value: output_type=spooler_type=filesystem;directory=/tmp;max_files=50;file_max_size=10240000;model_id=<your mlops_model_id_obtained_at_step_5>;deployment_id=<your mlops_deployment_id_obtained_at_step_5>
- name: MONITORING_AGENT
value: "True"
- name: MONITORING_AGENT_DATAROBOT_APP_URL
value: https://app.datarobot.com/
- name: MONITORING_AGENT_DATAROBOT_APP_TOKEN
value: <your_datarobot_api_token>
ports:
- containerPort: 80
```
3. Create a Kubernetes service and deployment:
```bash
kubectl apply -f house-regression-service.yaml
```
4. View all resources that exist in the namespace:
```bash
kubectl get all -n house-regression-namespace
```

### Set up horizontal pod autoscaling {: #set-up-horizontal-pod-autoscaling }
The Kubernetes Horizontal Pod Autoscaler automatically scales the number of pods in a deployment, replication controller, or replica set based on that resource's CPU utilization. This can help your applications scale up to meet increased demand or scale back when resources are not needed, thus freeing up your nodes for other applications. When you set a target CPU utilization percentage, the Horizontal Pod Autoscaler scales your application up or back to try to meet that target.
1. Create a Horizontal Pod Autoscaler resource for the house-regression deployment.
```bash
kubectl autoscale deployment house-regression-deployment -n house-regression-namespace --cpu-percent=80 --min=1 --max=5
```
2. View all resources that exist in the namespace.
```bash
kubectl get all -n house-regression-namespace
```
Horizontal Pod Autoscaler appears in the resources list.

### Expose your model to the world (load balancing) {: #expose-your-model-to-the-world-load-balancing }
Amazon EKS supports the Network Load Balancer and the Classic Load Balancer for pods running on Amazon EC2 instance nodes through the Kubernetes service of type LoadBalancer.
1. Tag the public subnets in your VPC so that Kubernetes knows to use only those subnets for external load balancers instead of choosing a public subnet in each Availability Zone (in lexicographical order by subnet ID):
```
kubernetes.io/role/elb = 1
```
2. Tag the private subnets in the following way so that Kubernetes knows it can use the subnets for internal load balancers.
```
kubernetes.io/role/internal-elb = 1
```
!!! info "Important"
If you use an Amazon EKS AWS CloudFormation template to create your VPC after March 26, 2020, then the subnets created by the template are tagged when they're created as explained [here](https://docs.aws.amazon.com/eks/latest/userguide/load-balancing.html){ target=_blank }.
3. Use the kubectl `expose` command to generate a Kubernetes service for the deployment.
```bash
kubectl expose deployment house-regression-deployment -n house-regression-namespace --name=house-regression-external --type=LoadBalancer --port 80 --target-port 8080
```
* `--port` is the port number configured on the Load Balancer.
* `--target-port` is the port number that the deployment container is listening on.
4. Run the following command to get the service details:
```bash
kubectl get service -n house-regression-namespace
```

5. Copy the `EXTERNAL_IP` address.
6. Score your model using the `EXTERNAL_IP` address:
```bash
curl -X POST http://<EXTERNAL_IP>/predictions -H "Content-Type: text/csv" --data-binary @kaggle_house_test_dataset.csv
```

7. Check the service health of [the external deployment you created](deploy-external-model). The scoring request is now included in the statistics.

### Clean up {: #clean-up }
1. Remove the sample service, deployment, pods, and namespace:
```bash
kubectl delete namespace house-regression-namespace
```
2. Delete the cluster:
```bash
eksctl delete cluster --name house-regression
```

## Wrap-up {: #wrap-up }
This walkthrough described how to deploy and monitor DataRobot models on the Amazon EKS platform via a Portable Prediction Server (PPS). A PPS is embedded into Docker containers alongside the MLOps agents, making it possible to acquire the principal IT (service health, number of requests, etc.) and ML (accuracy, data drift etc.) metrics in the cloud and monitor them on the centralized DataRobot MLOps dashboard.
Using DataRobot PPSs allows you to avoid vendor lock-in and easily migrate between cloud environments or deploy models across them simultaneously.
|
deploy-dr-models-on-aws
|
---
title: Import data from AWS S3
description: Import data from AWS S3 to start a DataRobot ML project.
---
# Import data from AWS S3 {: #import-data-from-aws-s3 }
This section shows how to ingest data from an Amazon Web Services S3 bucket into the DataRobot AI Catalog so that you can use it for ML modeling.
To build an ML model based on an object saved in an S3 bucket:
1. Navigate to the dataset object in AWS S3 and copy the object’s URL.

2. Select the **AI Catalog** tab in DataRobot.

3. Click **Add to catalog** and select **URL**.

4. In the **Add from URL** window, paste the URL of the object and click **Save**.

DataRobot automatically reads the data and infers data types and the schema of the data, as it does when you upload a CSV file from your local machine.
5. Now that your data has been successfully uploaded, click **Create project** in the upper right corner to start an ML project.

## Private S3 buckets {: #private-s3-buckets }
You can also ingest data into DataRobot from private S3 buckets. For example, you can create a temporary link from a pre-signed S3 URL that DataRobot can then use to retrieve the file.
A straightforward way to accomplish this is by using the [AWS Command Line Interface (CLI)](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html){ target=_blank }.
After you install and configure the CLI, use a command like the following:
```
aws s3 presign --expires-in 600 s3://bucket-name/path/to/file.csv
https://bucket-name.s3.amazonaws.com/path/to/file.csv?AWSAccessKeyId=<key>
```
The URL produced in this example allows whoever has it to read the private file, `file.csv`, from the private bucket, `bucket-name`. The `expires-in` parameter makes the signed link available for 600 seconds upon creation.
If you have your own DataRobot installation, you can also:
* Provide the application's DataRobot service account with IAM privileges to read private S3 buckets. DataRobot can then ingest from any S3 location that it has privileges to access.
* Implement S3 impersonation of the user logging in to DataRobot to limit access to S3 data. This requires LDAP for authentication, with authorized roles for the user specified within LDAP attributes.
Both of these options accept an `s3://` URI path.
|
import-from-aws-s3
|
---
title: Monitor with serverless MLOps agents
description: Monitor models using serverless MLOps agents.
---
# Monitor with serverless MLOps agents {: #monitor-with-serverless-mlops-agents }
DataRobot can monitor model performance and drift statistics for models deployed on external systems. These externally deployed models can be:
* Docker containers of DataRobot-generated models
* DataRobot models exported as Java or Python Scoring Code
* Custom-coded models or models created in another tool

This section shows how to scale DataRobot MLOps agents on AWS to handle small and large data queues using serverless resources to track deployed models.
## High-level MLOps agent architecture {: #high-level-mlops-agent-architecture }

At a high level, an externally deployed ML model leverages the DataRobot MLOps library to produce reporting records and send them to DataRobot to process and report on. The DataRobot MLOps agent consumes these records and pipes them into a DataRobot instance. You can configure the agent to read from various spool sources, including flat files, AWS SQS, RabbitMQ, and so on (for the full list, see [Prediction intake options](intake-options)). This topic describes how serverless MLOps agents can scale to consume an AWS SQS Queue, focusing solely on the queue data consumption and reporting to DataRobot.
## Standard solution architecture {: #standard-solution-architecture }

The **External Model** in this diagram represents a model of any of the types mentioned above, deployed outside of the DataRobot cluster. One approach is to run the MLOps agent on a standing compute environment. This involves standing up a server resource, installing the agent on the resource, and having it continually poll for new data to consume and report back to DataRobot. This is an adequate solution, but does present two drawbacks: cost and scaling.
The server is always up and running, which can be costly for even the smallest solutions. Scaling is an issue when the solution needs to consume a queue that has many elements, for example, many models or very busy models writing to the SQS queue. For example, what if the SQS queue has a million elements backlogged? How long until the queue is fully processed by a single agent? You can run multiple agents to consume and send back data in a concurrent fashion. EC2 auto-scaling does not solve this problem; the triggering mechanisms for scaling more machines relates to how busy the EC2 server itself is, rather than the actual quantity of items in its backlog (that it needs to process from an external queue).
## Serverless solution architecture {: #serverless-solution-architecture }

A serverless architecture can leverage AWS Lambda to create an MLOps agent on demand, and scale additional agents via multiple Lambda functions to consume a populated queue based on its backlog.
The MLOps agent code is stored and retrieved from S3 and brought into a Lambda function to instantiate it. There it polls and consumes records written by the External Model into a Data SQS queue. The function includes logic to specify when to run additional MLOps agents (and how many). Inserting messages into the agent SQS queue "recruits" agents by triggering their creation.
Concurrency of Lambda functions is managed by keeping track of the functions in a running state via the DynamoDB noSQL database. Although the Lambda service has a concurrency reservation setting, it does not operate as a number of open slots where a Lambda function can run immediately once a slot is open.
Instead, invocations beyond the reservation limit are put to sleep, and the sleep times increase until concurrency slots open up. This is rarely the desired effect when processing data because it can lead to periods of idle wait time where nothing is being processed.
A CloudWatch schedule is set to insert a message into the Agent SQS queue every 15 minutes. This operation runs an initial Lambda function to see if the data queue itself is in a non-empty state. After running, it must be cleared.
A Lambda function has a maximum duration of 15 minutes. If the backlog still remains after that time, the MLOps agent gracefully terminates and passes another message to the Agent SQS queue to trigger another new Lambda function instance to take its place.
## Add dependent items to S3 {: #add-depended-items-to-s3 }
The Lambda function retrieves the agent installer and configuration files from S3; create a new bucket or leverage an existing one for these items. The MLOps agent tarball and its configuration are hosted in S3, and both are provided to the function via its environment config. (This topic refers to these objects as the agent *archive* and *config*, respectively.)
```bash
s3://my_bucket/community/agent/datarobot-mlops-agent-6.1.3-272.tar.gz
s3://my_bucket/community/agent/mlops.agent.conf.yaml
```
## Create data Simple Queue Service (SQS) queue {: #create-data-simple-queue-service-sqs-queue }
This queue is written to by one to many externally deployed models. Scored data records and metadata are sent to this queue. An MLOps agent reads from the queue and directs the records to DataRobot for model monitoring.
1. Navigate to **Services > SQS**.
2. Click **Create New Queue**.
3. Provide the queue name: `sqs_mlops_data_queue`.
4. Choose the **Standard Queue** option.
5. Configure **Queue Attributes** if desired, or choose the **Quick-Create Queue** with defaults.
Notice some of the configurable options, such as **Default visibility timeout**. Once an element is read from the queue, it is invisible to other queue consumers for this amount of time, after which it becomes visible again if the consumer did not also report back that the item was processed successfully and that it can be deleted from the queue.
Another configurable option, **Message retention**, defines how long an unprocessed item is allowed to stay in the queue. If it is not processed by this time, the item is removed. If agents are down for some reason, it is a good idea to set this value to a week or longer so that queue items are not lost before being consumed by an MLOps agent.
Make note of the Amazon Resource Name (ARN) upon creation.
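If you prefer the command line, a roughly equivalent sketch using the AWS CLI might look like the following; the console steps above remain the documented path:
```bash
# Create the data queue (standard queue, default attributes)
aws sqs create-queue --queue-name sqs_mlops_data_queue

# Look up the queue ARN to note it for the IAM policy steps later
aws sqs get-queue-attributes \
    --queue-url "$(aws sqs get-queue-url --queue-name sqs_mlops_data_queue --query QueueUrl --output text)" \
    --attribute-names QueueArn
```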
## Create agent SQS queue {: #create-agents-sqs-queue }
This queue is used to kick off Lambda services which will run MLOps agents. These agents dequeue elements from the Data Queue and report them to the DataRobot platform for monitoring.
1. Navigate to **Services > SQS**.
2. Click **Create New Queue**.
3. Provide the queue name: `sqs_mlops_agent_queue`. (Notice that it is **agent** and not **data** this time.)
4. Choose the **Standard Queue** option.
5. Set the default queue attribute **Default Visibility Timeout** to 16 minutes.
6. Configure Queue Attributes if desired, or choose the **Quick-Create Queue** with defaults.
An element that has been read will not become visible again for the 16 minutes specified by the **Default Visibility Timeout**. In theory, a successful Lambda function (subject to the AWS limit of 15 minutes) will have been triggered by the element, and a successful return from the function results in the SQS element being permanently removed from the queue. If the function fails for some reason, the element becomes visible in the queue again. This timeout should allow each element to be sent to DataRobot once and prevents any record from being read and sent in duplicate.
## Create DynamoDB tables to track Lambda agents and errors {: #create-dynamodb-tables-to-track-lambda-agents-and-errors }
A table is used to track concurrent Lambda functions running MLOps agents.
1. Navigate to the DynamoDB (managed NoSQL AWS key-value/document store database) service on AWS.
2. Select the **Create table** option.
3. Name a new table `lambda_mlops_agents` and set a primary key of `aws_request_id`.
4. Create the table with default settings.
5. Navigate to the **Overview** tab and make note of the ARN for the table.

Another table is used to track agent errors.
1. Navigate to the DynamoDB (managed NoSQL AWS key-value/document store database) service on AWS.
2. Select the **Create table** option.
3. Name a new table `lambda_mlops_agents_error` and set a primary key of `aws_request_id`.
4. Create the table with default settings.
5. Navigate to the **Overview** tab and make note of the ARN for the table.
You should now have two tables, `lambda_mlops_agents` and `lambda_mlops_agents_error`, and a note of the ARN for each.
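The equivalent AWS CLI calls are sketched below; the on-demand billing mode is an example choice and may differ from the console defaults:
```bash
# Create both tracking tables with aws_request_id as the partition (HASH) key
for TABLE in lambda_mlops_agents lambda_mlops_agents_error; do
    aws dynamodb create-table \
        --table-name "$TABLE" \
        --attribute-definitions AttributeName=aws_request_id,AttributeType=S \
        --key-schema AttributeName=aws_request_id,KeyType=HASH \
        --billing-mode PAY_PER_REQUEST
done
# The table ARN is returned in the TableDescription of each response
```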
## Configure the IAM role for a Lambda function {: #configure-the-iam-role-for-a-lambda-function }
The following sections provide steps for configuring the IAM role for a Lambda function.
### Add an inline policy for the database {: #add-an-inline-policy-for-the-database }
1. Navigate to Identity and Access Management (IAM).
2. Under **Roles**, select **Create** role.
3. Select **AWS service** and **Lambda** as a use case and then navigate to **Next: Permissions**.
4. Search for and add the **AWSLambdaBasicExecutionRole** policy.
5. Provide the Role name `lambda_mlops_agent_role`, and then click on **Create role**.
6. In the **Roles** page, filter on the newly created role; choose it.
7. Click on the **Permissions** tab, and then select **Add inline policy**.
8. Select **choose a service**, then filter on “DynamoDB” and select it from the returned options.
9. Under actions, choose the following privileges:

10. Under Resources, choose **Specific** and click the **Add ARN** link under the table option.
11. Specify the ARN of the DynamoDB `lambda_mlops_agents` and `lambda_mlops_agents_error` tables created previously.
12. Choose **Review policy**.
13. Provide the name `lambda_mlops_agent_role_dynamodb_policy`.
14. Complete the task by clicking **Create policy**.
### Add an inline policy for the queues {: #add-an-inline-policy-for-the-queues }
1. Navigate to Identity and Access Management (IAM).
2. Under **Roles**, select **Create** role.
3. Select **AWS service** and **Lambda** as a use case and then navigate to **Next: Permissions**.
4. Search for and add the **AWSLambdaBasicExecutionRole** policy.
5. Provide the Role name `lambda_mlops_agent_role`, and then click on **Create role**.
6. In the **Roles** page, filter on the newly created role; choose it.
7. Click on the **Permissions** tab, and then select **Add inline policy**.
8. Select **choose a service**, then filter on the **“SQS”** service.
9. Select **Read** and **Write** checkboxes and **Add ARN**.
10. Add the ARN of each SQS queue.
11. Review the policy and name it `lambda_mlops_agent_role_sqs_policy`.
### Add an inline policy for S3 {: #add-an-inline-policy-for-s3 }
1. Navigate to Identity and Access Management (IAM).
2. Under **Roles**, select **Create** role.
3. Select **AWS service** and **Lambda** as a use case and then navigate to **Next: Permissions**.
4. Search for and add the **AWSLambdaBasicExecutionRole** policy.
5. Provide the Role name `lambda_mlops_agent_role`, and then click on **Create role**.
6. In the **Roles** page, filter on the newly created role; choose it.
7. Click on the **Permissions** tab, and then select **Add inline policy**.
8. Select **choose a service**, then filter on the **“S3”** service.
9. Choose the **Read > GetObject** privilege, and under Resources choose the specific bucket and all objects in it.
10. Review the policy and save it as `lambda_mlops_agent_role_s3_policy`.

### Create Python Lambda {: #create-python-lambda }
1. Navigate to the Lambda service, click **Create function**, and create a new Python 3.7 Lambda function from scratch named `mlops_agent_processor`.
2. Under Permissions, choose **Use an existing role** and select `lambda_mlops_agent_role`.
### Create the Lambda environment variables {: #create-the-lambda-environment-variables }
To configure the Lambda function, set its basic settings and environment variables. Because the function keeps an agent running for up to 14 minutes (see `MAXIMUM_MINUTES` in the code below), set the function timeout to the 15-minute maximum. The function doesn't do a large amount of local processing, so 512MB should be sufficient to run the agent efficiently and at a low Lambda cost (the service is billed by GB-seconds). Set environment variables to indicate the location where the agent and its configuration are stored, the queues the agent interacts with, and the target concurrency.

### Populate the Lambda function code {: #populate-the-lambda-function-code }
Use the following code in the `lambda_function.py` window.
```python
from urllib.parse import unquote_plus
import boto3
import os
import time
from boto3.dynamodb.conditions import Key, Attr
import subprocess
import datetime


def get_approx_data_queue_size(sqs_resource, queue_name):
    data_queue = sqs_resource.get_queue_by_name(QueueName=queue_name)
    return int(data_queue.attributes.get('ApproximateNumberOfMessages'))


def lambda_handler(event, context):
    request_id = context.aws_request_id
    # the lambda environment is coming with openjdk version "1.8.0_201"
    #os.system("java -version")

    try:
        # get and parse environment values
        ENV_AGENT_BUCKET = os.environ['dr_mlops_agent_bucket']
        ENV_AGENT_ZIP = os.environ['dr_mlops_agent_zip']
        ENV_AGENT_CONFIG = os.environ['dr_mlops_agent_config']
        ENV_TARGET_CONCURRENCY = int(os.environ['dr_mlops_agent_target_concurrency'])
        ENV_DATA_QUEUE = os.environ['data_queue']
        ENV_AGENT_QUEUE = os.environ['agent_queue']
        agent_config = os.path.basename(ENV_AGENT_CONFIG)
        # datarobot_mlops_package-6.3.3-488.tar.gz
        agent_zip = os.path.basename(ENV_AGENT_ZIP)
        # datarobot_mlops_package-6.3.3-488
        temp_agent_dir = agent_zip.split(".tar")[0]
        # datarobot_mlops_package-6.3.3
        temp_agent_dir = temp_agent_dir.split("-")
        agent_dir = temp_agent_dir[0] + '-' + temp_agent_dir[1]
    except:
        raise Exception("Problem retrieving and parsing environment variables!")

    # lambda max runtime allowed (15 minute AWS maximum duration, recommended value to use here is 14 for MAXIMUM_MINUTES)
    MAXIMUM_MINUTES = 14
    start_time_epoch = int(time.time())
    time_epoch_15m_ago = start_time_epoch - int(60 * 14.7)
    time_epoch_60m_ago = start_time_epoch - 60 * 60
    max_time_epoch = start_time_epoch + 60 * int(MAXIMUM_MINUTES)

    # check number of items in data queue to process
    sqs = boto3.resource('sqs')
    approx_data_queue_size = get_approx_data_queue_size(sqs, ENV_DATA_QUEUE)

    # exit immediately if data queue has nothing to process
    if approx_data_queue_size == 0:
        print('nothing to process, queue is empty.')
        return None

    # connect to database
    dynamodb = boto3.resource('dynamodb')

    # count running agents in dynamo in the last 15 minutes
    table = dynamodb.Table('lambda_mlops_agents')
    response = table.scan(
        FilterExpression=Attr('start_time_epoch').gte(time_epoch_15m_ago)
    )
    agent_count = int(response['Count'])
    print('agent count started and running in the last 15 minutes is: ' + str(agent_count))

    # count error agent records in dynamo in the last hour
    error_table = dynamodb.Table('lambda_mlops_agents_error')
    response = error_table.scan(
        FilterExpression=Attr('start_time_epoch').gte(time_epoch_60m_ago)
    )
    error_count = int(response['Count'])
    print('agent errors count in the past 60 minutes: ' + str(error_count))

    # exit immediately if there has been an error in the last 60 minutes
    if error_count > 0:
        print('exiting - lambda agents have errored within the last hour.')
        return None

    # create agent queue in case recruitment is needed
    agent_queue = sqs.get_queue_by_name(QueueName=ENV_AGENT_QUEUE)

    # exit immediately if target concurrent lambda count has already been reached
    if agent_count >= ENV_TARGET_CONCURRENCY:
        print('exiting without creating a new agent, already hit target concurrency of: ' + str(ENV_TARGET_CONCURRENCY))
        return None
    else:
        # how many items does it take to be in the queue backlog for each additional agent to recruit?
        SQS_QUEUE_ITEMS_PER_LAMBDA = 500
        # add agent record to table for this lambda instance
        table.put_item(Item={'aws_request_id': request_id, 'start_time_epoch': start_time_epoch})
        # total lambdas, minimum of queue size / items per lambda or target concurrency, -1 for this lambda, - current running agent count
        lambdas_to_recruit = min(-(-approx_data_queue_size // SQS_QUEUE_ITEMS_PER_LAMBDA), ENV_TARGET_CONCURRENCY) - 1 - agent_count
        if lambdas_to_recruit < 0:
            lambdas_to_recruit = 0
        for x in range(lambdas_to_recruit):
            print('adding new agent: ' + str(x))
            agent_queue.send_message(MessageBody='{ request_id: "' + request_id + '", source: "lambda_new_' + str(x) + '" }')

    # install agent
    try:
        # switch to a local workspace
        os.chdir("/tmp")
        # get agent zip if it is not already here, and install it
        if os.path.isfile("/tmp/" + agent_zip) == False:
            print('agent does not exist. installing...')
            print("bucket: " + ENV_AGENT_BUCKET + " agent zip: " + ENV_AGENT_ZIP)
            # get agent zip
            s3 = boto3.resource('s3')
            s3.Bucket(ENV_AGENT_BUCKET).download_file(ENV_AGENT_ZIP, agent_zip)
            # unzip contents
            os.system('tar -xvf ' + agent_zip + ' 2> /dev/null')
            # replace config
            os.chdir('/tmp/' + agent_dir + '/conf')
            s3.Bucket(ENV_AGENT_BUCKET).download_file(ENV_AGENT_CONFIG, agent_config)
        else:
            print('agent already exists, hot agent!')
    except:
        raise Exception("Problem installing the agent!")

    # start the agent
    os.chdir('/tmp/' + agent_dir)
    os.system('bin/start-agent.sh')
    time.sleep(5)
    output = subprocess.check_output("head logs/mlops.agent.log", shell=True)
    print('head mlops.agent.log --- \n' + str(output))
    output = subprocess.check_output("head logs/mlops.agent.out", shell=True)
    print('head mlops.agent.out --- \n' + str(output))
    output = subprocess.check_output("tail -3 logs/mlops.agent.log | grep 'Fail\|Error\|ERROR' | wc -l", shell=True)
    print('tail -3 logs/mlops.agent.log log count --- \n' + str(int(output)) + ' errors.')

    # write to error_table if agent is failing
    if int(output) > 0:
        error_table.put_item(Item={'aws_request_id': request_id, 'start_time_epoch': start_time_epoch})
        print('exiting - lambda agent errored.')
        # stop agent and clean up
        os.system('bin/stop-agent.sh ')
        os.system('rm -rf bin/PID.agent')
        # remove dynamo record for this lambda
        table.delete_item(Key={'aws_request_id': request_id})
        return None

    # time to let the agent do its thing...
    current_epoch = int(time.time())
    # while there is still time left to run the Lambda and there are items on the queue
    while current_epoch < max_time_epoch and approx_data_queue_size > 0:
        time.sleep(10)
        current_epoch = int(time.time())
        approx_data_queue_size = get_approx_data_queue_size(sqs, ENV_DATA_QUEUE)
        print(datetime.datetime.now(datetime.timezone.utc).strftime("%Y-%m-%d %H:%M:%S") +
              ' - approx_data_queue_size: ' + str(approx_data_queue_size))
        #output = subprocess.check_output("tail logs/mlops.agent.log", shell=True)
        #print('tail mlops.agent.log --- \n' + str(output))

    # exited processing loop, time to stop agent and clean up
    os.system('bin/stop-agent.sh ')
    os.system('rm -rf bin/PID.agent')
    # remove dynamo record for this lambda
    table.delete_item(Key={'aws_request_id': request_id})

    # if we ran out of time and there are more items in the backlog, recruit a replacement for this lambda
    if current_epoch > max_time_epoch:
        print('ran out of time...')
        # if there are still elements to process
        if approx_data_queue_size > 0:
            print('adding replacement agent...')
            agent_queue.send_message(MessageBody='{ request_id: "' + request_id + '", source: "lambda_replacement" }')
```
### Create the Lambda function SQS trigger {: #create-the-lambda-function-sqs-trigger }
1. Click **+Add trigger** to create a trigger to initiate the Lambda function.
2. Search for "SQS" under Trigger configuration.
3. Choose `sqs_mlops_agent_queue`.
4. Specify a batch size of 1.
5. Enable and add the trigger.
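The trigger can also be created from the CLI; a sketch, with a placeholder account ID in the queue ARN:
```bash
aws lambda create-event-source-mapping \
    --function-name mlops_agent_processor \
    --event-source-arn arn:aws:sqs:us-east-1:123456789012:sqs_mlops_agent_queue \
    --batch-size 1 \
    --enabled
```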
### Configure the agent account {: #configure-the-agent-account }
Set an API token for a DataRobot account in `mlops.agent.conf.yaml`. It is best to use the token for a service account rather than a particular user to avoid the risk of the account being deactivated in the future. It is also best for the admin to exempt this account from any API rate limiting, which can be done in the user's profile settings.

## Test the queue processor {: #test-the-queue-processor }
The following sections explain how to set up an MLOps agent user and an external deployment in order to test the queue processor.
### Create an IAM MLOps agent user {: #create-an-iam-mlops-agent-user }
This user will be used for generating records to fill the data queue.
1. Navigate to the IAM service and choose **Add user.**
2. Name the user `community_agent_user_iam`.
3. Select the checkbox for **Programmatic access.**
4. Click on the Permissions page.
5. Choose **Attach existing policies** and filter on “SQS”.
6. Choose the **AmazonSQSFullAccess** checkbox and click through next steps to create the user.
Upon successful creation, the Access key ID and Secret access key are displayed. Save both of these values.
### Create an external deployment {: #create-an-external-deployment }
Complete this step and the following in a client environment within the unzipped MLOps agent directory, such as **datarobot-mlops-agent-6.1.3**.
1. Install the Python wheel file from within the unzipped MLOps agent directory.
```bash
pip install lib/datarobot_mlops-*-py2.py3-none-any.whl --user
```
2. The `conf/mlops.agent.conf.yaml` must be updated as well. Specify the `mlopsURL` value to be that of the app server, for instance [https://app.datarobot.com](https://app.datarobot.com/).
3. Update the `apiToken`.
4. Navigate to `channelConfigs` and comment out the `FS_SPOOL` and associated `spoolDirectoryPath` line.
5. Specify SQS channel values to take its place, using the URL of the SQS data queue.
```yaml
- type: "SQS_SPOOL"
  details: {name: "sqsSpool", queueUrl: "https://sqs.us-east-1.amazonaws.com/1234567/sqs_mlops_data_queue"}
```
6. The model that will be used is the `examples/python/BinaryClassificationExample`. Change into this directory and create the deployment. This will produce a `MLOPS_DEPLOYMENT_ID` and `MLOPS_MODEL_ID`; make note of these values.
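For reference, these commands typically look like the sketch below; the helper script name may vary slightly between agent releases:
```bash
cd examples/python/BinaryClassificationExample
# creates the external deployment and prints MLOPS_DEPLOYMENT_ID and MLOPS_MODEL_ID
bash create_deployment.sh
```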
### Send records to the SQS queue via a script {: #send-records-to-the-sqs-queue-via-a-script }
1. From the same example directory, edit the `run_example.sh` script.
2. Add the following the lines to the script to specify the target SQS queue for tracking record data and the AWS credentials for authentication:
```bash
# values created from create_deployment.sh
export MLOPS_DEPLOYMENT_ID=5e123456789012
export MLOPS_MODEL_ID=5e123456789032
# where the tracking records will be sent
export MLOPS_OUTPUT_TYPE=SQS
export MLOPS_SQS_QUEUE_URL='https://sqs.us-east-1.amazonaws.com/1234567/sqs_mlops_data_queue'
# credentials
export AWS_DEFAULT_REGION=us-east-1
export AWS_ACCESS_KEY_ID='AKIAABCDEABCDEABCDEPY'
export AWS_SECRET_ACCESS_KEY='GpzABCDEABCDEABCDEl6dcm'
```
3. Comment out the existing `export MLOPS_OUTPUT_TYPE=OUTPUT_DIR` line.
4. Run the example script. It should successfully populate records into the queue, which the agent process will then consume and report back to DataRobot.
## Create a schedule {: #create-a-schedule }
The last step is to run the Lambda function on a schedule using the AWS CloudWatch service. Check if any records are in the queue, and subsequently begin processing them and scaling up as needed.
1. In AWS, navigate to the CloudWatch service.
2. Then click on **Events > Rules**, and click **Create rule**.
3. Click **Schedule**, and set the rule to run every 15 minutes.
4. Next, click **Add target**.
5. From the dropdown, choose the **SQS queue** option.
6. Select `sqs_mlops_agent_queue`, and then click **Configure details**.

7. Provide the name of `sqs_mlops_agent_queue_trigger_15m`.
8. Enable the rule, and select **Create rule**.
This procedure enables a rule that executes every 15 minutes. If desired, use a cron expression instead to run at a round number (at the top of the minute).
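The equivalent CLI sketch is shown below; the second `put-rule` call shows the cron alternative that fires at the top of each quarter hour, and the account ID in the queue ARN is a placeholder:
```bash
# Rate-based schedule (every 15 minutes from the time the rule is enabled)
aws events put-rule \
    --name sqs_mlops_agent_queue_trigger_15m \
    --schedule-expression "rate(15 minutes)"

# Cron-based alternative (minutes 0, 15, 30, and 45 of every hour)
aws events put-rule \
    --name sqs_mlops_agent_queue_trigger_15m \
    --schedule-expression "cron(0/15 * * * ? *)"

# Point the rule at the agent queue
aws events put-targets \
    --rule sqs_mlops_agent_queue_trigger_15m \
    --targets "Id"="1","Arn"="arn:aws:sqs:us-east-1:123456789012:sqs_mlops_agent_queue"
```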
## Wrap-up {: #wrap-up }
The serverless architecture spins up a single MLOps agent on the schedule you enabled. If the defined logic determines that the queue has many items, it recruits friends to process the queue. With this architecture in place, the actual costs for processing the queue are minimal since compute is only paid for when necessary. You can configure the flexible scaling options to scale up and handle a large amount of demand, should the need arise.
|
monitor-serverless-mlops-agents
|
---
title: AWS Lambda reporting to MLOps
description: A serverless method of AWS Lambda reporting actuals to DataRobot MLOps.
---
# AWS Lambda reporting to MLOps {: #aws-lambda-reporting-to-mlops }
This topic describes a serverless method of reporting actuals data back to DataRobot once results are available for predicted items. Python 3.7 is used for the executable.
## Architecture {: #architecture }

The process works as follows:
1. CSV file(s) arrive at AWS S3 containing results to report back to DataRobot. These can be files created from any process. Examples above include a database writing results to S3 or a process sending a CSV file to S3.
2. Upon arrival in the monitored directory, a serverless compute AWS Lambda function is triggered.
3. The related deployment in DataRobot is specified in the S3 bucket path name to the CSV file, so the Lambda works generically for any deployment.
4. The Lambda parses out the deployment, reads through the CSV file, and reports results back to DataRobot for processing. You can then explore the results from various angles within the platform.

## Create or use an existing S3 bucket {: #create-or-use-an-existing-s3-bucket }
Actual CSV prediction results are written to a monitored area of an AWS S3 bucket. If one does not exist, create the new area to receive the results. Files are expected to be copied into this bucket from external sources such as servers, programs, or databases. To create a bucket, navigate to the S3 service within the AWS console and click **Create bucket**. Provide a name (like `datarobot-actualbucket`) and region for the bucket, then click **Create bucket**. Change the defaults if required for organizational policies.

## Create an IAM Role for the Lambda {: #create-an-iam-role-for-the-lambda }
Navigate to Identity and Access Management (IAM). Under **Roles**, select **Create role**. Select **AWS service** and **Lambda** as a use case and then navigate to **Next: Permissions**. Search for and add the *AWSLambdaBasicExecutionRole* policy. Proceed with the next steps, provide the Role name `lambda_upload_actuals_role`. Complete the task by clicking **Create role**.
Two policies must be attached to this role:
* The AWS-managed policy, *AWSLambdaBasicExecutionRole*.
* An inline policy used for accessing and managing the S3 objects/files associated with this Lambda. Specify the inline policy for monitoring the S3 bucket as follows:
```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::datarobot-actualbucket",
                "arn:aws:s3:::datarobot-actualbucket/*"
            ]
        }
    ]
}
```
## Create the Lambda {: #create-the-lambda }
Navigate to the AWS Lambda service in the GUI console and from the dashboard, click **Create function**. Provide a name, such as `lambda_upload_actuals`. In the Runtime environment section, choose Python 3.7. Expand the execution role section, select **Use an existing role**, and choose the `lambda_upload_actuals_role` created above.
## Add the Lambda trigger {: #add-the-lambda-trigger }
This Lambda will run any time a CSV file lands in the path it is monitoring. From the Designer screen, choose **+Add trigger**, and select **S3** from the dropdown list. For the bucket, choose the one specified in the IAM role policy you created above. Optionally, specify a prefix if the bucket is used for other purposes. For example, use the value `upload_actuals/` as a prefix if you want to only monitor objects that land in `s3://datarobot-actualbucket/upload_actuals/`.
(Note that the data for this example would be expected to arrive similar to `s3://datarobot-actualbucket/upload_actuals/2f5e3433_DEPLOYMENT_ID_123/actuals.csv`.) Click **Add** to save the trigger.
## Create and add a Lambda layer {: #create-and-add-a-lambda-layer }
Lambda layers provide the opportunity to build Lambda code on top of libraries, and separate that code from the delivery package. Although not required to separate the libraries, using layers simplifies the process of bringing in necessary packages and maintaining code. This code will require the requests and pandas libraries, which are not part of the base Amazon Linux image Lambda runs in, and must be added via a layer. This can be done by creating a virtual environment. In this example, the environment used to execute the code below is an Amazon Linux EC2 box. (See the instructions to install Python 3 on Amazon Linux [here](https://aws.amazon.com/premiumsupport/knowledge-center/ec2-linux-python3-boto3/){ target=_blank }.)
Creating a ZIP file for a layer can then be done as follows:
```bash
python3 -m venv my_app/env
source ~/my_app/env/bin/activate
pip install requests
pip install pandas
deactivate
```
Per the [Amazon documentation](https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html){ target=_blank }, this must be placed in the python or site-packages directory and expanded under `/opt`.
```bash
cd ~/my_app/env
mkdir -p python/lib/python3.7/site-packages
cp -r lib/python3.7/site-packages/* python/lib/python3.7/site-packages/.
zip -r9 ~/layer.zip python
```
Copy the layer.zip file to a location on S3; this is required if the Lambda layer is larger than 10MB.
```bash
aws s3 cp layer.zip s3://datarobot-bucket/layers/layer.zip
```
Navigate to **Lambda service** > **Layers** > **Create Layer**. Provide a name and link to the file in S3 (this is the Object URL of the uploaded ZIP). Setting compatible runtimes is recommended but not required; it makes the layer easier to find in a dropdown menu when adding it to a Lambda. Save the layer and make note of its ARN.

Navigate back to the Lambda and click **Layers** (below the Lambda title). Add a layer and provide the ARN from the previous step.
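If you prefer the CLI, a sketch of publishing and attaching the layer is shown below; the layer name and the layer version ARN are example placeholders:
```bash
# Publish the layer from the ZIP file uploaded to S3
aws lambda publish-layer-version \
    --layer-name upload_actuals_layer \
    --content S3Bucket=datarobot-bucket,S3Key=layers/layer.zip \
    --compatible-runtimes python3.7

# Attach the returned LayerVersionArn to the function
aws lambda update-function-configuration \
    --function-name lambda_upload_actuals \
    --layers arn:aws:lambda:us-east-1:123456789012:layer:upload_actuals_layer:1
```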
## Define the Lambda code {: #define-the-lambda-code }
```python
import boto3
import os
import os.path
import urllib.parse
import pandas as pd
import requests
import json

# 10,000 maximum allowed payload
REPORT_ROWS = int(os.environ["REPORT_ROWS"])
DR_API_TOKEN = os.environ["DR_API_TOKEN"]
DR_INSTANCE = os.environ["DR_INSTANCE"]

s3 = boto3.resource("s3")


def report_rows(list_to_report, url, total):
    print("reporting " + str(len(list_to_report)) + " records!")
    df = pd.DataFrame(list_to_report)
    # this must be provided as a string
    df["associationId"] = df["associationId"].apply(str)
    report_json = json.dumps({"data": df.to_dict("records")})
    response = requests.post(
        url,
        data=report_json,
        headers={
            "Authorization": "Bearer " + DR_API_TOKEN,
            "Content-Type": "application/json",
        },
    )
    print("response status code: " + str(response.status_code))
    if response.status_code == 202:
        print("success! reported " + str(total) + " total records!")
    else:
        print("error reporting!")
        print("response content: " + str(response.content))


def lambda_handler(event, context):
    # get the object that triggered lambda
    bucket = event["Records"][0]["s3"]["bucket"]["name"]
    key = urllib.parse.unquote_plus(event["Records"][0]["s3"]["object"]["key"], encoding="utf-8")
    filenm = os.path.basename(key)
    fulldir = os.path.dirname(key)
    deployment = os.path.basename(fulldir)
    print("bucket is " + bucket)
    print("key is " + key)
    print("filenm is " + filenm)
    print("fulldir is " + fulldir)
    print("deployment is " + deployment)
    url = DR_INSTANCE + "/api/v2/deployments/" + deployment + "/actuals/fromJSON/"
    session = boto3.session.Session()
    client = session.client("s3")
    line_no = -1
    total = 0
    rows_list = []
    for lines in client.get_object(Bucket=bucket, Key=key)["Body"].iter_lines():
        # if the header, make sure the case sensitive required fields are present
        if line_no == -1:
            header = lines.decode("utf-8").split(",")
            col1 = header[0]
            col2 = header[1]
            expectedHeaders = ["associationId", "actualValue"]
            if col1 not in expectedHeaders or col2 not in expectedHeaders:
                print("ERROR: data must be csv with 2 columns, headers case sensitive: associationId and actualValue")
                break
            else:
                line_no = 0
        else:
            input_dict = {}
            input_row = lines.decode("utf-8").split(",")
            input_dict.update({col1: input_row[0]})
            input_dict.update({col2: input_row[1]})
            rows_list.append(input_dict)
            line_no += 1
            total += 1
            if line_no == REPORT_ROWS:
                report_rows(rows_list, url, total)
                rows_list = []
                line_no = 0
    if line_no > 0:
        report_rows(rows_list, url, total)
    # delete the processed input
    s3.Object(bucket, key).delete()
```
## Set Lambda environment variables {: #set-lambda-environment-variables }
Three variables need to be set for the Lambda.

* `DR_API_TOKEN` is the API token of the account with access to the deployment, which will be used for submitting the actuals to the DataRobot environment. It is advised to use a service account for this configuration, rather than a personal user account.
* `DR_INSTANCE` is the application server of the DataRobot instance that is being used.
* `REPORT_ROWS` is the number of actuals records to upload in a payload; 10000 is the maximum.
## Set Lambda resource settings {: #set-lambda-resource-settings }
Edit the **Basic settings** to set configuration items for the Lambda. When reading input data to buffer and submit payloads, the Lambda uses a fairly low amount of local compute and memory; 512MB should be more than sufficient, allocating roughly half a vCPU accordingly. The **Timeout** is the maximum amount of time the Lambda is allowed to run before AWS terminates it. The default of 3 seconds is likely too short, especially when submitting maximum-size 10,000-record payloads; the configuration tested in this example was set to 30 seconds, and very large input files may warrant a value closer to the 15-minute maximum.
## Run the Lambda {: #run-the-lambda }
The Lambda is coded to expect a report-ready pair of data columns. It expects a CSV file with a header and case-sensitive columns `associationId` and `actualValue`. Sample file contents are shown below for a Titanic passenger scoring model.
```csv
associationId,actualValue
892,1
893,0
894,0
895,1
896,1
```
The following is an AWS CLI command to leverage the S3 service and copy a local file to the monitored directory:
```bash
aws s3 cp actuals.csv s3://datarobot-actualbucket/upload_actuals/deploy/5aa1a4e24eaaa003b4caa4/actuals.csv
```
Note that the deployment ID (`5aa1a4e24eaaa003b4caa4` in the example above) is included as part of the path. This is the DataRobot deployment with which the actuals will be associated. Similarly, files from a process or database export can be written directly to S3.
Consider that the maximum length of time for a Lambda to run is 15 minutes. In the testing for this article, this length of time was sufficient for 1 million records. In production usage, you may want to explore approaches that include more files with fewer records. Also, you may want to report actuals for multiple deployments simultaneously. It may be prudent to disable API rate limiting for the associated API token/service account reporting these values.
Flesh out any additional error handling, such as sending an email, sending a queue data message, creating custom code to fit into an environment, moving the S3 file, etc. This Lambda deletes the input file upon successful processing and writes errors to the log in the event of failure.
## Wrap-up {: #wrap-up }
At this point, the Lambda is complete and ready to report any actuals data fed to it (i.e., the defined S3 location receives a file in the expected format). Set up a process to perform this operation once actual results arrive, then monitor and manage the model with DataRobot MLOps to understand how it’s performing for your use case.
|
aws-lambda-reporting-to-mlops
|
---
title: Use DataRobot Prime with AWS Lambda
description: Use DataRobot Prime to download a DataRobot model and deploy it using AWS Lambda.
---
# Use DataRobot Prime models with AWS Lambda {: #use-datarobot-prime-models-with-aws-lambda }
!!! info "Availability information"
    The ability to create _new_ DataRobot Prime models has been removed from the application. This does not affect _existing_ Prime models or deployments. To export Python code in the future, use the Python Scoring Code export function in any RuleFit model.
This page outlines how you can use the DataRobot Prime model functionality to download a DataRobot model and deploy it using AWS Lambda.
DataRobot Prime can be initiated on most DataRobot models and allows you to download an approximation of a DataRobot model either as a Python or Java module.
The code is then easily deployable in any environment and is not dependent on the DataRobot application.
Before proceeding with this workflow, be sure to download a DataRobot Prime model as a Java module.
## Why deploy on AWS Lambda {: #why-deploy-on-aws-lambda }
While DataRobot provides its own scalable prediction servers that are fully integrated with the platform, there are multiple reasons why you would want to deploy on AWS Lambda:
* Company policy or governance decision.
* Serverless architecture.
* Cost reduction.
* Custom functionality on top of the DataRobot model.
* The ability to integrate models into systems that cannot communicate with the DataRobot API.
## Setup {: #setup }
Follow the steps below to complete setup for this example.
1. Rename the Java file to `Prediction.java`.
2. Create a project, “lambda_prime_example”, in the IDE of your choice.
3. Copy `Prediction.java` to the project.

4. Compile the package using the `mvn package` command.
5. Click **Create function** to create a new AWS function.

6. Choose Java 11 to create the Lambda function.

7. Give the function a name and choose the permissions.

8. Upload the compiled JAR file to Lambda.

9. Change the Lambda handler to the name of the Java package method:
```
com.datarobot.examples.scoring.prime.PrimeLambdaExample::handleRequest
```
Setup is now complete. Next, perform testing to verify that the deployment is working as intended.
## Test the Lambda function {: #test-the-lambda-function }
To test the Lambda function:
1. Go to the TEST event configuration page.

2. Add JSON with features as a test event:

3. Click the **Test** button.

Now, you can configure the integration with an AWS API Gateway or other services so that data is sent to the Lambda function and you receive results back.
|
prime-lambda
|
---
title: Use Scoring Code with AWS Lambda
description: Learn how to integrate Scoring Code models with AWS Lambda.
---
# Use Scoring Code with AWS Lambda {: #use-scoring-code-with-aws-lambda }
This topic describes how you can use DataRobot’s Scoring Code functionality to download a model's Scoring Code and deploy it using AWS Lambda.
DataRobot automatically runs code generation for those models that support it, and indicates code availability with an icon on the Leaderboard.

This option allows you to download validated Java Scoring Code for a predictive model without approximation; the code is easily deployable in any environment and is not dependent on the DataRobot application.
### Why deploy on AWS Lambda {: #why-deploy-on-aws-lambda }
While DataRobot provides its own scalable prediction servers that are fully integrated with the platform, there are multiple reasons why you would want to deploy on AWS Lambda:
* Company policy or governance decision.
* Serverless architecture.
* Cost reduction.
* Custom functionality on top of the DataRobot model.
* The ability to integrate models into systems that cannot communicate with the DataRobot API. In this case, AWS Lambda can be used either as a primary means of scoring for fully offline systems or as a backend for systems that are using the DataRobot API.
### Download Scoring Code {: #download-scoring-code }
The first step to deploying a DataRobot model to AWS Lambda is to download the Scoring Code JAR file from the [Leaderboard](sc-download-leaderboard) or the [deployment](sc-download-deployment).

Next, create your own Lambda scoring project.

In `pom.xml`, change the path to the downloaded JAR file.

### Compile the project {: #compile-the-project }
After downloading Scoring Code and creating a Lambda project, finalize the setup by compiling the project.
To start, find this line in `CodeGenLambdaExample.java` and insert your exported model ID:
```
public static String modelId = "<Put exported Scoring Code model_id here>";
```
If you have a classification model, you need to use the IClassificationPredictor interface:
```
public static IClassificationPredictor model;
```
If it’s a regression model, use the IRegressionPredictor interface:
```
public static IRegressionPredictor model;
```
Now you can run the maven command `mvn package` to compile the code. The packaged JAR file will appear in the target folder of the project.
### Deploy to AWS Lambda {: #deploy-to-aws-lambda }
To deploy to AWS Lambda, use the following steps:
1. Select **Create function** from the AWS menu:

2. Choose Java 11 or Java 8 to create the Lambda function:

3. Enter a name for the function.

4. Configure Lambda permissions as shown below.

5. Upload the compiled JAR file to Lambda. View the upload options below.

You can view the JAR file location below.

6. Choose the Lambda handler for your Java package name:

The setup is now complete. DataRobot recommends testing the function to see that the deployment is working as intended.
### Test the Lambda function {: #test-the-lambda-function }
To test the Lambda function:
1. Go to the TEST event configuration page.

2. Add JSON with features as a test event.

3. Click the **Test** button.

After testing is complete, you can integrate with the AWS API Gateway or other services so that data is sent to the Lambda function and it returns results.
|
sc-lambda
|
---
title: AWS Lambda
description: Integrate DataRobot with AWS Lambda.
---
# AWS Lambda {: #aws-lambda }
The sections described below provide techniques for integrating AWS Lambda with DataRobot.
Topic | Describes...
----- | ------
[Serverless MLOps agents](monitor-serverless-mlops-agents) | Monitoring external models with serverless MLOps agents running on AWS Lambda.
[AWS Lambda reporting to MLOps](aws-lambda-reporting-to-mlops) | AWS Lambda serverless reporting of actuals to DataRobot MLOps.
[Use DataRobot Prime models with AWS Lambda](prime-lambda) | Using DataRobot Prime models with AWS Lambda.
[Use Scoring Code with AWS Lambda](sc-lambda) | Making predictions using Scoring Code deployed on AWS Lambda.
|
index
|
---
title: Amazon SageMaker
description: Integrate DataRobot with Amazon SageMaker.
---
# Amazon SageMaker {: #amazon-sagemaker }
The sections described below provide techniques for integrating Amazon SageMaker with DataRobot.
Topic | Describes...
----- | ------
[Deploy models on Sagemaker](sagemaker-deploy) | Deploying on SageMaker and monitoring with MLOps agents.
[Monitor SageMaker models in MLOps](sagemaker-monitor) | Monitoring a SageMaker model that has been deployed to AWS for real-time scoring in DataRobot MLOps.
[Use Scoring Code with AWS SageMaker](sc-sagemaker) | Making predictions using Scoring Code deployed on AWS SageMaker.
|
index
|
---
title: Deploy models on SageMaker
description: Deploy models on SageMaker and monitor them with MLOps Agents.
---
# Deploy models on SageMaker
This article showcases how to make predictions and monitor external models deployed on AWS SageMaker using DataRobot’s [Scoring Code](scoring-code/index) and [MLOps agent](mlops-agent/index).
DataRobot automatically runs Scoring Code generation for models that support it and indicates code availability with an icon on the Leaderboard.
This option allows you to download validated Java Scoring Code for a model without approximation; the code is easily deployable in any environment and is not dependent on the DataRobot application.
### Why deploy on AWS SageMaker {: #why-deploy-on-aws-sagemaker }
While DataRobot provides its own scalable prediction servers that are fully integrated with the platform, there are multiple reasons why someone would want to deploy on AWS SageMaker:
* Company policy or governance decision.
* Custom functionality on top of the DataRobot model.
* Low-latency scoring without the overhead of API calls. Java code is typically faster than scoring through the Python API.
* The ability to integrate models into systems that can’t necessarily communicate with the DataRobot API.
Note that data drift and accuracy tracking are unavailable out-of-the-box unless you configure the [MLOps agent](mlops-agent/index).
You can leverage AWS SageMaker as a deployment environment for your Scoring Code. AWS SageMaker allows you to bring in your machine learning models (in several supported formats) and expose them as API endpoints. DataRobot packages the MLOps agent along with the model in a Docker container which will be deployed on AWS SageMaker.

### Download Scoring Code {: #download-scoring-code }
The first step to deploying a DataRobot model to AWS SageMaker is to download the Scoring Code JAR file from the [Leaderboard](sc-download-leaderboard) or from the [deployment](sc-download-deployment), found under the **Downloads** tab within the model menu.

### Configure the MLOps agent {: #configure-the-mlops-agent }
The MLOps library provides a way for you to get the same monitoring features with your own models as you can with DataRobot models.
The MLOps library provides an interface that you can use to report metrics to DataRobot's MLOps service; from there, you can monitor deployment statistics and predictions, track feature drift, and get other insights to analyze model performance.
For more information, reference the [MLOps agent documentation](mlops-agent/index).
You can use the MLOps library with any type of model, including Scoring Code models downloaded from DataRobot.
DataRobot supports versions of the MLOps library in Java or Python (Python3).
The MLOps agent can be <a href="https://app.datarobot.com/api/v2/mlopsInstaller" target="_blank">downloaded using the DataRobot API</a>, or as a tarball from the DataRobot UI.
From the application, select your user icon and navigate to the Developer Tools page to find the tarball available for download.


1. Install the DataRobot MLOps agent and libraries.
2. Configure the agent.
3. Start the agent service.
4. Ensure that the agent buffer directory (`MLOPS_SPOOLER_DIR_PATH` in the config file) exists.
5. Configure the channel you want to use for reporting metrics. (The MLOps agent can be configured to work with a number of channels, including SQS, Google Pub/Sub, spool file, and RabbitMQ; this example uses SQS.)
6. Use the MLOps library to report metrics from your deployment.
The MLOps library buffers the metrics locally, which enables high throughput without slowing down the deployment. It also forwards the metrics to the MLOps service so that you can monitor model performance via the deployment inventory.
### Create a deployment {: #create-a-deployment }
Helper scripts for creating deployments are available in the examples directories of the MLOps agent tarball.
Every example has its own script to create the related deployment, and the `tools/create_deployment.py` script is available to create your own deployment.
Deployment creation scripts interact with the MLOps service directly, so they must run on a machine with connectivity to the MLOps service.
Every example has a description file ( `<name>_deployment_info`) and a script to create a deployment.
1. Edit the description file to configure your deployment.
2. If you want to enable or disable feature drift tracking, configure the description file by adding or excluding the `trainingDataset` field.
3. Create a new deployment by running the script, `<name>_create_deployment.sh`.
Running this script returns a deployment ID and initial model ID that can be used to instrument your deployment. Alternatively, create the deployment from the DataRobot GUI.

To create a deployment from the DataRobot GUI, use the following steps:
1. Log in to the DataRobot GUI.
2. Select **Model Registry** (1) and click **Add New Package** (2).
3. In the dropdown, select **New external model package** (3).

4. Complete all the information needed for your deployment, and then click **Create package.**

5. Select the **Deployments** tab and click **Deploy Model Package**, validate the details on this page, and click **Create deployment** (right-hand side at the top of the page).
6. You can use the toggle buttons to enable drift tracking, segment analysis of predictions, and more deployment settings.

7. Once you click **Create Deployment** after providing necessary details, the following dialog box appears.

8. You can see the deployment details of the newly created deployment.

If you select the **Integrations** tab for the deployment, you can see the monitoring code.

When you scroll down through this monitoring code, you can see `DEPLOYMENT_ID` and `MODEL_ID` which are used by the MLOps library to monitor the specific model deployment.

### Upload Scoring Code {: #upload-scoring-code }
After downloading the Scoring Code JAR file, upload it to an AWS S3 bucket that is accessible to SageMaker.
SageMaker expects a `tar.gz` archive format to be uploaded to the S3 bucket. Compress your model (the Scoring Code JAR file) using the following command:
```
tar -czvf 5e8471fa169e846a096d5137.jar.tar.gz 5e8471fa169e846a096d5137.jar
```
Note that if you are using macOS, the `tar` command adds hidden files to the `tar.gz` package that create problems during deployment; use the following command instead of the one above:
```
COPYFILE_DISABLE=1 tar -czvf 5e8471fa169e846a096d5137.jar.tar.gz 5e8471fa169e846a096d5137.jar
```
Once you have created the `tar.gz` archive, upload it to the S3 bucket.

### Customize the Docker image {: #customize-the-docker-image }
DataRobot has a published Docker image ([scoring-inference-code-sagemaker:latest](https://hub.docker.com/r/datarobot/scoring-inference-code-sagemaker){ target=_blank }) that contains the inference code. You can use this Docker image as the base image and then add a customized Docker layer containing the MLOps agent.

The shell script `agent-entrypoint.sh` will run the Scoring Code as a JAR file and also start the MLOps agent JAR.

The MLOps configuration file is configured, by default, to report metrics in the Amazon SQS service. Provide the URL for accessing SQS in the `mlops.agent.conf.yaml`:
```yaml
- type: SQS_SPOOL
  details: {name: "sqsSpool", queueUrl: "https://sqs.us-east-1.amazonaws.com/123456789000/mlops-agent-sqs"}
```
Now, create a Docker image from the Dockerfile. Go to the directory containing the Dockerfile and run the following command:
```bash
docker build -t codegen-mlops-sagemaker .
```

This creates a Docker image from Dockerfile (a reference Dockerfile is shared with the source code).
### Publish the Docker image to Amazon ECR {: #publish-a-docker-image-to-amazon-ecr }
Next, publish the Docker image to the Amazon ECR:
1. Authenticate your Docker client to the Amazon ECR registry to which you intend to push the image. Authentication tokens must be obtained for each registry used, and the tokens are valid for 12 hours. You can refer to Amazon documentation for the various authentication options listed in this example.
2. This example uses token-based authentication:
```bash
TOKEN=$(aws ecr get-authorization-token --output text --query 'authorizationData[].authorizationToken')
curl -i -H "Authorization: Basic $TOKEN" https://123456789000.dkr.ecr.us-east-1.amazonaws.com/v2/sagemakertest/tags/list
```
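Alternatively, with AWS CLI version 2 you can authenticate Docker directly; the account ID and region below match the example registry above:
```bash
aws ecr get-login-password --region us-east-1 | \
    docker login --username AWS --password-stdin 123456789000.dkr.ecr.us-east-1.amazonaws.com
```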
3. Create an Amazon ECR registry where you can push your image:
```bash
aws ecr create-repository --repository-name sagemakerdemo
```
This results in the output as shown below:

You can also create a registry from the AWS Management console, from **ECR Service** > **Create Repository** (you must provide the repository name).

4. Identify the image to push. Run the Docker images command to list the images on your system:
```bash
docker image ls
```
5. Tag the image to push to AWS ECR. Find the Docker image's ID containing the inference code and the MLOps agent.
6. Tag the image with the Amazon ECR registry, repository, and the optional image tag name combination to use. The registry format is `aws_account_id.dkr.ecr.region.amazonaws.com`. The repository name should match the repository that you created for your image. If you omit the image tag, the `latest` tag is assumed:
```bash
docker tag <image_id> "${account}.dkr.ecr.${region}.amazonaws.com/sagemakerdemo"
```

7. Push the image:
```bash
docker push ${account}.dkr.ecr.${region}.amazonaws.com/sagemakermlopsdockerized
```


Once the image is pushed, you can validate from the AWS management console.

### Create a model {: #create-a-model }
1. Sign into AWS and enter “SageMaker” in the search bar. Select the first result (Amazon SageMaker) to enter the SageMaker console and create a model.
2. In the **IAM role** field, select **Create a new role** from the dropdown if you do not have an existing role on your account. This option creates a role with the required permissions and assigns it to your instance.

3. Select **Amazon SageMaker > Models > Create model**.
4. For the Container input options field (1), select **Provide model artifacts and inference image location**. Specify the location of the Scoring Code image (your model) in the S3 bucket (2) and the registry path to the Docker image containing the inference code (3).

5. Click **Add container** below the fields when complete.

Finally, your model configuration will look like this:

6. Open the dashboard on the left side and navigate to the **Endpoint configurations** page to create a new endpoint configuration. Select the model you have uploaded.

7. Name the endpoint configuration (1) and provide an encryption key (2), if desired. When complete, select the **Create endpoint configuration** at the bottom of the page.
8. Use the dashboard to navigate to **Endpoints** and create a new endpoint:

9. Name the endpoint (1) and opt to use an existing endpoint configuration (2). Select the configuration you just created (3) and click **Select endpoint configuration.** When endpoint creation is complete, you can make prediction requests with your model. When the endpoint is ready to service requests, the **Status** will change to **InService.**

### Making predictions {: #making-predictions }
Once the SageMaker endpoint status changes to **InService**, you can start making predictions against this endpoint. This example tests predictions using the Lending Club data.
Test the endpoint from the command line to make sure the endpoint is responding.
Use the following command to make test predictions and pass data into the body of a CSV string. Before using it, make sure you have installed AWS CLI.
```bash
aws sagemaker-runtime invoke-endpoint --endpoint-name mlops-dockerized-endpoint-new
```

You can also use a Python script (outlined below) to make predictions.
This script uses the DataRobot MLOps library to report metrics back to the DataRobot application which you can see from the deployment you created.
```python
import time
import random
import pandas as pd
import json
import boto3
from botocore.client import Config
import csv
import itertools
from datarobot.mlops.mlops import MLOps
import os
from io import StringIO

"""
This is sample code and may not be production ready
"""

runtime_client = boto3.client('runtime.sagemaker')
endpoint_name = 'mlops-dockerized-endpoint-new'
cur_dir = os.path.dirname(os.path.abspath(__file__))
#dataset_filename = os.path.join(cur_dir, "CSV_10K_Lending_Club_Loans_cust_id.csv")
dataset_filename = os.path.join(cur_dir, "../../data/sagemaker_mlops.csv")


def _feature_df(num_samples):
    df = pd.read_csv(dataset_filename)
    return pd.DataFrame.from_dict(df)


def _predictions_list(num_samples):
    with open(dataset_filename, 'rb') as f:
        payload = f.read()
    result = runtime_client.invoke_endpoint(
        EndpointName=endpoint_name,
        Body=payload,
        ContentType='text/csv',
        Accept='Accept'
    )
    str_predictions = result['Body'].read().decode()
    df_predictions = pd.read_csv(StringIO(str_predictions))
    #list_predictions = df_predictions['target_1_PREDICTION'].values.tolist()
    list_predictions = df_predictions.values.tolist()
    print("number of predictions made are : ", len(list_predictions))
    return list_predictions


def main():
    num_samples = 10

    # MLOPS: initialize mlops library
    # If deployment ID is not set, it will be read from MLOPS_DEPLOYMENT_ID environment variable.
    # If model ID is not set, it will be read from MLOPS_MODEL_ID environment variable.
    mlops = MLOps().init()

    features_df = _feature_df(num_samples)
    #print(features_df.info())
    start_time = time.time()
    predictions_array = _predictions_list(num_samples)
    print(len(predictions_array))
    end_time = time.time()

    # MLOPS: report the number of predictions in the request and the execution time.
    mlops.report_deployment_stats(len(predictions_array), (end_time - start_time) * 1000)

    # MLOPS: report the prediction results.
    mlops.report_predictions_data(features_df=features_df, predictions=predictions_array)

    # MLOPS: release MLOps resources when finished.
    mlops.shutdown()


if __name__ == "__main__":
    main()
```
### Model monitoring {: #model-monitoring }
Return to the deployment and check the **Service Health** tab to monitor the model. In this case, the MLOps Library is reporting predictions metrics to the Amazon SQS channel. The MLOps agent deployed on SageMaker along with Scoring Code reads these metrics from the SQS channel and reports them to the **Service Health** tab.

|
sagemaker-deploy
|
---
title: Monitor SageMaker models in MLOps
description: Monitoring a SageMaker-deployed model in DataRobot MLOps.
---
# Monitor SageMaker models in MLOps {: #monitor-sagemaker-models-in-mlops }
This topic outlines how to monitor an AWS SageMaker model developed and deployed on AWS for real-time API scoring.
DataRobot can monitor models through a remote agent architecture that does not require a direct connection between an AWS model and DataRobot.
This topic explains how to add data to a monitoring queue.
See the [Monitor with serverless MLOps agents](monitor-serverless-mlops-agents) topic to learn how to consume data from a queue.
## Technical architecture {: #technical-architecture }

You can construct the deployment architecture above by following the steps in this article. Review details of each component in the list below:
1. An API client assembles a single-line JSON request of raw data input for scoring, which is posted to an API Gateway-exposed endpoint.
2. The API Gateway acts as a pass-through and submits the request to an associated Lambda function for handling.
3. Logic in the Lambda processes the raw input data and parses it into the format required to score through the SageMaker endpoint. This example parses the data into a headerless CSV for an XGBoost model. The SageMaker endpoint is then invoked.
4. The SageMaker endpoint satisfies the request by passing it to a standing deployed EC2 instance hosting the real-time model. The model deployment line from the AWS code in <a href="https://github.com/datarobot-community/ai_engineering/blob/master/mlops/DRMLOps_SageMaker_examples/xgboost_direct_marketing_sagemaker_v2.ipynb" target="_blank">Community AI Engineering GitHub repo</a> (`xgb.deploy`) handles standing up this machine and bringing the trained AWS ECR hosted model to it.
5. The raw score is processed by the Lambda; in this example, a threshold is applied to select a binary classification label.
6. Timing, input data, and model results are written to an SQS queue.
7. The processed response is sent back to the API Gateway.
8. The processed response is passed back to the client.
### Create a custom SageMaker model {: #create-a-custom-sagemaker-model }
This article is based on the SageMaker notebook example in the <a href="https://github.com/aws/amazon-sagemaker-examples/blob/af6667bd0be3c9cdec23fecda7f0be6d0e3fa3ea/introduction_to_applying_machine_learning/xgboost_direct_marketing/xgboost_direct_marketing_sagemaker.ipynb" target="_blank">AWS GitHub repo</a>.
The use case aims to predict which customers will respond positively to a direct marketing campaign.
The code has been updated to conform to the <a href="https://sagemaker.readthedocs.io/en/stable/v2.html" target="_blank">v2 version of the SageMaker SDK</a> and can be found in the <a href="https://github.com/datarobot-community/ai_engineering/blob/master/mlops/DRMLOps_SageMaker_examples/xgboost_direct_marketing_sagemaker_v2.ipynb" target="_blank">DataRobot Community AI Engineering GitHub repo</a>.
Completing the notebook in AWS SageMaker will result in a deployed model at a SageMaker endpoint named `xgboost-direct-marketing` hosted on a standing `ml.m4.xlarge` instance.
!!! note
    The endpoint expects fully prepared and preprocessed data (one-hot encoding applied, for example) in the same order it was provided in during training.
There are several ways to test the SageMaker endpoint; the snippet below is a short Python script that can score a record from the validation set (the target column has been dropped).
```python
import boto3
import os
import json
runtime = boto3.Session().client('sagemaker-runtime',use_ssl=True)
endpoint_name = 'xgboost-direct-marketing'
payload = '29,2,999,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,1,0,0,1,0,0,1,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,1,0'
response = runtime.invoke_endpoint(EndpointName=endpoint_name,
                                   ContentType='text/csv',
                                   Body=payload)
result = json.loads(response['Body'].read())
print(result)
```
## Create an external deployment in DataRobot {: #create-an-external-deployment-in-datarobot }
Inside DataRobot, a deployment entry must be created to monitor the SageMaker model. Data and statistics are reported to this deployment for processing, visualization, and analysis. To create one:
1. Navigate to **Model Registry > Model Packages** tab.
2. Click **Add New Package**.
3. Select **New external model package**.

4. Provide the required information as shown below.

5. Download the `bank-additional-full.csv` file from <a href="https://github.com/datarobot-community/ai_engineering/blob/master/mlops/DRMLOps_SageMaker_examples/xgboost_direct_marketing_sagemaker_v2.ipynb" target="_blank">Community GitHub</a>.
6. Upload `bank-additional-full.csv` as the training dataset. Note that the model in this example is not retrained on 100% of the data before being deployed; in real-world machine learning, retraining on all available data before deployment is a good practice to consider.
7. Click **Create package** to complete creation of the model package and add it to the Model Registry.
8. Locate the package in the registry and click on the **Actions** menu for the package.

9. Toggle on **Enable target monitoring** and **Enable feature drift tracking** and select **Deploy model**.
You can configure additional prediction environment metadata if needed, providing the details of where the external model resides (AWS in this case).
Upon completion of deployment creation, some ID values need to be retrieved. These will be associated with the model in SageMaker.
10. Under the deployment, navigate to the **Predictions** > **Monitoring** tab, and view the Monitoring Code. Copy the values for `MLOPS_DEPLOYMENT_ID` and `MLOPS_MODEL_ID`.
!!! note
    Note that the `MLOPS_DEPLOYMENT_ID` is associated with the entry within model monitoring, while the `MLOPS_MODEL_ID` is an identifier provided for the actual scoring model behind it.
The `MLOPS_DEPLOYMENT_ID` should stay static. However, you may replace the SageMaker model at some point. If so, one of two actions should be taken:
* Create a completely new external deployment in DataRobot following the same steps as above.
* Register a new model package and replace the model currently hosted at this deployment with that new package.
In either case, a new `MLOPS_MODEL_ID` is assigned; use it to update the Lambda environment variables. If you replace the model package, the existing `MLOPS_DEPLOYMENT_ID` entry in DataRobot shows statistics for both models under one entry and notes when the change occurred.
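If you prefer to retrieve these IDs programmatically rather than copying them from the UI, a minimal sketch using the DataRobot Python client is shown below. The deployment label, endpoint, and token values are placeholders for illustration:

```python
import datarobot as dr

# connect to DataRobot (endpoint and token values are placeholders)
dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

# find the external deployment created above by its label (assumed name)
deployment = next(
    d for d in dr.Deployment.list() if d.label == "SageMaker direct marketing"
)

print("MLOPS_DEPLOYMENT_ID:", deployment.id)
print("MLOPS_MODEL_ID:", deployment.model["id"])
```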
### Create an IAM role {: #create-an-iam-role }
Lambda functions use a role to score data through the SageMaker endpoint. To create a role:
1. Navigate to **IAM Service** within the AWS Console.
2. Click **Create Role** and choose **Lambda** for the use case. Then, click **Next: Permissions**.
3. Select **Create Policy**, choose the **JSON** tab, and paste the following snippet:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": "sagemaker:InvokeEndpoint",
"Resource": "*"
}
]
}
```
4. Select **Review policy** and name the policy `lambda_sagemaker_execution_policy`.
5. Click **Create policy.**
6. Return to the role-creation tab to attach the new policy to the role: click the refresh button and filter the policy list on the string `sagemaker`.
7. Select the policy from the list.
8. Click **Next: Tags**, set any desired tags, and select **Next: Review**.
9. Name the role `lambda_sagemaker_execution_role` and click **Create role.**
10. This role requires additional permissions so that the Lambda can send reporting data to an SQS queue.
See [Monitor with serverless MLOps Agents](monitor-serverless-mlops-agents) to create a queue to receive reporting data. The queue created in that topic, `sqs_mlops_data_queue`, is also used here.
11. To add the additional resources, view the IAM role `lambda_sagemaker_execution_role`.
12. Select **Add inline policy**, and perform a search for the SQS service.
13. Select **List**, **Read**, and **Write** access levels.
14. Optionally, deselect **ReceiveMessage** under the **Read** heading so that this role does not move items off the queue.
15. Expand **Resources** to limit the role to only use the specific data queue and populate the ARN of the queue.

16. Click **Review policy** and name it `lambda_agent_sqs_write_policy`.
17. To complete the policy, select **Create policy**.
!!! note
Note that additional privileges are required to allow Lambda to write log entries to CloudWatch.
18. Select **Attach policies**.
19. Filter on `AWSLambdaBasicExecutionRole`, select that privilege, and click **Attach policy**. The completed permissions for the role should look similar to the example below.

### Create a layer for the Lambda {: #create-a-layer-for-the-lambda }
Lambda supports layers: additional libraries that a Lambda function can use at runtime. To build a layer containing the MLOps agent library:
1. Download the MLOps agent library from the DataRobot application in the **Profile > Developer Tools** menu.
!!! note
The package used in this example is `datarobot_mlops_package-6.3.3-488`. The resulting layer also includes numpy and pandas, installed as dependencies; these packages were used for data prep during training, and the same prep code runs in the Lambda function.
2. The Lambda environment is Python 3.7 on Amazon Linux. To ensure a layer will work with the Lambda, you can first create one on an Amazon Linux EC2 instance. Instructions to install Python 3 on Amazon Linux are available <a href="https://aws.amazon.com/premiumsupport/knowledge-center/ec2-linux-python3-boto3/" target="_blank">here</a>.
3. Once the agent package is on the instance, run the following commands.
```bash
# unpack the MLOps agent package
gunzip datarobot_mlops_package-6.3.3-488.tar.gz
tar -xvf datarobot_mlops_package-6.3.3-488.tar
cd datarobot_mlops_package-6.3.3

# install the MLOps library (and its numpy/pandas dependencies) into a virtual environment
python3 -m venv my_agent/env
source my_agent/env/bin/activate
pip install lib/datarobot_mlops-*-py2.py3-none-any.whl
deactivate

# package the site-packages directory in the layout Lambda layers expect
cd my_agent/env
mkdir -p python/lib/python3.7/site-packages
cp -r lib/python3.7/site-packages/* python/lib/python3.7/site-packages/.
zip -r9 ../agent_layer.zip python
cd ..

# upload the layer archive to S3
aws s3 cp agent_layer.zip s3://some-bucket/layers/agent633_488_python37.zip
```
4. In AWS, navigate to **Lambda > Additional resources > Layers**.
5. Select **Create layer**.
6. Name the layer `python37_agent633_488` and optionally choose a Python 3.7 runtime.
7. Select **Upload a file from S3** and provide the S3 address of the file, `s3://some-bucket/layers/agent633_488_python37.zip`.
8. Select **Create layer** to save the configuration.
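The layer can also be registered programmatically instead of through the console; a minimal boto3 sketch (using the placeholder bucket and key names from above) follows:

```python
import boto3

lambda_client = boto3.client("lambda")

# register the zip uploaded to S3 as a new layer version
response = lambda_client.publish_layer_version(
    LayerName="python37_agent633_488",
    Content={
        "S3Bucket": "some-bucket",
        "S3Key": "layers/agent633_488_python37.zip",
    },
    CompatibleRuntimes=["python3.7"],
)
print(response["LayerVersionArn"])
```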
### Create a Lambda {: #create-a-lambda }
The following steps outline how to create a Lambda that calls the SageMaker runtime invoke endpoint. The endpoint accepts only raw, fully preprocessed data, which is not friendly to API clients. The following example shows a record in the ready-to-score format the endpoint expects:
```csv
54,3,999,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,1,0,0,1,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,1,0,0,0,1,0
```
Another example can be found on AWS <a href="https://aws.amazon.com/blogs/machine-learning/call-an-amazon-sagemaker-model-endpoint-using-amazon-api-gateway-and-aws-lambda/" target="_blank">here</a>. DataRobot recommends performing data prep on the client application.
Create a Lambda to process the actual data used by a client to make it ready for scoring. The returned score will be decoded as well, making it much friendlier for calling applications. To do so:
1. Navigate to the AWS Lambda service in the console.
2. Click **Create function**.
3. Select **Author from scratch**.
4. Choose the Python 3.7 runtime.
5. Under **Permissions**, choose the default execution role to be `lambda_sagemaker_execution_role`.
6. Name the function `lambda-direct-marketing` and select **Create function**.
7. On the next screen, edit the environment variables as desired.
8. Replace the values as appropriate for the `MLOPS_DEPLOYMENT_ID` and `MLOPS_MODEL_ID` variables.
9. Provide the URL for the AWS SQS queue to use as a reporting channel.
| Name | Value |
|-----|-----|
| ENDPOINT\_NAME | xgboost-direct-marketing |
| MLOPS\_DEPLOYMENT\_ID | 1234567890 |
| MLOPS\_MODEL\_ID | 12345 |
| MLOPS\_OUTPUT\_TYPE | SQS |
| MLOPS\_SQS\_QUEUE\_URL | <https://sqs.us-east-1.amazonaws.com/1234567/sqs_mlops_data_queue> |
The Lambda designer window also has a section for selecting layers.
10. Select the **Layers** box in the designer, and then click **Add a layer** in the layers form.
11. Select **Custom layers** and choose the created layer.
!!! note
Only layers that have a runtime matching the Lambda runtime show up in this list, although a layer can be explicitly chosen by the Amazon Resource Name if you opt to specify one.
12. Use the following code for the Lambda body:
```python
import os
import io
import boto3
import json
import csv
import time
import pandas as pd
import numpy as np
from datarobot.mlops.mlops import MLOps
# grab environment variables
ENDPOINT_NAME = os.environ['ENDPOINT_NAME']
runtime= boto3.client('runtime.sagemaker')
def lambda_handler(event, context):
# this is designed to work with only one record, supplied as json
# start the clock
start_time = time.time()
# parse input data
print("Received event: " + json.dumps(event, indent=2))
parsed_event = json.loads(json.dumps(event))
payload_data = parsed_event['data']
data = pd.DataFrame(payload_data, index=[0])
input_data = data
# repeat data steps from training notebook
data['no_previous_contact'] = np.where(data['pdays'] == 999, 1, 0) # Indicator variable to capture when pdays takes a value of 999
data['not_working'] = np.where(np.in1d(data['job'], ['student', 'retired', 'unemployed']), 1, 0) # Indicator for individuals not actively employed
model_data = pd.get_dummies(data)
model_data = model_data.drop(['duration', 'emp.var.rate', 'cons.price.idx', 'cons.conf.idx', 'euribor3m', 'nr.employed'], axis=1)
# xgb sagemaker endpoint features
# order/type required as was deployed in sagemaker notebook
model_features = ['age', 'campaign', 'pdays', 'previous', 'no_previous_contact',
'not_working', 'job_admin.', 'job_blue-collar', 'job_entrepreneur',
'job_housemaid', 'job_management', 'job_retired', 'job_self-employed',
'job_services', 'job_student', 'job_technician', 'job_unemployed',
'job_unknown', 'marital_divorced', 'marital_married', 'marital_single',
'marital_unknown', 'education_basic.4y', 'education_basic.6y',
'education_basic.9y', 'education_high.school', 'education_illiterate',
'education_professional.course', 'education_university.degree',
'education_unknown', 'default_no', 'default_unknown', 'default_yes',
'housing_no', 'housing_unknown', 'housing_yes', 'loan_no',
'loan_unknown', 'loan_yes', 'contact_cellular', 'contact_telephone',
'month_apr', 'month_aug', 'month_dec', 'month_jul', 'month_jun',
'month_mar', 'month_may', 'month_nov', 'month_oct', 'month_sep',
'day_of_week_fri', 'day_of_week_mon', 'day_of_week_thu',
'day_of_week_tue', 'day_of_week_wed', 'poutcome_failure',
'poutcome_nonexistent', 'poutcome_success']
# create base generic single row to score with defaults
feature_dict = { i : 0 for i in model_features }
feature_dict['pdays'] = 999
# get column values from received and processed data
input_features = model_data.columns
# replace value in to be scored record, if input data provided a value
for feature in input_features:
if feature in feature_dict:
feature_dict[feature] = model_data[feature]
# make a csv string to score
payload = pd.DataFrame(feature_dict).to_csv(header=None, index=False).strip('\n').split('\n')[0]
print("payload is:" + str(payload))
# stamp for data prep
prep_time = time.time()
print('data prep took: ' + str(round((prep_time - start_time) * 1000, 1)) + 'ms')
response = runtime.invoke_endpoint(EndpointName=ENDPOINT_NAME,
ContentType='text/csv',
Body=payload)
# process returned data
pred = json.loads(response['Body'].read().decode())
#pred = int(result['predictions'][0]['score'])
# if scored value is > 0.5, then return a 'yes' that the client will subscribe to a term deposit
predicted_label = 'yes' if pred >= 0.5 else 'no'
# initialize mlops monitor
m = MLOps().init()
# MLOPS: report test features and predictions and association_ids
m.report_predictions_data(
features_df=input_data
, class_names = ['yes', 'no']
, predictions = [[pred, 1-pred]] # yes, no
)
# report lambda timings (excluding lambda startup and imports...)
# MLOPS: report deployment metrics: number of predictions and execution time
end_time = time.time()
m.report_deployment_stats(1, (end_time - start_time) * 1000)
print("pred is: " + str(pred))
print("label is: " + str(predicted_label))
return predicted_label
```
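The environment variables and layer from the steps above can also be applied from a script; a minimal boto3 sketch (the layer version ARN and the ID values are placeholders) is shown below:

```python
import boto3

lambda_client = boto3.client("lambda")

# set the monitoring environment variables and attach the MLOps agent layer
lambda_client.update_function_configuration(
    FunctionName="lambda-direct-marketing",
    Environment={
        "Variables": {
            "ENDPOINT_NAME": "xgboost-direct-marketing",
            "MLOPS_DEPLOYMENT_ID": "<your deployment ID>",
            "MLOPS_MODEL_ID": "<your model ID>",
            "MLOPS_OUTPUT_TYPE": "SQS",
            "MLOPS_SQS_QUEUE_URL": "https://sqs.us-east-1.amazonaws.com/1234567/sqs_mlops_data_queue",
        }
    },
    Layers=["arn:aws:lambda:us-east-1:123456789012:layer:python37_agent633_488:1"],
)
```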
### Test the Lambda {: #test-the-lambda }
1. Click **Configure test events** in the upper-right corner of the Lambda screen to configure a test JSON record.
2. Use the following JSON record format:
```json
{
"data": {
"age": 56,
"job": "housemaid",
"marital": "married",
"education": "basic.4y",
"default": "no",
"housing": "no",
"loan": "no",
"contact": "telephone",
"month": "may",
"day_of_week": "mon",
"duration": 261,
"campaign": 1,
"pdays": 999,
"previous": 0,
"poutcome": "nonexistent",
"emp.var.rate": 1.1,
"cons.price.idx": 93.994,
"cons.conf.idx": -36.4,
"euribor3m": 4.857,
"nr.employed": 5191
}
}
```
3. Select **Test** to score a record through the Lambda service and SageMaker endpoint.
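You can also invoke the function outside the console; a short boto3 sketch that sends the same test event is shown below:

```python
import json
import boto3

lambda_client = boto3.client("lambda")

# the same test event used in the console, as a Python dict
test_event = {
    "data": {
        "age": 56, "job": "housemaid", "marital": "married", "education": "basic.4y",
        "default": "no", "housing": "no", "loan": "no", "contact": "telephone",
        "month": "may", "day_of_week": "mon", "duration": 261, "campaign": 1,
        "pdays": 999, "previous": 0, "poutcome": "nonexistent", "emp.var.rate": 1.1,
        "cons.price.idx": 93.994, "cons.conf.idx": -36.4, "euribor3m": 4.857,
        "nr.employed": 5191,
    }
}

response = lambda_client.invoke(
    FunctionName="lambda-direct-marketing",
    Payload=json.dumps(test_event),
)
# the Lambda returns the decoded label, e.g. "no" or "yes"
print(json.loads(response["Payload"].read()))
```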
### Resource settings and performance considerations {: #resource-settings-and-performance-considerations }
Serverless compute resources can be allocated from 128MB to 10240MB, which you can change on the Lambda console under **Basic settings**. This results in the allocation of anywhere from a partial vCPU to six full vCPUs during each Lambda run. Lambda cold and warm starts and the EC2 host sizing and scaling for the SageMaker endpoint are beyond the scope of this topic, but the resources allocated to the Lambda itself impact pre- and post-scoring processing and overall Lambda performance.
Running this code at 128MB produces noticeably slower processing times, although diminishing returns are to be expected as RAM and CPU are increased. For this example, 1706MB (and one full vCPU) provided good results.
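For reference, the memory setting can also be changed programmatically; a one-call boto3 sketch is shown below:

```python
import boto3

# allocate ~1.7GB of memory (roughly one full vCPU) to the function
boto3.client("lambda").update_function_configuration(
    FunctionName="lambda-direct-marketing",
    MemorySize=1706,
)
```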
### Expose the Lambda via API Gateway {: #expose-the-lambda-via-api-gateway }
1. Navigate to the API Gateway service in AWS and click **Create API.**
2. Choose to build a REST API and name it `lambda-direct-marketing-api`.
3. Click **Create API** again.
4. Under the Resources section of the entry, choose **Actions -> Create Resource**.
5. Name it `predict` and select **Create Resource**.
6. Highlight the resource, choose **Actions -> Create Methods** and select a **POST** method.
7. Choose the Integration Type `Lambda Function` and the Lambda Function `lambda-direct-marketing`, then click **Save.**


!!! note
You can click the **TEST** button in the Client box of the method execution view to test the method, using the same payload as the Lambda test event (see [Test the Lambda](#test-the-lambda)).
8. Next, choose **Actions** > **Deploy API,** choose a Stage name (e.g., "test"), and click **Deploy.**
The model is now deployed and available via the **Invoke URL** provided after deployment.
### Test the exposed API
The same test record used above (for [Test the Lambda](#test-the-lambda)) can be used to score the model via an HTTP request. Below is an example of doing so using curl and an inline JSON record.
#### Expected no
```bash
curl -X POST "https://tfff5ffffk6.execute-api.us-east-1.amazonaws.com/test/predict" --data '{"data": {"age": 56,
"job": "housemaid", "marital": "married", "education": "basic.4y", "default": "no", "housing": "no", "loan": "no",
"contact": "telephone", "month": "may", "day_of_week": "mon", "duration": 261, "campaign": 1, "pdays": 999,
"previous": 0, "poutcome": "nonexistent", "emp.var.rate": 1.1, "cons.price.idx": 93.994, "cons.conf.idx": -36.4,
"euribor3m": 4.857, "nr.employed": 5191}}'
```
#### Expected yes
```bash
curl -X POST "https://tfff5ffffk6.execute-api.us-east-1.amazonaws.com/test/predict" --data '{"data": {"age": 34,
"job": "blue-collar", "marital": "married", "education": "high.school", "default": "no", "housing": "yes", "loan": "no",
"contact": "cellular", "month": "may", "day_of_week": "tue", "duration": 863, "campaign": 1, "pdays": 3, "previous":
2, "poutcome": "success", "emp.var.rate": -1.8, "cons.price.idx": 92.893, "cons.conf.idx": -46.2, "euribor3m": 1.344,
"nr.employed": 5099.1}}'
```
### Review and monitor the deployment in DataRobot
Once data is reported from the data queue back to DataRobot, the external deployment displays metrics relevant to the model and its predictions. You can select the deployment in the DataRobot UI to view operational service health:

You can also view data drift metrics that compare data in scoring requests to that of the original training set.

Not only can DataRobot be used to build, host, and monitor its own models (with its own resources or deployed elsewhere), but, as shown here, it can also be used to monitor completely custom models created and hosted on external architecture.
In addition to service health statistics and drift tracking of the unprocessed features, deployments that report association IDs and actual results can track model accuracy as well.
|
sagemaker-monitor
|
---
title: Use Scoring Code with AWS SageMaker
description: Using Scoring Code models with AWS SageMaker.
---
# Use Scoring Code with AWS SageMaker {: #use-scoring-code-with-aws-sagemaker }
This topic describes how to make predictions using DataRobot’s Scoring Code deployed on AWS SageMaker. Scoring Code allows you to download machine learning models as JAR files which can then be deployed in the environment of your choice.
AWS SageMaker allows you to bring in your machine-learning models and expose them as API endpoints, and DataRobot can export models in Java and Python. Once exported, you can deploy the model on AWS SageMaker. This example focuses on the DataRobot Scoring Code export, which provides a Java JAR file.
Make sure the model you want to import supports [Scoring Code](scoring-code/index). Models that support Scoring Code export are indicated by the Scoring Code icon.

??? "Why deploy on AWS SageMaker?"
While DataRobot provides scalable prediction servers that are fully integrated with the platform, there are reasons why someone would want to deploy on AWS SageMaker instead:
* Company policy or governance decision.
* Configure custom functionality on top of a DataRobot model.
* Low-latency scoring without API call overhead. Executing Java code is typically faster than scoring through the Python API.
* The ability to integrate models into systems that cannot communicate with the DataRobot API.
## Download Scoring Code {: #download-scoring-code }
The first step to deploying a DataRobot model to AWS SageMaker is to create a TAR.GZ archive that contains your model (the Scoring Code JAR file provided by DataRobot). You can download the JAR file from the [Leaderboard](sc-download-leaderboard) or from a [deployment](sc-download-deployment).
!!! note
Depending on your DataRobot license, the code may only be available through the **Deployments** page.

## Upload Scoring Code to an AWS S3 bucket {: #upload-scoring-code-to-an-aws-s3-bucket }
Once you have downloaded the Scoring Code (CodeGen) JAR file, upload it to an AWS S3 bucket so that SageMaker can access it.
SageMaker expects the archive (`tar.gz` format) to be uploaded to an S3 bucket. Compress your model as a `tar.gz` archive using one of the following commands:
=== "Linux"
```
tar -czvf 5e8471fa169e846a096d5137.jar.tar.gz 5e8471fa169e846a096d5137.jar
```
=== "MacOS"
MacOS adds hidden files to the `tar.gz` package that can introduce issues during deployment. To prevent these issues, use the following command:
```
COPYFILE_DISABLE=1 tar -czvf 5e8471fa169e846a096d5137.jar.tar.gz 5e8471fa169e846a096d5137.jar
```
Once you have created the `tar.gz` archive, upload it to S3:
1. Enter the Amazon S3 console.
2. Click **Upload** and provide your `tar.gz` archive to the S3 bucket.

## Publish a Docker image to Amazon ECR {: #publish-a-docker-image-to-amazon-ecr }
Next, publish a Docker image containing inference code to Amazon ECR. For this example, download the DataRobot-provided Docker image with the following command:
```
docker pull datarobot/scoring-inference-code-sagemaker:latest
```
To publish the image to Amazon ECR:
1. Authenticate your Docker client to the Amazon ECR registry to which you intend to push your image. Authentication tokens must be obtained for each registry used, and the tokens are valid for 12 hours. You can refer to Amazon documentation for various authentication options listed [here](https://docs.aws.amazon.com/AmazonECR/latest/userguide/Registries.html#registry_auth).
2. Use token-based authentication:
```
TOKEN=$(aws ecr get-authorization-token --output text --query 'authorizationData[].authorizationToken')
```
```
curl -i -H "Authorization: Basic $TOKEN" https://xxxxxxx.dkr.ecr.us-east-1.amazonaws.com/v2/sagemakertest/tags/list
```
3. Next, create an Amazon ECR Registry where you can push your image:
```
aws ecr create-repository --repository-name sagemakerdemo
```
Using this command returns the output shown below:

You can also create the repository from the AWS Management console:
1. Navigate to **ECR Service > Create Repository** and provide the repository name.

2. Identify the image to push. Run the `docker images` command to list the images on your system.
3. Tag the image you want to push to AWS ECR.
4. The `xxxxxxxx` placeholder represents the image ID of the DataRobot-provided Docker image containing the inference code (`scoring-inference-code-sagemaker:latest`) that you downloaded from Docker Hub.
5. Tag the image with the Amazon ECR registry, repository, and optional image tag name combination to use. The registry format is `aws_account_id.dkr.ecr.region.amazonaws.com`. The repository name should match the repository that you created for the image. If you omit the image tag, Docker assumes the `latest` tag.
```
docker tag xxxxxxxx "${account}.dkr.ecr.${region}.amazonaws.com/sagemakerdemo"
```
6. Push the image:
```
docker push ${account}.dkr.ecr.${region}.amazonaws.com/sagemakerdemo
```
Once pushed, you can validate the image from the AWS management console.

## Create the model {: #create-the-model }
1. Sign in to AWS and search for `SageMaker`. Select the first search result, `Amazon SageMaker`, to enter the SageMaker console and create a model.
2. In the IAM role field, select **Create a new role** from the dropdown if you do not have an existing role on your account. This option creates a role with the required permissions and assigns it to your instance.

3. For the **Container input options** field (1), select **Provide model artifacts and inference image location**. Then, specify the location of the Scoring Code image (your model) in the S3 bucket (2) and the registry path to the Docker image containing the inference code (3).

4. Click **Add container** below the fields when complete.

Your model configurations should match the example below:

## Create an endpoint configuration {: #create-an-endpoint-configuration }
To set up an endpoint for predictions:
1. Open the dashboard on the left side and navigate to the **Endpoint configurations** page to create a new endpoint configuration. Select the uploaded model.
2. Enter an **Endpoint configuration name** (1) and provide an **Encryption key** if desired (2). When complete, select **Create endpoint configuration** at the bottom of the page.

3. Use the dashboard to navigate to **Endpoints** and create a new endpoint:

Enter an **Endpoint name** (1) and select **Use an existing endpoint configuration** (2). Then, click the configuration you just created (3). When finished, click **Select endpoint configuration**. When endpoint creation is complete, you can make prediction requests with your model.
Once the endpoint is ready to service requests, the status will change to **InService**:

## Make predictions {: #make-predictions }
Once the SageMaker endpoint status changes to **InService** you can start making predictions against the endpoint.
Test the endpoint from the command line first to make sure it is responding. Use the command below to make a test prediction, passing a CSV row of scoring data in the request body:
```
# record.csv contains the scoring data; with AWS CLI v2, use fileb://record.csv to pass the file as raw bytes
aws sagemaker-runtime invoke-endpoint \
    --endpoint-name mlops-dockerized-endpoint-new \
    --content-type text/csv \
    --body file://record.csv \
    response.json
```
!!! note
To run the command above, ensure you have installed AWS CLI.
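Alternatively, a short boto3 sketch like the following can be used to send a test row. The endpoint name matches the example above; the payload shape (a header row plus one data row) is an assumption, so adjust it to match what your model expects:

```python
import boto3

runtime = boto3.client("sagemaker-runtime")

# placeholder payload: a header row plus one scoring row matching your model's features
payload = "feature_1,feature_2\n42,some_value"

response = runtime.invoke_endpoint(
    EndpointName="mlops-dockerized-endpoint-new",
    ContentType="text/csv",
    Body=payload,
)
print(response["Body"].read().decode())
```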
## Monitor the deployed model {: #monitor-the-deployed-model }
Now that your model is deployed, you can [monitor and manage it using the MLOps agent](sagemaker-monitor).
## Considerations
Note the following when deploying on SageMaker:
* There is no out-of-the-box data drift and accuracy tracking unless MLOps agents are configured.
* You may experience additional time overhead as a result of deploying to AWS SageMaker.
|
sc-sagemaker
|
---
title: Data Export tab
description: Export a deployment's stored prediction and training data to compute and monitor custom business or performance metrics.
---
# Data Export tab {: #data-export-tab }
You can export a deployment's stored training data, prediction data, and actuals to compute and monitor custom business or performance metrics on the [Custom Metrics tab](custom-metrics) or outside DataRobot. To export deployment data for custom metrics, make sure your deployment stores prediction data, generate data for a specified time range, and then view or download that data.
To access deployment data export:
1. In the top navigation bar, click **Deployments**.
2. On the **Deployments** tab, click on the deployment from which you want to access stored training data, prediction data, or actuals.
!!! note
To access the Data Export tab, the deployment must store prediction data. Ensure that you [Enable prediction rows storage for challenger analysis](challengers-settings) in the deployment settings. The Data Export tab doesn't store or export Prediction Explanations, even if they are requested with the predictions.
3. In the deployment, click the **Data Export** tab.
## Access or download data {: #access-or-download-data }
To access or download prediction data, training data, or actuals:
1. Configure the following settings to specify the stored training data, prediction data, or actuals you want to export:

| | Setting | Description |
|-|---------|-------------|
|  | Model | Select the deployment's model, current or previous, to export prediction data for. |
|  | Range (UTC) | Select the start and end dates of the period you want to export prediction data from. |
|  | Resolution | Select the granularity of the date slider. Select from hourly, daily, weekly, and monthly granularity based on the time range selected. If the time range is longer than 7 days, hourly granularity is not available. |
|  | Reset | Reset the data export settings to the default. |
2. In the **Training Data**, **Prediction Data**, and **Actuals** panels:
* Click **Generate Training Data** to generate training data for the specified time range.
* Click **Generate Prediction Data** to generate prediction data for the specified time range.
* Click **Generate Actuals** to generate actuals for the specified time range.
??? note "Prediction data and actuals considerations"
<span id="prediction-data-and-actuals-considerations">When generating prediction data or actuals, consider the following:</span>
* When generating prediction data, you can export up to 200,000 rows per export. If the time range you set exceeds 200,000 rows of prediction data, decrease the range.
* In the AI Catalog, you can have up to 100 prediction export items. If generating prediction data for export would cause the number of prediction export items in the AI Catalog to exceed that limit, delete old prediction export AI Catalog items.
* When generating prediction data for time series deployments, two prediction export items are added to the AI Catalog. One item is for the prediction data, and the other is for the prediction results. The Data Export tab links to the prediction results.
* When generating actuals, you can export up to 1,000,000 rows per export. If the time range you set exceeds 1,000,000 rows of actuals, decrease the time range.
* In the AI Catalog, you can have up to 100 actuals export items. If generating actuals data for export would cause the number of actuals export items in the AI Catalog to exceed that limit, delete old actuals export AI Catalog items.
* Up to 10,000,000 actuals are stored for a deployment; therefore, exporting old actuals can result in an error if no actuals are currently stored for that time period.
The training data appears in the **Training Data** panel. Prediction data and actuals appear in the table below, identified by **Prediction Data** or **Actuals** in the **Type** column.

3. After the prediction data, training data, or actuals are generated:
* Click the open icon  to open the prediction data in the AI Catalog.
* Click the download icon  to download the prediction data.
!!! note
You can also click **Use data in Notebook** to open a [DataRobot notebook](dr-notebooks/index) with cells for exporting training data, prediction data, and actuals.
## Use exported deployment data for custom metrics {: #use-exported-deployment-data-for-custom-metrics }
To use the exported deployment data to create your own custom metrics, you can implement a script to read from the CSV file containing the exported data and then calculate metrics using the resulting values, including [columns automatically generated during the export process](#datarobot-column-reference).
This example uses the exported prediction data to calculate and plot the change in the `time_in_hospital` feature over a 30-day rolling window, using the DataRobot prediction timestamp (`DR_RESERVED_PREDICTION_TIMESTAMP`) as the DataFrame index (or row labels). It also uses the exported training data as the plot's baseline:
``` py
import pandas as pd

# the numeric feature to track (e.g., "time_in_hospital")
feature_name = "<numeric_feature_name>"

# training data provides the baseline value for the feature
training_df = pd.read_csv("<path_to_training_data_csv>")
baseline = training_df[feature_name].mean()

# prediction data, indexed by the DataRobot prediction timestamp
prediction_df = pd.read_csv("<path_to_prediction_data_csv>")
prediction_df["DR_RESERVED_PREDICTION_TIMESTAMP"] = pd.to_datetime(
    prediction_df["DR_RESERVED_PREDICTION_TIMESTAMP"]
)
predictions = prediction_df.set_index("DR_RESERVED_PREDICTION_TIMESTAMP")[feature_name]

# plot the 30-day rolling mean of the feature against the training baseline
ax = predictions.rolling('30D').mean().plot()
ax.axhline(y=baseline, color="C1", label="training data baseline")
ax.legend()
ax.figure.savefig("feature_over_time.png")
```
### DataRobot column reference {: #datarobot-column-reference }
DataRobot automatically adds the following columns to the prediction data generated for export:
Column | Description
-------|------------
`DR_RESERVED_PREDICTION_TIMESTAMP` | Contains the prediction timestamp.
`DR_RESERVED_PREDICTION` | Identifies regression prediction values.
`DR_RESERVED_PREDICTION_<Label>` | Identifies classification prediction values.
|
data-export
|
---
title: Usage tab
description: Tracks prediction processing progress for use in accuracy, data drift, and predictions over time analysis.
---
# Usage tab {: #usage-tab }
After deploying a model and making predictions in production, monitoring model quality and performance over time is critical to ensure the model remains effective. This monitoring occurs on the [Data Drift](data-drift) and [Accuracy](deploy-accuracy) tabs and requires processing large amounts of prediction data. Prediction data processing can be subject to delays or rate limiting.
## Prediction Tracking chart {: #prediction-tracking-chart }
On the left side of the **Usage** tab is the **Prediction Tracking** chart, a bar chart of the prediction processing status over the last 24 hours or 7 days, tracking the number of processed, missing association ID, and rate-limited prediction rows. Depending on the selected view (24-hour or 7-day), the histogram's bins are hour-by-hour or day-by-day.

| | Chart element | Description |
|-|-----------------------------|-----------------|
 | Select time period | Selects the **Last 24 hours** or **Last 7 days** view.
 | Use log scaling | Applies log scaling to the Prediction Tracking chart for deployments with more than 250,000 rows of predictions.
 | Time of Receiving Predictions Data <br> (X-axis) | Displays the time range (by day or hour) represented by a bin, tracking the rows of prediction data received within that range. Predictions are timestamped when a prediction is received by the system for processing. This "time received" value is not equivalent to the timestamp in service health, data drift, and accuracy. For DataRobot prediction environments, this timestamp value can be slightly later than prediction timestamp. For agent deployments, the timestamp represents when the DataRobot API received the prediction data from the agent.
 | Row Count <br> (Y-axis) | Displays the number of prediction rows timestamped within a bin's time range (by day or hour).
 | Prediction processing categories | Displays a bar chart tracking the status of prediction rows: <ul><li>**Processed**: Tracked for drift and accuracy analysis.</li><li>**Rate Limited**: Not tracked because prediction processing exceeded the hourly rate limit.</li><li>**Missing Association ID**: Not tracked because the prediction rows don't include the association ID and drift tracking isn't enabled.</li></ul>
!!! note
For a monitoring agent deployment, if you [implement large-scale monitoring](agent-use#enable-large-scale-monitoring), the prediction rows won't appear in this bar chart; however, the **Predictions Processing (Champion)** delay will track the pre-aggregated data.
To view additional information on the **Prediction Tracking** chart, hover over a column to see the time range during which the predictions data was received and the number of rows that were **Processed**, **Rate Limited**, or **Missing Association ID**:

## Prediction and actuals processing delay {: #prediction-and-actuals-processing-delay }
On the right side of the **Usage** tab are the processing delays for **Predictions Processing (Champion)** and **Actuals Processing** (the delay in actuals processing is for _ALL_ models in the deployment):

The **Usage** tab recalculates the processing delays without reloading the page. You can check the **Updated** value to determine when the delays were last updated.
|
deploy-usage
|
---
title: Data Drift tab
description: How to use the Data Drift dashboard to analyze a deployed model's performance. It provides four interactive, exportable visualizations that communicate model health.
---
# Data Drift tab {: #data-drift-tab }
As training and production data change over time, a deployed model loses predictive power. The data surrounding the model is said to be *drifting*. By leveraging the training data and prediction data (also known as inference data) that is added to your deployment, the **Data Drift** dashboard helps you analyze a model's performance after it has been deployed.
{% include 'includes/how-dr-tracks-drift-include.md' %}
Target and feature tracking are enabled by default. You can control these drift tracking features by navigating to a deployment's [**Data Drift > Settings**](data-drift-settings) tab.
!!! info "Availability information"
If feature drift tracking is turned off, a message displays on the **Data Drift** tab to remind you to enable [feature drift tracking](data-drift-settings).
To receive email notifications on data drift status, [configure notifications](deploy-notifications), [schedule monitoring](data-drift-settings#schedule-data-drift-monitoring-notifications), and [configure data drift monitoring settings](data-drift-settings#define-data-drift-monitoring-notifications).
The **Data Drift** dashboard provides four interactive and exportable visualizations that help identify the health of a deployed model over a specified time interval.
!!! note
The **Export** button allows you to download each chart on the **Data Drift** dashboard as a PNG, CSV, or ZIP file.

| | Chart | Description |
|------|-------|---------|
|  | [**Feature Drift vs. Feature Importance**](#feature-drift-vs-feature-importance-chart) | Plots the importance of a feature in a model against how much the distribution of actual feature values has changed, or drifted, between one point in time and another. |
|  | [**Feature Details**](#feature-details-chart) | Plots percentage of records, i.e., the distribution, of the selected feature in the training data compared to the inference data. |
|  | [**Drift Over Time**](#drift-over-time-chart) | Illustrates the difference in distribution over time between the training dataset of the deployed model and the datasets used to generate predictions in production. This chart tracks the change in the Population Stability Index (PSI), which is a measure of [data drift](glossary/index#data-drift). |
|  | [**Predictions Over Time**](#predictions-over-time-chart) | Illustrates how the distribution of a model's predictions has changed over time (*target drift*). The display differs depending on whether the project is [regression](#for-regression-projects) or [binary classification](#for-binary-classification-projects). |
In addition to the visualizations above, you can use the [**Data Drift > Drill Down** tab](#drill-down-on-the-data-drift-tab) to compare data drift heat maps across the features in a deployment to identify drift trends.

## Configure the Data Drift dashboard {: #configure-the-data-drift-dashboard }
You can [customize how a deployment calculates data drift status](data-drift-settings) by configuring drift and importance thresholds and additional definitions on the **Data Drift > Settings** page. You can also use the following controls to configure the **Data Drift** dashboard as needed:

| | Control | Description |
|------|---------|---------|
|  | [Model](#use-the-version-selector) version selector | Updates the dashboard displays to reflect the model you selected from the dropdown. |
|  | [Date Slider](#use-the-date-slider) | Limits the range of data displayed on the dashboard (i.e., zooms in on a specific time period). |
|  | [Range (UTC)](#use-the-time-range-and-resolution-dropdowns) | Sets the date range displayed for the [deployment date slider](#use-the-date-slider). |
|  | [Resolution](#use-the-time-range-and-resolution-dropdowns) | Sets the time granularity of the [deployment date slider](#use-the-date-slider). |
|  | Selected Feature | Sets the feature displayed on the [Feature Details chart](#feature-details-chart) and the [Drift Over Time chart](#drift-over-time-chart). |
|  | Refresh | Initiates an on-demand update of the dashboard with new data. Otherwise, DataRobot refreshes the dashboard every 15 minutes. |
|  | Reset | Reverts the dashboard controls to the default settings. |
The **Data Drift** dashboard also supports [segmented analysis](deploy-segment), allowing you to view data drift while comparing a subset of training data to the predictions data for individual attributes and values using the **Segment Attribute** and **Segment Value** dropdowns.
## Feature Drift vs Feature Importance chart {: #feature-drift-vs-feature-importance-chart }
The **Feature Drift vs. Feature Importance** chart monitors the 25 most impactful numerical, categorical, and text-based features in your data.

Use the chart to see if data is different at one point in time compared to another. Differences may indicate problems with your model or in the data itself. For example, if users of an auto insurance product are getting younger over time, the data that built the original model may no longer result in accurate predictions for your newer data. Particularly, drift in features with high importance can be a warning flag about your model accuracy. Hover over a point in the chart to identify the feature name and report the precise values for drift (Y-axis) and importance (X-axis).
### Feature Drift
The Y-axis reports the **Drift** value for a feature. This value is a calculation of the <a target="_blank" href="https://www.kaggle.com/code/podsyp/population-stability-index/notebook">Population Stability Index (PSI)</a>, a measure of the difference in distribution over time.
{% include 'includes/drift-metrics-support.md' %}
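For reference, the following is a minimal, illustrative sketch of how a PSI-style drift score can be computed from binned training and scoring distributions; it is not DataRobot's exact implementation:

```python
import numpy as np

def psi(expected_fractions, actual_fractions, eps=1e-6):
    """Population Stability Index between two normalized histograms."""
    expected = np.clip(np.asarray(expected_fractions, dtype=float), eps, None)
    actual = np.clip(np.asarray(actual_fractions, dtype=float), eps, None)
    return float(np.sum((actual - expected) * np.log(actual / expected)))

# fraction of rows per bin in the training data vs. recent scoring data
training_bins = [0.10, 0.20, 0.40, 0.20, 0.10]
scoring_bins = [0.05, 0.15, 0.35, 0.25, 0.20]
print(psi(training_bins, scoring_bins))  # larger values indicate more drift
```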
### Feature Importance
The X-axis reports the **Importance** score for a feature, calculated when ingesting the learning (or training) data. DataRobot calculates feature importance differently depending on the model type. For DataRobot models and custom models, the **Importance** score is calculated using [**Permutation Importance**](feature-impact#permutation-based-feature-impact). For external models, the importance score is an [**ACE Score**](glossary/index#ace-scores). The dot resting at the Importance value of `1` represents the target prediction. The most important feature in the model also appears at 1 (as a solid green dot).

### Interpret the quadrants {: #interpret-the-quadrants }
The quadrants represented in the chart help to visualize feature-by-feature data drift plotted against the feature's importance. Quadrants can be loosely interpreted as follows:

| Quadrant | Read as... | Color indicator |
|----------|------------|-----------------|
|  | High importance feature(s) are experiencing high drift. Investigate immediately. | Red |
|  | Lower importance feature(s) are experiencing drift above the set threshold. Monitor closely. | Yellow |
|  | Lower importance feature(s) are experiencing minimal drift. No action needed. | Green |
|  | High importance feature(s) are experiencing minimal drift. No action needed, but monitor features that approach the threshold. | Green |
Note that points on the chart can also be gray or white. Gray circles represent features that have been excluded from drift status calculation, and white circles represent features set to high importance.
If you are the project owner, you can click the gear icon in the upper-right corner of the chart to reset the quadrants. The drift threshold defaults to 0.15. The Y-axis scales from 0 to the higher of 0.25 and the highest observed drift value. These quadrants can be customized by [changing the drift and importance thresholds](data-drift-settings).
## Feature Details chart {: #feature-details-chart }

The **Feature Details** chart provides a histogram that compares the distribution of a selected feature in the training data to the distribution of that feature in the inference data.
### Numeric features {: #numeric-features }
For numeric data, DataRobot computes an efficient and precise approximation of the distribution of each feature. Based on this, drift tracking is conducted by comparing the normalized histogram for the training data to the scoring data using the selected drift metrics.
The chart displays 13 bins for numeric features:
* 10 bins capture the range of items observed in the training data.
* Two bins capture very high and very low values—extreme values in the scoring data that fall outside the range of the training data.
* One bin for the **Missing** count, containing all records with [missing feature values](model-ref#missing-values).
### Categorical features {: #categorical-features }
Unlike numeric data, where binning cutoffs for a histogram result from a data-dependent calculation, categorical data is inherently discrete in form (that is, not continuous), so binning is based on a defined category. Additionally, there could be missing or unseen category levels in the scoring data.
The process for drift tracking of categorical features is to calculate the fraction of rows for each categorical level ("bin") in the training data. This results in a vector of percentages for each level. The 25 most frequent levels are directly tracked—all other levels are aggregated to an **Other** bin. This process is repeated for the scoring data, and the two vectors are compared using the selected drift metric.
For categorical features, the chart includes two unique bins:
* The **Other** bin contains all categorical levels outside the 25 most frequent values. This aggregation is performed for drift tracking purposes; it doesn't represent the model's behavior.
* The **New level** bin only displays after you make predictions with data that has a new value for a feature not in the training data. For example, consider a dataset about housing prices with the categorical feature `City`. If your inference data contains the value `Boston` and your training data did not, the `Boston` value (and other unseen cities) are represented in the New level bin.
To use the chart, select a feature from the dropdown. The list, which defaults to the target feature, includes all tracked features. You can also click a point in the **Feature Drift vs. Feature Importance** chart:

### Text features {: #text-features }
Text features are a high-cardinality problem, meaning the addition of new words does not have the impact of, for example, new levels found in categorical data. The method DataRobot uses to track drift of text features accounts for the fact that writing is subjective and cultural and may have spelling mistakes. In other words, to identify drift in text fields, it is more important to identify a shift in the whole language rather than in individual words.
Drift tracking for a text feature is conducted by:
1. Detecting occurrences of the 1000 most frequent words from rows found in the training data.
2. Calculating the fraction of rows that contain these terms for that feature in the training data and separately in the scoring data.
3. Comparing the fraction in the scoring data to that in the training data.
The two vectors of occurrence fractions (one entry per word) are compared with the available drift metrics. Prior to applying this methodology, DataRobot performs basic tokenization by splitting the text feature into words (or characters in the case of Japanese or Chinese).
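A rough illustration of this approach (simple whitespace tokenization and per-row occurrence fractions; not DataRobot's exact implementation) is shown below. The two resulting vectors can then be compared with a drift metric such as the PSI sketch above:

```python
from collections import Counter

def occurrence_fractions(texts, vocabulary):
    """Fraction of rows in which each vocabulary word occurs at least once."""
    counts = Counter()
    for text in texts:
        counts.update(set(text.lower().split()))
    return [counts[word] / max(len(texts), 1) for word in vocabulary]

training_texts = ["great product fast shipping", "poor quality", "great value"]
scoring_texts = ["slow shipping", "poor support", "poor quality control"]

# vocabulary: the most frequent words in the training rows (top 1000 in practice)
vocab_counts = Counter(w for t in training_texts for w in set(t.lower().split()))
vocabulary = [w for w, _ in vocab_counts.most_common(1000)]

training_vector = occurrence_fractions(training_texts, vocabulary)
scoring_vector = occurrence_fractions(scoring_texts, vocabulary)
print(list(zip(vocabulary, training_vector, scoring_vector)))
```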
For text features, the [Feature Details chart](#feature-details-chart) replaces the feature drift bar chart with a word cloud visualizing data distributions for each token and revealing how much each individual token contributes to data drift in a feature.
To access the feature drift word cloud for a text feature:
1. Open the **Data Drift** tab of a [drift-enabled](data-drift-settings) deployment.
2. On the **Summary** tab, in the **Feature Details** chart, select a text feature from the dropdown list.
!!! note
You can also select a text feature from the **Selected Feature** dropdown list in the **Data Drift** dashboard controls.

3. Use the dashboard controls to [configure the Data Drift dashboard](data-drift#configure-the-data-drift-dashboard).
4. To interpret the feature drift word cloud for a text feature, you can hold the pointer over a token to view the following details:
!!! tip
When your pointer is over the word cloud, you can scroll up to zoom in and view the text of smaller tokens.

Chart element | Description
-------------------|------------
Token | The tokenized text. Text size represents the token's drift contribution and text color represents the dataset prevalence. Stop words are hidden from this chart.
Drift contribution | How much this particular token contributes to the feature's drift value, as reported in the [Feature Drift vs. Feature Importance](data-drift#feature-drift-vs-feature-importance-chart) and [Drift Over Time](data-drift#drift-over-time-chart) charts.
Data distribution | How much more often this particular token appears in the training data or the predictions data. <ul><li><span style="color: blue">Blue</span>: This token appears `X`% more often in training data.</li><li><span style="color: red">Red</span>: This token appears `X`% more often in predictions data.</li></ul>
!!! note
The **Export** button does not export the feature drift word cloud, only the standard chart. Next to the **Export** button, you can click the settings icon () and clear the **Display text features as word cloud** check box to disable the feature drift word cloud and view the standard chart:

## Drift Over Time chart {: #drift-over-time-chart }
The **Drift Over Time** chart visualizes the difference in distribution over time between the training dataset of the deployed model and the datasets used to generate predictions in production. The drift away from the baseline established with the training dataset is measured using the Population Stability Index (PSI). As a model continues to make predictions on new data, the change in the PSI over time is visualized for each tracked feature, allowing you to identify [data drift](glossary/index#data-drift) trends.
As data drift can decrease your model's predictive power, determining when a feature started drifting and monitoring how that drift changes (as your model continues to make predictions on new data) can help you estimate the severity of the issue. You can then compare data drift trends across the features in a deployment to identify correlated drift trends between specific features. In addition, the chart can help you identify seasonal effects (significant for time-aware models). This information can help you identify the cause of data drift in your deployed model, including data quality issues, changes in feature composition, or changes in the context of the target variable. The example below shows the PSI consistently increasing over time, indicating worsening data drift for the selected feature.
The **Drift Over Time** chart includes the following elements and controls:

| | Chart element | Description |
|-|-----------------------------|-----------------|
 | Selected Feature | Selects a feature for drift over time analysis, which is then reported in the Drift Over Time chart and the [Feature Details chart](#feature-details-chart).
 | Time of Prediction / Sample size <br> (X-axis) | Represents the time range of the predictions used to calculate the corresponding drift value (PSI). Below the X-axis, a bar chart represents the number of predictions made during the corresponding Time of Prediction.
 | Drift <br> (Y-axis) | Represents the range of drift values (PSI) calculated for the corresponding Time of Prediction.
 | Training baseline | Represents the `0` PSI value of the training baseline dataset. |
 | Drift status information | Displays the drift status and threshold information for the selected feature. Drift status visualizations are based on the [monitoring settings configured by the deployment owner](data-drift-settings). The deployment owner can also set the drift and importance thresholds in the [Feature Drift vs Feature Importance chart settings](#feature-drift-vs-feature-importance-chart).<br> The possible drift status classifications are: <ul><li><span style="color:#00c96e">**Healthy (Green)**</span>: The feature is experiencing minimal drift. No action needed, but monitor features that approach the threshold.</li><li><span style="color:#e4b207">**At risk (Yellow)**</span>: A lower importance feature is experiencing drift above the set threshold. Monitor closely.</li><li><span style="color:#e64d4d">**Failing (Red)**</span>: A high importance feature is experiencing drift above the set threshold. Investigate immediately</li></ul> Feature importance is determined by comparing the feature impact score with the importance threshold value. For an important feature, the feature impact score is greater than or equal to the importance threshold.
 | Export | Exports the Drift Over Time chart.
To view additional information on the **Drift Over Time** chart, hover over a marker in the chart to see the **Time of Prediction**, **PSI**, and **Sample size**:

!!! tip
The X-axis of the Drift Over Time chart aligns with the X-axis of the Predictions Over Time chart below to make comparing the two charts easier. In addition, the *Sample size* data on the Drift Over Time chart is equivalent to the *Number of Predictions* data from the Predictions Over Time chart.
## Predictions Over Time chart {: #predictions-over-time-chart }
The **Predictions Over Time** chart provides an at-a-glance determination of how the model's predictions have changed over time. For example:
> Dave sees that his model is predicting `1` (readmitted) noticeably more frequently over the past month. Because he doesn't know of a corresponding change in the actual distribution of readmissions, he suspects that the model has become less accurate. With this information, he investigates further whether he should consider retraining.
Although the charts for binary classification and regression differ slightly, the takeaway is the same—are the plot lines relatively stable across time? If not, is there a business reason for the anomaly (for example, a blizzard)? One way to check this is to look at the bar chart below the plot. If the point for a binned period is abnormally high or low, check the histogram below to ensure there are enough predictions for this to be a reliable data point.
{% include 'includes/service-health-prediction-time.md' %}
Additionally, both charts have `Training` and `Scoring` labels across the X-axis. The `Training` label indicates the section of the chart that shows the distribution of predictions made on the holdout set of training data for the model. It will always have one point on the chart. The `Scoring` label indicates the section of the chart showing the distribution of predictions made on the deployed model. `Scoring` indicates that the model is in use to make predictions. It will have multiple points along the chart to indicate how prediction distributions change over time.
### For regression projects {: #for-regression-projects }
The **Predictions Over Time** chart for regression projects plots the average predicted value, as well as a visual indicator of the middle 80% range of predicted values for both training and prediction data. If training data is uploaded, the graph displays both the 10th-90th percentile and the mean value of the target ().

Hover over a point on the chart to view its details:

* **Date**: The starting date of the bin data. Displayed values are based on counts from this date to the next point along the graph. For example, if the date on point A is 01-07 and point B is 01-14, then point A covers everything from 01-07 to 01-13 (inclusive).
* **Average Predicted Value**: For all points included in the bin, this is the average of their values.
* **Predictions**: The number of predictions included in the bin. Compare this value against other points if you suspect anomalous data.
* **10th-90th Percentile**: Percentile of predictions for that time period.
Note that you can also display this information for the mean value of the target by hovering on the point in the training data.
### For binary classification projects {: #for-binary-classification-projects }
The **Predictions Over Time** chart for binary classification projects plots the class percentages based on the labels you set when you [added the deployment](deploy-methods/index) (in this example, `0` and `1`). It also reports the threshold set for prediction output. The threshold is set when adding your deployment to the inventory and cannot be revised.

Hover over a point on the chart to view its details:

* **Date**: The starting date of the bin data. Displayed values are based on counts from this date to the next point along the graph. For example, if the date on point A is 01-07 and point B is 01-14, then point A covers everything from 01-07 to 01-13 (inclusive).
* **<class-label>**: For all points included in the bin, the percentage of those in the "positive" class (`0` in this example).
* **<class-label>**: For all points included in the bin, the percentage of those in the "negative" class (`1` in this example).
* **Number of Predictions**: The number of predictions included in the bin. Compare this value against other points if you suspect anomalous data.
Additionally, the chart displays the mean value of the target in the training data. As with all plotted points, you can hover over it to see the specific values.

The chart also includes a toggle in the upper-right corner that allows you to switch between continuous and binary modes (only for binary classification deployments):

Continuous mode shows the positive class predictions as probabilities between 0 and 1, without taking the prediction threshold into account:

Binary mode takes the prediction threshold into account and shows, of all predictions made, the percentage for each possible class:

### Prediction warnings integration {: #prediction-warnings-integration }
If you have enabled [prediction warnings](humility-settings#prediction-warnings) for a deployment, any anomalous prediction values that trigger a warning are flagged in the [Predictions Over Time](#predictions-over-time-chart) bar chart.
!!! note
Prediction warnings are only available for regression model deployments.
The yellow section of the bar chart represents the anomalous predictions for a point in time.

To view the number of anomalous predictions for a specific time period, hover over the point on the plot corresponding to the flagged predictions in the bar chart.

## Use the version selector {: #use-the-version-selector }

You can change the data drift display to analyze the current, or any previous, version of a model in the deployment. Initially, if there has been no model replacement, you only see the **Current** option. The models listed in the dropdown can also be found in the **History** section of the [**Overview**](dep-overview) tab. This functionality is only supported with deployments made with models or model images.
## Use the time range and resolution dropdowns {: #use-the-time-range-and-resolution-dropdowns }
The **Range** and **Resolution** dropdowns help diagnose deployment issues by allowing you to change the granularity of the three deployment monitoring tabs: **Data Drift**, [**Service Health**](service-health), and [**Accuracy**](deploy-accuracy).
Expand the **Range** dropdown (1) to select the start and end dates for the time range you want to examine. You can specify the time of day for each date (to the nearest hour, rounded down) by editing the value after selecting a date. When you have determined the desired time range, click **Update range** (2). Select the **Range** reset icon () (3) to restore the time range to the previous setting.

!!! note
Note that the date picker only allows you to select dates and times between the start date of the deployment's [current version](#use-the-version-selector) of a model and the current date.
After setting the time range, use the **Resolution** dropdown to determine the granularity of the date slider. Select from hourly, daily, weekly, and monthly granularity based on the time range selected. If the time range is longer than 7 days, hourly granularity is not available.
When you choose a new value from the **Resolution** dropdown, the resolution of the date selection [slider](#use-the-date-slider) changes. Then, you can select start and end points on the slider to hone in on the time range of interest.

Note that the selected slider range also carries across the [**Service Health**](service-health) and [**Accuracy**](deploy-accuracy) tabs (but not across deployments).
## Use the date slider {: #use-the-date-slider }
The date slider limits the time range used for comparing prediction data to training data. The upper dates, shown at the slider's left and right edges, indicate the range currently used for comparison in the page's visualizations. The lower dates at the left and right edges indicate the full date range of prediction data available. The circles mark the "data buckets," which are determined by the time range.

To use the slider, click a point to move the line or drag the endpoint left or right.

The visualizations use predictions from the starting point of the updated time range as the baseline reference point, comparing them to predictions occurring up to the last date of the selected time range.
You can also move the slider to a different time interval while maintaining the periodicity. Click anywhere on the slider between the two endpoints to drag it (you will see a hand icon on your cursor).

In the example above, the slider spans a 3-month time interval. You can drag the slider to different dates while maintaining that 3-month interval.

By default, the slider is set to display the same date range that is used to calculate and display drift status. For example, if drift status captures the last week, then the default slider range will span from the last week to the current date.
You can move the slider to any date range without affecting the data drift status display on the health dashboard. If you do so, a **Reset** button appears above the slider. Clicking it will revert the slider to the default date range that matches the range of the drift status.
## Use the class selector {: #use-the-class-selector }
Multiclass deployments offer [class-based configuration](deploy-accuracy#class-selector) to modify the data displayed on the Data Drift graphs.
Predictions Over Time multiclass graph:

Feature Details multiclass graph:

## Drill down on the Data Drift tab {: #drill-down-on-the-data-drift-tab }
The **Data Drift** > **Drill Down** chart visualizes the difference in distribution over time between the training dataset of the deployed model and the datasets used to generate predictions in production. The drift away from the baseline established with the training dataset is measured using the Population Stability Index (PSI). As a model continues to make predictions on new data, the change in the drift status over time is visualized as a heat map for each tracked feature, allowing you to identify [data drift](glossary/index#data-drift) trends.
Because data drift can decrease your model's predictive power, determining when a feature started drifting and monitoring how that drift changes (as your model continues to make predictions on new data) can help you estimate the severity of the issue. Using the **Drill Down** tab, you can compare data drift heat maps across the features in a deployment to identify correlated drift trends. In addition, you can select one or more features from the heat map to view a **Feature Drift Comparison** chart, comparing the change in a feature's data distribution between a reference time period and a comparison time period to visualize drift. This information helps you identify the cause of data drift in your deployed model, including data quality issues, changes in feature composition, or changes in the context of the target variable.
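For reference, PSI compares the binned distribution of a feature in production against its distribution in the training data. The following is a minimal, generic sketch of that calculation for a numeric feature; the bin count, binning scheme, and handling of missing values are simplifications, so DataRobot's reported values may differ.

```python
import numpy as np

def psi(reference, comparison, n_bins=10, eps=1e-6):
    """Population Stability Index between a reference (training) sample and a
    comparison (production) sample of one numeric feature."""
    # Bin edges come from the reference distribution (quantile binning here).
    edges = np.quantile(reference, np.linspace(0, 1, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch values outside the training range
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference) + eps
    cmp_pct = np.histogram(comparison, bins=edges)[0] / len(comparison) + eps
    return float(np.sum((cmp_pct - ref_pct) * np.log(cmp_pct / ref_pct)))

rng = np.random.default_rng(0)
training = rng.normal(0, 1, 10_000)      # baseline feature values
production = rng.normal(0.3, 1, 10_000)  # slightly shifted production values
print(f"PSI: {psi(training, production):.3f}")  # larger values indicate more drift
```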
To access the **Drill Down** tab:
1. Click **Deployments**, and then select a [drift-enabled](data-drift-settings) deployment from the **Deployments** inventory.
2. In the deployment, click **Data Drift**, and then click **Drill Down**:

3. On the **Drill Down** tab:
* [Configure the display settings.](#configure-the-drill-down-display-settings)
* [Use the feature drift heat map.](#use-the-feature-drift-heat-map)
* [Use the feature drift comparison chart.](#use-the-feature-drift-comparison-chart)
### Configure the drill down display settings {: #configure-the-drill-down-display-settings }
The **Drill Down** tab includes the following display controls:

| | Control | Description |
|-|---------|-------------|
 | Model | Updates the heatmap to display the model you selected from the dropdown.
 | Date slider | Limits the range of data displayed on the dashboard (i.e., zooms in on a specific time period).
 | Range (UTC) | Sets the date range displayed for the deployment date slider.
 | Resolution | Sets the time granularity of the deployment date slider.
 | Reset | Reverts the dashboard controls to the default settings.
### Use the feature drift heat map {: #use-the-feature-drift-heat-map }
The **Feature Drift for all features** heat map includes the following elements and controls:

| | Element | Description |
|-|---------|-------------|
 | Prediction time <br> (X-axis) | Represents the time range of the predictions used to calculate the corresponding drift value (PSI). Below the X-axis, the **Prediction sample size** bar chart represents the number of predictions made during the corresponding prediction time range.
 | Feature <br> (Y-axis) | Represents the features in a deployment's dataset. Click a feature name to generate the [feature drift comparison](#use-the-feature-drift-comparison-chart) below.
 | Status heat map | Displays the drift status over time for each of a deployment's features. Drift status visualizations are based on the [monitoring settings configured by the deployment owner](data-drift-settings). The deployment owner can also set the drift and importance thresholds in the [Feature Drift vs Feature Importance chart settings](data-drift#feature-drift-vs-feature-importance-chart).<br> The possible drift status classifications are: <ul><li><span style="color:#00c96e">**Healthy (Green)**</span>: The feature is experiencing minimal drift. No action needed, but monitor features that approach the threshold.</li><li><span style="color:#e4b207">**At risk (Yellow)**</span>: A lower importance feature is experiencing drift above the set threshold. Monitor closely.</li><li><span style="color:#e64d4d">**Failing (Red)**</span>: A high importance feature is experiencing drift above the set threshold. Investigate immediately</li></ul> Feature importance is determined by comparing the feature impact score with the importance threshold value. For an important feature, the feature impact score is greater than or equal to the importance threshold.
 | Prediction sample size | Displays the number of rows of prediction data used to calculate the data drift for the given time period. To view additional information on the prediction sample size, hover over a bin in the chart to see the time of prediction range and the sample size value.
### Use the feature drift comparison chart {: #use-the-feature-drift-comparison-chart }
The **Feature Drift Comparison** section includes the following elements and controls:

| | Element | Description |
|-|---------|-------------|
 | Reference period | Sets the date range of the period to use as a baseline for the drift comparison charts.
 | Comparison period | Sets the date range of the period to compare data distribution against the reference period. You can also [select an area of interest on the heat map to serve as the comparison period](#set-comparison-period).
 | Feature values <br> (X-axis) | Represents the range of values in the dataset for the feature in the Feature Drift Comparison chart.
 | Percentage of Records <br> (y-axis) | Represents the percentage of the total dataset represented by a range of values and provides a visual comparison between the selected reference and comparison periods.
 | Add a feature drift comparison chart | Generates a Feature Drift Comparison chart for a selected feature.
 | Remove this chart | Removes a Feature Drift Comparison chart.
??? tip "Set the comparison period on the feature drift heat map"
<span id="set-comparison-period">To select an area of interest on the heat map to serve as the comparison period, click and drag to select the period you want to target for feature drift comparison:</span>

To view additional information on a **Feature Drift Comparison** chart, hover over a bar in the chart to see the range of values contained in that bar, the percentage of the total dataset those values represent in the **Reference period**, and the percentage of the total dataset those values represent in the **Comparison period**:

|
data-drift
|
---
title: Segmented analysis
description: Segmented analysis filters data drift and accuracy statistics into unique segment attributes and values to identify potential issues in your training and prediction data.
---
# Segmented analysis {: #segmented-analysis }
Segmented analysis identifies operational issues with training and prediction data requests for a deployment. DataRobot enables the drill-down analysis of data drift and accuracy statistics by filtering them into unique segment attributes and values.
Reference the guidelines below to understand how to [configure](#configure-segmented-analysis), [view](#view-segmented-analysis), and [apply](#apply-segmented-analysis) segmented analysis.
## Configure segmented analysis {: #configure-segmented-analysis }
To use segmented analysis for service health, data drift, and accuracy, you must enable the following deployment settings:
* [Enable target monitoring](data-drift-settings) (required to enable data drift _and_ accuracy tracking)
* [Enable feature drift tracking](data-drift-settings) (required to enable data drift tracking)
* [Track attributes for segmented analysis of training data and predictions](service-health-settings) (required to enable segmented analysis for service health, data drift, _and_ accuracy)
!!! note
Only the deployment owner can configure these settings.
## View segmented analysis {: #view-segmented-analysis }
If you have enabled [segmented analysis](#configure-segmented-analysis) for your deployment and have [made predictions](../../predictions/index), you can access various statistics by segment. By default, statistics for a deployment are displayed without any segmentation.
There are two dropdown menus used for segment analysis: **Segment Attribute** and **Segment Value**.

#### Service health {: #service-health }
Segmented analysis for service health uses fixed segment attributes for every deployment. The segment attributes represent the different ways in which prediction requests can be viewed. A segment value is a single value of the selected segment attribute present in one or more prediction requests; the available values depend on the segment attribute applied:
| Segment Attribute | Description | Segment Value | Example |
|-------------------|-------------|---------------|---------|
| DataRobot-Consumer | Segments prediction requests by the users of a deployment that have made prediction requests. | Each segment value is the email address of a user. | **Segment Attribute:** DataRobot-Consumer <br> **Value:** nate@datarobot.com |
| DataRobot-Host-IP | Segments prediction requests by the IP address of the prediction server used to make prediction requests. | Each segment value is a unique IP address. | **Segment Attribute:** DataRobot-Host-IP <br> **Value:** 168.212.226.204 |
| DataRobot-Remote-IP | Segments prediction requests by the IP address of a caller (the machine used to make prediction requests). | Each segment value is a unique IP address. | **Segment Attribute:** DataRobot-Remote-IP <br> **Value:** 63.211.546.231 |
Select a segment attribute, then select a segment value for that attribute. When both are selected, the service health tab automatically refreshes to display the statistics for the selected segment value.
!!! note
The segment values that appear are tied to the specified time range. If a user only contributed prediction requests outside the specified time range, that user does not appear as a selectable segment value in the dropdown menu.
#### Data drift and accuracy {: #data-drift-and-accuracy }
Segmented analysis for data drift and accuracy allows for custom attributes in addition to fixed attributes for every deployment. The segment attributes represent the different ways in which the data can be viewed. A segment value is a single value of the selected segment attribute present in one or more prediction requests; the available values depend on the segment attribute applied:
| Segment Attribute | Description | Segment Value | Example |
|-------------------|-------------|---------------|---------|
| DataRobot-Consumer | Segments prediction requests by the users of a deployment that have made prediction requests. | Each segment value is the email address of a user. | **Segment Attribute:** DataRobot-Consumer <br> **Value:** nate@datarobot.com |
| Custom attribute | Segments based on a column in the training data that you specify when configuring segmented analysis. For example, if your training data includes a "Country" column, you could select it as a custom attribute and segment the data by individual countries (which make up the segment values for the custom attribute); see the scoring sketch below. | Based on the segment attribute you provide. | **Segment Attribute:** "Country" <br> **Value:** "Spain" |
| None | Displays the data drift statistics without any segmentation. | All (no segmentation applied). | N/A |
Select a segment attribute, and then select a segment value for that attribute. When both are selected, the **Data Drift** tab automatically refreshes to display the statistics for the selected segment value.
Note that the segment values that appear are tied to the specified time range. If a tracked segment attribute or value was present only in prediction requests outside the specified time range, that attribute or value does not appear in the dropdown menu.
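If you track a custom attribute such as "Country", the column only needs to be present in the scoring data you send; DataRobot attributes each row to a segment value when computing drift and accuracy. The following is a minimal sketch using the DataRobot Python client's batch prediction interface; the endpoint, token, deployment ID, and file names are placeholders, and the deployment is assumed to already have "Country" configured for segmented analysis.

```python
import datarobot as dr

# Placeholders: supply your own endpoint, API token, and deployment ID.
dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

# The scoring file includes the tracked "Country" column alongside the
# model's features, so each prediction can be attributed to a segment value.
job = dr.BatchPredictionJob.score(
    deployment="YOUR_DEPLOYMENT_ID",
    intake_settings={"type": "localFile", "file": "scoring_data_with_country.csv"},
    output_settings={"type": "localFile", "path": "predictions.csv"},
)
job.wait_for_completion()
```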
## Apply segmented analysis {: #apply-segmented-analysis }
One use case for segmented analysis is determining the source of the data error rate for a deployment. For example, this deployment, without segmentation, displays an error rate of 14.39% for the given time range:

Segmented analysis helps you understand where an error rate is coming from. For example, selecting "DataRobot-Consumer" from the **Segment Attribute** dropdown shows the Data Error Rate for the prediction requests made by individual users for a specified time window. Selecting an individual user from the **Segment Value** dropdown shows service health statistics for their segment of the prediction requests.
In this case, by selecting the user john.bledsoe@datarobot.com, the statistics refresh to display this user's stats. He made 25,000 predictions over 250 requests, with an error rate of 0%:

You can interpret this to mean that the user did not contribute to the overall error rate for this deployment. However, selecting a different user making prediction requests for this deployment shows that they made 1010 predictions over 160 requests, with an error rate of 36.875%:

The information gathered from segmented analysis clearly indicates where a deployment's error rate is coming from, allowing the admin to contact the user contributing the erroneous data and rectify any issues.
|
deploy-segment
|
---
title: Custom Metrics tab
description: Create and monitor up to 25 custom business or performance metrics.
---
# Custom Metrics tab {: #custom-metrics-tab }
On a deployment's **Custom Metrics** tab, you can use the data you collect from the [**Data Export** tab](data-export) (or data calculated through other custom metrics) to compute and monitor up to 25 custom business or performance metrics. These metrics are recorded on the configurable **Custom Metric Summary** dashboard, where you monitor, visualize, and export each metric's change over time. This feature allows you to implement your organization's specialized metrics, expanding on the insights provided by DataRobot's built-in [Service Health](service-health), [Data Drift](data-drift), and [Accuracy](deploy-accuracy) metrics.
!!! info "Availability information"
Custom metrics don't support [segmented analysis](deploy-segment).
To access custom metrics:
1. In the top navigation bar, click **Deployments**.
2. On the **Deployments** tab, click on the deployment for which you want to create custom metrics.
3. In the deployment, click the **Custom Metrics** tab.
## Add custom metrics {: #add-custom-metrics }
The **Custom Metrics** tab can track up to 25 metrics. To add custom metrics:
1. On the **Custom Metrics** tab, click **+ Add Custom Metric**.
2. In the **Add Custom Metric** dialog box, click **Add external metric**, click **Next**, and then configure the metric settings:

Field | Description
------------------|------------
Name | (Required) A descriptive name for the metric. This name appears on the **Custom Metric Summary** dashboard.
Description | A description of the custom metric; for example, you could describe the purpose, calculation method, and more.
Name of y-axis | (Required) A descriptive name for the dependent variable. This name appears on the custom metric's chart on the **Custom Metric Summary** dashboard.
Default interval | Determines the default interval used by the selected **Aggregation type**. Only **HOUR** is supported.
Baseline | Determines the value used as a basis for comparison when calculating the **x% better** or **x% worse** values.
Aggregation type | Determines if the metric is calculated as a **Sum**, **Average**, or **Gauge**.
Metric direction | Determines the directionality of the metric and changes how changes to the metric are visualized. You can select **Bigger is better** or **Lower is better**. For example, if you choose **Lower is better**, a 10% decrease in the calculated value of your custom metric is considered **10% better** and displayed in green.
Timestamp column | (Required) The column in the dataset containing a timestamp. This field can't be edited after you create the metric.
Value column | (Required) The column in the dataset containing the values used for custom metric calculation. This field can't be edited after you create the metric.
Date format | The date format used by the timestamp column. This field can't be edited after you create the metric.
Is Model Specific | When enabled, this setting links the metric to the model with the **Model Package ID** provided in the dataset. This setting influences when values are aggregated (or uploaded). For example: <ul><li>*Model specific (enabled)*: Model accuracy metrics are model specific, so the values are aggregated completely separately. When you replace a model, the chart for your custom accuracy metric only shows data for the days after the replacement.</li><li>*Not model specific (disabled)*: Revenue metrics aren't model specific, so the values are aggregated together. When you replace a model, the chart for your custom revenue metric doesn't change.</li></ul> This field can't be edited after you create the metric.
3. Click **Add custom metric**.
## Upload data to custom metrics {: #upload-data-to-custom-metrics }
After you create a custom metric, you can provide data to calculate the metric:
1. On the **Custom Metrics** tab, locate the custom metric for which you want to upload data, and then click the **Upload Data** icon.

2. In the **Upload Data** dialog box, select an upload method, and then click **Next**:

Upload method | Description
--------------------|------------
Upload data as file | In the **Choose file** dialog box, drag and drop file(s) to upload, or click **Choose file > Local file** to browse your local filesystem, and then click **Submit data**. You can upload up to 10GB in one file.
Use AI Catalog | In the **Select a dataset from the AI Catalog** dialog box, click a dataset from the list, and then click **Select a dataset**. The AI Catalog includes datasets from the [**Data Export** tab](data-export).
Use API | In the **Use API Client** dialog box, click **Copy to clipboard**, and then modify and use the API snippet to upload a dataset. You can upload up to 10,000 values in one API call. The sketch after this table shows the general shape of such a call.
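For the **Use API** method, the snippet copied from the dialog is the source of truth for the exact route and payload. As a rough illustration only, such a call typically resembles the following sketch; the endpoint path, payload keys, and IDs shown here are assumptions, not a documented contract.

```python
import requests

# Placeholders -- replace with the values from the Use API Client dialog.
API_TOKEN = "YOUR_API_TOKEN"
BASE_URL = "https://app.datarobot.com/api/v2"
DEPLOYMENT_ID = "YOUR_DEPLOYMENT_ID"
CUSTOM_METRIC_ID = "YOUR_CUSTOM_METRIC_ID"

# Each entry pairs a timestamp with a value for the metric's value column.
# A single call can contain at most 10,000 values.
payload = {
    "buckets": [
        {"timestamp": "2024-01-15T10:00:00Z", "value": 42.0},
        {"timestamp": "2024-01-15T11:00:00Z", "value": 38.5},
    ]
}

response = requests.post(
    f"{BASE_URL}/deployments/{DEPLOYMENT_ID}/customMetrics/{CUSTOM_METRIC_ID}/fromJSON/",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json=payload,
)
response.raise_for_status()
```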
## Manage custom metrics {: #manage-custom-metrics }
On the **Custom Metrics** dashboard, after you've added your custom metrics, you can edit, arrange, or delete them.
=== "Edit or delete metrics"
To edit or delete a metric, on the **Custom Metrics** tab, locate the custom metric you want to manage, and then click the more options icon:

* To edit a metric, click **Edit**, update any configurable settings, and then click **Update custom metric**.
* To delete a metric, click **Delete**.
=== "Arrange or hide metrics"
To arrange or hide metrics on the **Custom Metric Summary** dashboard, locate the custom metric you want to move or hide:

* To move a metric, click the grid icon () on the left side of the metric tile and then drag the metric to a new location.
* To hide a metric's chart, clear the checkbox next to the metric name.
## Configure the custom metric dashboard display settings {: #configure-the-custom-metric-dasboard-display-settings }
Configure the following settings to specify the custom metric calculations you want to view on the dashboard:

| | Setting | Description |
|-|---------|-------------|
|  | Model | Select the deployment's model, current or previous, to show custom metrics for. |
|  | Range (UTC) | Select the start and end dates of the period from which you want to view custom metrics. |
|  | Resolution | Select the granularity of the date slider. Select from hourly, daily, weekly, and monthly granularity based on the time range selected. If the time range is longer than 7 days, hourly granularity is not available. |
|  | Refresh | Refresh the custom metric dashboard. |
|  | Reset | Reset the custom metric dashboard's display settings to the default. |
|
custom-metrics
|
---
title: Service Health tab
description: How to use the Service Health tab, which tracks metrics for how quickly a deployment responds to prediction requests to find bottlenecks and assess capacity.
---
# Service Health tab {: #service-health-tab }
The **Service Health** tab tracks metrics about a deployment's ability to respond to prediction requests quickly and reliably. This helps identify bottlenecks and assess capacity, which is critical to proper provisioning.
For example, if a model seems to have generally slowed in its response times, the **Service Health** tab for the model's deployment can help. You might notice in the tab that median latency goes up with an increase in prediction requests. If latency increases when a new model is switched in, you can consult with your team to determine whether the new model can instead be replaced with one offering better performance.
To access **Service Health**, select an individual deployment from the deployment inventory page and, from the resulting **Overview** page, choose the **Service Health** tab. The tab provides informational [tiles](#understanding-the-metric-tiles) and a [chart](#understanding-the-service-health-chart) to help assess the activity level and health of the deployment.

{% include 'includes/service-health-prediction-time.md' %}
## Use the time range and resolution dropdowns {: #use-the-time-range-and-resolution-dropdowns }
The controls—model version and data time range selectors—work the same as those available on the [**Data Drift**](data-drift#use-the-time-range-and-resolution-dropdowns) tab. The **Service Health** tab also supports [segmented analysis](deploy-segment), allowing you to view service health statistics for individual segment attributes and values.

## Understand the metric tiles {: #understand-the-metric-tiles }
DataRobot displays informational statistics based on your current settings for model and time frame. That is, tile values correspond to the same units as those selected on the slider. If the slider interval values are weekly, the displayed tile metrics show values corresponding to weeks. Clicking a metric tile updates the chart below.
**Service Health** reports on the following metrics:
| Statistic | Reports for selected time period... |
|-------------------|-------------------------|
| Total Predictions | The number of *predictions* the deployment has made. |
| Total Requests | The number of prediction requests the deployment has received (a single request can contain multiple prediction requests). |
| Requests over... | The number of requests where the response time was longer than the specified number of milliseconds. The default is 2000 ms; click in the box to enter a time between 10 and 100,000 ms or adjust with the controls. |
| Response Time | The time (in milliseconds) DataRobot spent receiving a prediction request, calculating the prediction, and returning a response to the user. The report does not include time due to network latency. Select the median prediction request time or the 90th, 95th, or 99th percentile. The display reports a dash if no requests have been made against the deployment or if it's an external deployment. The timing sketch after this table shows how client-side measurements relate to this metric. |
| Execution Time | The time (in milliseconds) DataRobot spent calculating a prediction request. Select the median prediction request time or 90th, 95th, or 99th percentile. |
| Median/Peak Load | The median and maximum number of requests per minute. |
| Data Error Rate | The percentage of requests that result in a 4xx error (problems with the prediction request submission). This is a component of the value reported as the Service Health Summary in the [**Deployments**](deploy-inventory) page top banner. |
| System Error Rate | The percentage of well-formed requests that result in a 5xx error (problem with the DataRobot prediction server). This is a component of the value reported as the Service Health Summary in the [**Deployments**](deploy-inventory) page top banner. |
| Consumers | The number of distinct users (identified by API key) who have made prediction requests against this deployment. |
| Cache Hit Rate | The percentage of requests that used a cached model (the model was recently used by other predictions). If not cached, DataRobot has to look the model up, which can cause delays. The prediction server cache holds 16 models by default, dropping the least recently used model when the limit is reached. |
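As a rough way to relate these metrics to what a caller observes, the sketch below times a single prediction request from the client side; the URL, headers, and file are placeholders you would adapt to your prediction server. Because this is wall-clock time at the client, it includes network latency and will usually be somewhat higher than the Response Time reported here.

```python
import time
import requests

# Placeholders for a real prediction request; adjust to your environment.
PREDICTION_URL = (
    "https://your-prediction-server.example.com/predApi/v1.0/"
    "deployments/YOUR_DEPLOYMENT_ID/predictions"
)
headers = {
    "Authorization": "Bearer YOUR_API_TOKEN",
    "DataRobot-Key": "YOUR_DATAROBOT_KEY",  # often required on managed cloud; omit if not applicable
    "Content-Type": "text/csv",
}

with open("scoring_data.csv", "rb") as f:
    start = time.perf_counter()
    response = requests.post(PREDICTION_URL, headers=headers, data=f)
    elapsed_ms = (time.perf_counter() - start) * 1000

# Client-side wall-clock time, including network latency.
print(f"HTTP {response.status_code} in {elapsed_ms:.0f} ms")
```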
## Understand the Service Health chart {: #understand-the-service-health-chart }
The chart below the tiled metrics displays individual metrics over time, helping to identify patterns in the quality of service. Clicking on a metric tile updates the chart to represent that information; you can also export it. Adjust the data range slider to narrow in on a specific period:

Some charts will display multiple metrics:

## View MLOps Logs {: #view-mlops-logs }
On the **MLOps Logs** tab, you can view important deployment events. These events can help diagnose issues with a deployment or provide a record of the actions leading to the current state of the deployment. Each event has a type and a status. You can filter the event log by event type, event status, or time of occurrence, and you can view more details for an event on the **Event Details** panel.
1. On a deployment's **Service Health** page, scroll to the **Recent Activity** section at the bottom of the page.
2. In the **Recent Activity** section, click **MLOps Logs**.
3. Under **MLOps Logs**, configure any of the following filters:

| Element | Description |
|---|---|
|  | Set the **Categories** filter to display log events by deployment feature: <ul><li>**Accuracy**: events related to actuals processing.</li><li>**Challengers**: events related to challengers functionality.</li><li>**Monitoring**: events related to general deployment actions; for example, model replacements or clearing deployment stats.</li><li>**Predictions**: events related to predictions processing.</li><li>**Retraining**: events related to deployment retraining functionality.</li></ul>The default filter displays all event categories. |
|  | Set the **Status Type** filter to display events by status: <ul><li>**Success**</li><li>**Warning**</li><li>**Failure**</li><li>**Info**</li></ul> The default filter displays **Any** status type. |
|  | Set the **Range (UTC)** filter to display events logged within the specified range (UTC). The default filter displays the last seven days up to the current date and time. |
??? faq "What errors are surfaced in the MLOps Logs?"
* Actuals with missing values
* Actuals with duplicate association ID
* Actuals with invalid payload
* Challenger created
* Challenger deleted
* Challenger replay error
* Challenger model validation error
* Deployment historical stats reset
* Model replacement validation warning
* Prediction processing limit reached
* Predictions missing required association ID
* Retraining policy success
* Retraining policy error
4. On the left panel, the **MLOps Logs** list displays deployment events with any selected filters applied. For each event, you can view a summary that includes the event name and status icon, the timestamp, and an event message preview.
5. Click the event you want to examine and review the **Event Details** panel on the right.

=== "General event details"
This panel includes the following details:
* Title
* Status Type (with a success, warning, failure, or info label)
* Timestamp
* Message (with text describing the event)
=== "Event-specific details"
You can also view the following details if applicable to the current event:
* Model ID
* Model Package ID (with a link to the package in the Model Registry if MLOps is enabled)
* Catalog ID (with a link to the dataset in the AI Catalog)
* Challenger ID
* Prediction Job ID (for the related batch prediction job)
* Affected Indexes (with a list of indexes related to the error event)
* Start/End Date (for events covering a specified period; for example, resetting deployment stats)
!!! tip
For ID fields without a link, you can copy the ID by clicking the copy button .
|
service-health
|
---
title: Challengers tab
description: How to use the Challengers tab to submit challenger models that shadow a deployed model and replay predictions made against the deployed model. If a challenger outperforms the deployed model, you can replace the model.
---
# Challengers tab {: #challengers-tab }
!!! info "Availability information"
The **Challengers** tab is a feature exclusive to DataRobot MLOps users. Contact your DataRobot representative for information on enabling it.
During model development, many models are often compared to one another until one is chosen to be deployed into a production environment. The **Challengers** tab provides a way to continue model comparison post-deployment. You can submit challenger models that shadow a deployed model and replay predictions made against the deployed model. This allows you to compare the predictions made by the challenger models to the currently deployed model (the "champion") to determine if there is a superior DataRobot model that would be a better fit.
## Enable challenger models {: #enable-challenger-models }
To enable challenger models for a deployment, you must enable the **Challengers** tab and [prediction row storage](challengers-settings). To do so, configure the deployment's data drift settings either when [creating a deployment](add-deploy-info#challenger-analysis) or on the [**Challengers > Settings**](challengers-settings) tab. If you enable Challenger models, prediction row storage is automatically enabled for the deployment. It cannot be turned off, as it is required for challengers.
!!! info "Availability information"
To enable challengers and replay predictions against them, the deployed model must support target drift tracking *and* not be a [Feature Discovery](fd-overview) or [Unstructured custom inference](unstructured-custom-models) model.

## Select a challenger model {: #select-a-challenger-model }
Before adding a challenger model to a deployment, you must first build and select the model to be added as a challenger. Complete the [modeling process](model-data) and choose a model from the Leaderboard, or deploy a [custom model](reg-create#add-a-custom-inference-model) as a model package. When selecting a challenger model, consider the following:
* It *must* have the same target type as the champion model.
* It *cannot* be the same Leaderboard model as an existing champion or challenger; each challenger *must* be a unique model. If you create multiple model packages from the same Leaderboard model, you can't use those models as challengers in the same deployment.
* It *cannot* be a Feature Discovery model.
* It *does not* need to be trained on the same feature list as the champion model; however, it *must* share some features, and, to successfully [replay predictions](#replay-predictions), you *must* send the union of all features required for the champion and challengers (see the sketch after this list).
* It *does not* need to be built from the same project as the champion model.
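To illustrate the feature-union requirement from the list above, the sketch below checks that a scoring file covers both models before replaying predictions; the feature lists and file name are hypothetical.

```python
import pandas as pd

# Hypothetical feature lists for the champion and one challenger.
champion_features = ["age", "income", "tenure_months"]
challenger_features = ["age", "income", "region"]

# Send the union of all features so every model can score each replayed row.
required_features = sorted(set(champion_features) | set(challenger_features))

scoring_data = pd.read_csv("scoring_data.csv")
missing = [f for f in required_features if f not in scoring_data.columns]
if missing:
    raise ValueError(f"Scoring data is missing required features: {missing}")
```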
When you have selected a model to serve as a challenger, from the Leaderboard, navigate to **Predict** > **Deploy** and select **Add model package to registry**. This creates a [model package](reg-create) for the selected model in the [**Model Registry**](registry/index), so you can add the model to a deployment as a challenger.

## Add challengers to a deployment {: #add-challengers-to-a-deployment }
To add a challenger model to a deployment, navigate to the **Challengers** tab and select **Add challenger model > Select existing model**. You can add up to 5 challengers to each deployment.

!!! note
The selection list contains only model packages where the target type and name are the same as the champion model.
The modal prompts you to select a model package from the registry to serve as a challenger model. Choose the model to add and click **Select model package**.
DataRobot verifies that the model shares features and a target type with the champion model. Once verified, click **Add Challenger**. The model is now added to the deployment as a challenger.

## Replay predictions {: #replay-predictions }
After adding a challenger model, you can replay stored predictions made with the champion model for all challengers, allowing you to compare performance metrics such as predicted values, accuracy, and data errors across each model.
To replay predictions, select **Update challenger predictions**.

??? "Organization considerations"
If you aren't in the [Organization](admin-overview#what-are-organizations) associated with the deployment, you don't have the required permissions to replay predictions against challenger models. This restriction also applies to deployment [Owners](roles-permissions#deployment-roles).
The champion model computes and stores up to 100,000 prediction rows per hour. The challengers replay the first 10,000 rows of the prediction requests made for each hour within the time range specified by the [date slider](data-drift#use-the-time-range-and-resolution-dropdowns). Note that for time series deployments, this limit does not apply. All prediction data is used by the challengers to compare statistics.
After predictions are made, click **Refresh** on the date slider to view an updated display of [performance metrics](#challenger-performance-metrics) for the challenger models.

## Schedule prediction replay {: #schedule-prediction-replay }
You can replay predictions with challengers on a periodic schedule instead of doing so manually. Navigate to a deployment's [**Challengers > Settings**](challengers-settings) tab, enable the **Automatically replay challengers** toggle, and configure the preferred cadence and time of day for replaying predictions:

!!! note
Only the deployment [_Owner_](roles-permissions#deployment-roles) can schedule challenger replay.
Once enabled, the replay will trigger at the configured time for all challengers. Note that if you have a deployment with prediction requests made in the past and choose to add challengers at the current time, the scheduled job scores the newly added challenger models upon the next run cycle.
## View challenger job history {: #view-challenger-job-history }
After adding one or more challenger models and replaying predictions, you can view challenger prediction jobs for a deployment's challengers on the [Deployments > Prediction Jobs](batch-pred-jobs#manage-prediction-jobs) page.
To view challenger prediction jobs, click **Job History**.

The Prediction Jobs page opens, filtered to display challenger jobs for the deployment from which you accessed the Job History.
## Challenger models overview {: #challenger-models-overview }
The **Challengers** tab displays information about the champion model and each challenger.

| | Element | Description |
|---|---|---|
|| Display Name | The display name for each model. Use the pencil icon to edit the display name. This field is useful for describing the purpose or strategy of each challenger (e.g., "reference model," "former champion," "reduced feature list").|
|| Challenger models | The list of challenger models. Each model is associated with a color. These colors allow you to compare the models using visualization tools.|
|| Model data | The metadata for each model, including the project name, model name, and the execution environment type.|
|| Prediction Environment | The external environment the model uses to manage deployment predictions on a system outside of DataRobot. For more information, see [Prediction environments](pred-env).|
|| Accuracy | The model's accuracy metric calculation for the selected date range and, for challengers, a comparison with the champion's accuracy metric calculation. Use the **Accuracy metric** dropdown menu to compare different metrics. For more information on model accuracy, see the [Accuracy chart](challengers#accuracy-chart).|
|| Training Data | The filename of the data used to train the model.|
|| Actions | The actions available for each model:<ul><li>**Replace**: Promotes a challenger to the champion (the currently deployed model) and demotes the current champion to a challenger model. </li><li>**Remove**: Removes the model from the deployment as a challenger. Only challengers can be deleted; a champion must be demoted before it can be deleted.</li></ul>|
### Challenger performance metrics {: #challenger-performance-metrics }
After prediction data is replayed for challenger models, you can examine the charts displayed below that capture the various performance metrics recorded for each model.
Each model is listed with its corresponding color. Uncheck a model's box to stop displaying the model's performance data on the charts.

#### Predictions chart {: #predictions-chart }
The Predictions chart records the average predicted value of the target for each model over time. Hover over a point to compare the average value for each model at a specific point in time.

For binary classification projects, use the **Class** dropdown to select the class for which you want to analyze the average predicted values. The chart also includes a toggle that allows you to switch between continuous and binary modes. Continuous mode shows the positive class predictions as probabilities between 0 and 1 without taking the prediction threshold into account. Binary mode takes the prediction threshold into account and shows, for all predictions made, the percentage for each possible class.
#### Accuracy chart {: #accuracy-chart }
The Accuracy chart records the change in a selected [accuracy](deploy-accuracy) metric value (LogLoss in this example) over time. These metrics are identical to those used for the evaluation of the model before deployment. Use the dropdown to change the accuracy metric. You can select from [any of the supported metrics](deploy-accuracy#available-accuracy-metrics) for the deployment's modeling type.
!!! important
You must [set an association ID](accuracy-settings#select-an-association-id) _before_ making predictions to include those predictions in accuracy tracking.

#### Data Errors chart {: #data-errors-chart }
The Data Errors chart records the [data error rate](service-health) for each model over time. Data error rate measures the percentage of requests that result in a 4xx error (problems with the prediction request submission).

## Challenger model comparisons {: #challenger-model-comparisons }
MLOps allows you to compare challenger models against each other and against the currently deployed model (the "champion") to ensure that your deployment uses the best model for your needs. After evaluating DataRobot's model comparison visualizations, you can replace the champion model with a better-performing challenger.
DataRobot renders visualizations based on a dedicated comparison dataset, which you select, ensuring that you're comparing predictions based on the same dataset and partition while still allowing you to train champion and challenger models on different datasets. For example, you may train a challenger model on an updated snapshot of the same data source used by the champion.
!!! warning
Make sure your comparison dataset is out-of-sample for the models being compared (i.e., it doesn't include the training data from any models included in the comparison).
### Generate model comparisons {: #generate-model-comparisons }
After you [enable challengers](challengers#enable-challenger-models) and [add one or more challengers](challengers#add-challengers-to-a-deployment) to a deployment, you can generate comparison data and visualizations.
1. On the **Deployments** page, locate and expand the deployment with the champion and challenger models you want to compare.
2. Click the **Challengers** tab.
3. On the **Challengers Summary** tab, if necessary, [add a challenger model](challengers#add-challengers-to-a-deployment) and [replay the predictions](challengers#replay-predictions) for challengers.
4. Click the **Model Comparison** tab.
The following table describes the elements of the **Model Comparison** tab:

| | Element | Description |
|---|---------|-------------|
|| Model 1 | Defaults to the champion model—the currently deployed model. Click to select a different model to compare.|
|| Model 2 | Defaults to the first challenger model in the list. Click to select a different model to compare. If the list doesn't contain a model you want to compare to Model 1, click the **Challengers Summary** tab to add a new challenger.|
|| Open model package | Click to view the model's details. The details display in the **Model Packages** tab in the Model Registry.|
|| Promote to champion | If the challenger model in the comparison is the best model, click **Promote to champion** to replace the deployed model (the "champion") with this model.|
|| Add comparison dataset | Select a dataset for generating insights on both models. Be sure to select a dataset that is out-of-sample for both models (see [stacked predictions](data-partitioning#what-are-stacked-predictions)). Holdout and validation partitions for Model 1 and Model 2 are available as options if these partitions exist for the original model. By default, the holdout partition for Model 1 is selected. To specify a different dataset, click **+ Add comparison dataset** and choose a local file or a [snapshotted](glossary/index#snapshot) dataset from the AI Catalog.|
|| Prediction environment | Select a [prediction environment](pred-env) for scoring both models.|
|| Model Insights | Compare model predictions, metrics, and more.|
5. Scroll to the **Model Insights** section of the Challengers page and click **Compute insights**.
You can generate new insights using a different dataset by clicking **+ Add comparison dataset**, then selecting **Compute insights** again.
### View model comparisons {: #view-model-comparisons }
Once you compute model insights, the **Model Insights** page displays the following tabs depending on the project type:
!!! note
Multiclass classification projects only support accuracy comparison.
<table>
<tr>
<th></th>
<th scope="col">Accuracy</th>
<th scope="col">Dual lift</th>
<th scope="col">Lift</th>
<th scope="col">ROC</th>
<th scope="col">Predictions Difference</th>
</tr>
<tr>
<th scope="row">Regression</th>
<td>✔</td>
<td>✔</td>
<td>✔</td>
<td></td>
<td>✔</td>
</tr>
<tr>
<th scope="row">Binary</th>
<td>✔</td>
<td>✔</td>
<td>✔</td>
<td>✔</td>
<td>✔</td>
</tr>
<tr>
<th scope="row">Multiclass</th>
<td>✔</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<th scope="row">Time series</th>
<td>✔</td>
<td>✔</td>
<td>✔</td>
<td></td>
<td>✔</td>
</tr>
</table>
=== "Accuracy"
After DataRobot computes model insights for the deployment, you can compare model accuracy.
Under **Model Insights**, click the **Accuracy** tab to compare accuracy metrics:

The two columns show the metrics for each model. Highlighted numbers represent favorable values. In this example, the champion, **Model 1**, outperforms **Model 2** for most metrics shown.
For time series projects, you can evaluate accuracy metrics by applying the following filters:
* **Forecast distance**: View accuracy for the selected [forecast distance](glossary/index#forecast-distance) row within the [forecast window](glossary/index#forecast-window) range.
* **For all *x* series**: View accuracy scores by metric. This view reports scores in all available accuracy metrics for both models across the entire [time series](glossary/index#time-series) range (*x*).
* **Per series**: View accuracy scores by series within a [multiseries](glossary/index#multiseries) comparison dataset. This view reports scores in a single accuracy metric (selected in the **Metric** dropdown menu) for each **Series ID** (e.g., store number) in the dataset for both models.
For multiclass projects, you can evaluate accuracy metrics by applying the following filters:
* **For all *x* classes**: View accuracy scores by metric. This view reports scores in all available accuracy metrics for both models across the entire [multiclass](glossary/index#classification) range (*x*).
* **Per class**: View accuracy scores by class within a [multiclass classification](glossary/index#classification) problem. This view reports scores in a single accuracy metric (selected in the **Metric** dropdown menu) for each **Class** (e.g., buy, sell, or hold) in the dataset for both models.
=== "Dual lift"
A [dual lift chart](model-compare#dual-lift-chart) is a visualization comparing two selected models against each other. This visualization can reveal how models underpredict or overpredict the actual values across the distribution of their predictions. The prediction data is evenly distributed into equal size bins in increasing order.
To view the dual lift chart for the two models being compared, under **Model Insights**, click the **Dual lift** tab:

The curves for the two models represented on this chart maintain the color they were assigned when added to the deployment (as either a champion or challenger). To interact with the dual lift chart, you can hide the model curves and the actual curve.
* The **+** icons in the plot area of the chart represent the models' predicted values. Click the **+** icon next to a model name in the header to hide or show the curve for a particular model.
* The orange <span style="color: #ff6d00">**o**</span> icons in the plot area of the chart represent the actual values. Click the orange <span style="color: #ff6d00">**o**</span> icon next to **Actual** to hide or show the curve representing the actual values.
=== "Lift"
A [lift chart](lift-chart) depicts how well a model segments the target population and how capable it is of predicting the target, allowing you to visualize the model's effectiveness.
To view the lift chart for the models being compared, under **Model Insights**, click the **Lift** tab:

The curves for the two models represented on this chart maintain the color they were assigned when added to the deployment (as either a champion or challenger).
=== "ROC"
!!! note
The ROC tab is only available for binary classification projects.
An [ROC curve](roc-curve) plots the true-positive rate against the false-positive rate for a given data source. Use the ROC curve to explore classification, performance, and statistics for the models you're comparing.
To view the ROC curves for the models being compared, under **Model Insights**, click the **ROC** tab:

The curves for the two models represented on this chart maintain the color they were assigned when added to the deployment (as either a champion or challenger). You can update the prediction thresholds for the models by clicking the pencil icons.
=== "Predictions Difference"
Click the **Predictions Difference** tab to compare the predictions of two models on a row-by-row basis. The histogram shows the percentage of predictions that fall within the match threshold you specify in the **Prediction match threshold** field (along with the corresponding numbers of rows).
The header of the histogram displays the percentage of predictions:
* Between the positive and negative values of the match threshold (shown in green)
* Greater than the upper (positive) match threshold (shown in red)
* Less than the lower (negative) match threshold (shown in red)

??? note "How are bin sizes calculated?"
The size of the **Predictions Difference** bins in the histogram depends on the **Prediction match threshold** you set. The width of the central bin is the difference between the upper (positive) and lower (negative) match thresholds. The default prediction match threshold value is 0.0025, so for that value, the center bin is 0.005 wide (0.0025 + |-0.0025|). Each bin on either side of the central bin is ten times wider than the previous bin, and the last bin on either end expands to fit the full Prediction Difference range. For example, based on the default **Prediction match threshold**, the bin sizes would be as follows (where x is the difference between 250 and the maximum Prediction Difference; a short sketch after the table reproduces the same bin-edge pattern in code):
<table>
<tr>
<th></th>
<th scope="col">Bin -5</th>
<th scope="col">Bin -4</th>
<th scope="col">Bin -3</th>
<th scope="col">Bin -2</th>
<th scope="col">Bin -1</th>
<th scope="col">Bin 0</th>
<th scope="col">Bin 1</th>
<th scope="col">Bin 2</th>
<th scope="col">Bin 3</th>
<th scope="col">Bin 4</th>
<th scope="col">Bin 5</th>
</tr>
<tr>
<th scope="row">Range</th>
<td>(−250 + x) to −25</td>
<td>−25 to −2.5</td>
<td>−2.5 to −0.25</td>
<td>−0.25 to −0.025</td>
<td>−0.025 to −0.0025</td>
<td>−0.0025 to +0.0025</td>
<td>+0.0025 to +0.025</td>
<td>+0.025 to +0.25</td>
<td>+0.25 to +2.5</td>
<td>+2.5 to +25</td>
<td>+25 to (+250 + x)</td>
</tr>
<tr>
<th scope="row">Size</th>
<td>225 + x</td>
<td>22.5</td>
<td>2.25</td>
<td>0.225</td>
<td>0.0225</td>
<td>0.005</td>
<td>0.0225</td>
<td>0.225</td>
<td>2.25</td>
<td>22.5</td>
<td>225 + x</td>
</tr>
</table>
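The sketch below reproduces the inner bin-edge pattern described above for the default threshold; the outermost bins are not included because they expand to cover the full Prediction Difference range.

```python
def prediction_difference_bin_edges(match_threshold=0.0025, n_side_bins=4):
    """Center bin spans +/- match_threshold; each neighboring bin is 10x wider."""
    edges = [match_threshold]
    for _ in range(n_side_bins):
        edges.append(edges[-1] * 10)
    # Mirror the positive edges to get the negative side.
    return [-e for e in reversed(edges)] + edges

print(prediction_difference_bin_edges())
# [-25.0, -2.5, -0.25, -0.025, -0.0025, 0.0025, 0.025, 0.25, 2.5, 25.0] (up to float rounding)
```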
If many matches dilute the histogram, you can toggle **Scale y-axis to ignore perfect matches** to focus on the mismatches.
The bottom section of the **Predictions Difference** tab shows the 1000 most divergent predictions (in terms of absolute value).

The **Difference** column shows how far apart the predictions are.
### Replace champion with challenger {: #replace-champion-with-challenger }
After comparing models, if you find a model that outperforms the deployed model, you can set it as the new champion.
1. Evaluate the comparison model insights to determine the best-performing model.
2. If a challenger model outperforms the deployed model, click **Promote to champion**.
3. Select a **Replacement Reason** and click **Accept and Replace**.

The challenger model is now the champion (deployed) model.
## Challengers for external deployments {: #challengers-for-external-deployments }
External deployments with [remote prediction environments](pred-env) can also use the **Challengers** tab. Remote models can serve as the champion model, and you can compare them to DataRobot and custom models serving as challengers.
The [workflow](challengers#enable-challenger-models) for adding challenger models is largely the same; however, there are unique differences for external deployments outlined below.
### Add challenger models to external deployments {: #add-challenger-models-to-external-deployments }
To enable challenger support, access an external deployment (one created with an external model package). In the **Settings** tab, under the **Data Drift** header, enable challenger models and [prediction row storage](challengers-settings).

The **Challengers** tab is now accessible. To add challenger models to the deployment, navigate to the tab and click **Add challenger model > Select existing model**.

Select a model package for the challenger you want to add (custom and DataRobot models only). Additionally, you must indicate the prediction environment used by the model package; this determines where the model runs predictions. DataRobot and custom models can only use a DataRobot prediction environment as challengers (unlike the champion model, which is deployed to an external prediction environment). When you have chosen the desired prediction environment, click **Select**.

The tab updates to display the model package you wish to add, verifying that the features used in the model package match the deployed model. Select **Add challenger**.

The model package is now serving as a challenger model for the remote deployment.
### Add external challenger comparison dataset {: #add-external-challenger-comparison-dataset }
To compare an external model challenger, you need to provide a dataset that includes the actuals *and* the prediction results. When you upload the comparison dataset, you can specify a column containing the prediction results.
To add a comparison dataset for an external model challenger, follow the [Generate model comparisons](#generate-model-comparisons) process, and on the **Model Comparison** tab, upload your comparison dataset with a **Prediction column** identifier. Make sure the prediction dataset you provide includes the prediction results generated by the external model at the location identified by the **Prediction column**.

### Manage challengers for external deployments {: #manage-challengers-for-external-deployments }
You can manage challenger models for remote deployments with various actions:
* To edit the prediction environment used by a challenger, select the pencil icon and choose a new prediction environment from the dropdown.
* To replace the deployed model with a challenger, the challenger must have a compatible prediction environment. Once replaced, the champion <em>does not</em> become a challenger because remote models are ineligible.
#### Challenger promotion to champion {: #challenger-promotion-to-champion}
A deployment's champion can't switch between an external prediction environment and a DataRobot prediction environment. When a challenger replaces a champion running in an external prediction environment, that challenger inherits the external environment of the former champion. If the Management Agent isn't configured in the external prediction environment, you must manually deploy the new champion in the external environment to continue making predictions.
#### Champion demotion to challenger {: #champion-demotion-to-challenger}
If the former champion isn't an external model package, it is compatible with DataRobot hosting and can become a challenger. In that scenario, the former champion moves to a DataRobot prediction environment where the deployment can replay the champion's predictions against it.
|
challengers
|
---
title: Performance monitoring
description: You can use monitoring tools to monitor deployed or remote models, data drift, model accuracy over time, and more.
---
# Performance monitoring {: #performance-monitoring }
To trust a model to power mission-critical operations, users need to have confidence in all aspects of model deployment. Model monitoring is the close tracking of the performance of ML models in production used to identify potential issues before they impact the business. Monitoring ranges from whether the service is reliably providing predictions in a timely manner and without errors to ensuring the predictions themselves are reliable.
The predictive performance of a model typically starts to diminish as soon as it’s deployed. For example, someone might be making live predictions on a dataset with customer data, but the customer’s behavioral patterns might have changed due to an economic crisis, market volatility, natural disaster, or even the weather. Models trained on older data that no longer represents the current reality might not just be inaccurate, but irrelevant, leaving the prediction results meaningless or even harmful. Without dedicated production model monitoring, the user or business owner cannot detect when this happens. If model accuracy starts to decline without detection, the results can impact a business, expose it to risk, and destroy user trust.
DataRobot automatically monitors model deployments and offers a central hub for detecting errors and model accuracy decay as soon as possible. For each deployment, DataRobot provides a status banner—model-specific information is also available on the **Deployments** [inventory](deploy-inventory) page.

These sections describe the tools available for monitoring model deployments:
| Topic | Describes... | Data Required for Monitoring |
|-------|-----------|---------------|
| [Deployments](deploy-inventory) | Viewing deployment inventory. | N/A |
| [Notifications](deploy-notifications) tabs on the Settings page| Configuring notifications and monitoring. | N/A |
| [Service Health](service-health) | Tracking model-specific deployment latency, throughput, and error rate. | Prediction data |
| [Data Drift](data-drift) | Monitoring model accuracy based on data distribution. | Prediction and training data|
| [Accuracy](deploy-accuracy)| Analyzing performance of a model over time. | Training data, prediction data, and actuals data |
| [Challenger Models](challengers) | Comparing model performance post-deployment. | Prediction data |
| [Usage](deploy-usage) | Tracking prediction processing progress for use in accuracy, data drift, and predictions over time analysis. | Prediction data or actuals |
| [Data Export](data-export) | Export a deployment's stored prediction data, actuals, and training data to compute and monitor custom business or performance metrics. | Training data, prediction data, or actuals data |
| [Custom Metrics](custom-metrics) | Creating and monitoring up to 25 custom business or performance metrics. | Prediction data |
| [MLOps agent](mlops-agent/index) | Monitoring remote models. | Requires a remote model and an external model package deployment |
| [Segmented analysis](deploy-segment) | Tracking attributes for segmented analysis of training data and predictions. | Prediction data (training data also required to track data drift or accuracy) |
|
index
|
---
title: Accuracy tab
description: How to use the Accuracy tab to determine whether a model's quality is decaying and if you should consider replacing it.
---
# Accuracy tab {: #accuracy-tab }
The **Accuracy** tab allows you to analyze the performance of model deployments over time using standard statistical measures and exportable visualizations.

Use this tool to determine whether a model's quality is decaying and if you should consider replacing it. The **Accuracy** tab renders insights based on the problem type and its associated optimization metrics—metrics that vary depending on regression or binary classification projects.
=== "SaaS"
!!! note
The accuracy scores displayed on this tab are estimates and may differ from accuracy scores computed using every prediction row in the raw data. This is due to hourly data processing limits: DataRobot cannot compute accuracy scores using more than 100,000 rows per hour and instead provides scores based on the rows it was able to process. To achieve a more precise accuracy score, spread prediction requests across multiple hours to avoid reaching the hourly computation limit.
=== "Self-Managed"
!!! note
The accuracy scores displayed on this tab are estimates and may differ from accuracy scores computed using every prediction row in the raw data. This is due to data processing limits (hourly or daily, depending on the configuration): DataRobot cannot compute accuracy scores using every row of larger prediction requests and instead provides scores based on the rows it was able to process. To achieve a more precise accuracy score, spread prediction requests over multiple hours or days to avoid reaching the computation limit.
## Enable the Accuracy tab {: #enable-the-accuracy-tab }
The **Accuracy** tab is not enabled for deployments by default. To enable it, turn on target monitoring, set an association ID, and upload the predicted and actual values collected for the deployment outside of DataRobot. For more information, see the overview of [setting up accuracy](accuracy-settings) for deployments by adding [actuals](glossary/index#actuals). A minimal example of submitting actuals programmatically follows the table below.
The following errors can prevent accuracy analysis:
Problem | Resolution
--------|------------
Disabled target monitoring setting | [Enable target monitoring](data-drift-settings) on the **Data Drift > Settings** tab. A message appears on the **Accuracy** tab to remind you to enable target monitoring.
Missing Association ID at prediction time | [Set an association ID](accuracy-settings#select-an-association-id) _before_ making predictions to include those predictions in accuracy tracking.
Missing actuals | [Add actuals](accuracy-settings#add-actuals) on the **Accuracy > Settings** tab.
Insufficient predictions to enable accuracy analysis | [Add more actuals](accuracy-settings#add-actuals) on the **Accuracy > Settings** tab. A minimum of 100 rows of predictions with corresponding actual values is required to enable the **Accuracy** tab.
Missing data for the selected time range | [Ensure predicted and actual values match the selected time range](#time-range-and-resolution-dropdowns) to view accuracy metrics for that range.
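If you prefer to submit actuals programmatically rather than uploading a file on the **Accuracy > Settings** tab, the following is a minimal sketch using the DataRobot Python client. The endpoint, token, deployment ID, and records shown are placeholders, and argument names can vary by client version, so check the client documentation for your release.
``` py title="Example: submit actuals with the DataRobot Python client"
# A minimal sketch; assumes the `datarobot` Python client is installed and the
# deployment already has an association ID configured. All values below are
# placeholders.
import datarobot as dr

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")
deployment = dr.Deployment.get(deployment_id="YOUR_DEPLOYMENT_ID")

# Each record pairs an actual outcome with the association ID sent at
# prediction time so DataRobot can match actuals to predictions.
actuals = [
    {"association_id": "order-0001", "actual_value": 1},
    {"association_id": "order-0002", "actual_value": 0},
]
deployment.submit_actuals(actuals)
```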
## Time range and resolution dropdowns {: #time-range-and-resolution-dropdowns }
The controls—model version and data time range selectors—work the same as those available on the [**Data Drift**](data-drift#use-the-time-range-and-resolution-dropdowns) tab. The **Accuracy** tab also supports [segmented analysis](deploy-segment), allowing you to view accuracy for individual segment attributes and values.

!!! note
To receive email notifications on accuracy status, [configure notifications](deploy-notifications#configure-notifications), [schedule monitoring](accuracy-settings#schedule-accuracy-monitoring-notifications), and [configure accuracy monitoring settings](accuracy-settings#define-accuracy-monitoring-notifications).
## Configure accuracy metrics {: #configure-accuracy-metrics }
Deployment [owners](roles-permissions#deployment-roles) can configure multiple accuracy metrics for each deployment. The accuracy metrics a deployment uses display as individual tiles above the accuracy graphs. Select **Customize Tiles** to edit the metrics used.

The dialog box lists all of the metrics currently enabled for the deployment. They are listed from top to bottom in order of their appearance as tiles, from left to right.

To change the positioning of a tile, select the up arrow to move it to the left and the down arrow to move it to the right.
To add a new metric tile, click **Add another metric**. Each deployment can display up to 10 accuracy tiles.
To change a tile's accuracy metric, click the dropdown for the metric you wish to change and choose the metric to replace it.

When you have made all of your changes, click **OK**. The **Accuracy** tab updates to reflect the changes made to the displayed metrics.
### Available accuracy metrics {: #available-accuracy-metrics }
The metrics available depend on the type of modeling project used for the deployment: regression, binary classification, or multiclass.
| Modeling type | Available metrics |
|-----------------------|-----------------------|
| Regression | RMSE, MAE, Gamma Deviance, Tweedie Deviance, R Squared, FVE Gamma, FVE Poisson, FVE Tweedie, Poisson Deviance, MAD, MAPE, RMSLE |
| Binary classification | LogLoss, AUC, Kolmogorov-Smirnov, Gini-Norm, Rate@Top10%, Rate@Top5%, TNR, TPR, FPR, PPV, NPV, F1, MCC, Accuracy, Balanced Accuracy, FVE Binomial |
| Multiclass | LogLoss, FVE Multinomial |
For more information on these metrics, see the [Optimization metrics documentation](opt-metric).
## Interpret results {: #interpret-results }
The **Accuracy** tab displays slightly different results based on whether the deployment is a regression or binary classification project.
{% include 'includes/service-health-prediction-time.md' %}
### Accuracy over Time graph {: #accuracy-over-time-graph }
The **Accuracy over Time** graph displays the change over time for a selected accuracy metric value (LogLoss in this example):

The **Start** value (the baseline accuracy score) and the plotted accuracy baseline represent the accuracy score for the model, which is calculated using the trained model’s predictions on the [holdout partition](unlocking-holdout):

!!! note "Holdout partition for custom models"
* For [structured custom models](structured-custom-models), you define the holdout partition based on the partition column in the training dataset. You can specify the partition column while [adding training data](custom-model-training-data).
* For [unstructured custom models](unstructured-custom-models) and [external models](ext-model-reg), you provide separate training and holdout datasets.
Click on any metric tile above the graph to change the display:

Hover over a point on the graph to see specific details:

| Field | Regression | Classification |
|---------------|-----------------|-----------------|
| Timestamp (1) | The period of time that the point captures. | ~~ |
| Metric (2) | The selected optimization metric value for the point’s time period. It reflects the score of the corresponding metric tile above the graph, adjusted for the displayed time period. | ~~ |
| Predicted (3) | The average predicted value (derived from the prediction data) for the point's time period. Values are reflected by the blue points along the Predicted & Actual graph. | The frequency, as a percentage, of how often the prediction data predicted the value label (true or false) for the point’s time period. Values are represented by the blue points along the Predicted & Actual graph. See the image below for information on setting the label. |
| Actual (4) | The average actual value (derived from the actuals data) for the point's time period. Values are reflected by the orange points along the Predicted & Actual graph. | The frequency, as a percentage, that the *actual* data is the value 1 (true) for the point's time period. These values are represented by the orange points along the Predicted & Actual graph. See the image below for information on setting the label. |
| Row count (5) | The number of rows represented by this point on the chart. | ~~ |
| Missing Actuals (6) | The number of prediction rows that do not have corresponding actual values recorded. This value is not specific to the point selected. | ~~ |
You can select which classification value to show (0 or 1 in this example) from the dropdown menu at the top of the Predicted & Actual graph:

### Predicted & Actual graph {: #predicted-actual-graph }

The graph above shows the predicted and actual values along a timeline of a binary classification dataset. Hovering over a point in either plot shows the same values as those on the [Data Drift](data-drift) tab (assuming the time sliders are set to the same time range).
For a binary classification project, the timeline and bucketing work the same as for regression projects, but with this project type, you can select the class to display results for (as described in the Accuracy over Time graph above).
The volume chart below the graph displays the number of actual values that correspond to the predictions made at each point. The shaded area represents the number of uploaded actuals, and the striped area represents the number of predictions missing corresponding actuals.

To identify predictions that are missing actuals, click the **Download IDs of missing actuals** link. This prompts the download of a CSV file (`missing_actuals.csv`) that lists the predictions made that are missing actuals, along with the [association ID](accuracy-settings#association-id) of each prediction. Use the association IDs to [upload the actuals](accuracy-settings#add-actuals) with matching IDs.
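As a minimal sketch, assuming pandas is available and that your recorded outcomes live in a hypothetical `ground_truth.csv` keyed by the same association ID, you could rebuild the missing actuals like this (the column names other than the file downloaded from DataRobot are placeholders for your own data):
``` py title="Example: rebuild missing actuals with pandas"
# A minimal sketch; "association_id", "outcome", and ground_truth.csv are
# placeholders for your own column names and source-of-truth data.
import pandas as pd

missing = pd.read_csv("missing_actuals.csv")   # downloaded from the Accuracy tab
truth = pd.read_csv("ground_truth.csv")        # your recorded outcomes

# Keep only the outcomes that correspond to predictions missing actuals.
actuals = missing.merge(truth, on="association_id", how="inner")
actuals = actuals.rename(columns={"outcome": "actual_value"})
actuals[["association_id", "actual_value"]].to_csv("actuals_to_upload.csv", index=False)
```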
### Class selector {: #class-selector }
Multiclass deployments offer class-based configuration to modify the data displayed on the Accuracy graphs. By default, the graphs display the five most common classes in the training data. All other classes are represented by a single line. Above the date slider, there is a **Target Class** dropdown. This indicates which classes are selected to display on the selected tab.

Click the dropdown to select the classes you want to display. Choose **Use all classes** or **Select specific classes**.

If you want to display all classes, select the first option and then click **Apply**.
To display a specific class, select the second option. Type the class names in the subsequent field to indicate those that you want to display (up to five classes can display at once). DataRobot provides quick select shortcuts for classes: the five most common classes in the training data, the five with the lowest accuracy score, and the five with the greatest amount of data drift. Once you have specified the five classes to display, click **Apply**.

Once specified, the charts on the tab (**Accuracy** or **Data Drift**) update to display the selected classes.
#### Accuracy multiclass graphs {: #accuracy-multiclass-graphs }
Accuracy over Time:

Predicted vs. Actual:

## Interpret alerts {: #interpret-alerts }
DataRobot uses the optimization metric tile selected for a deployment as the accuracy score to create an alert status. Interpret the alert statuses as follows:
Color | Accuracy | Action
------|----------|--------
 Green / Passing | Accuracy is similar to when the model was deployed. | No action needed.
 Yellow / At risk | Accuracy has declined since the model was deployed. | Concerns found but no immediate action needed; monitor.
 Red / Failing | Accuracy has severely declined since the model was deployed. | Immediate action needed.
 Gray / Unknown | No accuracy data is available. Insufficient predictions made (min. 100 required) | [Make predictions](../../predictions/index). |
### For example... {: #for-example }
You have training data from the XYZhistorical database table, which includes the target "is activity fraudulent?" After building your model, you score it against the XYZDaily table (which does not have the target) and write the predictions to the XYZscored database table. Downstream applications use XYZscored; the rows written at prediction time are later independently added to XYZhistorical.
To determine whether your model is making accurate predictions, you join XYZhistorical and XYZscored every month. This gives you the predicted fraud value and the actual fraud value in a single table.
Finally, you add this prediction dataset to your DataRobot deployment, setting the actual and predicted columns. DataRobot then analyzes the results and provides metrics to help identify any model deterioration and need for replacement.
|
deploy-accuracy
|
---
title: Overview tab
description: Select a deployment from the Deployments page to view the Overview page.
---
# Overview tab {: #overview-tab }
When you select a deployment from the **Deployments** page (also called the *deployment inventory*), DataRobot opens to the **Overview** page for that deployment.
The **Overview** page provides a model- and environment-specific summary that describes the deployment, including the information you supplied when creating the deployment and any model replacement activity.

## Summary {: #summary }
The **Summary** section of the **Overview** tab lists user-provided information about deployments, including:
* Name
* Description
* Prediction Environment
* Importance
Where applicable, click the pencil icon () to edit this information; changes affect the [**Deployments**](deploy-inventory) page.
## Content {: #content }
The **Content** section of the **Overview** tab lists a deployment's model and environment-specific information, including:
| Field | Description |
|-----------------|-------------|
Dataset | Filename of the dataset used to create the deployment's current model.
Target | Feature name of the target used by the deployment's current model.
Model | Model name of the deployment's current model. Click to open the **Leaderboard** to the model blueprint (**Describe** > **Blueprint**) for the model.
Model ID | Model ID number of the deployment's current model. Click to copy the number to your clipboard. In addition, you can copy the Model ID of any models deployed in the past from the deployment logs (**History** > **Logs**).
Deployment ID | Deployment ID number of the current deployment. Click to copy the number to your clipboard.
Build Environment | [Build environment](gov-lens#build-environments) used by the deployment's current model (e.g., DataRobot, Python, R, or Java).
Project | [Project data](histogram) used for the currently deployed model. Click to open the **Data** > **Project Data** tab for the project providing data to the deployed model.
!!! note
The information included in this list differs for deployments using custom models and external environments. It can also include information dependent on the target type.
## History {: #history }
Tracking deployment events in a deployment's **History** section is essential when a deployed model supports a critical use case. You can maintain deployment stability by monitoring the **Governance** and **Logs** events. These events include when the model was deployed or replaced. The deployment history links these events to the user responsible for the change.
### Governance
Many organizations, especially those in highly regulated industries, need greater control over model deployment and management. Administrators can define [deployment approval policies](deploy-approval) to facilitate this enhanced control. However, by default, there aren't any approval requirements before deploying.
You can find a deployment's available governance log details under **History** > **Governance**, including an audit trail for any deployment approval policies triggered for the deployment.
### Logs
When a model begins to experience data or accuracy drift, you should gather a new dataset, train a new model, and replace the old model. The details of this deployment lifecycle are recorded, including timestamps for model creation and deployment and a record of the user responsible for the recorded action. Any user with deployment owner permissions can [replace the deployed model](deploy-replace).
You can find a deployment's model-related events under **History** > **Logs**, including the creation and deployment dates and any model replacement events. Each model replacement event reports the replacement date and justification (if provided). In addition, you can find and copy the **Model ID** of any previously deployed model.

## Deployment Reports
Monitoring reports are a critical part of the deployment governance process. DataRobot allows you to download deployment reports, compiling deployment status, charts, and overall quality into a sharable report. Deployment reports are compatible with all deployment types.
For more information, see [Deployment reports](deploy-reports).
|
dep-overview
|
---
title: Remote repository file browser for custom models and tasks
description: The remote repository file browser for custom models and tasks allows you to browse a remote repository from the Custom Model Workshop and select the files you want to pull into a custom model or task.
section_name: MLOps
maturity: public-preview
---
# Remote repository file browser for custom models and tasks {: #remote-repository-file-browser-for-custom-models-and-tasks }
!!! info "Availability information"
The remote repository file browser for custom models and tasks is off by default. Contact your DataRobot representative or administrator for information on enabling this feature.
<b>Feature flag:</b> Enable File Browser for Pulling Model or Task Files from Remote Repositories
Now available as a public preview feature, you can browse the folders and files in a remote repository to select the files you want to add to a custom model or task. When you [add a model](custom-inf-model#create-a-new-custom-model) or [add a task](cml-custom-tasks) to the Custom Model Workshop, you can add files to that model or task from a wide range of repositories, including Bitbucket, GitHub, GitHub Enterprise, S3, GitLab, and GitLab Enterprise. After you [add a repository to DataRobot](custom-model-repos#add-a-remote-repository), you can pull files from the repository and include them in the custom model or task.
To use the remote repository file browser:
1. In the top navigation bar, click **Model Registry**.
2. Click **Custom Model Workshop**, click either the **Models** or **Tasks** tab, and select a model or task from the list.
3. Under **Assemble Model** or **Create Estimator** (in the Tasks tab), click **Select remote repository**.

!!! note
If the **Model** or **Estimator** group box (in the Tasks tab) is empty, select a **Base Environment** for the model or task.
4. In the **Select a remote repository** dialog box, select a repository in the list and click **Select content**.

5. In the **Pull from GitHub repository** dialog box, select the checkbox for any files or folders you want to pull into the custom model.
In addition, you can click **Select all** to select every file in the repository, or, after you select one or more files, you can click **Deselect all** to clear your selections.
!!! note
This step uses GitHub as an example; however, the process is the same for each repository type.

!!! tip
You can see how many files you have selected at the bottom of the dialog box (e.g., _+ 4 files will be added_).
6. Once you select each file you want to pull into the custom model, click **Pull**.
The added files appear under the **Model** header as part of the custom model.
|
pp-remote-repo-file-browser
|
---
title: Extend compliance documentation with key values
description: Build custom compliance documentation templates with references to key values, adding the associated data to the template and limiting the manual editing needed to complete the compliance documentation.
section_name: MLOps
maturity: public-preview
---
# Extend compliance documentation with key values {: #extend-compliance-documentation-with-key-values }
!!! info "Availability information"
Extended Compliance Documentation is off by default. Contact your DataRobot representative or administrator for information on enabling this feature.
<b>Feature flag:</b> Enable Extended Compliance Documentation
When you [create a model package in the Model Registry](reg-create), you can [generate automated compliance documentation for the model](reg-compliance). The compliance documentation provides evidence that the components of the model work as intended, the model is appropriate for its intended business purpose, and the model is conceptually sound. After generating the compliance documentation, you can view it or download it as a Microsoft Word (DOCX) file and edit it further.
Now available as a public preview feature, you can [build custom compliance documentation templates](template-builder) with references to key values, adding the associated data to the template and limiting the manual editing needed to complete the compliance documentation.
Key values associated with a model in the Model Registry are key-value pairs containing information about the registered model package. Each key-value pair has the following:
* **Name:** The unique and descriptive name of the key (for the model package or version).
* **Value type:** The data type of the value associated with the key. The possible types are string, numeric, boolean, URL, image, dataset, pickle, binary, JSON, or YAML.
* **Value:** The stored data or file.
You can include string, numeric, boolean, image, and dataset key values in custom compliance documentation templates. When you generate compliance documentation for a model package using a custom template referencing a supported key value, DataRobot inserts the matching values from the associated model package; for example, if the key value has an image attached, building the compliance documentation inserts that image, or if the key value refers to a dataset, it inserts the first 100 rows and 42 columns of the dataset.
## Create key values {: #create-key-values }
In the [**Model Registry**](reg-create), open a model package and access the **Key values** tab. In the table on this tab, you can view, add, edit, and delete key values, depending on your model package permissions.

??? note "Key value permissions"
* Read-write key values use the permissions of the model package.
* If you have view permissions for the model package, you can use key values in compliance documentation and search, sort, and filter the key values list.
* If you have edit permissions for the model package, you can _also_ add, edit, and delete key values from the key values list.
When you add a new key value, you can add a string, numeric, boolean, URL, JSON, or YAML key value without an attached file.
To add a new key value:
1. Click **+ Add key values > New key value**

2. In the **Add new key value** dialog box, configure the following settings:

Setting | Description
------------|------------------
Category | Select one of the following categories for the new key value to organize your key values by purpose: <ul><li>**Training parameter**</li><li>**Metric**</li><li>**Tag**</li><li>**Artifact**</li><li>**Runtime parameter**</li></ul>
Value type | Select one of the following value types for the new key value: <ul><li>**Dataset**</li><li>**Image**</li><li>**String**</li><li>**Pickle**</li><li>**Binary**</li><li>**Numeric**</li><li>**Boolean**</li><li>**URL**</li><li>**JSON**</li><li>**YAML**</li></ul>
Value | If you selected one of the following value types, enter the appropriate data:<ul><li>**String**: Enter any string up to 4 KB.</li><li>**Numeric**: Enter an integer or floating-point number.</li><li>**Boolean**: Select **True** or **False**.</li><li>**URL**: A URL in the format `scheme://location`; for example, `https://example.com`. DataRobot does not fetch the URL or provide a link to this URL in the user interface; however, in a downloaded compliance document, the URL may appear as a link.</li><li>**JSON**: Enter or upload JSON as a string. This JSON must parse correctly; otherwise, DataRobot won't accept it.</li><li>**YAML**: Enter or upload YAML as a string. DataRobot does not validate this YAML.</li></ul> To check JSON or YAML locally before adding it, see the sketch after these steps.
Upload | If you selected one of the following value types, add an artifact by uploading a file: <ul><li>**Upload a dataset**: A CSV file or any other compatible data file (with a `.csv`, `.tsv`, `.dsv`, `.xls`, `.xlsx`, `.gz`, `.bz2`, `.tar`, `.tgz`, or `.zip` extension). The file is added as a dataset in the AI catalog, so all file types supported there are also supported for key values.</li><li>**Upload an image**: A JPEG or PNG file (with a `.jpg`, `.jpeg`, or `.png` extension).</li><li>**Upload a pickle file**: A Python PKL file (with the `.pkl` extension). DataRobot only stores the file for the key value; you can't access or run objects or code in the file.</li><li>**Upload a binary file**: A file with any file extension. DataRobot stores the file as an opaque object.</li></ul>
Name | Enter a descriptive name for the key in the key-value pair.
Description | (Optional) Enter a description of the key value's purpose.
3. Click **OK** to save the key value. The new key appears in the table.
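Because DataRobot rejects JSON that doesn't parse and does not validate YAML at all, it can help to check the content locally before pasting or uploading it. The following is a minimal sketch, assuming PyYAML is installed and using placeholder file names:
``` py title="Example: validate JSON and YAML locally"
# A minimal sketch; the file names are placeholders for your own content.
import json
import yaml  # PyYAML

with open("training_config.json") as f:
    json.loads(f.read())  # raises json.JSONDecodeError if the JSON won't parse

with open("pipeline.yaml") as f:
    yaml.safe_load(f)     # raises yaml.YAMLError on malformed YAML
```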
## View and manage key values {: #view-and-manage-key-values }
On the **Key values** tab, you can view key value information and manage your custom key values. You can search key values by name (full or partial) and refresh the list. You can also filter by the categories shown. If the model package has more than 10 key values, the list is paginated.
Additional controls are located in the **Actions** column:

Action | Description
------------|------------------
View | You can view values in the following ways: <ul><li>String, numeric, and boolean values: View the values directly in the table.</li><li>Image and dataset files: Click the preview icon (). Previewing an image key value shows the image in the browser. Previewing a dataset key value brings you to the dataset page in the AI catalog.</li><li>Long strings, JSON, and YAML: Hover over the truncated value to view the full value.</li><li>Pickle and binary files: Cannot be previewed. In these cases, the "value" is the original filename of the uploaded file.</li></ul>
Edit | Click the edit icon () to edit a key value. You can only edit the name, value, or description. Leaving a comment in the comment field is optional.
Delete | Click the delete icon () to delete a key value. If you delete an image, pickle, or binary key value, and no other key value is using the same file, DataRobot deletes the file from storage. Datasets remain in the AI catalog after deletion.
## Customize a compliance documentation template with key values {: #customize-a-compliance-documentation-template-with-key-values }
Once you've added key values, you can [build custom compliance documentation templates](template-builder) with references to those key values. Referencing key values in a compliance documentation template adds the associated data to the generated compliance documentation, limiting the amount of manual editing needed to complete the compliance documentation.
To reference key values in a compliance documentation template:
1. Click your profile avatar (or the default avatar ) in the upper-right corner of DataRobot and, under **App Admin**, click **Template Builder**.
2. On the **Flexible Documentation Templates** page, [create or edit a compliance documentation template](template-builder#create-and-edit-templates).
3. In the template, on the **Key Values** panel, select a model package with key values from the **Model Package** list.
4. To add a key value reference to the template, edit a section, click to place the cursor where you want to insert the reference, then click the key value in the **Key Values** panel.
!!! note
You can include string, numeric, boolean, image, and dataset key values in custom compliance documentation templates. After you add a value, you can remove it by deleting the reference from the section.

5. To preview the document, click **Save changes** and then click **Preview template**.
As the template is not applied to any specific model package until you generate the compliance documentation, the preview displays placeholders for the key value.
## Key values for registered models {: #key-values-for-registered-models }
!!! info "Availability information"
Versioning support in the Model Registry is off by default. Contact your DataRobot representative or administrator for information on enabling this feature.
<b>Feature flag:</b> Enable Versioning Support in the Model Registry
If you have enabled the [new Model Registry with versioning support](model-registry-versioning), DataRobot automatically populates key values for model packages you create. These key values are read-only, and their names start with `datarobot.`. They are generated when you register a DataRobot model from the Leaderboard or a custom model from the Custom Model Workshop.
* DataRobot models: DataRobot creates training parameters (such as `datarobot.parameter.max_depth`), metrics (such as `datarobot.metric.AUC.validation`), and tags (such as `datarobot.registered.model.version`). The automatic training parameters and metrics depend on the particular DataRobot model.
* Custom models: DataRobot creates runtime parameters from the model. The automatic key values depend on the custom model.
Although you can't modify or delete these automatic key values, you can manually add key values as needed to supplement the system-provided values. Your compliance documentation template or integration workflow can use key values from either source.
### Copy key values from a previous model package version
[Registered models](model-registry-versioning) have versions; each is an individual model package with an independent set of key values. If you created key values in an earlier version, you can copy these to a newer version:
1. Navigate to the **Model Registry > Registered Models** page and open the registered model containing the version you want to add key values to.
2. To open the registered model version, do either of the following:

* To open the version in the current tab, click the row for the version you want to access.
* To open the version in a new tab, click the open icon () next to the **Type** column for the version you want to access.
3. Click **+ Add key values** and, under **Other**, click **Copy key values from previous version**.
4. In the **Copy key values to this registered version** dialog box, select **All categories** or a single category.

5. Click **OK** to copy the key values.
If a key value with the same name exists in the newer version and it is not read-only, the value from the older version will overwrite it. Otherwise, a new key value with that name is created in the newer version. Files for artifact key values (image, dataset, pickle, and binary) are not copied in bulk; instead, the new and old key values share the same file. If you edit either key value to use a different file, the other key value is unaffected, and the file is no longer shared. System key values are not included in bulk copy; for example, `model.version` is not overwritten in a newer version with the old version's value.
|
model-registry-key-values
|
---
title: Timeliness indicators for predictions and actuals
description: Enable timeliness tracking to retain the last calculated health status and reveal when status indicators are based on old data.
section_name: MLOps
maturity: public-preview
---
# Timeliness indicators for predictions and actuals
!!! info "Availability information"
The timeliness indicator is off by default. Contact your DataRobot representative or administrator for information on enabling this feature.
<b>Feature flag:</b> Enable timeliness stats indicator for deployments
Deployments have four statuses that define the general health of a deployment: [Service Health](service-health), [Data Drift](data-drift), [Accuracy](deploy-accuracy), and [Fairness](mlops-fairness). These statuses are calculated based on the most recent available data. For deployments relying on batch predictions made at intervals greater than 24 hours, this method can result in a _Gray / Unknown_ status on the [Prediction Health indicators in the deployment inventory](deploy-inventory#prediction-health-lens). Now available for Public Preview, these deployment health indicators can retain the most recently calculated health status, presented alongside _timeliness_ status indicators that reveal when a status is based on old data. You can determine the appropriate timeliness intervals for your deployments on a case-by-case basis.
## Define timeliness settings
You can configure timeliness tracking for predictions and actuals on the [**Usage** tab](deploy-usage). After enabling tracking, you can define the timeliness interval frequency separately for the prediction timestamp and the actuals upload time, depending on your organization's needs.
To enable and define timeliness tracking:
1. From the **Deployments** page, do either of the following:
* Click the deployment you want to define timeliness settings for, and then click **Usage > Settings**.
* Click the _Gray / Not Tracked_ icon in the Predictions Timeliness or Actuals Timeliness column to open the **Usage Settings** page for that deployment.

2. On the **Usage Settings** page, configure the following settings:

Setting | Description
---------------------------------|------------
Track timeliness | Enable **Track timeliness of predictions**, **Track timeliness of actuals**, or both. To track the timeliness of actuals, [provide an association ID](accuracy-settings#select-an-association-id) and [enable target monitoring](data-drift-settings) for the deployment.
Predictions timestamp definition | If you enabled timeliness tracking for predictions, use the frequency selectors to define an **Expected prediction frequency** in [ISO 8601](https://www.iso.org/iso-8601-date-and-time-format.html){ target=_blank } format. The minimum granularity is one hour. To define time intervals directly in ISO 8601 notation (`P1Y2M3DT1H`), click **Switch to advanced frequency**.
Actuals timestamp definition | If you enabled timeliness tracking for actuals, use the frequency selectors to define the **Expected actuals upload frequency** in [ISO 8601](https://www.iso.org/iso-8601-date-and-time-format.html){ target=_blank } format. The minimum granularity is one hour. To define time intervals directly in ISO 8601 notation (`P1Y2M3DT1H`), click **Switch to advanced frequency**.
!!! tip
You can click **Reset to defaults** to return to a daily expected frequency, or `P1D`. A short example of the ISO 8601 notation follows these steps.
3. Click **Save**.
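In the ISO 8601 notation used above, `P1D` is one day, `PT1H` is one hour, and `P1Y2M3DT1H` is one year, two months, three days, and one hour. As a minimal sketch (assuming the third-party `isodate` package, which DataRobot does not require), you can sanity-check a duration string locally:
``` py title="Example: check ISO 8601 durations"
# A minimal sketch; isodate is a third-party package used only for illustration.
import isodate

print(isodate.parse_duration("P1D"))         # one day (daily expected frequency)
print(isodate.parse_duration("PT1H"))        # one hour (hourly expected frequency)
print(isodate.parse_duration("P1Y2M3DT1H"))  # one year, two months, three days, one hour
```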
## View timeliness indicators
Once you've enabled timeliness tracking on the **Usage > Settings** tab, you can view timeliness indicators on the [**Usage** tab](deploy-usage) and in the [**Deployments** inventory](deploy-inventory):
!!! note
In addition to the indicators on the **Usage** tab and the **Deployments** inventory, when a timeliness status changes to _Red / Failing_, a notification is sent through email or the [channel configured in your notification policies](web-notify).
=== "Deployments inventory"
View the **Predictions Timeliness** and **Actuals Timeliness** columns:

=== "Usage tab"
View the **Predictions Timeliness** and **Actuals Timeliness** tiles:

Along with the status, you can view the **Updated** time for the timeliness tile.
## View timeliness status events
On the **Service Health** tab, under **Recent Activity**, you can view timeliness status events in the **Agent Activity** log:

## Filter the deployment inventory by timeliness
In the **Deployments** inventory, you can click **Filters** to apply **Predictions Timeliness Status** and **Actuals Timeliness Status** filters by status value:

|
timeliness-status-indicators
|
---
title: Automated deployment and replacement of Scoring Code in AzureML
description: Create a DataRobot-managed AzureML prediction environment to deploy and replace DataRobot Scoring Code in AzureML.
section_name: MLOps
maturity: public-preview
---
# Automated deployment and replacement of Scoring Code in AzureML {: #automated-deployment-and-replacement-of-scoring-code-in-azureml }
!!! info "Availability information"
Automated deployment and replacement of Scoring Code in AzureML is off by default. Contact your DataRobot representative or administrator for information on enabling this feature.
<b>Feature flag:</b> Enable the Automated Deployment and Replacement of Scoring Code in AzureML
Now available for Public Preview, you can create a DataRobot-managed AzureML prediction environment to deploy DataRobot Scoring Code in AzureML. With DataRobot management enabled, the external AzureML deployment has access to MLOps features, including automatic Scoring Code replacement.
## Create an AzureML prediction environment {: #create-an-azure-prediction-environment }
To deploy a model in AzureML, you first create a custom AzureML prediction environment:
1. Click **Deployments** > **Prediction Environments** and then click **Add prediction environment**.

2. In the **Add prediction environment** dialog box, configure the prediction environment settings:

* Enter a descriptive **Name** and an optional **Description** of the prediction environment.
* Select **Azure** from the **Platform** drop-down list. The **Supported Model Formats** settings are automatically set to **DataRobot Scoring Code** and can't be changed, as this is the only model format supported by AzureML.
* Enable the **Managed by DataRobot** setting to allow this prediction environment to automatically package and deploy DataRobot Scoring Code models through the Management Agent.
* Select the related Azure Service Principal **Credentials**.
??? note "Azure Service Principal credentials required"
DataRobot management of Scoring Code in AzureML requires existing Azure Service Principal **Credentials**. If you don't have existing credentials, the **Azure Service Principal credentials required** alert appears, directing you to **Go to Credentials** to [create Azure Service Principal credentials](stored-creds).

To create the required credentials, for **Credential type**, select **Azure Service Principal**. Then, enter a **Client ID**, **Client Secret**, **Azure Tenant ID**, and a **Display name**. To validate and save the credentials, click **Save and sign in**.

You can find these IDs and the display name on Azure's **App registrations** > **Overview** tab (1). You can generate secrets on the [**App registration > Certificates and secrets** tab](https://learn.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app#add-credentials){ target=_blank } (2):

3. Configure the **Azure Subscription**, **Azure Resource Group**, and **AzureML Workspace** fields accessible using the provided **Credentials**.
4. (Optional) If you want to connect to and retrieve data from Azure Event Hubs for monitoring, configure the **Event Hubs Namespace**, **Event Hubs Instance**, and **Managed Identities** fields. This requires valid **Credentials**, an **Azure Subscription ID**, and an **Azure Resource Group**.
5. (Optional) If you are using tags for governance and resource management in AzureML, click **Add AzureML tags** and then **+ Add new tag** to add the required tags to the prediction environment.

6. After you configure the environment settings, click **Add environment**.
The AzureML environment is now available from the **Prediction Environments** page.
## Deploy a model to the AzureML prediction environment {: #deploy-a-model-to-the-azure-prediction-environment }
Once you've created an AzureML prediction environment, you can deploy a model to it:
1. Click **Model Registry** > **Model Packages** and select the Scoring Code enabled model you want to deploy to the AzureML prediction environment.
!!! tip
You can also deploy a model to your AzureML prediction environment from the **Deployments** > **Prediction Environments** tab by clicking **+ Add new deployment** in the prediction environment.
2. On the **Package Info** tab, click **Deploy model package**.

3. In the **Select Deployment Target** dialog box, under **Select deploy target**, click **AzureML**.

!!! note
If you can't click the AzureML deployment target, the selected model doesn't have Scoring Code available.
4. Under **Select prediction environment**, select the AzureML prediction environment you added, and then click **Confirm**.
5. [Configure the deployment](add-deploy-info) and, in the **Prediction History and Service Health** section, under **Endpoint**, click **+ Add endpoint**.

6. In the **Select endpoint** dialog box, define an **Online** or **Batch** endpoint, depending on your expected workload, and then click **Next**.

7. (Optional) Define additional **Environment key-value pairs** to provide extra parameters to the Azure deployment interface, then click **Confirm**.
8. Click **Deploy model**.
While the deployment is **Launching**, you can monitor the status events on the **Service Health** tab in **Recent Activity > Agent Activity**:

## Make predictions in AzureML {: #make-predictions-in-azureml }
After you deploy a model to an AzureML prediction environment, you can use the code snippet from the **Predictions > Portable Predictions** tab to score data in AzureML.

Before you run the code snippet, you must provide the required credentials in either of the following ways:
* Export the Azure Service Principal’s secrets as environment variables locally before running the snippet (a minimal sketch follows this list):
Environment variable | Description
----------------------|--------------------------
`AZURE_CLIENT_ID` | The **Application ID** in the **App registration > Overview** tab.
`AZURE_TENANT_ID` | The **Directory ID** in the **App registration > Overview** tab.
`AZURE_CLIENT_SECRET` | The secret token generated in the [**App registration > Certificates and secrets** tab](https://learn.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app#add-credentials){ target=_blank }.
* Install the [Azure CLI](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli){ target=_blank }, and run the `az login` command to allow the portable predictions snippet to use your personal Azure credentials.
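As a minimal sketch of the first option, assuming you run the downloaded snippet from the same Python session and that it authenticates through the standard Azure environment variables, you could set the variables in-process before executing it; the values shown are placeholders:
``` py title="Example: set Azure credentials before running the snippet"
# A minimal sketch; the values are placeholders, and this assumes the portable
# predictions snippet reads the standard Azure environment variables at runtime.
import os

os.environ["AZURE_CLIENT_ID"] = "<application-id>"
os.environ["AZURE_TENANT_ID"] = "<directory-id>"
os.environ["AZURE_CLIENT_SECRET"] = "<client-secret>"

# Run the portable predictions snippet after the variables are set, for example:
# exec(open("portable_predictions_snippet.py").read())
```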
!!! important
Deployments to AzureML **Batch** and **Online** endpoints utilize different APIs than standard DataRobot deployments.
* Online endpoints support JSON or CSV as input and output results to JSON.
* Batch endpoints support CSV input and output the results to a CSV file.
|
azureml-sc-deploy-replace
|
---
title: Model package artifact creation workflow
description: The improved model package artifact creation workflow provides a clearer and more consistent path to model deployment with visible connections between a model and its associated model packages.
section_name: MLOps
maturity: public-preview
---
# Model package artifact creation workflow {: #model-package-artifact-creation-workflow }
!!! info "Availability information"
The updated model package creation workflow is off by default. Contact your DataRobot representative or administrator for information on enabling this feature.
<b>Feature flag:</b> Enable .mlpkg Artifact Creation for Model Packages
Now available as a public preview feature, the improved model package artifact creation workflow provides a clearer and more consistent path to model deployment with visible connections between a model and its associated model packages in the [Model Registry](reg-create). Using this new approach, when you deploy a model, you begin by providing model package details and adding the model package to the Model Registry. After you create the model package and allow the build to complete, you can deploy it by [adding the deployment information](add-deploy-info).
## Register and deploy a model from the Leaderboard
To register and deploy a model using this new workflow:
1. From the **Leaderboard**, select the model to use for generating predictions, and then click **Predict > Deploy**.

2. To follow best practices, DataRobot recommends that you first [prepare the model for deployment](model-rec-process#prepare-a-model-for-deployment). This process runs **Feature Impact**, retrains the model on a reduced feature list, and trains on a higher sample size, followed by the entire sample (latest data for date/time partitioned projects).
* If the model has the **Prepared For Deployment** badge, proceed to the next step.

* If the model doesn't have the **Prepared For Deployment** badge, click **Prepare for deployment**.

3. On the **Deploy model** tab, provide the following model package information, and then click **Add to Model Registry**:

Field | Description
--------------------------|------------
Prediction threshold | For binary classification models, enter the value a prediction score must exceed to be assigned to the positive class. The default value is `0.5`.
Model package name | Enter a descriptive model package name. The default is the model name (followed by the prediction threshold for binary classification model packages).
Model package description | _Optional_. Enter a description of the model package.
!!! note
If you set the prediction threshold before the [deployment preparation process](model-rec-process), the value does not persist. When deploying the prepared model, if you want it to use a value other than the default, set the value after the model has the **Prepared for Deployment** badge.
4. Allow the model to build. The **Building** status can take a few minutes, depending on the size of the model. A model package must have a **Status** of **Ready** before you can deploy it.

5. In the **Model Packages** list, locate the model package you want to deploy, and then click **Deploy**.

6. Add [deployment information and create the deployment](add-deploy-info).
## Deploy a model package from the Model Registry
To deploy a registered model using this new workflow:
1. Click **Model Registry** > **Model Packages**.
2. Click the **Actions** menu for the model package you want to deploy, and then click **Deploy**.
The **Status** column shows the build status of the model package.

If you deploy a model package that doesn't have a **Status** of **Ready**, the build process starts:

3. Add [deployment information and create the deployment](add-deploy-info).
You can also open a model package from the Model Registry and deploy it from the **Package Info** tab:

|
pp-model-pkg-artifact-creation
|
---
title: Runtime parameters for custom models
description: Add runtime parameters to a custom model through the model metadata.
section_name: MLOps
maturity: public-preview
---
# Runtime parameters for custom models
!!! info "Availability information"
Runtime parameters for custom models are off by default. Contact your DataRobot representative or administrator for information on enabling this feature.
<b>Feature flag:</b> Enable the Injection of Runtime Parameters for Custom Models
Now available as a public preview feature, you can add runtime parameters to a custom model through the model metadata, making your custom model code easier to reuse.
## Define runtime parameters
To define runtime parameters, you can add the following `runtimeParameterDefinitions` in `model-metadata.yaml`:
Key | Value
---------------|------
`fieldName` | The name of the runtime parameter.
`type` | The data type the runtime parameter contains: `string` or `credential`.
`defaultValue` | (Optional) The default string value for the runtime parameter (the credential type doesn't support default values).
`description` | (Optional) A description of the purpose or contents of the runtime parameter.
!!! note
If you define a runtime parameter without specifying a `defaultValue`, the default value is `None`.
``` yaml title="Example: model-metadata.yaml"
name: runtime-parameter-example
type: inference
targetType: regression
runtimeParameterDefinitions:
  - fieldName: my_first_runtime_parameter
    type: string
    description: My first runtime parameter.
  - fieldName: runtime_parameter_with_default_value
    type: string
    defaultValue: Default
    description: A runtime parameter with a default value.
  - fieldName: runtime_parameter_for_credentials
    type: credential
    description: A runtime parameter containing a dictionary of credentials.
```
The `credential` runtime parameter type supports any `credentialType` value available in the DataRobot REST API. The credential information included depends on the `credentialType`, as shown in the examples below:
!!! note
For more information on the supported credential types, see the [API reference documentation for credentials](https://docs.datarobot.com/en/docs/api/reference/public-api/credentials.html#schemacredentialsbody).
<table>
<thead>
<tr>
<th>Credential Type</th>
<th>Example</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>basic</code></td>
<td>
<pre>
basic:
  credentialType: basic
  description: string
  name: string
  password: string
  user: string
</pre>
</td>
</tr>
<tr>
<td><code>azure</code></td>
<td>
<pre>
azure:
  credentialType: azure
  description: string
  name: string
  azureConnectionString: string
</pre>
</td>
</tr>
<tr>
<td><code>gcp</code></td>
<td>
<pre>
gcp:
  credentialType: gcp
  description: string
  name: string
  gcpKey: string
</pre>
</td>
</tr>
<tr>
<td><code>s3</code></td>
<td>
<pre>
s3:
  credentialType: s3
  description: string
  name: string
  awsAccessKeyId: string
  awsSecretAccessKey: string
  awsSessionToken: string
</pre>
</td>
</tr>
</tbody>
</table>
## Provide override values during local development
For local development with DRUM, you can specify a `.yaml` file containing the values of the runtime parameters. The values defined here override the `defaultValue` set in `model-metadata.yaml`:
``` yaml title="Example: .runtime-parameters.yaml"
my_first_runtime_parameter: Hello, world.
runtime_parameter_with_default_value: Override the default value.
runtime_parameter_for_credentials:
  credentialType: basic
  name: credentials
  password: password1
  user: user1
```
When using DRUM, the new `--runtime-params-file` option specifies the file containing the runtime parameter values:
``` sh title="Example: --runtime-params-file"
drum score --runtime-params-file .runtime-parameters.yaml --code-dir model_templates/python3_sklearn --target-type regression --input tests/testdata/juniors_3_year_stats_regression.csv
```
## Import and use runtime parameters in custom code
To access runtime parameters in your custom code, import `RuntimeParameters` from `datarobot_drum` in `custom.py`:
``` py title="Example: custom.py"
from datarobot_drum import RuntimeParameters


def mask(value, visible=3):
    return value[:visible] + ("*" * len(value[visible:]))


def transform(data, model):
    print("Loading the following Runtime Parameters:")
    parameter1 = RuntimeParameters.get("my_first_runtime_parameter")
    parameter2 = RuntimeParameters.get("runtime_parameter_with_default_value")
    print(f"\tParameter 1: {parameter1}")
    print(f"\tParameter 2: {parameter2}")
    credentials = RuntimeParameters.get("runtime_parameter_for_credentials")
    if credentials is not None:
        credential_type = credentials.pop("credentialType")
        print(
            f"\tCredentials (type={credential_type}): "
            + str({k: mask(v) for k, v in credentials.items()})
        )
    else:
        print("No credential data set")
    return data
```
## View and edit runtime parameters in DataRobot
When you add a `model-metadata.yaml` file with `runtimeParameterDefinitions` to DataRobot while [creating a custom model](custom-inf-model), the **Runtime Parameters** section appears on the **Assemble** tab for that custom model. After you build the environment and create a new version, you can click **View and Edit** to configure the parameters:

!!! note
Each change to a runtime parameter creates a new minor version of the custom model.
After you [test a model](custom-model-test) with runtime parameters in the **Custom Model Workshop**, you can navigate to the **Test > Runtime Parameters** tab to view the model's parameters:

|
pp-cus-model-runtime-params
|
---
title: Monitoring jobs for custom metrics
description: To use custom metrics with external data sources, monitoring job definitions allow DataRobot to pull feature data and predictions from outside of DataRobot and into your defined custom metrics for monitoring on the Custom Metrics tab.
section_name: MLOps
maturity: public-preview
---
# Monitoring jobs for custom metrics {: #monitoring-jobs-for-custom-metrics }
!!! info "Availability information"
Monitoring jobs for custom metrics are off by default. Contact your DataRobot representative or administrator for information on enabling this feature.
<b>Feature flag:</b> Enable Custom Metrics Job Definitions
Monitoring job definitions allow DataRobot to pull calculated custom metric values from outside of DataRobot into the custom metric defined on the [Custom Metrics](custom-metrics) tab, supporting custom metrics with external data sources. For example, you can create a monitoring job to connect to Snowflake, fetch custom metric data from the relevant Snowflake table, and send the data to DataRobot for analysis through a custom metric you created.
To create the monitoring jobs for custom metrics:
1. Click **Deployments** and select a deployment from the inventory.
2. On the selected deployment's **Overview**, click **Job Definitions**.
3. On the **Job Definitions** page, click **Monitoring Jobs**, and then click **Add Job Definition**.
4. On the **New Monitoring Job Definition** page, configure the following options:

| | Field name | Description |
|------------------------|------------|---------------|
|  | Monitoring job definition name | Enter the name of the monitoring job that you are creating for the deployment. |
|  | Monitoring data source | Set the [source type](#set-monitoring-data-source) and [define the connection](data-conn) for the data to be scored. |
|  | Monitoring options | Configure [custom model monitoring options](#set-custom-model-monitoring-options). |
|  | Jobs schedule | Configure whether to run the job immediately and whether to [schedule the job](#schedule-monitoring-jobs).|
|  | Save monitoring job definition | Click this button to save the job definition. The button changes to **Save and run monitoring job definition** if **Run this job immediately** is enabled. Note that this button is disabled if there are any validation errors. |
## Set monitoring data source {: #set-monitoring-data-source }
In the **Monitoring data source** section, select a **Source type** (called an [intake adapter](intake-options)) and complete the appropriate authentication workflow for the source type in the **Connection details** section.
Select a connection type below to view field descriptions:
!!! note
When browsing for connections, invalid adapters are not shown.
**Database connections**
* [JDBC](../../api/reference/batch-prediction-api/intake-options.html#jdbc-scoring)
**Cloud Storage Connections**
* [Azure](intake-options#azure-blob-storage-scoring)
* [GCP](intake-options#google-cloud-storage-scoring) (Google Cloud Platform Storage)
* [S3](intake-options#amazon-s3-scoring)
**Data warehouse connections**
* [BigQuery](intake-options#bigquery-scoring)
* [Snowflake](intake-options#snowflake-scoring)
* [Synapse](intake-options#synapse-scoring)
**Other**
* [AI Catalog](intake-options#ai-catalog-dataset-scoring)
After you set your monitoring source, DataRobot validates that the data applies to the deployed model.
!!! note
DataRobot validates that a data source is compatible with the model when possible, but not in all cases. DataRobot validates for AI Catalog, most JDBC connections, Snowflake, and Synapse.
## Set custom model monitoring options {: #set-custom-model-monitoring-options }
In the **Monitoring options** section, click **Custom Metrics** and configure the following options:

Field | Description
------------------|------------
Custom metric | Select the custom metric you want to monitor from the current deployment.
Value column | Select the column in the dataset containing the calculated values of the custom metric.
Timestamp column | Select the column in the dataset containing a timestamp.
Date format | Select the date format used by the timestamp column.
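As a point of reference, the sketch below shows one possible shape for the external data a monitoring job might read, assuming a metric reported once per day. The column names (`metric_value`, `metric_timestamp`) and the date format are illustrative assumptions; map your actual columns in the **Monitoring options** fields above.
``` py
# Illustrative only: a possible shape for an external custom metric table.
# The column names and date format are assumptions, not required names.
import pandas as pd

metric_data = pd.DataFrame(
    {
        "metric_value": [0.87, 0.91, 0.83],  # Maps to "Value column"
        "metric_timestamp": [  # Maps to "Timestamp column"
            "2023-06-01 00:00:00",
            "2023-06-02 00:00:00",
            "2023-06-03 00:00:00",
        ],
    }
)

# The matching "Date format" selection for the timestamps above would be
# %Y-%m-%d %H:%M:%S.
print(metric_data)
```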
## Schedule monitoring jobs {: #schedule-monitoring-jobs }
You can configure monitoring jobs to run automatically. When outlining a monitoring job definition, enable **Run this job automatically on a schedule**, then specify the frequency (daily, hourly, monthly, etc.) and time of day to define the schedule on which the job runs.

For further granularity, select **Use advanced scheduler**. You can set the exact time (to the minute) you want to run the monitoring job.

After setting all applicable options, click **Save monitoring job definition**.
|
custom-metric-monitoring-jobs
|
---
title: Automated deployment and replacement of Scoring Code in Snowflake
description: Create a DataRobot-managed Snowflake prediction environment to deploy and replace DataRobot Scoring Code in Snowflake.
section_name: MLOps
maturity: public-preview
---
# Automated deployment and replacement of Scoring Code in Snowflake {: #automated-deployment-and-replacement-of-scoring-code-in-snowflake }
!!! info "Availability information"
Automated deployment and replacement of Scoring Code in Snowflake is off by default. Contact your DataRobot representative or administrator for information on enabling this feature.
<b>Feature flag:</b> Enable the Automated Deployment and Replacement of Scoring Code in Snowflake
Now available for Public Preview, you can create a DataRobot-managed Snowflake prediction environment to deploy DataRobot Scoring Code in Snowflake. With DataRobot management enabled, the external Snowflake deployment has access to MLOps management, including automatic Scoring Code replacement.
## Create a Snowflake prediction environment {: #create-a-snowflake-prediction-environment }
To deploy a model in Snowflake, you first create a custom Snowflake prediction environment:
1. Click **Deployments** > **Prediction Environments** and then click **Add prediction environment**.

2. In the **Add prediction environment** dialog box, configure the prediction environment settings:

* Enter a descriptive **Name** and an optional **Description** of the prediction environment.
* Select **Snowflake** from the **Platform** drop-down list. The **Supported Model Formats** settings are automatically set to **DataRobot Scoring Code** and can't be changed, as this is the only model format supported by Snowflake.
* Enable the **Managed by DataRobot** setting to allow this prediction environment to automatically package and deploy DataRobot Scoring Code models through the Management Agent.
!!! note
DataRobot management of Scoring Code in Snowflake requires an existing **Data Connection** to Snowflake with stored **Credentials**. If you don't have an existing Snowflake data connection, the **No Data Connections found** alert appears, directing you to **Go to Data Connections** to [create a Snowflake connection](dc-snowflake).

* Select a **Data Connection** and the related **Credentials**, and then select the Snowflake **Schemas**. Snowflake schemas are collections of Snowflake tables.
3. After you configure the environment settings, click **Add environment**.
The Snowflake environment is now available from the **Prediction Environments** page.
## Deploy a model to the Snowflake prediction environment {: #deploy-a-model-to-the-snowflake-prediction-environment }
Once you've created a Snowflake prediction environment, you can deploy a model to it:
1. Click **Model Registry** > **Model Packages** and select the Scoring Code enabled model you want to deploy to the Snowflake prediction environment.
2. On the **Package Info** tab, click **Deploy model package**.

3. In the **Select Deployment Target** dialog box, under **Select deploy target**, click **Snowflake**.

!!! note
If you can't click the Snowflake deployment target, the selected model doesn't have Scoring Code available.
4. Under **Select prediction environment**, select the Snowflake prediction environment you added, and then click **Confirm**.

5. [Configure the deployment](add-deploy-info), and then click **Deploy model**.
6. Once the model is deployed to Snowflake, you can use the **Score your data** code snippet from the **Predictions > Portable Predictions** tab to score data in Snowflake.
## Restart a Snowflake prediction environment {: #restart-a-snowflake-prediction-environment }
When you update database settings or credentials for the Snowflake data connection used by the prediction environment, you can restart the environment to apply those changes:
1. Navigate to the **Deployments > Prediction Environments** page, and then select the Snowflake prediction environment from the list.
2. Below the prediction environment settings, under **Service Account**, click **Restart Environment**.

|
pp-snowflake-sc-deploy-replace
|
---
title: Custom model proxies for external models
description: Create custom model proxies for external models in the Custom Model Workshop.
section_name: MLOps
maturity: public-preview
platform: self-managed-only
---
# Custom model proxies for external models
!!! info "Availability information: Self-Managed only"
Custom model proxies for external models are only available on the Self-Managed AI Platform and are off by default. Contact your DataRobot representative or administrator for information on enabling this feature.
<b>Feature flags:</b> Enable Proxy Models, Enable the Injection of Runtime Parameters for Custom Models
Now available as a public preview feature, you can create a custom model as a proxy for an externally hosted model. To create a proxy model, you:
1. (Optional) [Add runtime parameters](pp-cus-model-runtime-params) to the custom model through the model metadata (`model-metadata.yaml`).
2. [Add proxy code](#add-proxy-code) to the custom model through the custom model file (`custom.py`).
3. [Create a proxy model](#create-a-proxy-model) in the Custom Model Workshop.
## Add proxy code {: #add-proxy-code }
The custom model you create as a proxy for an external model should contain custom code in the `custom.py` file to connect the [proxy model](#create-a-proxy-model) with the externally hosted model; this code is the *proxy code*. See the [custom model assembly](custom-model-assembly/index) documentation for more information on writing custom model code.
The proxy code in the `custom.py` file should do the following:
* Import the necessary modules and, optionally, the [runtime parameters](pp-cus-model-runtime-params) from `model-metadata.yaml`.
* Connect the custom model to an external model via an HTTPS connection or the network protocol required by your external model.
* Request predictions and convert prediction data as necessary.
To simplify the reuse of proxy code, you can add [runtime parameters](pp-cus-model-runtime-params) through your model metadata in the `model-metadata.yaml` file:
``` yaml title="model-metadata.yaml"
name: runtime-parameter-example
type: inference
targetType: regression
runtimeParameterDefinitions:
- fieldName: endpoint
type: string
description: The name of the endpoint.
- fieldName: API_KEY
type: credential
description: The HTTP basic credential containing the endpoint's API key in the password field (the username field is ignored).
```
If you define runtime parameters in the model metadata, you can import them into the `custom.py` file to use in your proxy code. After importing these parameters, you can assign them to variables in your proxy code. This allows you to create a prediction request to connect to and retrieve prediction data from the external model. The following example outlines the basic structure of a `custom.py` file:
``` py title="custom.py"
# Import modules required to make a prediction request.
import json
import ssl
import urllib.request

import pandas as pd

# Import SimpleNamespace to create an object to store runtime parameter variables.
from types import SimpleNamespace

# Import RuntimeParameters to use the runtime parameters set in the model metadata.
from datarobot_drum import RuntimeParameters


# Override the default load_model hook to read the runtime parameters.
def load_model(code_dir):
    # Assign runtime parameters to variables.
    api_key = RuntimeParameters.get("API_KEY")["password"]
    endpoint = RuntimeParameters.get("endpoint")
    # Create the scoring endpoint URL.
    url = f"https://{endpoint}.example.com/score"
    # Return an object containing the variables necessary to make a prediction request.
    return SimpleNamespace(**locals())


# Write proxy code to request and convert scoring data from the external model.
def score(data, model, **kwargs):
    # Serialize the scoring data and call make_remote_prediction_request.
    payload = data.to_json(orient="records").encode("utf-8")
    predictions = make_remote_prediction_request(payload, model.url, model.api_key)
    # Convert the prediction data as necessary; the output columns depend on your
    # target type and external model (this regression output is illustrative).
    return pd.DataFrame({"Predictions": predictions})


def make_remote_prediction_request(payload, url, api_key):
    # Connect to the scoring endpoint URL and request predictions from the external
    # model (the header names and response format here are illustrative).
    request = urllib.request.Request(
        url,
        data=payload,
        headers={"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"},
    )
    context = ssl.create_default_context()
    with urllib.request.urlopen(request, context=context) as response:
        return json.loads(response.read())
```
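If you want to sanity-check the hooks above before uploading the model, one option is a short local script. This is a minimal sketch that assumes the file is saved as `custom.py` in the working directory and that `datarobot-drum` and `pandas` are installed; it stubs `RuntimeParameters` with placeholder values because the real parameters are only injected inside the DataRobot runtime.
``` py
# Minimal local sanity check for the proxy hooks (illustrative only).
import custom

# Stub RuntimeParameters.get with placeholder values; outside the DataRobot
# runtime, no real runtime parameters are injected.
custom.RuntimeParameters.get = staticmethod(
    lambda name: {"password": "placeholder-api-key"} if name == "API_KEY" else "my-endpoint"
)

# load_model should assemble the scoring URL from the stubbed parameters.
model = custom.load_model(code_dir=".")
print(model.url)  # https://my-endpoint.example.com/score
```
Note that calling `score` from a script like this would issue a real request to the stubbed URL, so limit local checks to `load_model` unless a test endpoint is available.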
## Create a proxy model {: #create-a-proxy-model }
To create a custom model as a proxy for an external model, you can add a new proxy model to the [Custom Model Workshop](custom-model-workshop/index). A proxy model contains the proxy code you created to connect with your external model, allowing you to use features like [compliance documentation](reg-compliance), [challenger analysis](challengers), and [custom model tests](custom-model-test) with a model running on infrastructure outside of DataRobot.
To add a proxy model through the Custom Model Workshop:
1. Click **Model Registry** > **Custom Model Workshop**.
2. On the **Models** tab, click **+ Add new model**.
3. In the **Add Custom Inference Model** dialog box, select **Proxy**, and then add the model information.

| | Element | Description |
|---|---|---|
|  | Model name | Name the custom model. |
|  | Target type / Target name | Select the target type ([binary classification](glossary/index#classification), [regression](glossary/index#regression), [multiclass classification](glossary/index#multiclass), [anomaly detection](custom-inf-model#anomaly-detection), or [unstructured](unstructured-custom-models)) and enter the name of the target feature. |
|  | Positive class label / Negative class label | These fields only display for binary classification models. Specify the value to be used as the positive class label and the value to be used as the negative class label. <br> For a multiclass classification model, these fields are replaced by a field to enter or upload the target classes in `.csv` or `.txt` format. |
4. Click **Show Optional Fields** and, if necessary, enter a prediction threshold, the language used to build the model, and a description.
5. After completing the fields, click **Add Custom Model**.
6. On the **Assemble** tab, under **Model Environment**, select a model environment by clicking the **Base Environment** dropdown menu on the right and selecting an environment. The model environment is used for [testing](custom-model-test) and [deploying](deploy-custom-inf-model) the custom model.

!!! note
The **Base Environment** pulldown menu includes [drop-in model environments](drop-in-environments), if any exist, as well as [custom environments](custom-environments#create-a-custom-environment) that you can create.
7. Under **Model** on the left, add proxy model content by dragging and dropping files or browsing. Alternatively, select a [remote integrated repository](custom-model-repos).

If you click **Browse local file**, you have the option of adding a **Local Folder**. The local folder is for dependent files and additional assets required by your model, not the model itself. Even if the model file is included in the folder, it will not be accessible to DataRobot unless the file exists at the root level. The root file can then point to the dependencies in the folder.
!!! note
You must also upload web server Scoring Code and a `start_server.sh` file to your model's folder unless you are pairing the model with a [drop-in environment](drop-in-environments).
8. On the **Assemble** tab, next to **Resource settings**, click the edit icon () to activate the required **Network access** for the proxy model.
9. If you provide runtime parameters in the model metadata, after you build the environment and create a new version, you can configure the parameters on the **Assemble** tab under **Runtime Parameters**.
10. Finally, you can [register the custom model](custom-model-reg) to create a proxy model you can use to generate [compliance documentation](reg-compliance). You can then deploy the proxy model to set up [challenger analysis](challengers) and run [custom model tests](custom-model-test) on the external model.
|
pp-ext-model-proxy
|
---
title: Versioning support in the Model Registry
description: Create registered models to provide an additional layer of organization to your model packages.
section_name: MLOps
maturity: public-preview
---
# Versioning support in the Model Registry {: #enable-versioning-support-in-the-model-registry }
!!! info "Availability information"
Versioning support in the Model Registry is off by default. Contact your DataRobot representative or administrator for information on enabling this feature.
<b>Feature flag:</b> Enable Versioning Support in the Model Registry
The Model Registry is an organizational hub for various models used in DataRobot, where you can access models as deployment-ready model packages. Now available for Public Preview, the **Model Registry > Registered Models** page provides an additional layer of organization to your models.

On this page, you can group model packages into _registered models_, allowing you to categorize them based on the business problem they solve. Registered models can contain:
* DataRobot, custom, and external models
* Challenger models (alongside the champion)
* Automatically retrained models.
Once you add registered models and registered model versions, you can search, filter, and sort them. You can also share your registered models (and the versions they contain) with other users.
## Add registered models
You can register DataRobot, custom, and external model packages. When you add model packages to the **Registered Models** page, you can create a new registered model (version one) or save the model package as a new version of an existing registered model. Model packages added as versions of the same registered model _must_ have the same target type, target name, and, if applicable, target classes and time series settings.
Each registered model on the **Registered Models** page _must_ have a unique name. If you choose a name that exists anywhere within your organization when creating a new registered model, the **Model registration failed** warning appears. Use a different name or add this model as a new version of the existing registered model.
### Add a DataRobot model
To add a DataRobot model as a registered model or version:
1. Navigate to **Models > Leaderboard**.
2. From the **Leaderboard**, click the model you want to register and click **Predict > Deploy**.
3. Under **Deploy model**, click **Add to Model Registry**.

4. In the **Register new model** dialog box, configure the following:

| Field | Description |
|-------|-------------|
| Register model | Select one of the following:<ul><li>**Register new model:** Create a new registered model. This creates the first version (**V1**).</li><li>**Save as a new version to existing model:** Create a version of an existing registered model. This increments the version number and adds a new version to the registered model.</li></ul> |
| Registered model name / Registered Model | Do one of the following:<ul><li>**Registered model name:** Enter a unique and descriptive name for the new registered model. If you choose a name that exists anywhere within your organization, the **Model registration failed** warning appears.</li><li>**Registered Model:** Select the existing registered model you want to add a new version to.</li></ul> |
| Registered model version | Assigned automatically. This displays the expected version number (e.g., V1, V2, V3) of the version you create. This is always **V1** when you select **Register new model**. |
| **Optional settings** | :~~: |
| Version description | Describe the business problem these model packages solve, or, more generally, the relationship between them. |
| Tags | Click **+ Add item** and enter a **Key** and a **Value** for each key-value pair you want to tag the model _version_ with. Tags do not apply to the registered model, just the versions within. Tags added when registering a new model are applied to **V1**. |
5. Click **Add to registry**.
### Add a custom model
To add a custom model as a registered model or version:
1. Navigate to **Model Registry > Custom Model Workshop**.
2. From the **Custom Model Workshop**, click the model you want to register and, on the **Assemble** tab, click **Add to Model Registry**.

3. In the **Register new model** dialog box, configure the following:

| Field | Description |
|-------|-------------|
| Register model | Select one of the following:<ul><li>**Register new model:** Create a new registered model. This creates the first version (**V1**).</li><li>**Save as a new version to existing model:** Create a version of an existing registered model. This increments the version number and adds a new version to the registered model.</li></ul> |
| Registered model name / Registered Model | Do one of the following:<ul><li>**Registered model name:** Enter a unique and descriptive name for the new registered model. If you choose a name that exists anywhere within your organization, the **Model registration failed** warning appears.</li><li>**Registered Model:** Select the existing registered model you want to add a new version to.</li></ul> |
| Registered model version | Assigned automatically. This displays the expected version number (e.g., V1, V2, V3) of the version you create. This is always **V1** when you select **Register new model**. |
| **Optional settings** | :~~: |
| Version description | Describe the business problem these model packages solve, or, more generally, the relationship between them. |
| Tags | Click **+ Add item** and enter a **Key** and a **Value** for each key-value pair you want to tag the model _version_ with. Tags do not apply to the registered model, just the versions within. Tags added when registering a new model are applied to **V1**. |
4. Click **Add to registry**.
### Add an external model
To add an external model as a registered model or version:
1. On the **Model Registry > Registered Models** page, click **Register New Model > External model**.

2. In the **Register new external model** dialog box, configure the following:

| Field | Description |
|-------|-------------|
| Package Name | The name of the model package. |
| Package description (optional) | Information to describe the model package. |
| Model location (optional) | The location of the model running outside of DataRobot. Describe the location as a filepath, such as folder1/opt/model.tar. |
| Build Environment | The programming language in which the model was built. |
| Training data (optional) | The filename of the training data, uploaded locally or via the **AI Catalog**. Click **Clear selection** to upload and use a different file. |
| Holdout data (optional) | The filename of the holdout data, uploaded locally or via the **AI Catalog**. Use holdout data to set an [accuracy baseline](#set-an-accuracy-baseline) and enable support for target drift and challenger models. |
| Target | The dataset column name the model will predict on. |
| Prediction type | The type of prediction the model is making, either binary classification or regression. For a classification model, you must also provide the positive and negative class labels and a prediction threshold. |
| Prediction column | The column name in the holdout dataset containing the prediction result. |
If registering a [time series](time/index) model, select the **This is a time series model** checkbox and configure the following fields:

| Field | Description |
|-------|-------------|
| Forecast date feature | The column in the training dataset that contains date/time values used by DataRobot to detect the range of dates (the valid forecast range) available for use as the forecast point. |
| Date/time format | The format used by the date/time features in the training dataset. |
| Forecast point feature | The column in the training dataset that contains the point from which you are making a prediction. |
| Forecast unit | The time unit (seconds, days, months, etc.) that comprises the [time step](glossary/index#time-step). |
| Forecast distance feature | The column in the training dataset containing a unique time step—a relative position—within the forecast window. A time series model outputs one row for each forecast distance. |
| Series identifier (optional, used for [multiseries models](multiseries)) | The column in the training dataset that identifies which series each row belongs to. |
Finally, configure the registered model settings:
| Field | Description |
|-------|-------------|
| Register model | Select one of the following:<ul><li>**Register new model:** Create a new registered model. This creates the first version (**V1**).</li><li>**Save as a new version to existing model:** Create a version of an existing registered model. This increments the version number and adds a new version to the registered model.</li></ul> |
| Registered model name / Registered Model | Do one of the following:<ul><li>**Registered model name:** Enter a unique and descriptive name for the new registered model. If you choose a name that exists anywhere within your organization, the **Model registration failed** warning appears.</li><li>**Registered Model:** Select the existing registered model you want to add a new version to.</li></ul> |
| Registered model version | Assigned automatically. This displays the expected version number (e.g., V1, V2, V3) of the version you create. This is always **V1** when you select **Register new model**. |
| **Optional settings** | :~~: |
| Version description | Describe the business problem these model packages solve, or, more generally, the relationship between them. |
| Tags | Click **+ Add item** and enter a **Key** and a **Value** for each key-value pair you want to tag the model _version_ with. Tags do not apply to the registered model, just the versions within. Tags added when registering a new model are applied to **V1**. |
3. Once all fields for the external model are defined, click **Register**.
## Access registered models and versions
On the **Registered Models** page, you can sort registered models by **Name** or **Last modified**. In a registered model, on the **Versions** tab, you can sort versions by **Name**, **Created at**, **Last updated at**, or **Model type**:

In the top-left corner of the **Registered Models** page, you can:

* Click **Search** and enter the registered model name to locate it on the **Registered Models** page.
* Click **Filters** to enable, modify, or clear filters on the **Registered Models** page. You can filter by **Target name**, **Target type**, **Created by**, **Created between**, and **Modified between**. This control filters on registered models:

## View model and version information
Once you locate the registered model or model version you are looking for, you can access a variety of information about the registered model or version.
### Model info
Click a registered model to open the details panel. From that panel, you can access the following tabs:

Tab | Description
------------|------------
Versions | View all model versions for a registered model and the associated creation and status information. <ul><li>To open the version in the current tab, click the row for the version you want to access.</li><li>To open the version in a new tab, click the open icon () next to the **Type** column for the version you want to access.</li></ul>
Deployments | View all model deployments for a registered model and the associated creation and status information. You can click a name in the **Deployment** column to open that deployment.
Model Info | View the registered model **ID**, **Name**, **Latest Version**, **Created By** username, **Created** date, **Last Modified** date, **Target Type**, and **Target Name**. You can click the pencil icon () next to **Name** to edit the registered model's name.
### Version info
To open the registered model version, do either of the following:

* To open the version in the current tab, click the row for the version you want to access.
* To open the version in a new tab, click the open icon () next to the **Type** column for the version you want to access.

!!! tip
You can click **Switch** next to the name in the version header to select another version to view.
Tab | Description
------------|------------
Info | View general model information for the model version. In addition: <ul><li>For all model types, you can click **+ Add tag** in the **Tags** field to apply additional tags to the version.</li><li>For DataRobot models, you can click the **Model Name** to open the model in the Leaderboard.</li><li>For custom models, you can click the **Custom Model ID** to open the model in the Custom Model Workshop.</li><li>For custom models [created via GitHub Actions](custom-model-github-action), you can click the **Git Commit Reference** to open the commit that created the model package in GitHub.</li></ul>
Key Values | Create [key values](model-registry-key-values) for the model version.
Compliance Documentation | Generate [compliance documentation](reg-compliance) for the model version.
Deployments | View all model deployments for a registered model version, in addition to the associated creation and status information. You can click a name in the **Deployment** column to open that deployment.
## Deploy registered models
You can deploy a registered model at any time from the **Registered Models** page. To do so, you must open a registered model version:
1. On the **Registered Models** page, click the registered model containing the model version you want to deploy.
2. To open the registered model version, do either of the following:

* To open the version in the current tab, click the row for the version you want to access.
* To open the version in a new tab, click the open icon () next to the **Type** column for the version you want to access.
3. In the version header, click **Deploy**, and then [configure the deployment settings](add-deploy-info).

## Manage registered models
You can **Share** or **Delete** registered models from the menu icon () in the last column of the **Registered Models** page:

!!! important "Changes to model sharing"
With the introduction of the **Registered Models** page, _registered models_ are the model artifact used for sharing, not _model packages_. When you share a registered model, you automatically share each model package contained in that registered model.
|
model-registry-versioning
|
---
title: MLflow integration for DataRobot
description: Export a model from MLflow and import it into the DataRobot Model Registry, creating key values from the training parameters, metrics, tags, and artifacts in the MLflow model.
section_name: MLOps
maturity: public-preview
---
# MLflow integration for DataRobot
!!! info "Availability information"
Key values in the Model Registry are off by default. Contact your DataRobot representative or administrator for information on enabling this feature.
<b>Feature flag:</b> Enable Extended Compliance Documentation
The MLflow integration for DataRobot allows you to export a model from MLflow and import it into the DataRobot [Model Registry](registry/index), creating [key values](model-registry-key-values) from the training parameters, metrics, tags, and artifacts in the MLflow model.
## Prerequisites for the MLflow integration {: #prerequisites-for-the-mlflow-integration }
The MLflow integration for DataRobot requires the following:
* Python >= 3.7
* DataRobot >= 9.0
This integration library uses a Public Preview API endpoint; the DataRobot user associated with your API token must have the following permissions:
* **Enable Extended Compliance Documentation** to access [key values](model-registry-key-values).
* **Owner** or **User** permissions for the DataRobot model package.
!!! note
Optionally, the **Enable Versioning Support in the Model Registry** feature flag allows you to import MLflow model information to [Registered Model Versions](model-registry-versioning).
## Install the MLflow integration for DataRobot {: #install-the-mlflow-integration-for-datarobot }
You can install the `datarobot-mlflow` integration with `pip`:
``` sh title="pip installation"
pip install datarobot-mlflow
```
If you are running the integration on Azure, use the following command:
``` sh title="Azure pip installation"
pip install "datarobot-mlflow[azure]"
```
## Configure command line options {: #configure-command-line-options }
The following command line options are available for the `drflow_cli`:
Option | Description
----------------------------|-------------
`--mlflow-url` | Defines the MLflow tracking URL; for example:<ul><li>Local MLflow: `"file:///Users/me/mlflow/examples/mlruns"`</li><li>Azure Databricks MLflow: `"azureml://region.api.azureml.ms/mlflow/v1.0/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.MachineLearningServices/workspaces/azure-ml-workspace-name"`</li></ul>
`--mlflow-model` | Defines the MLflow model name; for example, `"cost-model"`.
`--mlflow-model-version` | Defines the MLflow model version; for example, `"2"`.
`--dr-url` | Provides the main URL of the DataRobot instance; for example, `https://app.datarobot.com`.
`--dr-model` | Defines the ID of the model package for key value upload; for example, `64227b4bf82db411c90c3209`. [Registered Model Versions](model-registry-versioning) are also supported if enabled.
`--prefix` | Provides a string to prepend to the names of all key values imported to DataRobot. The default value is empty.
`--debug` | Sets the Python logging level to `logging.DEBUG`. The default level is `logging.WARNING`.
`--verbose` | Prints information to stdout during the following processes:<ul><li>Retrieving model from MLflow: prints model information.</li><li>Setting model data in DataRobot: prints each key value added to DataRobot.</li></ul>
`--with-artifacts` | Downloads MLflow model artifacts to `/tmp/model`.
`--service-provider-type` | Defines the service provider for `validate-auth`. The supported value is `azure-databricks` for Databricks MLflow in Azure.
`--auth-type` | Defines the authentication type for `validate-auth`. The supported value is `azure-service-principal` for Azure Service Principal.
`--action` | Defines the operation you want the MLflow integration for DataRobot to perform.
The following command line operations are available for the `--action` option:
Action | Description
-------------------|-------------
`sync` | Imports parameters, tags, metrics, and artifacts from an MLflow model into a DataRobot model package as key values. This action requires `--mlflow-url`, `--mlflow-model`, `--mlflow-model-version`, `--dr-url`, and `--dr-model`.
`list-mlflow-keys` | Lists parameters, tags, metrics, and artifacts in an MLflow model. This action requires `--mlflow-url`, `--mlflow-model`, and `--mlflow-model-version`.
`validate-auth` | Validates the Azure AD Service Principal credentials for troubleshooting purposes. This action requires `--auth-type` and `--service-provider-type`.
## Set environment variables {: #set-environment-variables }
In addition to the command line options above, you should also provide any environment variables required for your use case:
Environment variable | Description
----------------------|-------------
`MLOPS_API_TOKEN` | A DataRobot API key, found in the DataRobot [**Developer Tools**](api-key-mgmt#api-key-management).
`AZURE_TENANT_ID` | The Azure Tenant ID for your Azure Databricks MLflow instance, found in the Azure portal.
`AZURE_CLIENT_ID` | The Azure Client ID for your Azure Databricks MLflow instance, found in the Azure portal.
`AZURE_CLIENT_SECRET` | The Azure Client Secret for your Azure Databricks MLflow instance, found in the Azure portal.
You can use `export` to define these environment variables with the information required for your use case:
``` sh
export MLOPS_API_TOKEN="<dr-api-key>"
export AZURE_TENANT_ID="<tenant-id>"
export AZURE_CLIENT_ID="<client-id>"
export AZURE_CLIENT_SECRET="<secret>"
```
## Run the sync action to import a model from MLflow into DataRobot {: #run-the-sync-action-to-import-a-model-from-mlflow-into-datarobot }
You can use the command line options and actions defined above to export MLflow model information from MLflow and import it into the DataRobot Model Registry:
```sh title="Import from MLflow"
DR_MODEL_ID="<MODEL_PACKAGE_ID>"
env PYTHONPATH=./ \
python datarobot_mlflow/drflow_cli.py \
--mlflow-url http://localhost:8080 \
--mlflow-model cost-model \
--mlflow-model-version 2 \
--dr-model $DR_MODEL_ID \
--dr-url https://app.datarobot.com \
--with-artifacts \
--verbose \
--action sync
```
After you run this command successfully, you can see MLflow information on the **Key Values** tab of a Model Package or a Registered Model version:

In addition, in the **Activity log** of the **Key Values** tab, you can view a record of the key value creation events:

## Troubleshoot Azure AD Service Principal credentials {: #troubleshoot-azure-ad-service-principal-credentials }
In addition, you can use the following command line example to validate Azure AD Service Principal credentials for troubleshooting purposes:
```sh title="Validate Azure AD Service Principal credentials"
export MLOPS_API_TOKEN="n/a" # not used for Azure auth check, but the environment variable must be present
env PYTHONPATH=./ \
python datarobot_mlflow/drflow_cli.py \
--verbose \
--auth-type azure-service-principal \
--service-provider-type azure-databricks \
--action validate-auth
```
This command should produce the following output if you haven't [configured the required environment variables](#set-environment-variables):
``` sh title="Example output: missing environment variables"
Required environment variable is not defined: AZURE_TENANT_ID
Required environment variable is not defined: AZURE_CLIENT_ID
Required environment variable is not defined: AZURE_CLIENT_SECRET
Azure AD Service Principal credentials are not valid; check environment variables
```
If you see this error, provide the required Azure AD Service Principal credentials as environment variables:
``` sh title="Provide Azure AD Service Principal credentials"
export AZURE_TENANT_ID="<tenant-id>"
export AZURE_CLIENT_ID="<client-id>"
export AZURE_CLIENT_SECRET="<secret>"
```
When the environment variables for the Azure AD Service Principal credentials are defined, you should see the following output:
``` sh title="Example output: successful authentication"
Azure AD Service Principal credentials are valid for obtaining access token
```
|
mlflow-integration
|
---
title: Feature cache for Feature Discovery deployments
description: Schedule feature cache for Feature Discovery deployments
section_name: MLOps
maturity: public-preview
---
# Feature cache for Feature Discovery deployments {: #feature-cache-for-feature-discovery-deployments }
!!! info "Availability information"
Feature Cache for deployed Feature Discovery models is off by default. Contact your DataRobot representative or administrator for information on enabling the feature.
<b>Feature flag:</b> Enable Feature Cache for Feature Discovery
Now available as public preview, you can schedule feature cache for Feature Discovery deployments, which instructs DataRobot to pre-compute and store features before making predictions. Currently, you can only make batch predictions with Feature Discovery projects; however, generating these features in advance makes single-record, low-latency scoring possible.
Once feature cache is enabled and configured in the deployment's settings, DataRobot caches features and stores them in a database. When new predictions are made, the primary dataset is sent to the prediction endpoint, which enriches the data from the cache and returns the prediction response. The feature cache is then periodically updated based on the specified schedule.
To enable feature cache, go to the [**Predictions > Settings**](predictions-settings) tab of a [Feature Discovery project's](fd-overview) deployment. Then, turn on the **Enable Feature Cache** toggle and choose a schedule for DataRobot to update cached features.

!!! note
If you are configuring the settings for a new deployment, the creation process may take longer than usual as features are computed and stored for the first time during deployment creation. Once feature cache is enabled for a deployment, it cannot be disabled later on.
You can change how often DataRobot caches features or monitor the [status](#general-statuses) of feature caching on the deployment's **Predictions > Settings** tab.
## General statuses {: #general-statuses }
In your deployment's settings, you can monitor the status of feature cache.

The table below describes each possible status:
Status | Description
---------- | -----------
Not fetched | Feature cache was configured, but data hasn't been populated into the cache yet. **Predictions are currently impossible**.
Outdated | Data was not populated during the last scheduled run. Outdated data is still present in the feature cache, so predictions are possible, but **accuracy may be significantly impacted**.
Configuration failed | Feature cache was enabled but **failed** to be configured. **Predictions are impossible**.
Failed to fetch | Data failed to be stored in cache. **Predictions are impossible**.
Updated | Last scheduled run was completed successfully. Predictions work as expected.
## Considerations
Consider the following when enabling feature cache for a Feature Discovery project:
- The maximum number of prediction features is 300.
- The scoring dataset can have a maximum of 200 rows. Datasets with more than 200 rows will not be scored with feature cache.
- Feature cache is not compatible with data drift tracking.
- Feature cache is only visible in the UI if DataRobot detects secondary datasets.
- Feature cache cannot be disabled once it is enabled for a deployment.
|
safer-ft-cache
|
---
title: Using the Tableau Analytics Extension with deployments
description: Use the Tableau analytics extension to integrate DataRobot predictions into your Tableau project.
section_name: MLOps
maturity: public-preview
---
# Using the Tableau Analytics Extension with deployments {: #using-the-tableau-analytics-extension-with-deployments }
!!! info "Availability information"
The Tableau analytics extension is off by default. Contact your DataRobot representative or administrator for information on enabling the feature.
<b>Feature flag:</b> Enable Tableau Analytics Extension
Now available as a public preview feature, you can use the Tableau analytics extension to integrate DataRobot predictions into your Tableau project. The extension supports the Tableau Analytics Extensions API, which allows you to create endpoints that send data to and receive data from Tableau. Using the extension, you can visualize prediction information and update it with information from DataRobot prediction responses.
### Configure the Tableau Analytics Extension {: #configure-the-tableau-analytics-extension }
To use the extension, you must correctly configure both Tableau and the integration code snippet itself.
First, you must successfully connect to the Tableau Analytics Extension. To do so, from Tableau, navigate to **Settings** > **Manage Analytics Extension Connection**.

Complete the following fields and enable SSL to connect Tableau to your DataRobot instance:

Field | Description
-------|------------
Server | Enter the address for the DataRobot server from which Tableau receives prediction responses. The address should match the instance you are using: `yourinstance.com/tableauintegrations`.
Port | Enter the port number for the DataRobot instance's server. The value should always be `443`.
Enable SSL | Select the checkbox to enable SSL and establish a secure connection from DataRobot to Tableau. SSL is required for all integrations.
Username and password | Select the checkbox to sign in with a username and password. Use your DataRobot username and provide your [API Key](api-key-mgmt) as the password.
When you have completed the fields, select **Test Connection** to verify that Tableau can successfully connect to DataRobot. If the connection is successful, click **OK**. DataRobot is now connected to the Tableau Analytics Extension.
After establishing the connection, access the deployment from which you want to send prediction responses to Tableau. Navigate to **Predictions > Integrations > Tableau Analytics Extensions API**.

Copy the snippet displayed on the page by selecting **Copy script to clipboard**.

Return to Tableau and from the **Data** dropdown menu, select **Create Calculated Field**.

Paste the snippet into the empty field, click **Apply**, then **OK**. This creates a calculated field that acts like a spreadsheet function.

You can now drag and drop the newly created table (named "Calculation1" by default) anywhere a normal field would go in Tableau to generate predictions for a data source using the deployed model in DataRobot.
### Edit the code snippet with column mapping {: #edit-the-code-snippet-with-column-mapping }
It's possible certain columns in the dataset return errors after pasting the snippet in the calculated field. You can edit these columns so that DataRobot and Tableau recognize them as the same field. From the **Tableau Analytics Extension** page in DataRobot, select **Add mapping**.

In the left column, select the checkbox for any features you want to map, then click the orange right arrow to set them as input features. When you have selected all of the features you want to map, click **Next**.

Enter the name of each column as it is displayed in Tableau. You can add any additional input features by selecting **Add column**. When you have mapped all of the input features, click **Add mappings**.

Once added, the snippet automatically updates to include the mappings. Copy it to your clipboard and add it to Tableau as a new calculated field to make predictions.
After making edits, you can modify any columns from the **Tableau Analytics Extension** page or add columns without returning to the modal by selecting **Add column**.

|
tableau-extension
|
---
title: MLOps public preview features
description: Read preliminary documentation for MLOps features currently in the DataRobot public preview pipeline.
section_name: MLOps
maturity: public-preview
---
# MLOps public preview features {: #public-preview-features }
{% include 'includes/pub-preview-notice-include.md' %}
## Available MLOps public preview documentation {: #available-mlops-public-preview-documentation }
=== "SaaS"
Public preview for... | Describes...
----- | ------
[Service health and accuracy history](pp-deploy-history) | Service Health and Accuracy history allow you to compare the current model and up to five previous models in one place, on the same scale.
[Timeliness indicators for predictions and actuals](timeliness-status-indicators) | Enable timeliness tracking to retain the last calculated health status and reveal when the status indicators are based on old data.
[Model logs for model packages](pp-model-pkg-logs) | View model logs for model packages from the Model Registry to see successful operations (INFO status) and errors (ERROR status).
[Model package artifact creation workflow](pp-model-pkg-artifact-creation) | The improved model package artifact creation workflow provides a clearer and more consistent path to model deployment, with visible connections between a model and its associated model packages.
[Automated deployment and replacement of Scoring Code in Snowflake](pp-snowflake-sc-deploy-replace) | Create a DataRobot-managed Snowflake prediction environment to deploy and replace DataRobot Scoring Code in Snowflake.
[Automated deployment and replacement of Scoring Code in AzureML](azureml-sc-deploy-replace) | Create a DataRobot-managed AzureML prediction environment to deploy and replace DataRobot Scoring Code in AzureML.
[Monitoring jobs for custom metrics](custom-metric-monitoring-jobs) | Monitoring job definitions allow DataRobot to pull calculated custom metric values from outside of DataRobot into the metric defined on the Custom Metrics tab.
[Remote repository file browser for custom models and tasks](pp-remote-repo-file-browser) | Browse the folders and files in a remote repository to select the files you want to add to a custom model or task.
[Public network access for custom models](cus-model-pub-network-access) | Access any fully qualified domain name (FQDN) in a public network so that the model can leverage third-party services, or disable public network access to isolate a model from the network and block outgoing traffic.
[Runtime parameters for custom models](pp-cus-model-runtime-params) | Add runtime parameters to a custom model through the model metadata.
[MLOps reporting for unstructured models](mlops-unstructured-models) | Report MLOps statistics from custom inference models created with an unstructured regression, binary, or multiclass target type.
[Versioning support in the Model Registry](model-registry-versioning) | Create registered models to provide an additional layer of organization to your model packages.
[Compliance documentation with key values](model-registry-key-values) | Build custom compliance documentation templates with references to key values, adding the associated data to the template and limiting the manual editing needed to complete the compliance documentation.
[MLflow integration for DataRobot](mlflow-integration) | Export a model from MLflow and import it into the DataRobot Model Registry, creating key values from the training parameters, metrics, tags, and artifacts in the MLflow model.
[Tableau Analytics Extension for deployments](tableau-extension) | Use the Tableau analytics extension to integrate DataRobot predictions into your Tableau project.
[Multipart upload for the batch prediction API](batch-pred-multipart-upload) | Upload scoring data through multiple files to improve file intake for large datasets.
=== "Self-Managed"
Public preview for… | Describes...
----- | ------
[Service health and accuracy history](pp-deploy-history) | Service Health and Accuracy history allow you to compare the current model and up to five previous models in one place, on the same scale.
[Timeliness indicators for predictions and actuals](timeliness-status-indicators) | Enable timeliness tracking to retain the last calculated health status and reveal when the status indicators are based on old data.
[Model logs for model packages](pp-model-pkg-logs) | View model logs for model packages from the Model Registry to see successful operations (INFO status) and errors (ERROR status).
[Model package artifact creation workflow](pp-model-pkg-artifact-creation) | The improved model package artifact creation workflow provides a clearer and more consistent path to model deployment, with visible connections between a model and its associated model packages.
[Automated deployment and replacement of Scoring Code in Snowflake](pp-snowflake-sc-deploy-replace) | Create a DataRobot-managed Snowflake prediction environment to deploy and replace DataRobot Scoring Code in Snowflake.
[Automated deployment and replacement of Scoring Code in AzureML](azureml-sc-deploy-replace) | Create a DataRobot-managed AzureML prediction environment to deploy and replace DataRobot Scoring Code in AzureML.
[Monitoring jobs for custom metrics](custom-metric-monitoring-jobs) | Monitoring job definitions allow DataRobot to pull calculated custom metric values from outside of DataRobot into the metric defined on the Custom Metrics tab.
[Remote repository file browser for custom models and tasks](pp-remote-repo-file-browser) | Browse the folders and files in a remote repository to select the files you want to add to a custom model or task.
[Public network access for custom models](cus-model-pub-network-access) | Access any fully qualified domain name (FQDN) in a public network so that the model can leverage third-party services, or disable public network access to isolate a model from the network and block outgoing traffic.
[Runtime parameters for custom models](pp-cus-model-runtime-params) | Add runtime parameters to a custom model through the model metadata.
[Custom model proxy for external models](pp-ext-model-proxy) | (_Self-Managed AI Platform only_) Create custom model proxies for external models in the Custom Model Workshop.
[MLOps reporting for unstructured models](mlops-unstructured-models) | Report MLOps statistics from custom inference models created with an unstructured regression, binary, or multiclass target type.
[Versioning support in the Model Registry](model-registry-versioning) | Create registered models to provide an additional layer of organization to your model packages.
[Compliance documentation with key values](model-registry-key-values) | Build custom compliance documentation templates with references to key values, adding the associated data to the template and limiting the manual editing needed to complete the compliance documentation.
[MLflow integration for DataRobot](mlflow-integration) | Export a model from MLflow and import it into the DataRobot Model Registry, creating key values from the training parameters, metrics, tags, and artifacts in the MLflow model.
[Tableau Analytics Extension for deployments](tableau-extension) | Use the Tableau analytics extension to integrate DataRobot predictions into your Tableau project.
[Multipart upload for the batch prediction API](batch-pred-multipart-upload) | Upload scoring data through multiple files to improve file intake for large datasets.
|
index
|
---
title: Public network access for custom models
description: Access any fully qualified domain name (FQDN) in a public network so that the model can leverage third-party services, or disable public network access to isolate a model from the network and block outgoing traffic.
section_name: MLOps
maturity: public-preview
---
# Public network access for custom models {: #full-network-access-for-custom-models }
!!! info "Availability information"
Public network access for custom models is off by default. Contact your DataRobot representative or administrator for information on enabling this feature.
<b>Feature flag:</b> Enable Public Network Access for all Custom Models
Now available as a Public Preview feature, you can enable full network access for any custom model. When you create a custom model, you can access any fully qualified domain name (FQDN) in a public network so that the model can leverage third-party services. Alternatively, you can disable public network access if you want to isolate a model from the network and block outgoing traffic to enhance the security of the model.
To manage a custom model's network access:
1. Navigate to **Model Registry > Custom Model Workshop**.
2. On the **Models** tab, click the model you want to manage and then click the **Assemble** tab.
3. On the custom model's **Assemble Model** page, under **Resource Settings**, review the **Network access** setting:

Setting | Description
--------|------------
Public | (Default) The custom model can access any fully qualified domain name (FQDN) in a public network to leverage third-party services.
None | The custom model is isolated from the public network and cannot access third-party services.
!!! tip
You can also see the **Network access** setting in the custom model's model package on the **Model Registry > Model Packages** page. Click the custom model package, and then, on the **Package Info** tab, scroll down to the **Resource Allocation** section.
4. If you want to change the Network access setting, click the edit icon (); then, in the **Update resource settings** dialog box, change the **Network access** option and click **Save**.

!!! important
You can only update resource settings in the most recent model version. Additionally, if the most recent model version was registered or deployed, you cannot apply the updated resource settings, and you'll encounter a "frozen version" warning if you try to save the changes:

If you encounter this warning, create a new custom model version, update the resource settings, and then register and deploy the model with the correct settings.
|
cus-model-pub-network-access
|
---
title: Model logs for model packages
description: View the model logs for a model package to see a history of the successful operations (INFO status) and errors (ERROR status).
section_name: MLOps
maturity: public-preview
---
# Model logs for model packages {: #model-logs-for-model-packages }
!!! info "Availability information"
Model logs for model packages in the Model Registry are off by default. Contact your DataRobot representative or administrator for information on enabling this feature.
<b>Feature flag:</b> Enable Model Logs for Model Packages
A model package's model logs display information about the operations of the underlying model. This information can help you identify and fix errors. For example, compliance documentation requires DataRobot to execute many jobs, some of which run sequentially and some in parallel. These jobs may fail, and reading the logs can help you identify the cause of the failure (e.g., the Feature Effects job fails because a model does not handle null values).
!!! important
In the Model Registry, a model package's **Model Logs** tab _only_ reports the operations of the underlying model, not the model package operations (e.g., model package deployment time).
To view the model logs for a model package:
1. In the **Model Registry**, click **Model Packages**.
2. Click a model package in the list, and then click **Model Logs**.
3. On the **Model Logs** tab, review the timestamped log entries:

| | Information | Description |
|-|-------------|-------------|
|  | Date / Time | The date and time the model log event was recorded. |
|  | Status | The status the log entry reports: <ul><li><span style="color:#3BC169">INFO</span>: Reports a successful operation.</li><li><span style="color:#E74D4D">ERROR</span>: Reports an unsuccessful operation.</li></ul> |
|  | Message | The description of the successful operation (INFO), or the reason for the failed operation (ERROR). This information can help you troubleshoot the root cause of the error. |
4. If you can't locate the log entry for the error you need to fix, it may be an older log entry not shown in the current view. Click **Load older logs** to expand the **Model Logs** view.

!!! tip
Look for the older log entries at the top of the **Model Logs**; they are added to the top of the existing log history.
|
pp-model-pkg-logs
|
---
title: Service Health and Accuracy history
description: Service Health and Accuracy history allow you to compare the current model with previous models in one place, on the same scale.
section_name: MLOps
maturity: public-preview
---
# Service Health and Accuracy history {: #service-health-and-accuracy-history }
!!! info "Availability information"
Deployment history for service health and accuracy is off by default. Contact your DataRobot representative or administrator for information on enabling this feature.
<b>Feature flag:</b> Enable Deployment History
When analyzing a deployment, [**Service Health**](service-health) and [**Accuracy**](deploy-accuracy) can provide critical information about the performance of current and previously deployed models. However, comparing these models can be a challenge as the charts are displayed separately, and the scale adjusts to the data. To improve the usability of the service health and accuracy comparisons, the [**Service Health > History**](#service-health-history) and [**Accuracy > History**](#accuracy-history) tabs (now available for public preview) allow you to compare the current model with previously deployed models in one place, on the same scale.
## Service Health history {: #service-health-history }
The [**Service Health** page](service-health) displays metrics you can use to assess a deployed model's ability to respond to prediction requests quickly and reliably. In addition, on the **History** tab, you can access visualizations representing the service health history of up to five of the most recently deployed models, including the currently deployed model. This history is available for each metric tracked in a model's service health, helping you identify bottlenecks and assess capacity, which is critical to proper provisioning. For example, if a deployment's response time seems to have slowed, the **Service Health** page for the model's deployment can help diagnose the issue. If the service health metrics show that median latency increases with an increase in prediction requests, you can then check the **History** tab to compare the currently deployed model with previous models. If the latency increased after replacing the previous model, you could consult with your team to determine whether to deploy a better-performing model.
To access the **Service Health > History** tab:
1. Click **Deployments** and select a deployment from the inventory.
2. On the selected deployment's **Overview**, click **Service Health**.
3. On the **Service Health > Summary** page, click **History**.
The **History** tab tracks the following metrics:

| Metric | Reports |
|-----------|-----------|
| Total Predictions | The number of predictions the deployment has made. |
| Requests over _x_ ms | The number of requests where the response time was longer than the specified number of milliseconds. The default is 2000 ms; click in the box to enter a time between 10 and 100,000 ms or adjust with the controls. |
| Response Time (ms) | The time (in milliseconds) DataRobot spent receiving a prediction request, calculating the request, and returning a response to the user. The report does not include time due to network latency. Select the **Median** prediction request time or the **90th percentile**, **95th percentile**, or **99th percentile**. The display reports a dash if no prediction requests have been made against the deployment or if it's an external deployment. |
| Execution Time (ms) | The time (in milliseconds) DataRobot spent calculating a prediction request. Select the **Median** prediction request time or **90th percentile**, **95th percentile**, or **99th percentile**. |
| Data Error Rate (%) | The percentage of requests that result in a 4xx error (problems with the prediction request submission). This is a component of the value reported as the Service Health Summary in the [**Deployments**](deploy-inventory) page top banner. |
| System Error Rate (%) | The percentage of well-formed requests that result in a 5xx error (problem with the DataRobot prediction server). This is a component of the value reported as the Service Health Summary in the [**Deployments**](deploy-inventory) page top banner. |
4. To view the details for a data point in a service health history chart, you can hover over the related bin on the chart:

## Accuracy history {: #accuracy-history }
The [**Accuracy** page](deploy-accuracy) analyzes the performance of model deployments over time using standard statistical measures and visualizations. Use this tool to analyze a model's prediction quality to determine if it is decaying and if you should consider replacing it. In addition, on the **History** page, you can access visualizations representing the accuracy history of up to five of the most recently deployed models, including the currently deployed model, allowing you to compare model accuracy directly. These accuracy insights are rendered based on the problem type and its associated optimization metrics.
!!! note
Accuracy monitoring is not enabled for deployments by default. To enable it, first upload the data that contains predicted and actual values for the deployment collected outside of DataRobot. For more information, see the documentation on [setting up accuracy for deployments](accuracy-settings) by adding [actuals](glossary/index#actuals).
To access the **Accuracy > History** tab:
1. Click **Deployments** and select a deployment from the inventory.
2. On the selected deployment's **Overview**, click **Accuracy**.
3. On the **Accuracy > Summary** page, click **History**.
The **History** tab tracks the following:

| Metric | Reports |
|-----------------|----------------|
| Accuracy Over Time | A line graph visualizing the change in the selected accuracy metric over time for up to five of the most recently deployed models, including the currently deployed model. The available accuracy metrics depend on the project type. |
| Predictions vs Actuals Over Time | A line graph visualizing the difference between the average predicted values and average actual values over time for up to five of the most recently deployed models, including the currently deployed model. For classification projects, you can display results per-class. |
=== "Accuracy Over Time"
The accuracy over time chart plots the selected accuracy metric for each prediction range along a timeline. The accuracy metrics available depend on the type of modeling project used for the deployment:
| Project type | Available metrics |
|-----------------------|-------------------|
| Regression | RMSE, MAE, Gamma Deviance, Tweedie Deviance, R Squared, FVE Gamma, FVE Poisson, FVE Tweedie, Poisson Deviance, MAD, MAPE, RMSLE |
| Binary classification | LogLoss, AUC, Kolmogorov-Smirnov, Gini-Norm, Rate@Top10%, Rate@Top5%, TNR, TPR, FPR, PPV, NPV, F1, MCC, Accuracy, Balanced Accuracy, FVE Binomial |
You can select an accuracy metric from the **Metric** drop-down list.
=== "Predictions vs Actuals Over Time"
The Predictions vs Actuals Over Time chart plots the average predicted value next to the average actual value for each prediction range along a timeline. In addition, the volume chart below the graph displays the number of predicted and actual values corresponding to the predictions made within each plotted time range. The shaded area represents the number of uploaded actuals, and the striped area represents the number of predictions without corresponding actuals.
The timeline and bucketing work the same for classification and regression projects; however, for classification projects, you can use the **Class** dropdown to display results for that class.
4. To view the details for a data point in an accuracy history chart, you can hover over the related bin on the chart:

|
pp-deploy-history
|
---
title: Multipart upload for the batch prediction API
description: Multipart upload for batch predictions allows you to override the default behavior to upload more than one file using multiple PUT requests and a POST request to finalize the upload process.
section_name: MLOps
maturity: public-preview
---
# Multipart upload for the batch prediction API
!!! info "Availability information"
Multipart upload for batch predictions is off by default. Contact your DataRobot representative or administrator for information on enabling this feature.
<b>Feature flag:</b> Enable multipart upload for batch predictions
The batch prediction API's local file intake process requires that you upload scoring data for a job using a `PUT` request to the URL specified in the `csvUpload` parameter. By default, a single `PUT` request starts the job (or queues it for processing if the prediction instance is occupied). Multipart upload for batch predictions allows you to override the default behavior to upload scoring data through multiple files. This upload process requires multiple `PUT` requests followed by a single `POST` request (`finalizeMultipart`) to finalize the multipart upload manually. This feature can be helpful when you want to upload large datasets over a slow connection.
!!! note
For more information on the batch prediction API and local file intake, see [Batch Prediction API](batch-prediction-api/index) and [Prediction intake options](intake-options#local-file-streaming).
<!--private start-->
See the [API documentation](https://app.datarobot.com/apidocs/index.html){ target=_blank } for details on the REST API.
<!--private end-->
## Multipart upload endpoints {: #multipart-upload-endpoints }
This feature adds the following multipart upload endpoints to the batch prediction API:
| Endpoint | Description |
|----------|-------------|
| `PUT /api/v2/batchPredictions/:id/csvUpload/part/0/` | Upload scoring data in multiple parts to the URL specified by `csvUpload`. Increment `0` by 1 in sequential order for each part of the upload. |
| `POST /api/v2/batchPredictions/:id/csvUpload/finalizeMultipart/` | Finalize the multipart upload process. Make sure each part of the upload has finished before finalizing. |
## Local file intake settings {: #local-file-intake-settings }
The intake settings for the local file adapter include two new properties that support multipart upload for the batch prediction API:
| Property | Type | Default | Description |
|-----------|------|---------|-------------|
| `intakeSettings.multipart` | boolean |`false` | <ul><li>`true`: Requires you to submit multiple files via a `PUT` request and finalize the process manually via a `POST` request (`finalizeMultipart`).</li><li>`false`: Finalizes intake after one file is submitted via a `PUT` request.</li></ul> |
| `intakeSettings.async` | boolean | `true` | <ul><li>`true`: Starts the scoring job when the initial `PUT` request for file intake is made.</li><li>`false`: Postpones the scoring job until the `PUT` request resolves or the `POST` request for `finalizeMultipart` resolves.</li></ul> |
### Multipart intake setting {: #multipart-intake-settings }
To enable the new multipart upload workflow, configure the `intakeSettings` for the `localFile` adapter as shown in the following sample request:
``` json
{
"intakeSettings": {
"type": "localFile",
"multipart": true
}
}
```
These properties alter the local file upload workflow, requiring you to:
* Upload any number of sequentially numbered files.
* Finalize the upload to indicate that all required files uploaded successfully.
### Async intake setting {: #async-intake-settings }
To enable the new multipart upload workflow with asynchronous scoring disabled, configure the `intakeSettings` for the `localFile` adapter as shown in the following sample request:
!!! note
You can also use the `async` intake setting independently of the `multipart` setting.
``` json
{
"intakeSettings": {
"type": "localFile",
"multipart": true,
"async": false
}
}
```
A defining feature of batch predictions is that the scoring job starts on the initial file upload, and only one batch prediction job at a time can run for any given prediction instance. This functionality may cause issues when uploading large datasets over a slow connection. In these cases, the client's upload speed could create a bottleneck and block the processing of other jobs. To avoid this potential bottleneck, you can set `async` to `false`, as shown in the example above. This configuration postpones submitting the batch prediction job to the queue.
When `"async": false`, the point at which a job enters the batch prediction queue depends on the `multipart` setting:
* If `"multipart": true`, the job is submitted to the queue after the `POST` request for `finalizeMultipart` resolves.
* If `"multipart": false`, the job is submitted to the queue after the initial file intake `PUT` request resolves.
## Example multipart upload requests {: #example-multipart-upload-requests }
The batch prediction API requests required to upload scoring data in three parts for a multipart batch prediction job are:
```
PUT /api/v2/batchPredictions/:id/csvUpload/part/0/
PUT /api/v2/batchPredictions/:id/csvUpload/part/1/
PUT /api/v2/batchPredictions/:id/csvUpload/part/2/
POST /api/v2/batchPredictions/:id/csvUpload/finalizeMultipart/
```
Each uploaded part is a complete CSV file with a header.
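The following Python sketch performs this sequence with the `requests` library. It assumes a batch prediction job was already created with `"multipart": true` (as shown earlier); the job ID, file names, API token, and the `text/csv` content type are placeholders and assumptions for illustration, not a definitive implementation:

``` python
import requests

# Placeholders for this sketch; substitute your own values.
API_TOKEN = '<your API token>'
LOCATION = 'https://app.datarobot.com'
JOB_ID = '<batch prediction job id>'  # a job created with "multipart": true

headers = {
    'Authorization': 'Token {}'.format(API_TOKEN),
    'Content-Type': 'text/csv',  # each part is a complete CSV file with a header
}

# Upload the parts in sequential order (part/0, part/1, ...).
for part_number, filename in enumerate(['scoring_part_0.csv', 'scoring_part_1.csv']):
    with open(filename, 'rb') as part_file:
        resp = requests.put(
            '{}/api/v2/batchPredictions/{}/csvUpload/part/{}/'.format(LOCATION, JOB_ID, part_number),
            headers=headers,
            data=part_file,
        )
    resp.raise_for_status()

# Finalize the upload after every part has finished uploading.
resp = requests.post(
    '{}/api/v2/batchPredictions/{}/csvUpload/finalizeMultipart/'.format(LOCATION, JOB_ID),
    headers={'Authorization': 'Token {}'.format(API_TOKEN)},
)
resp.raise_for_status()
```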
## Abort a multipart upload {: #abort-a-multipart-upload }
If you start a multipart upload that you don't want to finalize, you can use a `DELETE` request to the existing `batchPredictions` abort route:
```
DELETE /api/v2/batchPredictions/:id/
```
|
batch-pred-multipart-upload
|
---
title: MLOps reporting for unstructured models
description: Report MLOps statistics from custom inference models created with an unstructured regression, binary, or multiclass target type.
section_name: MLOps
maturity: public-preview
---
# MLOps reporting for unstructured models
!!! info "Availability information"
MLOps Reporting from Unstructured Models is off by default. Contact your DataRobot representative or administrator for information on enabling this feature.
<b>Feature flag:</b> Enable MLOps Reporting from Unstructured Models
Now available for public preview, MLOps reporting for unstructured models allows you to report MLOps statistics from Python custom inference models [created in the Custom Model Workshop](custom-inf-model) with an **Unstructured (Regression)**, **Unstructured (Binary)**, or **Unstructured (Multiclass)** target type:

With this feature enabled, when you [assemble an unstructured custom inference model](unstructured-custom-models) in Python, you can read the `mlops` input argument from the `kwargs` as follows:
``` python
mlops = kwargs.get('mlops')
```
For an example of an unstructured Python custom model with MLOps reporting, see the [DataRobot User Models repository](https://github.com/datarobot/datarobot-user-models/tree/master/model_templates/python3_unstructured_with_mlops_reporting){ target=_blank }.
**************************************************
## Unstructured custom model reporting methods {: #unstructured-custom-model-reporting-methods }
If the value of `mlops` is not `None`, you can access and use the following methods:
### `report_deployment_stats` {: #report-deployment-stats }
Reports the number of predictions and execution time to DataRobot MLOps.
``` python
report_deployment_stats(num_predictions: int, execution_time: float)
```
Argument | Description
------------------|------------------
`num_predictions` | The number of predictions.
`execution_time` | The time, in milliseconds, that it took to calculate all predictions.
**************************************************
### `report_predictions_data` {: #report-predictions-data }
Reports the features, along with their predictions and association IDs, to DataRobot MLOps.
``` python
report_predictions_data(features_df: pandas.DataFrame, predictions: list, association_ids: list, class_names: list)
```
Argument | Description
--------------|------------------
`features_df` | A dataframe containing all features to track and monitor. Exclude any features you don't want to report from the dataframe.
`predictions` | A list of predictions. <ul><li>For _regression_ deployments, this is a 1-dimensional list containing prediction values (e.g., `[1, 2, 4, 3, 2]`).</li><li>For _classification_ deployments, this is a 2-dimensional list, where the inner list is the probabilities for each class type (e.g., `[[0.2, 0.8], [0.3, 0.7]]`).</li></ul>
`association_ids` | (Optional) A list of association IDs corresponding to each prediction. Association IDs are used to calculate accuracy and must be unique for each reported prediction. The number of `predictions` should equal the number of `association_ids` in the list.
`class_names` | (Classification only) A list of the names of predicted classes (e.g., `["class1", "class2", "class3"]`). For classification deployments, the class names must be in the same order as the prediction probabilities reported. If the order isn't specified, the prediction order defaults to the order of the class names on the deployment. This argument is ignored for regression deployments.
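To show how these methods fit together, here is a minimal sketch of a `custom.py` `score_unstructured` hook for a hypothetical unstructured binary classification model. The payload format, the `association_id` column, the class names, and the scoring logic are assumptions for illustration; only the `mlops` argument and the two reporting methods described above come from this feature:

``` python
import io
import time

import pandas as pd


def score_unstructured(model, data, **kwargs):
    # The mlops object is present only when MLOps reporting is enabled.
    mlops = kwargs.get('mlops')

    start = time.time()

    # Hypothetical scoring logic: parse the raw payload as CSV and
    # produce class probabilities with the loaded model.
    if isinstance(data, bytes):
        data = data.decode('utf-8')
    raw_df = pd.read_csv(io.StringIO(data))
    association_ids = raw_df['association_id'].astype(str).tolist()
    features_df = raw_df.drop(columns=['association_id'])
    predictions = model.predict_proba(features_df).tolist()

    execution_time_ms = (time.time() - start) * 1000

    if mlops is not None:
        # Report the number of predictions and the time spent scoring (in ms).
        mlops.report_deployment_stats(len(predictions), execution_time_ms)
        # Report features, class probabilities, and association IDs for monitoring.
        mlops.report_predictions_data(
            features_df=features_df,
            predictions=predictions,
            association_ids=association_ids,
            class_names=['class_0', 'class_1'],
        )

    return str(predictions)
```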
**************************************************
## Local testing {: #local-testing }
To test an unstructured custom model with MLOps reporting locally, you must use the [`drum` utility](custom-model-drum) with the following input arguments (or the corresponding environment variables):
Input argument | Environment variable | Description
---------------|-----------------------|-------------
`--target-type` | `TARGET_TYPE` | Must be `unstructured`.
`--webserver` | `EXTERNAL_WEB_SERVER_URL` | The DataRobot external web server URL.
`--api-token` | `API_TOKEN` | The DataRobot API token.
`--monitor-embedded` | `MLOPS_REPORTING_FROM_UNSTRUCTURED_MODELS` | Enables the model to use the MLOps library to report statistics.
`--deployment-id` | `DEPLOYMENT_ID` | The deployment ID for monitoring model predictions.
`--model-id` | `MODEL_ID` | The deployed model ID for monitoring predictions.
|
mlops-unstructured-models
|
---
title: Deployment
description: Use DataRobot MLOps to deploy DataRobot models, as well as custom and external models written in languages like Python and R, onto runtime environments.
---
# Deployment {: #deployment }
With MLOps, the goal is to make model deployment easy. Regardless of your role (business analyst, data scientist, data engineer, or member of an Operations team), you can easily create a deployment in MLOps. Deploy models built in DataRobot as well as models written in various programming languages, such as Python and R.
The following sections describe how to deploy models to a production environment of your choice and use MLOps to [monitor](monitor/index) and [manage](manage-mlops/index) those models.
See the associated [deployment](#feature-considerations) and [custom model deployment](custom-models/index#feature-considerations) considerations for additional information.
Topic | Describes
-------|--------------
[Deployment workflows](deploy-workflows/index) | How to deploy and monitor DataRobot AutoML models, custom inference models, and external models in various prediction environments.
[Register models](registry/index) | How to register DataRobot AutoML models, custom inference models, and external models in the Model Registry.
[Prepare DataRobot Models for deployment](dr-model-prep/index) | How to prepare DataRobot AutoML models for deployment.
[Prepare custom models for deployment](custom-models/index) | How to create, test, and prepare custom inference models for deployment.
[Prepare for external model deployment](ext-model-prep/index) | How to create and manage external models and prediction environments.
[Deploy models](deploy-methods/index) | How to deploy DataRobot models, custom inference models, and external models to DataRobot MLOps.
[MLOps agents](mlops-agent/index) | How to configure the monitoring and management agent for external models.
[Algorithmia Developer Center](https://algorithmia.com/developers){ target=_blank } | Using the Algorithmia platform to connect models to data sources and deploy them quickly to production.
## Feature considerations {: #feature-considerations }
When curating a prediction request/response dataset from an external source:
* Include the 25 most important features.
* Follow the CSV [file size](file-types) requirements.
* For classification projects, classes must have a value of 0 or 1 or be text strings.
Additionally, note that:
* *Self-Managed AI Platform only*: By default, the 25 most important features and the target are tracked for data drift.
* The [**Make Predictions**](predict) tab is not available for external deployments.
* DataRobot deployments only track predictions made against dedicated prediction servers by `deployment_id`.
* To be analyzed by model management, other prediction methods should record requests and predictions to a CSV file. Then, upload the file to DataRobot as an [external deployment](deploy-external-model).
* As of Self-Managed AI Platform version 7.0, the previously deprecated endpoints that use `project_id` and `model_id` instead of `deployment_id` return `HTTP 404 Not Found` (unless otherwise configured with a DataRobot representative).
* The first 1,000,000 predictions per deployment per hour are tracked for data drift analysis and computed for accuracy. Further predictions within an hour where this limit has been reached are not processed for either metric. However, there is no limit on predictions in general.
* If you score larger datasets (up to 5GB), there will be a longer wait time for the predictions to become available, as multiple prediction jobs must be run. If you choose to navigate away from the predictions interface, the jobs will continue to run.
* After making prediction requests, it can take 30 seconds or so for data drift and accuracy metrics to update. Note that the speed at which the metrics update depends on the model type (e.g., time series), the deployment configuration (e.g., segment attributes, number of forecast distances), and system stability.
* DataRobot recommends that you do not submit multiple prediction rows that use the same association ID—an association ID is a *unique* identifier for a prediction row. If multiple prediction rows are submitted, only the latest prediction uses the associated actual value. All prior prediction rows are, in effect, unpaired from that actual value. Additionally, *all* predictions made are included in data drift statistics, even the unpaired prediction rows.
* If you want to write back your predictions to a cloud location or database, you must use the [Prediction API](dr-predapi).
### Time series deployments {: #time-series-deployments }
* To make predictions with a time series deployment, the amount of history needed depends on the model used:
* **Traditional time series** (**ARIMA** family) models require the full history between training time and prediction time. DataRobot recommends scoring these models with the [Prediction API](dr-predapi).
* All other time series models only require enough history to fill the feature derivation window, which varies by project. For cross series, all series must be provided at prediction time.
Both categories of models are supported for real-time predictions, with a maximum payload size of 50 MB.
* ARIMA family and non-ARIMA cross-series models do not support [batch predictions](batch-pred-ts).
* All other time series models support batch predictions. For multiseries, input data must be sorted by series ID and timestamp.
* There is no data limit for time series batch predictions on supported models other than a single series cannot exceed 50 MB.
* When scoring regression time series models using [integrated enterprise databases](batch-pred-jobs), you may receive a warning that the target database is expected to contain the following column, which was not found: `DEPLOYMENT_APPROVAL_STATUS`. The column, which is optional, records whether the deployed model has been approved by an administrator. If your organization has configured a [deployment approval workflow](dep-admin), you can:
* Add the column to the target database.
* Redirect the data to another column by using the `columnNamesRemapping` parameter.
After taking either of the above actions, run the prediction job again, and the approval status appears in the prediction results. If you are not recording approval status, ignore the message, and the prediction job continues.
* To ensure DataRobot can process your time series data for [deployment predictions](batch-dep/index), configure the dataset to meet the following requirements:
* Sort prediction rows by their timestamps, with the earliest row first.
* For multiseries, sort prediction rows by series ID and then by timestamp.
* There is *no limit* on the number of series DataRobot supports. The only limit is the job timeout. For more information, see the [batch prediction limits](batch-prediction-api/index#limits).
For dataset examples, see the [requirements for the scoring dataset](batch-pred-ts#requirements-for-the-scoring-dataset).
### Multiclass deployments {: #multiclass-deployments }
* Multiclass deployments of up to 100 classes support monitoring for target, accuracy, and data drift.
* Multiclass deployments of up to 100 classes support retraining.
* Multiclass deployments created before Self-Managed AI Platform version 7.0 with feature drift enabled don't have historical data for feature drift of the target; only new data is tracked.
* DataRobot uses holdout data as a baseline for target drift. For multiclass deployments using certain datasets, rare class values could be missing in the holdout data and, as a result, in the baseline for drift. In this scenario, these rare values are treated as new values.
### Challengers {: #challengers }
* To enable [Challengers](challengers) and replay predictions against them, the deployed model must support target drift tracking *and* not be a [Feature Discovery](fd-overview) or [Unstructured custom inference](unstructured-custom-models) model.
* To replay predictions against Challengers, you must be in the [Organization](admin-overview#what-are-organizations) associated with the deployment. This restriction also applies to deployment [Owners](roles-permissions#deployment-roles).
### Prediction results cleanup {: #prediction-results-cleanup }
For each deployment, DataRobot periodically performs a cleanup job to delete the deployment's predicted and actual values from its corresponding prediction results table in Postgres. DataRobot does this to keep the size of these tables reasonable while allowing you to consistently generate accuracy metrics for all deployments and schedule replays for challenger models without the danger of hitting table size limits.
The cleanup job prevents a deployment from reaching its "hard" limit for prediction results tables; when the table is full, predicted and actual values are no longer stored, and additional accuracy metrics for the deployment cannot be produced. The cleanup job triggers when a deployment reaches its "soft" limit, serving as a buffer to prevent the deployment from reaching the "hard" limit. The cleanup prioritizes deleting the oldest prediction rows *already tied to a corresponding actual value*. Note that the *aggregated data* used to power data drift and accuracy over time are unaffected.
### Managed AI Platform {: #managed-ai-platform }
Managed AI Platform users have the following hourly limitations. Each deployment is allowed:
* [**Data drift**](data-drift) analysis: 1,000,000 predictions or, for each individual prediction instance, 100MB of total prediction requests. If either limit is reached, data drift analysis is halted for the remainder of the hour.
* Prediction row storage: the first 100MB of total prediction requests per deployment per each individual prediction instance. If the limit is reached, no prediction data is collected for the remainder of the hour.
|
index
|
---
title: Set up accuracy monitoring
description: Configure accuracy monitoring on a deployment's Accuracy Settings tab.
---
# Set up accuracy monitoring {: #set-up-accuracy-monitoring }
You can monitor a deployment for accuracy using the [**Accuracy**](deploy-accuracy) tab, which lets you analyze the performance of the model deployment over time using standard statistical measures and exportable visualizations. You can enable accuracy monitoring on the **Accuracy > Settings** tab. To configure accuracy monitoring, you must:
* [Enable target monitoring](#enable-target-monitoring) in the [Data Drift Settings](data-drift-settings)
* [Select an association ID](#select-an-association-id) in the Accuracy Settings
* [Add actuals](#add-actuals) in the Accuracy Settings
On a deployment's **Accuracy Settings** page, you can configure the **Association ID** and **Upload Actuals** settings and the accuracy monitoring **Definition** and **Notifications** settings:

| Field | Description |
|-------------------------------|---------------|
| **Association ID** | :~~: |
| [Association ID](#select-an-association-id) | Defines the name of the column that contains the association ID in the prediction dataset for your model. Association IDs are required for setting up accuracy monitoring in a deployment. The association ID functions as an identifier for your prediction dataset so you can later match up outcome data (also called "actuals") with those predictions. |
| [Require association ID in prediction requests](#select-an-association-id) | Requires your prediction dataset to have a column name that matches the name you entered in the Association ID field. When enabled, you will get an error if the column is missing. |
| [Enable automatic actuals feedback for time series models](#association-ids-for-time-series-deployments) | For time series deployments that have indicated an association ID, this setting enables the automatic submission of actuals so that you do not need to submit them manually via the UI or API. Once enabled, actuals can be extracted from the data used to generate predictions. As each prediction request is sent, DataRobot can extract an actual value for a given date. This is because when you send prediction rows to forecast, historical data is included. This historical data serves as the actual values for the previous prediction request. |
| **Upload Actuals** | :~~: |
| [Drop file(s) here or choose file](#add-actuals) | Uploads a file with actuals to monitor accuracy by matching the model's predictions with actual values. Actuals are required to enable the [**Accuracy**](deploy-accuracy) tab. |
| [Assigned features](#assigned-features) | Configures the **Assigned features** settings after you upload actuals. |
| **Definition** | :~~: |
| [Set definition](#define-accuracy-monitoring-notifications) | Configures the metric, measurement, and threshold definitions for accuracy monitoring. |
| **Notifications** | :~~: |
| [Send notification](#schedule-accuracy-monitoring-notifications) | Configures the accuracy monitoring notification schedule. |
## Enable target monitoring {: #enable-target-monitoring}
In order to enable accuracy monitoring, you must also [enable target monitoring](data-drift-settings) in the **Data Drift** section of the **Data Drift Settings** tab.

If target monitoring is turned off, a message displays on the **Accuracy** tab to remind you to enable target monitoring.
## Select an association ID {: #select-an-association-id }
To activate the [**Accuracy** tab](deploy-accuracy) for a deployment, first designate an association ID in the prediction dataset. The association ID is a [foreign key](https://www.tutorialspoint.com/Foreign-Key-in-RDBMS){ target=_blank }, linking predictions with future results (or [actuals](glossary/index#actuals)). It corresponds to an event for which you want to track the outcome. For example, you may want to track a series of loans to see whether any of them default.
!!! important
You must set an association ID _before_ making predictions to include those predictions in accuracy tracking.
On the **Accuracy > Settings** tab of a deployment, the **Association ID** section has a field for the column name containing the association IDs. The column name you define in the **Association ID** field must match the name of the column containing the association IDs in the prediction dataset for your model. Each cell for this column in your prediction dataset should contain a unique ID that pairs with a corresponding unique ID that occurs in the actuals payload.

In addition, you can enable **Require association ID in prediction requests** to throw an error if the column is missing from your prediction dataset when you make a prediction request.
You can set the column name containing the association IDs on a deployment at any time, whether predictions have been made against that deployment or not. Once set, you can only update the association ID if you have not yet made predictions that include that ID. Once predictions have been made using that ID, you cannot change it.
Association IDs (the contents in each row for the designated column name) must be shorter than 128 characters; longer IDs are truncated to that size. If truncation occurs, uploaded actuals must use the truncated association IDs to properly generate accuracy statistics.
??? faq "How does an association ID work?"
For an example of an association ID, look at this sample dataset of transactions:

The third column, `transaction_num`, is the column containing the association IDs. A row's unique ID (`transaction_num` in this example) groups together the other features in that row (`transaction_amnt` and `annual_inc` in this example), creating an "association" between the related feature values. Defining `transaction_num` as the column containing association IDs allows DataRobot to use these unique IDs to associate each row of prediction data and predicted outcome with the actual outcome later. Therefore, `transaction_num` is what you would enter in the **Association ID** field when setting up accuracy.
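For illustration only (column names taken from the example above, values hypothetical), the prediction dataset could be structured as follows, with `transaction_num` serving as the association ID column:

``` python
import pandas as pd

# Hypothetical prediction rows: transaction_num uniquely identifies each row
# so that actual outcomes can later be matched back to these predictions.
prediction_data = pd.DataFrame(
    {
        "transaction_amnt": [153.22, 89.10, 412.05],
        "annual_inc": [72000, 51000, 98000],
        "transaction_num": ["txn-0001", "txn-0002", "txn-0003"],
    }
)
```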
### Association IDs for time series deployments {: #association-ids-for-time-series-deployments }
For time series deployments, prediction requests already contain the data needed to uniquely identify individual predictions. Therefore, it is important to consider which feature to use as an association ID. Depending on the deployment type, consider the following guidelines:
* **Single-series deployments**: DataRobot recommends using the `Forecast Date` column as the association ID because it is the date you are making predictions for. For example, if today is June 15th, 2022, and you are forecasting daily total sales for a store, you may wish to know what the sales will be on July 15th, 2022. You will have a single actual total sales figure for this date, so you can use “2022-07-15” as the association ID (the forecast date).
* **Multiseries deployments**: DataRobot recommends using a custom column containing `Forecast Date + Series ID` as the association ID. If a single model can predict daily total sales for a number of stores, then you can use, for example, the association ID “2022-07-15 1234” for sales on July 15th, 2022 for store #1234.
* **All time series deployments**: You may want to forecast the same date multiple times as the date approaches. For example, you might forecast daily sales 30 days in advance, then again 14 days in advance, and again 7 days in advance. These forecasts all have the same forecast date, and therefore the same association ID.

!!! important
Be aware that models may produce different forecasts when predicting closer to the forecast date. Predictions for multiple forecast distances are each tracked individually so that accuracy can be properly calculated for each forecast distance.
After you designate an association ID, you can toggle **Enable automatic actuals feedback for time series models** to on. This setting automatically submits actuals so that you do not need to submit them manually via the UI or API. Once enabled, actuals can be extracted from the data used to generate predictions. As each prediction request is sent, DataRobot can extract an actual value for a given date because the prediction rows you send to forecast include historical data. This historical data serves as the actual values for the previous prediction request.
## Add actuals {: #add-actuals }
You can directly upload datasets containing actuals to a deployment from the **Accuracy > Settings** tab (described here) or through the [API](#upload-actuals-with-the-api). The deployment's prediction data must correspond to the actuals data you upload. Review the [row limits](#actuals-upload-limit) for uploading actuals before proceeding.
1. To use actuals with your deployment, in the **Upload Actuals** section, click **Choose file**. Either upload a file directly or select a file from the [**AI Catalog**](catalog). If you upload a local file, it is added to the **AI Catalog** after successful upload. Actuals must be snapshotted in the AI Catalog to use them with a deployment.
2. Once uploaded, complete the fields that populate in the **Actuals** section. Under <span id="assigned-features">**Assigned features**</span>, each field has a dropdown menu that allows you to select any of the columns from your dataset:

| Field | Description |
|-------------------------|-------------|
| Actual Response | Defines the column in your dataset that contains the actual values. |
| Association ID | Defines the column that contains the [association IDs](#select-an-association-id). |
| Timestamp (optional) | Defines the column that contains the date/time when the actual values were obtained, formatted according to [RFC 3339](https://tools.ietf.org/html/rfc3339){ target=_blank } (for example, 2018-04-12T23:20:50.52Z). |
??? note "Column name matching"
The column names for the association ID in the prediction and the actuals datasets do not need to match. The only requirement is that each dataset contains a column with an identifier that matches the other dataset. For example, if the column `store_id` contains the association ID in the prediction dataset that you will use to identify a row and match it to the actual result, enter `store_id` in the **Association ID** section. In the **Upload Actuals** section under **Assigned features**, in the **Association ID** field, enter the name of the column in the actuals dataset that contains the identifiers associated with the identifiers in the prediction dataset.

3. After you configure the **Assigned features** fields, click **Save**.
When you complete this configuration process and [make predictions](../../predictions/index) with a dataset containing the defined **Association ID**, the [**Accuracy**](deploy-accuracy) page is enabled for your deployment.
## Upload actuals with the API {: #upload-actuals-with-the-api }
This workflow outlines how to enable the **Accuracy** tab for deployments using the DataRobot API.
1. From the **Accuracy > Settings** tab, locate the **Association ID** section.
2. In the **Association ID** field, enter the column name containing the association IDs in your prediction dataset.
3. Enable **Require association ID in prediction requests**. This requires your prediction dataset to have a column name that matches the name you entered in the **Association ID** field. You will get an error if the column is missing.
!!! note
You can set an association ID and *not* toggle on this setting if you are sending prediction requests that do not include the association ID and you do not want them to error; however, until it is enabled, you cannot monitor accuracy for your deployment.
4. [Make predictions](../../predictions/index) using a dataset that includes the association ID.
5. Submit the actual values via the DataRobot API (for details, refer to the API documentation by signing in to DataRobot, clicking the question mark on the upper right, and selecting **API Documentation**; in the API documentation, select **Deployments > Submit Actuals - JSON**). You should review the [row limits](#actuals-upload-limit) for uploading actuals before proceeding.
!!! note
The actuals payload must contain the `associationId` and `actualValue`, with the column names for those values in the dataset defined during the upload process. If you submit multiple actuals with the same association ID value, either in the same request or a subsequent request, DataRobot updates the actuals value; however, this update doesn't recalculate the metrics previously calculated using that initial actuals value. To recalculate metrics, you can [clear the deployment statistics](actions-menu#clear-deployment-statistics) and reupload the actuals (or create a new deployment).
Use the following snippet in the API to submit the actual values:
``` python
import requests

API_TOKEN = ''
USERNAME = 'johndoe@datarobot.com'
DEPLOYMENT_ID = '5cb314xxxxxxxxxxxa755'
LOCATION = 'https://app.datarobot.com'


def submit_actuals(data, deployment_id):
    headers = {'Content-Type': 'application/json', 'Authorization': 'Token {}'.format(API_TOKEN)}
    url = '{location}/api/v2/deployments/{deployment_id}/actuals/fromJSON/'.format(
        deployment_id=deployment_id, location=LOCATION
    )
    resp = requests.post(url, json=data, headers=headers)
    if resp.status_code >= 400:
        raise RuntimeError(resp.content)
    return resp.content


def main():
    deployment_id = DEPLOYMENT_ID
    payload = {
        'data': [
            {
                'actualValue': 1,
                'associationId': '5d8138fb9600000000000000',  # str
            },
            {
                'actualValue': 0,
                'associationId': '5d8138fb9600000000000001',
            },
        ]
    }
    submit_actuals(payload, deployment_id)
    print('Done')


if __name__ == "__main__":
    main()
```
After submitting at least 100 actuals for a non-time series deployment (there is no minimum for time series deployments) and making predictions with corresponding association IDs, the [**Accuracy**](deploy-accuracy) tab becomes available for your deployment.
??? note "Actuals upload limit"
The <span id="actuals-upload-limit">number of actuals you can upload to a deployment is limited</span> _per request_ and _per hour_. These limits vary depending on the endpoint used:
Endpoint | Upload limit
--------------|-------------
`fromJSON` | <ul><li>10,000 rows per request</li><li>10,000,000 rows per hour</li></ul>
`fromDataset` | <ul><li>5,000,000 rows per request</li><li>10,000,000 rows per hour</li></ul>
## Define accuracy monitoring notifications {: #define-accuracy-monitoring-notifications }
For accuracy, the notification conditions relate to a [performance optimization metric](opt-metric) for the underlying model in the deployment. Select from the same set of metrics that are available on the Leaderboard. You can visualize accuracy using the [Accuracy over Time graph](deploy-accuracy#accuracy-over-time-graph) and the [Predicted & Actual graph](deploy-accuracy#predicted-actual-graph). Accuracy monitoring is defined by a single accuracy rule. Every 30 seconds, the rule evaluates the deployment's accuracy. Notifications trigger when this rule is violated.
Before configuring accuracy notifications and monitoring for a deployment, set an [association ID](accuracy-settings#association-id). If not set, DataRobot displays the following message when you try to modify accuracy notification settings:

!!! note
Only deployment _Owners_ can modify accuracy monitoring settings. They can set no more than one accuracy rule per deployment. _Consumers_ cannot modify monitoring or notification settings. _Users_ can [configure the conditions under which notifications are sent to them](deploy-notifications) and see explained status information by hovering over the accuracy status icon:

To set up accuracy monitoring:
1. On the **Accuracy Settings** page, in the **Definition** section, configure the settings for monitoring accuracy:

| | Element | Description |
|---|---------|-------------|
|  | Metric | Defines the metric used to evaluate the accuracy of your deployment. The metrics available from the dropdown menu are the same as those [supported by the **Accuracy** tab.](deploy-accuracy#available-accuracy-metrics)|
|  | Measurement | Defines the unit of measurement for the accuracy metric and its thresholds. You can select **value** or **percent** from the dropdown. The **value** option measures the metric and thresholds by specific values, and the **percent** option measures by percent changed. The **percent** option is unavailable for model deployments that don't have training data. |
|  | "At Risk" / "Failing" thresholds | Sets the values or percentages that, when exceeded, trigger notifications. Two thresholds are supported: when the deployment's accuracy is "At Risk" and when it is "Failing." DataRobot provides default values for the thresholds of the first accuracy metric provided (LogLoss for classification and RMSE for regression deployments) based on the deployment's training data. Deployments without training data populate default threshold values based on their prediction data instead. If you change metrics, default values are not provided. |
!!! note
Changes to thresholds affect the periods of time in which predictions are made across the entire history of a deployment. These updated thresholds are reflected in the performance monitoring visualizations on the [Accuracy](deploy-accuracy) tab.
2. After updating the accuracy monitoring settings, click **Save**.
### Examples of accuracy monitoring settings {: #examples-of-accuracy-monitoring-settings }
Each combination of metric and measurement determines the expression of the rule. For example, if you use the LogLoss metric measured by value, the rule triggers notifications when accuracy "is greater than" the values of 5 or 10:

However, if you change the metric to AUC and the measurement to percent, the rule triggers notifications when accuracy "decreases by" the values set for the threshold:

## Schedule accuracy monitoring notifications {: #schedule-accuracy-monitoring-notifications }
To schedule accuracy monitoring email notifications:
1. On the **Accuracy Settings** page, in the **Notifications** section, select the **Send notifications** checkbox.
2. Configure the settings for accuracy notifications.
The following table lists the scheduling options. All times are displayed in your configured time zone:
| Frequency | Description |
|-----------------|-------------|
| Every day | Each day at the configured hour* |
| Every week | Configurable day and hour |
| Every month | Configurable date and hour |
| Every quarter | Configurable number of days (`1`-`31`) past the first day of January, April, July, October, at the configured hour |
_* The cadence setting applies across all days selected. In other words, you cannot set checks to occur every 12 hours on Saturday and every 2 hours on Monday._
3. After updating the scheduling settings, click **Save**.
At the configured time, DataRobot sends emails to subscribers.
|
accuracy-settings
|
---
title: Configure retraining
description: To maintain model performance after deployment without extensive manual work, enable Automated Retraining by configuring the general retraining settings and then defining retraining policies.
---
# Configure retraining {: #configure-retraining }
To maintain model performance after deployment without extensive manual work, DataRobot provides an automatic retraining capability for deployments. Upon providing a retraining dataset registered in the [**AI Catalog**](catalog), you can [define up to five retraining policies](set-up-auto-retraining) on each deployment. Before you define retraining policies, you must configure a deployment's general retraining settings on the **Retraining > Settings** tab.
!!! note
Editing retraining settings requires [`Owner`](roles-permissions) permissions for the deployment. Those with `User` permissions can view the retraining settings for the deployment.
On a deployment's **Retraining Settings** page, you can configure the following settings:

| Element | Description |
|---------|-------------|
| [Retraining user](#select-a-retraining-user) | Selects a retraining user who has Owner access for the deployment. For resource monitoring, retraining policies must be run as a user account. |
| [Prediction environment](#choose-a-prediction-environment) | Selects the default prediction environment for scoring challenger models. |
| [Retraining data](#provide-retraining-data) | Defines a retraining dataset for all retraining profiles. Drag or browse for a local file or select a dataset from the AI Catalog. |
After you click **Save** and define these settings, you can [define a retraining policy](set-up-auto-retraining#set-up-retraining-policies).
## Select a retraining user {: #select-a-retraining-user }
When executed, scheduled retraining policies use the permissions and resources of an identified user (manually triggered policies use the resources of the user who triggers them). The user needs the following:
* For the retraining data, permission to use data and create snapshots.
* Owner permissions for the deployment.
[Modeling workers](worker-queue) are required to train the models requested by the retraining policy. Workers are drawn from the retraining user's pool, and each retraining policy requests 50% of the retraining user's total number of workers. For example, if the user has a maximum of four modeling workers and retraining policy A is triggered, it runs with two workers. If retraining policy B is triggered, it also runs with two workers. If policies A and B are running and policy C is triggered, it shares workers with the other two policies running.
!!! note
Interactive user modeling requests do not take priority over retraining runs. If your workers are applied to retraining, and you initiate a new modeling run (manual or Autopilot), it shares workers with the retraining runs. For this reason, DataRobot recommends creating a user with a capped number of workers and designating this user for retraining jobs.
## Choose a prediction environment {: #choose-a-prediction-environment }
[Challenger analysis](challengers) requires replaying predictions that were initially made with the champion model against the challenger models. DataRobot uses a defined schedule and prediction environment for replaying predictions. When a new challenger is added as a result of retraining, it uses the assigned prediction environment to generate predictions from the replayed requests. You can later change the prediction environment used by any given challenger from the **Challengers** tab.
While they are acting as challengers, models can only be deployed to DataRobot prediction environments. However, the champion model can use a different prediction environment from the challengers—either a DataRobot environment (for example, one marked for "Production" usage to avoid resource contention) or a remote environment (for example, AWS, OpenShift, or GCP). If a model is promoted from challenger to champion, it will likely use the prediction environment of the former champion.
## Provide retraining data {: #provide-retraining-data }
All retraining policies on a deployment refer to the same **AI Catalog** dataset. When a retraining policy triggers, DataRobot uses the latest version of the dataset (for uploaded **AI Catalog** items) or creates and uses a new snapshot from the underlying data source (for catalog items using data connections or URLs). For example, if the catalog item uses a Spark SQL query, when the retraining policy triggers, it executes that query and uses the resulting rows as input to the modeling settings (including partitioning). For **AI Catalog** items with underlying data connections, if the catalog item already has the maximum number of snapshots (100), the retraining policy will delete the oldest snapshot before taking a new one.
|
retraining-settings
|
---
title: Enable data export
description: Enable prediction row storage for a deployment, allowing you to export the stored prediction and training data to compute and monitor custom business or performance metrics.
---
# Enable data export {: #enable-data-export }
You can enable prediction row storage to activate the [Data Export tab](data-export), where you can export a deployment's stored training data, prediction data, and actuals to compute and monitor custom business or performance metrics on the [Custom Metrics tab](custom-metrics) or outside DataRobot.

| Field | Description |
|-------------------------------|----------------------------|
| Enable prediction row storage | Enables prediction data storage, a setting required to store and export a deployment's prediction data for use in custom metrics. |
|
data-export-settings
|
---
title: Review predictions settings
description: The Predictions Settings tab provides details about your deployment's inference (also known as scoring) data.
---
# Review predictions settings {: #review-predictions-settings }
On a deployment's **Predictions > Settings** tab, you can view details about your deployment's inference (also known as scoring) data—the data containing prediction requests and results from the model.
On the **Predictions Settings** page, you can access the following information:

| Field | Description |
|--------------------------|----------------------------|
| Prediction environment | Displays the environment where predictions are generated. [Prediction environments](pred-env) allow you to establish access controls and approval workflows. |
| Prediction timestamp | Displays the method used for time-stamping prediction rows. Use the time of the prediction request or use a date/time feature (e.g., forecast date) provided with prediction data to determine the timestamp. Forecast date time-stamping is set automatically for time series deployments. It allows for a common time axis to be used between training data and the basis of data drift and accuracy statistics. This setting cannot be changed after the deployment is created and predictions are made.|
## Set feature discovery project settings {: #set-feature-discovery-project-settings }
Feature discovery projects use multiple datasets to generate new features, eliminating the need to perform manual feature engineering to consolidate those datasets into one. For deployed [feature discovery projects](fd-overview), you can manage your **Feature discovery** configuration on the **Predictions Settings** page.
To manage datasets for a feature discovery project:
1. On the **Predictions Settings** page, locate the **Feature discovery** section:

2. Click **Preview** to review the **Secondary Datasets Configuration Info** dialog box.
3. If you need to update your secondary datasets, click **Change** to open the **Secondary Datasets Configuration** dialog box, where you can:
* Click **create new**, define a new configuration, and then click **Create configuration**.
* Click the menu icon () to **Preview**, **Select**, or **Delete** a configuration.

!!! note
You can't delete the **Default Configuration** or the **Selected** configuration.
4. Click **Apply**.
## Set prediction intervals for time series deployments {: #set-prediction-intervals-for-time-series-deployments }
Time series users have the additional capability to add a prediction interval to the prediction response of deployed models. When enabled, prediction intervals will be added to the response of any prediction call associated with the deployment.
To enable prediction intervals, navigate to the **Predictions** > **Prediction Intervals** tab, click the **Enable prediction intervals** toggle, and select an **Interval size** (read more about prediction intervals [here](ts-predictions#prediction-preview)):

After you set an interval, copy the deployment ID from the [Overview tab](dep-overview), the deployment URL, or the snippet in the [**Prediction API**](code-py) tab to check that the deployment was added to the database. You can compare the results from your API output with [prediction preview](ts-predictions#prediction-preview) in the UI to verify results.

For more information on working with prediction intervals via the API, access the API documentation by signing in to DataRobot, clicking the question mark on the upper right, and selecting **API Documentation**. In the API documentation, select **Time Series Projects > Prediction Intervals**.
|
predictions-settings
|
---
title: Configure challengers
description: Configure deployments using the Challengers tab to store prediction request data at the row level and replay predictions on a schedule.
---
# Configure challengers {: #configure-challengers }
DataRobot can securely store prediction request data at the row level for deployments (not supported for external model deployments). This setting must be enabled on the **Challengers > Settings** tab for any deployment using the [**Challengers**](challengers) tab. In addition to enabling challenger analysis, access to stored prediction request rows enables you to thoroughly audit the predictions and use that data to troubleshoot operational issues. For instance, you can examine the data to understand an anomalous prediction result or why a dataset was malformed.
You may have enabled prediction rows storage during [deployment creation](add-deploy-info). During deployment creation, this toggle appears under the [**Challenger Analysis**](add-deploy-info#challenger-analysis) section. Once enabled, prediction requests made for the deployment are collected by DataRobot. Prediction explanations are not stored. Contact your DataRobot representative to learn more about data security, privacy, and retention measures or to discuss prediction auditing needs.
On a deployment's **Challengers Settings** page, you can configure the following settings to store prediction request data at the row level and replay predictions on a schedule:
!!! important
Prediction requests are only collected if the prediction data is in a valid data format interpretable by DataRobot, such as CSV or JSON. Failed prediction requests with a valid data format (e.g., requests with missing input features) are also collected.

| Field | Description |
|-------------------------------|----------------------------|
| **Challengers** | :~~: |
| Enable prediction row storage | Enables prediction data storage, a setting required to score predictions made by challenger models and compare their performance with the deployed model. |
| Enable challenger analysis | Enables the use of challenger models, allowing you to compare models post-deployment and replace the champion model if necessary. |
| **Replay Schedule** | :~~: |
| Automatically replay challengers | Enables a recurring, scheduled challenger replay on stored predictions for retraining. |
| Replay time | Displays the selected replay time in UTC. |
| Local replay time | Displays the selected replay time converted to local time. If the selected UTC replay time falls on a different day in your local time zone, a warning appears. |
| Frequency / Time | Configures the replay schedule, selecting from the following options: <ul><li>**Every hour**: Each hour on the selected minute past the hour.</li><li>**Every day**: Each day at the selected time.</li><li>**Every week**: Each selected day at the selected time.</li><li>**Every month**: Each month, on each selected day, at the selected time. The selected days in a month are provided as numbers (`1` to `31`) in a comma separated list.</li><li>**Every quarter**: Each month of a quarter, on each selected day, at the selected time. The selected days in each month are provided as numbers (`1` to `31`) in a comma separated list.</li><li>**Every year**: Each selected month, on each selected day, at the selected time. The selected days in each month are provided as numbers (`1` to `31`) in a comma separated list.</li></ul> |
| Use advanced scheduler | Configures the replay schedule, entering values for the following advanced options (see the sketch below for how these fields combine): <ul><li>**Minute**: A comma-separated list of numbers between `0` and `59` or `*` for all.</li><li>**Hour**: A comma-separated list of numbers between `0` and `23` or `*` for all.</li><li>**Day of month**: A comma-separated list of numbers between `1` and `31` or `*` for all.</li><li>**Month**: A comma-separated list of numbers between `1` and `12` or `*` for all.</li><li>**Day of week**: A comma-separated list of numbers between `0` and `6` or `*` for all.</li></ul>|
!!! note
The **Time** setting applies across all days selected. In other words, you cannot set checks to occur every 12 hours on Saturday and every 2 hours on Monday.
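The advanced scheduler fields combine like a standard cron expression: a replay runs whenever the current minute, hour, day of month, month, and day of week all match. The sketch below is purely illustrative (it is not an API payload), and the 0 = Sunday day-of-week convention is an assumption to confirm in the UI.
```python
# Illustrative only: how the advanced scheduler fields combine, expressed as a
# cron-style structure. The field names mirror the UI options above; they are
# not an API payload.
from datetime import datetime

replay_schedule = {
    "minute": [0],          # 0-59, or "*" for every minute
    "hour": [2, 14],        # 0-23, or "*" for every hour
    "day_of_month": ["*"],  # 1-31, or "*" for every day
    "month": ["*"],         # 1-12, or "*" for every month
    "day_of_week": [1, 4],  # 0-6 (0 = Sunday assumed here; confirm in the UI)
}

def matches(schedule: dict, ts: datetime) -> bool:
    """Return True if the timestamp falls on the configured schedule."""
    def ok(field, value):
        return schedule[field] == ["*"] or value in schedule[field]
    return (
        ok("minute", ts.minute)
        and ok("hour", ts.hour)
        and ok("day_of_month", ts.day)
        and ok("month", ts.month)
        and ok("day_of_week", ts.isoweekday() % 7)  # map Mon=1..Sun=7 to Sun=0..Sat=6
    )

# Example: replays run at 02:00 and 14:00 UTC on Mondays and Thursdays.
print(matches(replay_schedule, datetime(2024, 6, 3, 14, 0)))  # Monday 14:00 -> True
```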
|
challengers-settings
|
---
title: Deployment settings
description: Use the settings tabs for individual deployment features to add or update deployment functionality.
---
# Deployment settings
After you [create and configure a deployment](add-deploy-info), you can use the settings tabs for individual features to add or update deployment functionality:
Topic | Describes
-------|------------
[Set up service health monitoring](service-health-settings) | Enable [segmented analysis](deploy-segment) to assess service health, data drift, and accuracy statistics by filtering them into unique segment attributes and values.
[Set up data drift monitoring](data-drift-settings) | Enable [data drift monitoring](data-drift) on a deployment's Data Drift Settings tab.
[Set up accuracy monitoring](accuracy-settings) | Enable [accuracy monitoring](deploy-accuracy) on a deployment's Accuracy Settings tab.
[Set up fairness monitoring](fairness-settings) | Enable [fairness monitoring](mlops-fairness) on a deployment's Fairness Settings tab.
[Set up humility rules](humility-settings) | Enable [humility monitoring](humble) by creating rules which enable models to recognize, in real-time, when they make uncertain predictions or receive data they have not seen before.
[Configure retraining](retraining-settings) | Enable [Automated Retraining](set-up-auto-retraining) for a deployment by defining the general retraining settings and then creating retraining policies.
[Configure challengers](challengers-settings) | Enable [challenger comparison](challengers) by configuring a deployment to store prediction request data at the row level and replay predictions on a schedule.
[Review predictions settings](predictions-settings) | Review the Predictions Settings tab to view details about your deployment's inference data.
[Enable data export](data-export-settings) | Enable [data export](data-export) to compute and monitor custom business or performance metrics.
[Set prediction intervals for time series deployments](predictions-settings#set-prediction-intervals-for-time-series-deployments) | Enable [prediction intervals](ts-predictions#prediction-preview) in the prediction response for deployed time series models.
|
index
|
---
title: Set up fairness monitoring
description: Configure fairness monitoring on a deployment's Fairness Settings tab.
---
# Set up fairness monitoring
On a deployment's **Fairness > Settings** tab, you can define [Bias and Fairness](fairness-metrics) settings for your deployment to identify any biases in a binary classification model's predictive behavior. If fairness settings are defined prior to deploying a model, the fields are automatically populated. For additional information, see the section on [defining fairness tests](fairness-metrics#configure-metrics-and-mitigation).
!!! note
To configure fairness settings, you must enable target monitoring for the deployment. Target monitoring allows DataRobot to monitor how the values and distributions of the target change over time by storing prediction statistics. If target monitoring is turned off, a message displays on the **Fairness** tab to remind you to enable [target monitoring](data-drift-settings).
Configuring fairness criteria and notifications can help you identify the root cause of bias in production models. On the **Fairness** tab for individual models, DataRobot calculates per-class bias and fairness over time for each protected feature, allowing you to understand why a deployed model failed the predefined acceptable bias criteria. For information on fairness metrics and terminology, see the [Bias and Fairness reference page](bias-ref).
To measure the fairness of production models, you must configure bias and fairness testing in the **Fairness > Settings** tab of a deployed model. If bias and fairness testing was configured for the model prior to deployment, the fields are automatically populated.
On a deployment's **Fairness Settings** page, you can configure the following settings:

| Field | Description |
|--------------------------|----------------------------|
| **Segmented Analysis** | :~~: |
| [Track attributes for segmented analysis of training data and predictions](deploy-segment) | Enables DataRobot to monitor deployment predictions by segments, for example by categorical features. |
| **Fairness** | :~~: |
| [Protected features](glossary/index#protected-feature) | Selects the dataset columns (protected features) against which the fairness of model predictions is measured; these features must be categorical. |
| [Primary fairness metric](#select-a-fairness-metric) | Selects the statistical measure of parity constraints used to assess fairness. |
| Favorable target outcome | Selects the outcome value perceived as favorable for the protected class relative to the target. |
| Fairness threshold | Selects the fairness threshold to measure if a model performs within appropriate fairness bounds for each protected class. |
| **Association ID** | :~~: |
| Association ID | Defines the name of the column that contains the association ID in the prediction dataset for your model. An association ID is required to calculate two of the **Primary fairness metric** options: _True Favorable Rate & True Unfavorable Rate Parity_ and _Favorable Predictive & Unfavorable Predictive Value Parity_. The association ID functions as an identifier for your prediction dataset so you can later match up outcome data (also called "actuals") with those predictions. |
| Require association ID in prediction requests | Requires your prediction dataset to have a column name that matches the name you entered in the Association ID field. When enabled, you will get an error if the column is missing. |
| **Definition** | :~~: |
| [Set definition](#define-fairness-monitoring-notifications) | Configures the number of protected classes below the fairness threshold required to trigger monitoring notifications. |
| **Notifications** | :~~: |
| [Send notification](#schedule-fairness-monitoring-notifications) | Configures the fairness monitoring notification schedule. |
## Select a fairness metric {: #select-a-fairness-metric }
DataRobot supports the following fairness metrics in MLOps:
* [Equal Parity](bias-ref#equal-parity)
* [Proportional Parity](bias-ref#proportional-parity)
* [Prediction Balance](bias-ref#prediction-balance)
* [True Favorable](bias-ref#true-favorable-rate-parity) and [True Unfavorable](bias-ref#true-unfavorable-rate-parity) Rate Parity (_True Positive Rate Parity and True Negative Rate Parity_)
* [Favorable Predictive](bias-ref#favorable-predictive-value-parity) and [Unfavorable Predictive](bias-ref#unfavorable-predictive-value-parity) Value Parity (_Positive Predictive Value Parity and Negative Predictive Value Parity_)
If you are unsure of the appropriate fairness metric for your deployment, click [help me choose](fairness-metrics#select-a-metric).

!!! note
To calculate _True Favorable Rate & True Unfavorable Rate Parity_ and _Favorable Predictive & Unfavorable Predictive Value Parity_, the deployment must provide an [association ID](accuracy-settings#association-id).
## Define fairness monitoring notifications {: #define-fairness-monitoring-notifications }
Configure notifications to alert you when a production model is at risk of or fails to meet predefined fairness criteria. You can visualize fairness status on the [Fairness](mlops-fairness#investigate-bias) tab. Fairness monitoring uses a primary fairness metric and two thresholds—protected features considered to be "At Risk" and "Failing"—to monitor fairness. If not specified, DataRobot uses the default thresholds.
!!! note
To access the settings in the **Definition & Notifications** section, configure and save the fairness settings. Only deployment _Owners_ can modify fairness monitoring settings; however, _Users_ can [configure the conditions under which notifications are sent to them](deploy-notifications). _Consumers_ cannot modify monitoring or notification settings.
To customize the rules used to calculate the fairness status for each deployment:
1. On the **Fairness Settings** page, in the **Definition** section, click **Set definition** and configure the threshold settings for monitoring fairness:

Threshold | Description
----------|--------
At Risk | Defines the number of protected features below the bias threshold that, when exceeded, classifies the deployment as "At Risk" and triggers notifications. The threshold for **At Risk** should be lower than the threshold for **Failing**. <br> **Default value**: `1`
Failing | Defines the number of protected features below the bias threshold that, when exceeded, classifies the deployment as "Failing" and triggers notifications. The threshold for **Failing** should be higher than the threshold for **At Risk**. <br> **Default value**: `2`
!!! note
Changes to thresholds affect all time periods in which predictions were made, across the entire history of a deployment. These updated thresholds are reflected in the performance monitoring visualizations on the [Fairness](mlops-fairness) tab.
2. After updating the fairness monitoring settings, click **Save**.
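To make the definition concrete, the following sketch maps a count of protected classes below the fairness threshold to a deployment status using the default values above. This is not DataRobot's implementation; in particular, whether a status triggers when the count meets or strictly exceeds the configured value should be verified against your deployment, and the sketch assumes "meets or exceeds".
```python
# A minimal sketch, not DataRobot internals: how the "At Risk" / "Failing"
# definition maps a count of protected classes below the fairness threshold
# to a deployment fairness status. The "meets or exceeds" comparison is an
# assumption to confirm against your deployment's behavior.
def fairness_status(classes_below_threshold: int,
                    at_risk_count: int = 1,
                    failing_count: int = 2) -> str:
    if classes_below_threshold >= failing_count:
        return "Failing"
    if classes_below_threshold >= at_risk_count:
        return "At Risk"
    return "Passing"

print(fairness_status(0))  # Passing
print(fairness_status(1))  # At Risk (default "At Risk" value = 1)
print(fairness_status(2))  # Failing (default "Failing" value = 2)
```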
## Schedule fairness monitoring notifications {: #schedule-fairness-monitoring-notifications }
To schedule fairness monitoring email notifications:
1. On the **Fairness Settings** page, in the **Notifications** section, select the **Send notifications** checkbox.
2. Configure the settings for fairness notifications:
The following table lists the scheduling options. All times are displayed in your configured time zone:

| Frequency | Description |
|-----------------|-------------|
| Every hour | Each hour on the 0 minute |
| Every day | Each day at the configured hour* |
| Every week | Configurable day and hour |
| Every month | Configurable date (`1`-`31`) and hour |
| Every quarter | Configurable number of days (`1`-`31`) past the first day of January, April, July, October, at the configured hour |
_* The cadence setting applies across all days selected. In other words, you cannot set checks to occur every 12 hours on Saturday and every 2 hours on Monday._
3. After updating the scheduling settings, click **Save**.
At the configured time, DataRobot sends emails to subscribers.
|
fairness-settings
|
---
title: Set up humility rules
description: Configure humility rules which enable models to recognize, in real-time, when they make uncertain predictions or receive data they have not seen before. When they recognize conditions you specify in the rule, they perform desired actions you configure.
---
# Set up humility rules
!!! info "Availability information"
The **Humility** tab is only available for DataRobot MLOps users. Contact your DataRobot representative for more information about enabling this feature. To enable humility-over-time monitoring for a deployment (displayed on the [Summary](#view-humility-data-over-time) page), you must enable [Data Drift](data-drift-settings) monitoring.
MLOps allows you to create humility rules for deployments on the **Humility > Settings** tab. Humility rules enable models to recognize, in real-time, when they make uncertain predictions or receive data they have not seen before. Unlike data drift, model humility does not deal with broad statistical properties over time—it is instead triggered for individual predictions, allowing you to set desired behaviors with rules that depend on different triggers. Using humility rules to add triggers and corresponding actions to a prediction helps mitigate risk for models in production. Humility rules help to identify and handle data integrity issues during monitoring and to better identify the root cause of unstable predictions.
The **Humility** tab contains the following sub-tabs:
* **Summary**: [View a summary of humility data over time](humble) after configuring humility rules and making predictions with humility monitoring enabled.
* **Settings**: [Create humility rules](#create-humility-rules) to monitor for uncertainty and specify actions to manage it.
* **Prediction Warnings** (for regression projects only): [Configure prediction warnings](#prediction-warnings) to detect when deployments produce predictions with outlier values.
Specific humility rules, detailed [below](#multiseries-humility-rules), are available for multiseries projects. While these rules follow the same general workflow as humility rules for AutoML projects, they have their own settings and options.
## Create humility rules {: #create-humility-rules }
To create humility rules for a deployment:
1. In the **Deployments** inventory, select a deployment and navigate to the **Humility > Settings** tab.

2. On the **Humility Rules Settings** page, if you haven't enabled humility for a model, click the **Enable humility** toggle, then:
* To create a new, custom rule, click **+ Add Rule**.
* To use the [rules provided by DataRobot](#recommended-rules), click **+ Use Recommended Rules**.
3. Click the pencil icon () to enter a name for the rule. Then [select a **Trigger**](#choose-a-trigger-for-the-rule) and [specify an **Action**](#choose-an-action-for-the-rule) to take based on the selected trigger. The trigger detects a rule violation and the action handles the violating prediction.

When rule configuration is complete, a rule explanation displays below the rule describing what happens for the configured trigger and respective action.
4. Click **Add** to save the rule, and click **+Add new rule** to add additional rules.
5. After adding and [editing](#edit-humility-rules) the rules, click **Submit**.

!!! warning
Clicking **Submit** is the only way to permanently save new rules and rule changes. If you navigate away from the **Humility** tab without clicking **Submit**, your rules and edits to rules are not saved.
!!! note
If a rule is a duplicate of an existing rule, you cannot save it. In this case, when you click **Submit**, a warning displays:

After you save and submit the humility rules, DataRobot monitors the deployment using the new rules and any previously created rules. After a rule is created, the prediction response body returns the humility object. Refer to the [Prediction API documentation](dr-predapi#making-predictions-with-humility-monitoring) for more information.
### Choose a trigger for the rule {: #choose-a-trigger-for-the-rule }
Select a **Trigger** for the rule you want to create. There are three triggers available:

Each trigger requires specific settings. The following table and subsequent sections describe these settings:
| Trigger | Description |
|---------|-------------|
| [Uncertain Prediction](#uncertain-prediction) | Detects whether a prediction's value violates the configured thresholds. You must set lower-bound and upper-bound thresholds for prediction values. |
| [Outlying Input](#outlying-input) | Detects if the input value of a numeric feature is outside of the configured thresholds. |
| [Low Observation Region](#low-observation-region) | Detects if the input value of a categorical feature value is not included in the list of specified values. |
#### Uncertain Prediction {: #uncertain-prediction }

To configure the uncertain prediction trigger, set lower-bound and upper-bound thresholds for prediction values. You can either enter these values manually or click **Calculate** to use computed thresholds derived from the Holdout partition of the model (only available for DataRobot models). For regression models, the trigger detects any values outside of the configured thresholds. For binary classification models, the trigger detects any prediction's probability value that is <em>inside</em> the thresholds. You can view the type of model for your deployment from the **Settings > Data** tab.
#### Outlying Input {: #outlying-input }

To configure the outlying input trigger, select a numeric feature and set the lower-bound and upper-bound thresholds for its input values. Enter the values manually or click **Calculate** to use computed thresholds derived from the training data of the model (only available for DataRobot models).
#### Low Observation Region {: #low-observation-region }

To configure the low observation region trigger, select a categorical feature and indicate one or more values. Any input value that appears in prediction requests that does not match the indicated values triggers an action.
### Choose an action for the rule {: #choose-an-action-for-the-rule }
Select an **Action** for the rule you are creating. DataRobot applies the action if the trigger indicates a rule violation. There are three actions available:

| Action | Description |
|--------|-------------|
| [Override prediction](#override-prediction) | Modifies predicted values for rows violating the trigger with the value configured by the action. |
| [Throw error](#throw-error) | Rows in violation of the trigger return an HTTP 480 error along with the predictions; these errors also contribute to the data error rate on the **Service Health** tab. |
| No operation | No changes are made to the detected prediction value. |
#### Override prediction {: #override-prediction }

To configure the override action, set a value that will overwrite the returned value for predictions violating the trigger. For binary classification and multiclass models, the indicated value can be set to either of the model's class labels (e.g., "True" or "False"). For regression models, manually enter a value or use the maximum, minimum, or mean provided by DataRobot (only provided for DataRobot models).
#### Throw error {: #throw-error }
To configure the throw error action, use the default error message or specify your own custom error message. This message is returned along with the HTTP 480 error in the prediction response.

## Edit humility rules {: #edit-humility-rules }
You can edit or delete existing rules from the **Humility > Settings** tab if you have [Owner](roles-permissions#deployment-roles) permissions.
!!! note
Edits to humility rules can have significant impact on deployment predictions, as prediction values can be overwritten with new values or can return errors based on the rules configured.
### Edit a rule {: #edit-a-rule }
1. Select the pencil icon for the rule.

2. Change the [trigger](#choose-a-trigger-for-the-rule), [action](#choose-an-action-for-the-rule), and any associated values for the rule. When finished, click **Save Changes**.

3. After editing the rules, click **Submit**. If you navigate away from the **Humility** tab without clicking **Submit**, your edits will be lost.
### Delete a rule {: #delete-a-rule }
1. Select the trash can icon for the rule.

2. Click **Submit** to complete the delete operation. If you navigate away from the **Humility** tab without clicking **Submit**, your rules will not be deleted.
### Reorder a rule {: #reorder-a-rule }
To [reorder](#rule-application-order) the rules listed, drag and drop them in the desired order and click **Submit**.
#### Rule application order {: #rule-application-order }
The displayed list order of your rules determines the order in which they are applied. Although every humility rule trigger is applied, if multiple rules match the trigger of a prediction response, DataRobot applies the first rule in the list that changes the prediction value. However, if <em>any</em> triggered rule has the "Throw Error" action, that rule takes priority.
For example, consider a deployment with the following rules:
| Trigger | Action | Thresholds |
|---------|--------|------------|
| Rule 1: Uncertain Prediction | Override the prediction value to 55. | Lower: 1 <br><br> Upper: 50 |
| Rule 2: Uncertain Prediction | Override the prediction value to 66. | Lower: 45 <br><br> Upper: 50 |
If a prediction returns the value 100, both rules will trigger, as both rules detect an uncertain prediction outside of their thresholds. The first rule, Rule 1, takes priority, so the prediction value is overwritten to 55. The action to overwrite the value to 66 (based on Rule 2) is ignored.
In another example, consider a deployment with the following rules:
| Trigger | Action | Thresholds |
|---------|--------|------------|
| Rule 1: Uncertain Prediction | Override the prediction value to 55. | Lower: 1 <br><br> Upper: 55 |
| Rule 2: Uncertain Prediction | Throw an error. | Lower: 45 <br><br> Upper: 60 |
If a prediction returns the value 50, both rules will trigger. However, Rule 2 takes priority over Rule 1 because it is configured to return an error. Therefore, the value is <em>not overwritten</em>, as the action to return an error is higher priority than the numerical order of the rules.
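The ordering logic in these examples can be sketched in a few lines. This is not DataRobot's implementation, only an illustration of "the first triggered override wins, but any triggered Throw Error takes priority," using the thresholds from the first example above.
```python
# A sketch of the rule application order described above (not DataRobot's
# implementation). Rules are evaluated in list order; if any triggered rule's
# action is "throw_error", that takes priority; otherwise the first triggered
# rule that overrides the prediction wins.
def apply_humility_rules(prediction: float, rules: list[dict]):
    triggered = [r for r in rules if r["trigger"](prediction)]
    if any(r["action"] == "throw_error" for r in triggered):
        # In the Prediction API this surfaces as an HTTP 480 error.
        raise RuntimeError("Humility rule violated")
    for rule in triggered:
        if rule["action"] == "override":
            return rule["override_value"]  # first matching override applies
    return prediction                      # "No operation" or nothing triggered

rules = [
    {"trigger": lambda p: p < 1 or p > 50, "action": "override", "override_value": 55},
    {"trigger": lambda p: p < 45 or p > 50, "action": "override", "override_value": 66},
]
print(apply_humility_rules(100, rules))  # 55 -> Rule 1 wins, Rule 2 is ignored
```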
## Prediction warnings {: #prediction-warnings }
Enable prediction warnings for regression model deployments on the **Humility > Prediction Warnings** tab. Prediction warnings allow you to mitigate risk and make models more robust by identifying when predictions do not match their expected result in production. This feature detects when deployments produce predictions with outlier values, summarized in a report that returns with your predictions.
!!! note
Prediction warnings are only available for deployments using regression models. This feature does not support classification or time series models.
Prediction warnings provide the same functionality as the [Uncertain Prediction](#uncertain-prediction) trigger that is part of humility monitoring. You may want to enable both, however, because prediction warning results are integrated into the Predictions over time chart on the [**Data Drift**](data-drift) tab.
### Enable prediction warnings {: #enable-prediction-warnings }
1. To enable prediction warnings, navigate to **Humility > Prediction Warnings**.
!!! note
The **Prediction Warnings** tab only displays for regression model deployments.
2. Enter a **Lower bound** and **Upper bound**, or click **Configure** to have DataRobot calculate the prediction warning ranges.

DataRobot derives thresholds for the prediction warning ranges from the Holdout partition of your model. These are the boundaries for outlier detection—DataRobot reports any prediction result outside of these limits. You can choose to accept the Holdout-based thresholds or manually define the ranges instead.
3. After making any desired changes, click **Save ranges**.
You can [generate a prediction warning report](#generate-a-prediction-warning-report) after the humility rules are in effect and predictions have been generated for the deployment. Prediction warnings are also reported on the [Predictions Over Time chart](data-drift) in the **Data Drift** tab.
!!! note
Prediction warnings are not retroactive. For example, if you set the upper-bound threshold for outliers to 40, a prediction with a value of 50, made prior to setting up thresholds, is not retroactively detected as an outlier. Prediction warnings will only return with prediction requests made after the feature is enabled.
### Recommended rules {: #recommended-rules }
If you want DataRobot to recommend a set of rules for your deployment, click **Use Recommended Rules** when adding a new humility rule.

This option generates two automatically configured humility rules:
* Rule 1: The [Uncertain prediction](#uncertain-prediction) trigger and the No operation action.
* Rule 2: The [Outlying input](#outlying-input) trigger for the most important numeric feature (based on **Feature Impact** results) and the No operation action.
Both recommended rules have automatically calculated upper- and lower-bound thresholds.
### Generate a prediction warning report {: #generate-a-prediction-warning-report }
After saving your settings, navigate to the deployment's [**Predictions > Prediction API scripting code**](code-py) tab. To generate a report of prediction warnings, check the **Prediction Warnings** box.

Once checked, copy the snippet and [make predictions](../../predictions/index). Enabling prediction warnings modifies the snippet to report any detected outliers alongside your prediction results.
Every prediction result contains an `isOutlierPrediction` key, marked `true` when the prediction is a detected outlier and `false` when it is not.

When DataRobot detects outlier predictions, consider substituting the outlier value with:
* The minimum or maximum target value from the training dataset.
* The mean or median target value from the training dataset.
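For example, a simple post-processing step over the prediction output might substitute a training-set statistic for flagged rows. This sketch assumes a response shape with a `data` list containing `prediction` and `isOutlierPrediction` keys; confirm the exact structure against your own prediction results.
```python
# Illustrative post-processing sketch: replace detected outlier predictions
# with a fallback value from the training data (here, the training mean).
# The response structure ("data" list with "prediction" and
# "isOutlierPrediction" keys) is an assumption to verify against your output.
TRAINING_TARGET_MEAN = 37.2  # computed from your training dataset

def clean_predictions(response_json: dict, fallback: float = TRAINING_TARGET_MEAN) -> list[float]:
    cleaned = []
    for row in response_json["data"]:
        if row.get("isOutlierPrediction"):
            cleaned.append(fallback)        # substitute the outlier value
        else:
            cleaned.append(row["prediction"])
    return cleaned

sample_response = {"data": [
    {"prediction": 42.0, "isOutlierPrediction": False},
    {"prediction": 950.0, "isOutlierPrediction": True},
]}
print(clean_predictions(sample_response))  # [42.0, 37.2]
```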
## Multiseries humility rules {: #multiseries-humility-rules }
DataRobot supports multiseries blueprints that can derive features and make predictions using partial history or no history at all, that is, on series that were not seen during training and do not have enough points in the training dataset for accurate predictions. This is useful, for example, in demand forecasting: when a new product is introduced, you may want initial sales predictions. In conjunction with “cold start modeling” (modeling on a series for which there is not sufficient historical data), you can predict on new series while keeping accurate predictions for series *with* a history.
With the support in place, you can set up [a humility rule](humble) that:
* Triggers off a new series (unseen in training data).
* Takes a specified action.
* Optionally, returns a custom error message.
!!! note
If you [replace a model](deploy-replace) within a deployment using a model from a different project, the humility rule is disabled. If the replacement is a model from the same project, the rule is saved.
### Create multiseries humility rules {: #create-multiseries-humility-rules }
1. Select a deployment from your **Deployments** inventory and click **Humility**. Toggle **Enable humility** to on.
2. Click **+ Add new rule** to begin configuring the rule. Time series deployments can only have one humility rule applied; if a rule already exists, click it to make changes.
3. Select a trigger. To include new series data, select **New series** as the trigger. This rule detects if a series is present that was not available in the training data and does not have enough history in the prediction data for accurate predictions.
4. Select an action.

Subsequent options are dependent on the selected action, as described in the following table:
| Action | If a new series is encountered... | Further action |
|--------|-----------------------------------|----------------|
| No operation | DataRobot records the event but the prediction is unchanged. | N/A |
| Use model with new series support | The prediction is overridden by the prediction from a selected model with new series support. | [Select a model](#select-a-model-replacement) that supports unseen series modeling. DataRobot preloads supported models in the dropdown. |
| Use global most frequent class (binary classification only) | The prediction value is replaced with the most frequent class across all series. | N/A |
| Use target mean for all series (regression only) | The prediction value is overridden by the global target mean for all series. | N/A |
| Override prediction | The prediction value is changed to the specified preferred value. | Enter a numeric value to replace the prediction value for any new series. |
| Return error | The default or a custom error is returned with the 480 error. | Use the default or click in the box to enter a custom error message. |
When finished, click the **Add** button to create the new rule or save changes.
### Select a model replacement {: #select-a-model-replacement }
When you expand the **Model with new series support** dropdown, DataRobot provides a list of models available from the [Model Registry](registry/index), not the Leaderboard. Using models available from the registry decouples the model from the project and provides support for packages. In this way, you can use a backup model from any compatible project, as long as it uses the same target and has the same series available.
!!! note
If no packages are available, deploy a “new series support” model and add it to the Model Registry. You can identify qualifying models from the Leaderboard by the NEW SERIES OPTIMIZED badge. There is also a notification banner in the **Make Predictions** tab if you try to use a non-optimized model.
## Humility tab considerations {: #humility-tab-considerations }
Consider the following when using the **Humility** tab:
* You cannot define more than 10 humility rules for a deployment.
* Humility rules can only be defined by [owners](roles-permissions#deployment-roles) of the deployment. Users of the deployment can view the rules but cannot edit them or define new rules.
* The "Uncertain Prediction" trigger is only supported for regression and binary classification models.
* Multiclass models only support the "Override prediction" trigger.
|
humility-settings
|
---
title: Set up service health monitoring
description: Configure segmented analysis to access drill-down analysis of service health, data drift, and accuracy statistics by filtering them into unique segment attributes and values.
---
# Set up service health monitoring
On a deployment's **Service Health > Settings** tab, you can enable [segmented analysis](deploy-segment) for service health; however, to use segmented analysis for data drift and accuracy, you must also enable the following deployment settings:
* [Enable target monitoring](data-drift-settings) (required to enable data drift _and_ accuracy tracking)
* [Enable feature drift tracking](data-drift-settings) (required to enable data drift tracking)
Once you've enabled the tracking required for your deployment, configure segment analysis to access segmented analysis of [service health](deploy-segment#service-health), [data drift](deploy-segment#data-drift-and-accuracy), and [accuracy](deploy-segment#data-drift-and-accuracy) statistics by filtering them into unique segment attributes and values.
On a deployment's **Service Health > Settings** tab, you can configure the **Service Health Settings**:

| Field | Description |
|--------------------------|----------------------------|
| **Segmented Analysis** | :~~: |
| [Track attributes for segmented analysis of training data and predictions](deploy-segment) | Enables DataRobot to monitor deployment predictions by segments, for example by categorical features. |
| **Notifications** | :~~: |
| [Send notification](#schedule-service-health-monitoring-notifications) | Configures the service health monitoring notification schedule. |
After enabling segmented analysis, you must specify the segment attributes to track in training and prediction data before making predictions. Selecting a segment attribute for tracking causes the model's data to be segmented by the attribute, allowing users to closely analyze the segment values that comprise the attributes selected for tracking. Attributes used for segmented analysis must be present in the training dataset for a deployed model, but they don't need to be features of the model. The list of segment attributes available for tracking is limited to categorical features, except the selected series ID used by multiseries deployments. To track an attribute, add it to the **Track attributes for segmented analysis of training data and predictions** field. The "Consumer" attribute (representing users making prediction requests) is always listed by default.

For time series deployments with segmented analysis enabled, DataRobot automatically adds up to two segmented attributes: `Forecast Distance` and `series id` (the ID is only provided for multiseries models). Forecast distance is automatically available as a segment attribute without being explicitly present in the training dataset; it is inferred based on the forecast point and the date being predicted on. These attributes allow you to view accuracy and drift for a specific forecast distance, series, or other defined attribute.

When you have finalized the attributes to track, click **Save**.
[Make predictions](../../predictions/index) and navigate to the tab you want to analyze for your deployment by segment: [**Service Health**](deploy-segment#service-health), [**Data Drift**](deploy-segment#data-drift-and-accuracy), or [**Accuracy**](deploy-segment#data-drift-and-accuracy).
!!! important
Segmented analysis is only available for predictions made after the segmented analysis is enabled.
## Schedule service health monitoring notifications {: #schedule-service-health-monitoring-notifications }
Service health tracks metrics about a deployment’s ability to respond to prediction requests quickly and reliably. You can visualize service health on the [Service Health](service-health) tab.
!!! note
Only deployment _Owners_ can modify service health monitoring settings; however, _Users_ can [configure the conditions under which notifications are sent to them](deploy-notifications). _Consumers_ cannot modify monitoring or notification settings.
To schedule service health monitoring email notifications:
1. On the **Service Health Settings** page, in the **Notifications** section, select the **Send notifications** checkbox.
2. Configure the settings for service health notifications:

The following table lists the scheduling options. All times are displayed in your configured time zone:
| Frequency | Description |
|-----------------|-------------|
| Every hour | Each hour on the 0 minute |
| Every day | Each day at the configured hour* |
| Every week | Configurable day and hour |
| Every month | Configurable date (`1`-`31`) and hour |
| Every quarter | Configurable number of days (`1`-`31`) past the first day of January, April, July, October, at the configured hour |
_* The cadence setting applies across all days selected. In other words, you cannot set checks to occur every 12 hours on Saturday and every 2 hours on Monday._
3. After updating the scheduling settings, click **Save**.
At the configured time, DataRobot sends emails to subscribers.
|
service-health-settings
|
---
title: Set up data drift monitoring
description: Configure data drift monitoring on a deployment's Data Drift Settings tab.
---
# Set up data drift monitoring {: #set-up-data-drift-monitoring }
When deploying a model, there is a chance that the dataset used for training and validation differs from the prediction data. You can enable data drift monitoring on the **Data Drift > Settings** tab. DataRobot monitors both target and feature drift information and displays results on the [**Data Drift**](data-drift) tab.
{% include 'includes/how-dr-tracks-drift-include.md' %}
!!! info "Availability information"
Data drift tracking is only available for deployments using deployment-aware prediction API routes (i.e., `https://example.datarobot.com/predApi/v1.0/deployments/<deploymentId>/predictions`).
On a deployment's **Data Drift Settings** page, you can configure the following settings:

| Field | Description |
|-------------------------|----------------------------|
| **Data Drift** | :~~: |
| Enable feature drift tracking | Configures DataRobot to track feature drift in a deployment. Training data is required for feature drift tracking. |
| Enable target monitoring | Configures DataRobot to track target drift in a deployment. Target monitoring is required for [accuracy monitoring](accuracy-settings). |
| **Training data** | :~~: |
| Training data | Displays the dataset used as a training baseline while building a model. |
| **Inference data** | :~~: |
| DataRobot is storing your predictions | Confirms DataRobot is recording and storing the results of any predictions made by this deployment. DataRobot stores a deployment's inference data when a deployment is created. It cannot be uploaded separately. |
| **Inference data (external model)** | :~~: |
| DataRobot is recording the results of any predictions made against this deployment | Confirms DataRobot is recording and storing the results of any predictions made by the external model. |
| Drop file(s) here or choose file | Uploads a file with prediction history data to monitor data drift. |
| **Definition** | :~~: |
| [Set definition](#define-data-drift-monitoring-notifications) | Configures the drift and importance metric settings and threshold definitions for data drift monitoring. |
| **Notifications** | :~~: |
| [Send notification](#schedule-data-drift-monitoring-notifications) | Configures the data drift monitoring notification schedule. |
!!! note
DataRobot monitors both target and feature drift information by default and displays results in the [Data Drift dashboard](data-drift). Use the **Enable target monitoring** and **Enable feature drift tracking** toggles to turn off tracking if, for example, you have sensitive data that should not be monitored in the deployment. The **Enable target monitoring** setting is also required to enable [accuracy monitoring](accuracy-settings).
You can customize how data drift is monitored. See the data drift page for more information on [customizing data drift status](data-drift#customize-data-drift-status) for deployments.
## Define data drift monitoring notifications {: #define-data-drift-monitoring-notifications }
Drift assesses how the distribution of data changes across all features for a specified range. The thresholds you set determine the amount of drift you will allow before a notification is triggered.
!!! note
Only deployment _Owners_ can modify data drift monitoring settings; however, _Users_ can [configure the conditions under which notifications are sent to them](deploy-notifications). _Consumers_ cannot modify monitoring or notification settings.
Use the **Definition** section of the **Data Drift > Settings** tab to set thresholds for drift and importance:
* Drift is a measure of how new prediction data differs from the original data used to train the model.
* Importance allows you to separate the features you care most about from those that are less important.
For both drift and importance, you can visualize the thresholds and how they separate the features on the [Data Drift tab](data-drift). By default, the data drift status for deployments is marked as "Failing" () when at least one high-importance feature exceeds the set drift metric threshold; it is marked as "At Risk" () when no high-importance features exceed the threshold but at least one low-importance feature does.
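A minimal sketch of these default rules (not DataRobot internals) makes the distinction explicit:
```python
# A sketch of the default drift status rules described above (not DataRobot
# internals): "Failing" when at least one high-importance feature exceeds the
# drift threshold, "At Risk" when only low-importance features exceed it.
def default_drift_status(n_high_importance_drifting: int, n_low_importance_drifting: int) -> str:
    if n_high_importance_drifting >= 1:
        return "Failing"
    if n_low_importance_drifting >= 1:
        return "At Risk"
    return "Passing"

print(default_drift_status(0, 0))  # Passing
print(default_drift_status(0, 2))  # At Risk
print(default_drift_status(1, 0))  # Failing
```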
Deployment _Owners_ can customize the rules used to calculate the drift status for each deployment. As a deployment _Owner_, you can:
* Define or override the list of high or low-importance features to monitor features that are important to you or put less emphasis on less important features.
* Exclude features expected to drift from drift status calculation and alerting so you do not get false alarms.
* Customize what "At Risk" and "Failing" drift statuses mean to personalize and tailor the drift status of each deployment to your needs.
To set up monitoring of drift status for a deployment:
1. On the **Data Drift Settings** page, in the **Definition** section, configure the settings for monitoring data drift:

| | Element | Description |
|--|---------|-------------|
|  | Range | Adjusts the time range of the **Reference period**, which compares training data to prediction data. Select a time range from the dropdown menu. |
|  | Drift metric and threshold | Configures the thresholds of the drift metric. DataRobot only supports the Population Stability Index (PSI) metric. When drift thresholds are changed, the [Feature Drift vs. Feature Importance chart](data-drift#feature-drift-vs-feature-importance-chart) updates to reflect the changes. For more information, see the note on [Drift metric support](#drift-metric-support) below. |
|  | Importance metric and threshold | Configures the thresholds of the Importance metric. The Importance metric measures the most impactful features in the training data. DataRobot only supports the Permutation Importance metric. When drift thresholds are changed, the [Feature Drift vs. Feature Importance chart](data-drift#feature-drift-vs-feature-importance-chart) updates to reflect the changes. See an [example](#example-of-configuring-the-importance-and-drift-thresholds).|
|  | `X` excluded features | Excludes features (including the target) from drift status calculations. Click **`X` excluded features** to open a dialog box where you can enter the names of features to set as **Drift exclusions**. Excluded features do not affect drift status for the deployment but still display on the Feature Drift vs. Feature Importance chart. See an [example](#example-of-an-excluded-feature). |
|  | `X` starred features | Sets features to be treated as high importance even if they were initially assigned low importance. Click **`X` starred features** to open a dialog box where you can enter the names of features to set as **High-importance stars**. Once added, these features are assigned high importance. They ignore the importance thresholds, but still display on the Feature Drift vs. Feature Importance chart. See an [example](#example-of-starring-a-feature-to-assign-high-importance). |
|  | "At Risk" / "Failing" thresholds | Configures the values that trigger drift statuses for "At Risk" () and "Failing" (). See an [example](#example-of-setting-a-drift-status-rule).|
!!! note
Changes to thresholds affect all time periods in which predictions were made, across the entire history of a deployment. These updated thresholds are reflected in the performance monitoring visualizations on the [Data Drift](data-drift) tab.
2. After updating the data drift monitoring settings, click **Save**.
{% include 'includes/drift-metrics-support.md' %}
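For reference, a bare-bones version of the Population Stability Index (PSI) calculation named above looks like the following. DataRobot's binning, smoothing, and handling of new or missing values may differ; this sketch only illustrates the shape of the metric.
```python
# A minimal PSI (Population Stability Index) sketch. Bins are derived from the
# training data; a small epsilon avoids division by zero. Values in the scoring
# data that fall outside the training range are ignored by this simple version.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10, eps: float = 1e-6) -> float:
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected) + eps
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual) + eps
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training baseline
scoring_feature = rng.normal(loc=0.4, scale=1.0, size=5_000)   # shifted prediction data
print(round(psi(training_feature, scoring_feature), 3))        # larger value = more drift
```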
### Example of an excluded feature {: #example-of-an-excluded-feature }
In the example below, the excluded feature, which appears as a gray circle, would normally change the drift status to "Failing" (). Because it is excluded, the status remains as Passing.

### Example of configuring the importance and drift thresholds {: #example-of-configuring-the-importance-and-drift-thresholds }
In the example below, the chart has adjusted the importance and drift thresholds (indicated by the arrows), resulting in more features "At Risk" and "Failing" than the chart above.

### Example of starring a feature to assign high importance {: #example-of-starring-a-feature-to-assign-high-importance }
In the example below, the starred feature, which appears as a white circle, would normally cause drift status to be "At Risk" due to its initially low importance. However, since it is assigned high importance, the feature will change the drift status to "Failing" ().

### Example of setting a drift status rule {: #example-of-setting-a-drift-status-rule }
The following example configures the rule for a deployment to mark its drift status as "At Risk" if one of the following is true:
* The number of low-importance features above the drift threshold is greater than 1.
* The number of high-importance features above the drift threshold is greater than 3.

## Schedule data drift monitoring notifications {: #schedule-data-drift-monitoring-notifications }
To schedule data drift monitoring email notifications:
1. On the **Data Drift Settings** page, in the **Notifications** section, select the **Send notifications** checkbox.
2. Configure the settings for data drift notifications:
The following table lists the scheduling options. All times are displayed in your configured time zone:
| Frequency | Description |
|-----------------|-------------|
| Every day | Each day at the configured hour* |
| Every week | Configurable day and hour |
| Every month | Configurable date and hour |
| Every quarter | Configurable number of days (`1`-`31`) past the first day of January, April, July, October, at the configured hour |
_* The cadence setting applies across all days selected. In other words, you cannot set checks to occur every 12 hours on Saturday and every 2 hours on Monday._
3. After updating the scheduling settings, click **Save**.
At the configured time, DataRobot sends emails to subscribers.
|
data-drift-settings
|
---
title: Deployment reports
description: Learn about the deployment reports, which summarize details of a deployment, such as its owner, how the model was built, the model age, and the humility monitoring status.
---
# Deployment reports {: #deployment-reports }
Ongoing monitoring reports are a critical step in the deployment and governance process. DataRobot allows you to download deployment reports from MLOps, compiling deployment status, charts, and overall quality into a sharable report. Deployment reports are compatible with all deployment types.
## Generate a deployment report {: #generate-a-deployment-report }
To generate a report for a deployment, select it from the inventory and navigate to the **Overview** tab.
1. Select **Generate Report now**.

2. Configure the report settings. Select the model used, the date range, and the date resolution (the granularity of comparison for the deployment's statistics within the specified time range). Once configured, click **Generate report**.

3. Allow some time for the report generation process to complete. Once finished, select the eye icon to view the report in your browser, or the download icon to save it locally.

## Schedule deployment reports {: #schedule-deployment-reports }
In addition to manual creation, DataRobot allows you to manage a schedule to generate deployment reports automatically:
1. In a deployment, click the **Notifications** tab.
2. Select **Create new report schedule**.

3. Complete the fields to fully configure the schedule:

| | Field | Description |
|------------------------|-------------|-------------|
|  | Frequency | The cadence at which deployment reports are generated for the deployment. |
|  | Time | The time at which the deployment report is generated. |
|  | Range | The period of time captured by the report. |
|  | Resolution | The granularity of comparison for the deployment's statistics within the specified time range. |
|  | Additional recipients | Optional. The email addresses of those who should receive the deployment report, in addition to those who have access to the deployment. |
4. After fully determining the report schedule, click **Save report schedule**.
The reports automatically generate at the configured dates and times.
|
deploy-reports
|
---
title: Governance lens
description: Learn about the Governance lens, which summarizes details of a deployment such as the owner, how the model was built, model age, and humility monitoring status.
---
# Governance lens {: #governance-lens }
The Governance lens summarizes the social and operational aspects of a deployment, such as the deployment owner, how the model was built, the model's age, and the [humility monitoring](humble) status. View the governance lens from the [deployment inventory](deploy-inventory).

The following table describes the information available from the Governance lens:
| Category | Description |
|--------------|-----------------|
| Deployment Name | The name assigned to a deployment at creation, the type of prediction server used, and the project name (DataRobot models only). |
| Build Environment | The environment in which the model was built. |
| Owners | The [owner(s)](roles-permissions#deployment-roles) of each deployment. To view the full list of owners, click on the names listed. A pop-up modal displays the owners with their associated email addresses. |
| Model Age | The length of time the current model has been deployed. This value resets every time [the model is replaced](deploy-replace). |
| Humility Monitoring | The status of [prediction warnings](humility-settings#prediction-warnings) and humility [rules](humility-settings#create-humility-rules) for each deployment. |
| Fairness Monitoring | The status of [fairness rules](fairness-settings) based on the number of protected features below the predefined fairness threshold for each deployment. |
| Actions | Menu of additional [model management activities](actions-menu), including adding data, replacing a model, setting data drift, and sharing and deleting deployments. |
## Build environments {: #build-environments }
The build environment indicates the environment in which the model was built.

The following table details the types of build environments displayed in the inventory for each type of model:
| Deployed model | Available build environments |
|-------------------|------------------------------|
| DataRobot model | DataRobot |
| Custom model | Python, R, Java, or Other (if not specified). Custom models derive their build environment from the model's programming language. |
| External model | DataRobot, Python, Java, R, or Other (if not specified). Specify an external model's build environment from the **Model Registry** when [creating a model package](reg-create#register-external-model-packages). |
## Humility Monitoring indicators {: #humility-monitoring-indicators }
The **Humility Monitoring** column provides an at-a-glance indication of how [humility is configured](humility-settings) for each deployment. To view more detailed information for an individual model, or enable humility monitoring, click on a deployment in the inventory list and navigate to the [**Humility**](humble) tab.
The column indicates the status of two Humility Monitoring features: [prediction warnings](humility-settings#prediction-warnings) and [humility rules](humility-settings).
In the deployment inventory, interpret the color indicators for each humility feature as follows:
| Color | Status |
|--------|--------|
|  Blue | Enabled for the deployment. |
|  Light gray | Disabled for the deployment. |
|  Dark gray | Unavailable for the deployment. Humility Monitoring is only available for non-time-aware regression models and custom regression models that provide holdout data.|
## Fairness Monitoring indicators {: #bias-and-fairness-monitoring-indicators }
The **Fairness** column provides an at-a-glance indication of how each deployment is performing based on predefined [fairness](mlops-fairness) criteria. To view more detailed information for an individual model or enable fairness monitoring, click on a deployment in the inventory list and navigate to the **Settings** tab.
In the deployment inventory, interpret the color indicators as follows:
| Color | Status |
|-------------------------------------------|--------|
|  Light gray | Fairness monitoring is not configured for this deployment. |
|  Green | All protected features are passing the fairness tests. |
|  Yellow | One protected feature is failing the fairness tests. Default is 1. |
|  Red | More than one protected feature is failing the fairness tests. Default is 2. |
You can create rules for fairness monitoring in the [**Definition** section of the **Fairness > Settings**](fairness-settings#define-fairness-monitoring-notifications) tab. If no rules are specified, fairness monitoring uses the default values for "At Risk" and "Failing."
|
gov-lens
|