--- title: Fairness tab description: Monitor the fairness of deployed production models over time. --- # Fairness tab {: #fairness-tab } !!! info "Availability information" The **Fairness** tab is only available for DataRobot MLOps users. Contact your DataRobot representative for more information about enabling this feature. After you configure a deployment's [fairness settings](fairness-settings), you can use the **Fairness** tab to configure tests that allow models to monitor and recognize, in real time, when protected features in the dataset fail to meet predefined fairness conditions. When viewing the **Deployment** inventory with the [**Governance** lens](gov-lens), the **Fairness** column provides an at-a-glance indication of how each deployment is performing based on the fairness tests set up in the **Settings > Data** tab. ![](images/bf-mlops-9.png) To view more detailed information for an individual model or investigate why a model is failing fairness tests, click on a deployment in the inventory list and navigate to the **Fairness** tab. !!! note To receive email notifications on fairness status, [configure notifications](deploy-notifications), [schedule monitoring](fairness-settings#schedule-fairness-monitoring-notifications), and [configure fairness monitoring settings](fairness-settings). ## Investigate bias {: #investigate-bias} The **Fairness** tab helps you understand why a deployment is failing fairness tests and which protected features are below the predefined fairness threshold. It provides two interactive and exportable visualizations that help identify which feature is failing fairness testing and why. ![](images/bf-mlops-2.png) | | Chart | Description | | ---------- | ----------- | ----------- | | ![](images/icon-1.png) | [**Per-Class Bias** chart](#view-per-class-bias) | Uses the fairness threshold and fairness score of each class to determine if certain classes are experiencing bias in the model's predictive behavior. | | ![](images/icon-2.png) | [**Fairness Over Time** chart](#view-fairness-over-time) | Illustrates how the distribution of a protected feature's fairness scores has changed over time. | If a feature is marked as _below threshold_, the feature does not meet the predefined fairness conditions. Select the feature on the left to display fairness scores for each segmented attribute and better understand where bias exists within the feature. ![](images/bf-mlops-3.png) To further modify the display, see the documentation for the [version selector](data-drift). ### View per-class bias {: #view-per-class-bias } The **Per-Class Bias** chart helps you identify whether a model is biased and, if so, to what extent and toward or against whom. For more information, see the documentation on [per-class bias](per-class). ![](images/bf-mlops-4.png) Hover over a point on the chart to view its details: ![](images/bf-mlops-5.png) ### View fairness over time {: #view-fairness-over-time } After configuring fairness criteria and making predictions with fairness monitoring enabled, you can view how [fairness scores](bias-ref) of the protected feature or feature values have changed over time for a deployment. The X-axis measures the range of time that predictions have been made for the deployment, and the Y-axis measures the fairness score.
![](images/bf-mlops-6.png) Hover over a point on the chart to view its details: ![](images/bf-mlops-7.png) You can also hide specific features or feature values from the chart by clearing the checkbox next to their names: ![](images/bf-mlops-8.png) The controls work the same as those available on the [Data Drift](data-drift) tab. ## Considerations {: #considerations} * Bias and Fairness monitoring is only available for binary classification models and deployments. * To upload actuals for predictions, an association ID is required. The association ID is also used to calculate True Positive & Negative Rate Parity and Positive & Negative Predictive Value Parity.
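Because an association ID is required to upload actuals (and is used to compute the parity metrics above), actuals are typically submitted programmatically once outcomes are known. The following is a minimal sketch using the DataRobot Python client, assuming your client version provides `Deployment.submit_actuals`; the endpoint, token, IDs, and values are placeholders.

```python
# Hedged sketch: upload actuals keyed by association ID so fairness and
# accuracy statistics can be computed for a deployment.
# Assumes the `datarobot` package is installed and that Deployment.submit_actuals
# accepts a list of {"association_id", "actual_value"} records in your client version.
import datarobot as dr

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

deployment = dr.Deployment.get(deployment_id="YOUR_DEPLOYMENT_ID")

# Each record ties an actual outcome back to the prediction row that carried
# the same association ID in the original prediction request.
actuals = [
    {"association_id": "loan-000123", "actual_value": "approved"},
    {"association_id": "loan-000124", "actual_value": "denied"},
]
deployment.submit_actuals(actuals)
```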
mlops-fairness
--- title: Notifications tab description: Enable notifications, which trigger emails for service health and data drift reporting. Notifications are off by default but can be enabled by a deployment Owner. Configure Service Health, Data Drift, Accuracy, and Fairness monitoring. --- # Notifications tab {: #notifications-tab } DataRobot provides automated monitoring with a notification system. You can configure notifications to alert you when service health, data drift status, model accuracy, or fairness values exceed your defined acceptable levels. Notifications trigger emails. They are off by default but can be [enabled by a deployment _Owner_ in the deployment settings](deployment-settings/index). Keep in mind that notifications only control whether emails are sent to subscribers. If notifications are disabled, monitoring of service health, data drift, accuracy, and fairness statistics still occurs. You can also [schedule deployment reports](deploy-reports) on the **Notifications** tab. !!! note A deployment _Consumer_ only receives a notification when a deployment is shared with them and when a previously shared deployment is deleted. They are not notified about other events. To set the types of notifications you want to receive: 1. In the **Deployments** inventory, open a deployment and click the **Notifications** tab. 2. Select whether to email notifications and, if so, whether to send them for all events or just critical events, then, click **Save**. ![](images/notify-1.png) 3. The monitoring actions are located on the [deployment settings](deployment-settings/index) pages, and your control over those settings depends on your [deployment role](roles-permissions#deployment-roles)—_Owner_ or _User_. Both roles can set personal **Notification Settings**; however, only deployment _Owners_ can set up schedules and thresholds to monitor the following: * [Service health](service-health-settings) * [Data drift status](data-drift-settings) * [Accuracy](accuracy-settings) * [Fairness](fairness-settings) Notifications are delivered as emails and must be set up for each deployment you want to monitor. ![](images/notify-3.png) !!! tip You can also [schedule deployment reports](deploy-reports) on the **Notifications** tab.
deploy-notifications
--- title: Governance description: Model governance sets rules and controls for deployments, facilitates scaling of deployments, and provides legal and compliance reports. --- # Governance {: #governance } When machine learning models in production become critical to business functions, new requirements emerge to ensure quality and to comply with legal and regulatory obligations. The deployment and modification of models can have far-reaching impacts, so establishing clear practices can ensure consistent management and minimized risk. Model governance sets the rules and controls for deployments, including access control, testing and validation, change and access logs, and traceability of prediction results. With model governance in place, organizations can scale deployments and provide legal and compliance reports. Scaling the use and value of models in production requires a robust and repeatable production process, including clearly defined roles, procedures, and logging. A consistent process dramatically reduces an organization’s operational, legal, and regulatory risk. Additionally, logging shows that rules were followed and supports troubleshooting to resolve issues quickly, which increases trust and value from AI projects. ## Aspects of governance {: #aspects-of-governance } Model governance for MLOps includes various components: **Roles and responsibilities**: One of the first steps in production model governance is to establish clear [roles](roles-permissions) and responsibilities within the production model lifecycle. Users may have more than one role. [MLOps admins](dep-admin) are central to maintaining model governance within an organization. **Access control:** To maintain control over production environments, access to production models and environments must be limited. Limitations can be implemented at the individual user level or via [role-based access control (RBAC)](manage-users#role-based-access-control-for-users). In either case, a limited number of people will have the ability to update production data for model training, deploy production models, or modify production environments. **Deployment testing and validation:** To ensure quality in production, processes should include testing and validation of each new or refreshed model before deployment. These tests and their results should be logged to show that the model was deemed ready for production use. Testing information will be required for model approval. **Model history:** Models will change over time as they are updated and replaced in production. Maintenance of complete [model history](dep-overview#history), including model artifacts and changelogs, is critical for legal and regulatory needs. The ability to understand when a change was made and by whom is critical for compliance but is also very useful for troubleshooting when something goes wrong. **Humility rules:** [Humility rules](humility-settings) can be configured so that models recognize, in real time, when they make uncertain predictions or receive data they have not seen before. Unlike data drift, model humility does not deal with broad statistical properties over time—it is instead triggered for individual predictions, allowing you to set desired behaviors with rules that depend on different triggers. **Fairness monitoring:** [Fairness monitoring](fairness-settings) can be configured so that models recognize when protected features fail to meet predefined fairness criteria.
Testing the fairness of production models is triggered by individual predictions; however, any predictions made within the last 30 days are also taken into account. **Traceable model results:** Each model result must be attributable to the model and model version that generated it to meet legal and regulatory compliance obligations. Traceability is especially critical because of the dynamic nature of the production model lifecycle, which results in frequent model updates. At the time of a legal or regulatory filing, which could be months after an individual model response, the model in production may not be the same as the model used to create the prediction in question. A record of request data and response values with date and time information satisfies this requirement. Including a model ID as part of the model response also makes the tracking process easier.
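To make results traceable in practice, each prediction request and response can be stored together with a timestamp and the deployment identifier. The sketch below illustrates that pattern with generic Python; the prediction URL, headers, and logging destination are placeholders to adapt to your own deployment and audit store.

```python
# Hedged sketch of a traceability pattern: store each prediction request and
# response with a timestamp and the deployment identifier so any result can
# later be attributed to the model version that produced it.
# The URL and headers are placeholders; copy the real values from your
# deployment's Predictions tab.
import json
import logging
from datetime import datetime, timezone

import requests

logger = logging.getLogger("prediction_audit")
logging.basicConfig(level=logging.INFO)


def predict_and_log(prediction_url, api_token, deployment_id, rows):
    response = requests.post(
        prediction_url,
        json=rows,
        headers={"Authorization": f"Bearer {api_token}"},
        timeout=60,
    )
    response.raise_for_status()
    result = response.json()
    # One audit record per call: when, which deployment, what was sent and returned.
    logger.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "deployment_id": deployment_id,
        "request": rows,
        "response": result,
    }))
    return result
```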
index
--- title: Model deployment approval workflow description: DataRobot MLOps system administrators can specify security policies that control who can create or modify deployments and what kind of approval is required. --- # Approval process {: #approval-process } !!! info "Availability information" The Model Deployment Approval Workflow is only available for DataRobot MLOps users. Contact your DataRobot representative for more information about enabling this feature. Note that the approval workflow for deployment events is only prompted when the Model Deployment Approval Workflow is enabled. When it is, and you create a new deployment or change an existing one, an MLOps administrator within your organization must approve your changes. [Approval policies](deploy-approval) affect the users who have permissions to review deployments, and provide automated actions when reviews time out. Approval policies also affect users whose deployment events are governed by a configured policy (e.g., new deployment creation, model replacement). Be sure to review the [deployment approval workflow considerations](#considerations) before proceeding. ### Deployment importance levels {: #deployment-importance-levels } Before you deploy your model, you are prompted to assign an importance level to it: Critical, High, Moderate, or Low. Importance represents an aggregate of factors relevant to your organization, such as the prediction volume of the deployment, level of exposure, potential financial impact, and more. ## Deployment creation approval {: #deployment-creation-approval } Once the deployment is created, MLOps Admins are alerted via email that the deployment requires review. While awaiting review, the deployment is flagged as "NEEDS APPROVAL" in the [deployment inventory](deploy-inventory). ![](images/dep-admin-1.png) While a deployment with "NEEDS APPROVAL" status can still make predictions, DataRobot recommends contacting your MLOps administrator before doing so. Once the deployment is approved, the flag updates to "APPROVED" in the inventory. Additionally, predictions made after the deployment is approved are marked as "APPROVED" in the prediction server response metadata. ## Deployment event approval {: #deployment-event-approval } As a deployment owner, you can make changes to an existing deployment and include comments to detail the reason for the change. You also choose whether, after the change request has been approved, you want the change applied manually or automatically. ![](images/dep-admin-4.png) You are always notified via email when your change request has been approved or requires changes. After approval, you can apply the changes; if changes are set to automatic, DataRobot applies changes immediately after approval. ## MLOps Admin: Approve deployment creation {: #mlops-admin-approve-deployment-creation } The MLOps administrator role offers access and governance to those within organizations who oversee deployments. Administrators are often responsible for monitoring deployments that make prediction requests, ensuring the quality of deployment performance, assisting with debugging, and reporting on deployment usage and activity. The MLOps administrator role, assigned by the system administrator, has **User** role permissions for all existing and newly created deployments within their organization. Administrators receive email notifications to approve deployment events such as creation, model replacement, changes to importance levels, and deletion. !!! 
note These elevated permissions only apply to deployments. Your primary function as an MLOps administrator is to review and approve deployment events within your organization. When a user within your organization creates a deployment, you are alerted via email that the deployment requires review. When you access a deployment that needs approval, DataRobot presents a notification banner on the [Overview tab](dep-overview) prompting you to begin the deployment review process. ![](images/dep-admin-2.png) !!! important You can't create and approve a deployment from the same account; therefore, if the deployment creator and MLOps Admin are the same, the **This deployment needs review** banner doesn't appear in the deployment overview. After you click **Add review**, a dialog box opens where you can approve or request updates for the deployment. Review the deployment and its importance level (chosen by the deployment creator). Optionally, you can include comments with your decision. If approved, DataRobot removes the **NEEDS APPROVAL** flag from the deployment inventory listing. If changes were requested, the flag remains until the changes are addressed and the deployment is approved. ## MLOps Admin: Approve changes to existing deployments {: #mlops-admin-approve-changes-to-existing-deployments } In addition to reviewing deployment creations, MLOps administrators review and approve deployment events such as a model replacement, changes to importance levels, and deployment deletion. You will receive email notifications for these triggering events that require approval. Deployment owners are notified via email when you approve or request changes. When you, as an MLOps administrator, access a deployment that needs approval for a change, DataRobot presents a notification banner prompting you to begin the deployment review process. You can review the deployment's history of changes in the **Overview** tab under the **Governance** header. The **Governance** header details the history of changes made to a deployment, including the importance levels, the changes made, who made them, and who reviewed them. ![](images/dep-admin-6.png) To approve a deployment change, select **Add Review** from the notification banner (also available under the **Governance** header): ![](images/dep-admin-3.png) A dialog box opens with a summary of the changes and any comments provided by the deployment owner making the change. Review the requested changes, include any comments with your decision, and then approve or request updates for the deployment. ![](images/dep-admin-5.png) After submitting your review, the notification banner updates if the changes were approved. If the deployment owner chooses to have the changes applied manually, the owner can click **Apply Changes** to do so. ## Considerations {: #considerations } * A deployment Owner can choose to share a deployment with [MLOps administrators](rbac-ref#mlops-admin) and grant either User or Owner permissions. When explicitly shared, Owner rights are the default. * *For Self-Managed AI Platform installations*: An MLOps administrator will be able to monitor actions taken by users in their organization.
dep-admin
--- title: Humility tab description: After configuring rules and making predictions with humility monitoring enabled, you can view the humility data collected over time for a deployment from the Humility tab. --- # Humility tab {: #humility-tab } After [configuring humility rules](humility-settings) and making predictions with humility monitoring enabled, you can view the humility data collected over time for a deployment from the **Humility > Summary** tab. ![](images/humility-9.png) The X-axis measures the range of time that predictions have been made for the deployment. The Y-axis measures the number of times humility rules have triggered for the given period of time. The controls—model version and data time range selectors—work the same as those available on the [Data Drift](data-drift) tab.
humble
--- title: Replace deployed models description: How to replace deployment model packages, to keep models current and accurate. DataRobot uses training data to verify that the two models have the same target. --- # Replace deployed models {: #replace-deployed-models } Because model predictions tend to degrade in accuracy over time, DataRobot provides an easy way to switch models and model packages for deployments. This ensures that models are up-to-date and accurate. Using the model management capability to switch model packages for deployments allows model creators to keep models current without disrupting downstream consumers. It helps model validators and data science teams track model history, and it provides model *consumers* with confidence in their predictions without needing to know the details of the changeover. ## Replace a model package {: #replace-a-model-package } Use the **Replace model** functionality found in the [**Actions**](actions-menu) (![](images/icon-menu.png)) menu. The menu is available from the **Deployments** area of either the [**Inventory**](deploy-inventory) or the [**Overview**](dep-overview) pages. ![](images/mmm-replace-1.png) You are redirected to the **Overview** tab of the deployment. Click **Import from** to choose your method of model replacement. * **Local File**: Upload a model package exported from DataRobot AutoML to replace an existing model package (standalone MLOps users only). * **Model Registry**: Select a model package from the [**Model Registry**](registry/index) to replace an existing model package. * **Paste AutoML URL**: Copy the URL of the model from the Leaderboard and paste it into the **Replacement Model** field. ![](images/replace-m-2.png) When you have confirmed the model package for replacement, select the replacement reason and click **Accept and replace**. ![](images/replace-m-3.png) ## Model replacement considerations {: #model-replacement-considerations } When replacing a deployed model, note the following: * Model replacement is available for all deployments. Each deployment's model is provided as a model package, which can be replaced with another model package, provided it is [compatible](#model-package-replacement-compatibility). !!! note The new model package *cannot* be the same Leaderboard model as an existing champion or challenger; each challenger *must* be a unique model. If you create multiple model packages from the same Leaderboard model, you can't use those models as challengers in the same deployment. * While only the most current model is deployed, model history is maintained and can be used as a baseline for data drift. ### Model replacement validation {: #model-replacement-validation } DataRobot validates whether the new model is an appropriate replacement for the existing model and provides warning messages if issues are found. DataRobot compares the models to ensure that: * The target names and types match. For classification targets, the class names must match. * The feature types match. * There are no new features. If the new model has more features, the warning identifies the additional features. This is intended to help prevent prediction errors if the new model requires features not available in the old model. If the new model has fewer or the same number of features, there is no warning. * The replacement model supports all humility rules.
* If the existing model is a time series model, the replacement model must also be a time series model and the series types must match (single series/multiseries). * If the model is a custom inference model, it must pass custom model tests. * Prediction intervals must be compatible if enabled for the deployment. * Segments must be compatible if segment analysis is enabled for the deployment. !!! note DataRobot is only able to validate your model’s input features if you have assigned training data to both model packages (the existing model package for your deployment, and the one you selected to replace it with). Otherwise, DataRobot is unable to validate that the two model packages have the same target type and target name. A warning message informs you that model replacement is not allowed if the model, target type, and target name are not the same: ![](images/replace-m-4.png) ### Model package replacement compatibility {: #model-package-replacement-compatibility } Consider the compatibility of each model package type (external and DataRobot) before proceeding with model package replacement for a deployment: === "SaaS" * [External model packages](reg-create#register-external-model-packages) (monitored by the [MLOps agent](mlops-agent/index)) can only replace other external model packages. They cannot be replaced by DataRobot model packages. * [Custom model packages](reg-create#add-a-custom-inference-model) are DataRobot model packages. DataRobot model packages can only replace other DataRobot model packages. They cannot be replaced by external model packages. === "Self-Managed" * [External model packages](reg-create#register-external-model-packages) (monitored by the [MLOps agent](mlops-agent/index)) can only replace other external model packages. They cannot be replaced by DataRobot model packages. * [Custom model packages](reg-create#add-a-custom-inference-model) and [imported .mlpkg files](reg-transfer) are both DataRobot model package types. DataRobot model packages can only replace other DataRobot model packages. They cannot be replaced by external model packages.
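Model replacement can also be performed programmatically. The following is a minimal sketch using the DataRobot Python client, assuming your client version provides `Deployment.replace_model` and the `MODEL_REPLACEMENT_REASON` enum; the endpoint, token, and IDs are placeholders.

```python
# Hedged sketch: replace a deployment's model from the Python client instead
# of the UI. DataRobot still runs the same replacement validation (target,
# features, and so on) before the swap takes effect.
import datarobot as dr
from datarobot.enums import MODEL_REPLACEMENT_REASON

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

deployment = dr.Deployment.get(deployment_id="YOUR_DEPLOYMENT_ID")

# The replacement reason corresponds to the choice made in the UI dialog.
deployment.replace_model(
    new_model_id="NEW_MODEL_ID",
    reason=MODEL_REPLACEMENT_REASON.ACCURACY,
)
```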
deploy-replace
--- title: Lifecycle management description: Lifecycle management provides tools and a robust, repeatable process to scale models and manage the lifecycle of models in production environments. --- # Lifecycle management {: #lifecycle-management } Machine learning models in production environments have a complex lifecycle, and realizing the use and value of those models requires a robust and repeatable process to manage that lifecycle. Without proper management, models that reach production may deliver inaccurate data, poor performance, or unexpected results that can damage your business's reputation for AI trustworthiness. Lifecycle management is essential for creating a machine learning operations system that allows you to scale many models in production. The following sections describe how to manage models in production. Be sure to review the [deployment considerations](deployment/index#feature-considerations) before proceeding. | Topic | Describes... | |-------|--------------| | [Deployment inventory (Deployments page)](deploy-inventory) | Coordinate deployments and view deployment inventory. | | [Manage deployments](actions-menu) | Understand the actions you can take with deployments. | | [Deployment settings](deploy-settings) | Configure and view deployment settings. | | [Replace deployed models](deploy-replace) | Replace the model used for a deployment. | | [Set up Automated Retraining policies](set-up-auto-retraining) | Configure retraining policies to maintain model performance after deploying. |
index
--- title: Manage deployments description: Manage deployments using the actions menu, which allows you to apply deployment settings, share deployments, create applications using the deployed model, replace models, and delete deployments, among other actions. --- # Manage deployments {: #manage-deployments } On the Deployments page, you can manage deployments using the actions menu: ![](images/deploy-menu.png) The available options depend on a variety of criteria, including user permissions and the data available for your deployment. The table below briefly describes each option. | Option | Description | [Availability](roles-permissions#deployment-roles) | |--------|-------------|----------------------------------------------------| | [Replace model](deploy-replace) | Replaces the current model in the deployment with a newly specified model. | Owner | | [Settings](deploy-settings) | Configures various settings for the deployment. Track target and feature [drift](data-drift) in a deployment and activate additional features, such as [prediction row storage](challengers-settings) for [challenger models](challengers) and [segmented analysis](deploy-segment). Use to enable data drift and add learning and inference data. | Owner | | [Share](#share-a-deployment) | Provides sharing capabilities independent of project permissions. | User, Owner | | [Create Application](create-app#from-a-deployment) | Launches a DataRobot application of your choice using the deployed model. | Owner | | [Clear statistics](#clear-deployment-statistics) | Resets the logged statistics for a deployment. | Owner | | [Activate / Deactivate](#activate-a-deployment) | Enables or disables a deployment's monitoring and prediction request capabilities. | Owner | | [Relaunch](mgmt-agent-relaunch) | For management agent deployments, relaunch the deployment in the prediction environment managed by the agent. | Owner | | [Get Scoring Code](sc-download-deployment) | Downloads Scoring Code (in JAR format) directly from the deployment. This action is only available for models that support [Scoring Code](scoring-code/index). | User, Consumer, Owner | | [Delete](#delete-a-deployment) | Removes a deployment from the inventory. | Owner | Access the actions menu in one of these locations: 1. To the right of each deployment: ![](images/deploy-menu-1.png) 2. On the [**Overview**](dep-overview) tab: ![](images/deploy-overview-1a.png) ### Replace a model {: #replace-a-model } Prediction accuracy tends to degrade over time (which you can track in the [Data Drift dashboard](data-drift)) as conditions and data change. If you have the correct [permissions](roles-permissions#deployment-roles), you can easily switch over to a new, better-adapted model using the [model replacement](deploy-replace) action. You can then incorporate the new model predictions into downstream applications. This action initiates an [email notification](#deployment-email-notifications). ### Share a deployment {: #share-a-deployment } The sharing capability allows [appropriate user roles](roles-permissions#deployment-roles) to grant permissions on a deployment, independent of the project that created the deployed model. This is useful, for example, when the model creator regularly refreshes the model and wants the people using it to have access to the updated predictions but not to the model itself. ??? 
note "Job definition sharing" Deployments can be shared between users with the _Owner_, _User_, or _Consumer_ role enabled; however, the job definitions associated with a deployment don't share those role-based permissions. Users can't see job definitions created by other users or batch prediction jobs run by other users on a shared deployment. To share a deployment, select the **Share** (![](images/icon-share.png)) action. Enter the email address of the person you would like to share the deployment with, select their [role](roles-permissions#deployment-roles), and click **Share**. You can later change the user's permissions by clicking on the current permission and selecting a new access level from the dropdown. ![](images/deploy-share-deploy.png) This action initiates an [email notification](#deployment-email-notifications). Additionally, deployment Owners and Users can share with groups and organizations. Select either the Groups or Organizations tab in the sharing modal. Enter the group or organization name in the **Share With** field, determine the role for permissions, and click **Share**. The deployment is shared with—and the role is applied to—every member of the designated group or organization. Additionally, deployment Owners and Users can share with groups and organizations (up to their own access level). ![](images/actions-menu-1.png) ### Clear deployment statistics {: #clear-deployment-statistics } [Deployments](../deployment/index) collect various statistics for a model, including [accuracy](deploy-accuracy), [error rate](service-health), [data drift](data-drift), and [more](../monitor/index). You may want to configure and test a deployed model before pushing a production workload on it to see if, for example, predictions perform well on data similar to that which you would upload for production. After testing a deployment, DataRobot allows you to reset the logged analytics, so you can separate testing data from live data without needing to recreate a deployment to start fresh. Choose a deployment for which you want to reset statistics from the inventory. Click the actions menu and select **Clear statistics**. ![](images/reset-dep-1.png) Complete the fields in the **Clear Deployment Statistics** window to configure the parameters of the reset. ![](images/reset-dep-2.png) Field | Description -------|------------ Model version to clear from | Select the model version from which you want to clear statistics. If the model has not been [replaced](deploy-replace), there is only one option to choose from (the originally deployed model). Date range to clear from | Choose to either clear the entire history from the given model version, or specify a date range to wipe statistics from. Reason for clearance | Optional. Describe why the statistics were reset. Confirm clearance | Select the toggle to confirm that you understand you are removing analytics from the selected deployment. !!! note If your organization has enabled the [deployment approval workflow](dep-admin), then approval must be given before any monitoring data can be removed from a deployment. After fully configuring the fields, click **Clear statistics**. DataRobot removes the monitoring data for the indicated time range from the deployment. ### Delete a deployment {: #delete-a-deployment } If you have the appropriate [permissions](roles-permissions#deployment-roles), you can delete a deployment from the inventory by clicking the trash can icon ![](images/icon-delete.png). 
This action initiates an [email notification](#deployment-email-notifications) to all users with sharing privileges to the deployment. ### Activate a deployment {: #activate-a-deployment } Deployments have capabilities, such as prediction requests and data monitoring, that consume a large amount of resources. You may want to test the prediction experience for a model or experiment with monitoring output settings without expending any resources or risking a production outage. Deployment activation allows you to control when these resource-intensive capabilities are enabled for individual deployments. Additionally, note that inactive deployments *do not* count towards your deployment limit. !!! info "Availability information" Inactive deployment behavior depends on the [MLOps configuration](pricing) for your organization. A deployment's [Owner](roles-permissions) can activate its prediction requests and some monitoring capabilities. From the **Actions** menu for a deployment, select **Activate**. ![](images/deploy-active-2.png) When created, a deployment is set to the active state by default; use the **Actions** menu to deactivate it. Once deactivated, you can still browse the deployment's monitoring tabs and edit its settings and metadata, but you cannot make predictions. Inactive deployments are indicated by an "INACTIVE" label in the deployment inventory: ![](images/deploy-active-1.png) You can monitor the current number of active and inactive deployments from the tile at the top of the inventory: ![](images/deploy-active-3.png) ### Deployment email notifications {: #deployment-email-notifications } The following actions initiate an email notification. The availability of each action depends on each user's [role](roles-permissions#deployment-roles). Additional notifications are available through the Notifications option of the [**Settings**](deploy-notifications) tab. #### Deployment Sharing {: #deployment-sharing } When you share a deployment with a user, the recipient receives an email, notifying them that permission has been granted. The email notification is only sent on the initial invite, not if permission levels have changed or been revoked. If the receiving user has the Consumer [role](roles-permissions#deployment-roles) on the deployment, the email will contain a deployment ID. Otherwise, for Users and Owners, the email will contain a link to view the deployment. #### Model Replacement {: #model-replacement } If a user with the appropriate [role](roles-permissions#deployment-roles) replaces a model within a deployment, DataRobot sends an email to all other users with the roles of _Owner_ or _User_ notifying them of the replacement. #### Deployment Deletion {: #deployment-deletion } When a user with the appropriate [permission](roles-permissions#deployment-roles) deletes a deployment, DataRobot sends a notifying email containing the deployment ID to all other subscribed users (all [roles](roles-permissions#deployment-roles)).
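Several of these actions can also be run from the DataRobot Python client. The following is a minimal sketch, assuming your client version exposes `Deployment.activate`, `Deployment.deactivate`, and `Deployment.delete`; the endpoint, token, and ID are placeholders.

```python
# Hedged sketch: activate, deactivate, and delete a deployment programmatically.
# Assumes these methods exist in your version of the datarobot package.
import datarobot as dr

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

deployment = dr.Deployment.get(deployment_id="YOUR_DEPLOYMENT_ID")

# Stop predictions and monitoring; inactive deployments do not count toward the limit.
deployment.deactivate()

# Re-enable predictions and monitoring when testing is complete.
deployment.activate()

# Permanently remove the deployment from the inventory (Owner role required).
deployment.delete()
```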
actions-menu
--- title: Deployment settings description: Add data to a deployment and configure monitoring, notifications, and challenger behavior using the Settings tab. --- # Deployment settings {: #settings-tab } !!! info "Deprecation notice" The **Settings > Data** and **Settings > Monitoring** tabs are deprecated and scheduled for removal. The new deployment settings workflow provides an organized and intuitive interface, separating the categories of deployment configuration and monitoring setup tasks into dedicated settings pages. During the deprecation period, you can use the **Data** tab; however, the **Monitoring** tab directs you to the [service health settings](service-health-settings). You can add data to a deployment and configure monitoring, notifications, and challenger behavior using the **Settings** associated with each deployment tab: Topic | Describes -------|------------ [Set up service health monitoring](service-health-settings) | Enable [segmented analysis](deploy-segment) to assess service health, data drift, and accuracy statistics by filtering them into unique segment attributes and values. [Set up data drift monitoring](data-drift-settings) | Enable [data drift monitoring](data-drift) on a deployment's Data Drift Settings tab. [Set up accuracy monitoring](accuracy-settings) | Enable [accuracy monitoring](deploy-accuracy) on a deployment's Accuracy Settings tab. [Set up fairness monitoring](fairness-settings) | Enable [fairness monitoring](mlops-fairness) on a deployment's Fairness Settings tab. [Set up humility rules](humility-settings) | Enable [humility monitoring](humble) by creating rules which enable models to recognize, in real-time, when they make uncertain predictions or receive data they have not seen before. [Configure retraining](retraining-settings) | Enable [Automated Retraining](set-up-auto-retraining) for a deployment by defining the general retraining settings and then creating retraining policies. [Configure challengers](challengers-settings) | Enable [challenger comparison](challengers) by configuring a deployment to store prediction request data at the row level and replay predictions on a schedule. [Review predictions settings](predictions-settings) | Review the Predictions Settings tab to view details about your deployment's inference data. [Enable data export](data-export-settings) | Enable [data export](data-export) to compute and monitor custom business or performance metrics. [Set prediction intervals for time series deployments](predictions-settings#set-prediction-intervals-for-time-series-deployments) | Enable [prediction intervals](ts-predictions#prediction-preview) in the prediction response for deployed time series models.
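Many of these settings can also be configured through the DataRobot Python client. Below is a minimal sketch that enables data drift tracking and sets an association ID for accuracy monitoring, assuming your client version provides `update_drift_tracking_settings` and `update_association_id_settings`; the endpoint, token, ID, and column name are placeholders.

```python
# Hedged sketch: enable drift tracking and an association ID for a deployment.
# Assumes these Deployment methods exist in your version of the datarobot package.
import datarobot as dr

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

deployment = dr.Deployment.get(deployment_id="YOUR_DEPLOYMENT_ID")

# Track drift for both the target and the input features.
deployment.update_drift_tracking_settings(
    target_drift_enabled=True,
    feature_drift_enabled=True,
)

# Require an association ID so actuals can later be matched back to predictions.
deployment.update_association_id_settings(
    column_names=["transaction_id"],
    required_in_prediction_requests=True,
)
```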
deploy-settings
--- title: Set up Automated Retraining policies description: Maintain model performance after deployment through Automated Retraining. --- # Set up Automated Retraining policies {: #set-up-automated-retraining-policies } To maintain model performance after deployment without extensive manual work, DataRobot provides an automatic retraining capability for deployments. After you provide a retraining dataset registered in the [**AI Catalog**](catalog), you can define up to five retraining policies on each deployment, each consisting of a trigger, a modeling strategy, modeling settings, and a replacement action. When triggered, retraining will produce a new model based on these settings and notify you to consider promoting it. !!! important Before you create an Automated Retraining policy, the deployment's [Retraining Settings](retraining-settings) must be configured. ![](images/retrain-1.png) ## Create a retraining policy {: #create-a-retraining-policy } To create and define a retraining policy: 1. Click **Deployments** and select a deployment from the inventory. 2. On the **Retraining > Summary** tab, click **+ Add Retraining Policy**. If you haven't set up retraining, click **Configure Retraining** and configure the [**Retraining Settings**](retraining-settings). 3. Enter a **Policy name** and, optionally, a **Policy description**. ![](images/retrain-add-policy.png) 4. Configure the following retraining policy settings: * [**Retraining trigger**](#retraining-trigger): Select the time or deployment status event DataRobot uses to determine when to run retraining. * [**Model selection**](#model-selection): Configure the methods DataRobot should use to build the new model on the updated data. * [**Model action**](#model-action): Select the replacement strategy DataRobot should use for the model trained during a successful retraining policy run. * [**Modeling strategy**](#modeling-strategy): Configure how DataRobot should set up the new Autopilot project. 5. Click **Save policy**. ### Retraining trigger {: #retraining-trigger } Retraining policies can be triggered manually or in response to three types of conditions: ![](images/retrain-4.png) * **Automatic schedule**: Pick a time for the retraining policy to trigger automatically. Choose from increments ranging from every three months to every day. Note that DataRobot uses your local time zone. * **Drift status**: Initiates retraining when the deployment's data drift status declines to the level(s) you select. * **Accuracy status**: Triggers when the deployment's accuracy status changes from a better status to the levels you select (green to yellow, yellow to red, etc.). !!! note Data drift and accuracy triggers are based on the definitions configured on the [**Data Drift > Settings**](data-drift-settings) and [**Accuracy > Settings**](accuracy-settings) tabs. Once initiated, a retraining policy cannot be triggered again until it completes. For example, if a retraining policy is set to run every hour but takes more than an hour to complete, it will complete the first run rather than start over or queue with the second scheduled trigger. Only one trigger condition can be chosen for each retraining policy. ### Model selection {: #model-selection } Choose a model selection strategy for the retraining policy. The strategy controls how DataRobot builds the new model on the updated data. 
![](images/retrain-6.png) * **Use same blueprint as champion at time of retraining**: Fits the same blueprint as the champion model at the time of triggering on the new data snapshot. Select one of the following options: * **Use current hyperparameters**: Use the same hyperparameters and blueprint as the champion model. Uses the champion's hyperparameter search and strategy for each task in the blueprint. Note that if you select this option, the champion model's feature list is used for retraining. The Informative Features list cannot be used. * **Automatically tune hyperparameters**: Use the same blueprint but optimize the hyperparameters for retraining. * **Use best Autopilot model** (recommended): Run Autopilot on the new data snapshot and use the resulting recommended model. Choose from DataRobot's three [modeling modes](model-data#set-the-modeling-mode): Quick, Autopilot, and Comprehensive. If selected, you can also toggle additional Autopilot options: * Only include blueprints that support [Scoring Code](scoring-code/index) * Create [blenders](leaderboard-ref#blender-models) from top-performing models * Run Autopilot on a feature list with [target leakage](data-quality#target-leakage) removed * Only include models that support [SHAP values](shap-pe) ### Model action {: #model-action } The model action determines what happens to the model produced by a successful retraining policy run. In all scenarios, deployment owners are notified of the new model's creation and the new model is added as a model package to the [Model Registry](registry/index). Apply one of three actions for each policy: ![](images/retrain-model-action.png) * **Add new model as a challenger model**: If there is space in the deployment's five challenger model slots, this action&mdash;which is the default&mdash;adds the new model as a challenger model. It replaces any model that was previously added by this policy. If no slots are available and no challenger was previously added by this policy, the model is only saved to the Model Registry, and the retraining policy run fails because the model could not be added as a challenger. * **Initiate model replacement with new model**: Suitable for high-frequency (e.g., daily) replacement scenarios, this option automatically requests a model replacement as soon as the new model is created. This replacement is subject to defined [approval policies](dep-admin) and their applicability to the given deployment, based on its owners and importance level. Depending on that approval policy, reviewers may need to approve the replacement manually before it occurs. * **Save model**: In this case, no action is taken with the model other than adding it to the Model Registry. ### Modeling strategy {: #modeling-strategy } The modeling strategy for retraining defines how DataRobot should set up the new Autopilot project. Define the features, optimization metric, partitioning strategies, sampling strategies, weights, and [other advanced settings](adv-opt/index) that instruct DataRobot on how to build models for a given problem. You can either reuse the same features as the champion model uses (when the trigger initiates) or allow DataRobot to identify the [informative features](feature-lists#automatically-created-feature-lists) from the new data. By default, DataRobot reuses the same settings as the champion model (at the time the trigger initiates). 
Alternatively, you can define new partitioning settings, choosing from a subset of options available in the project **Start** screen. ![](images/retrain-modeling-strategy.png) ## Manage retraining policies {: #manage-retraining-policies } After creating a retraining policy, you can start it manually, cancel it, or update it, as explained in the table below. ![](images/retrain-manage-policies.png) | | Element | Definition | |---|---------|------------| | ![](images/icon-1.png) | Retraining policy row | Click on a retraining policy row to expand it. Once expanded, view or edit the retraining settings. | | ![](images/icon-2.png) | Run | Click the run button (![](images/icon-play.png)) to start a policy manually. Alternatively, edit the policy by clicking the policy row and scheduling a run using the retraining trigger. | | ![](images/icon-3.png) | Remove | Click the remove button (![](images/icon-delete.png)) to delete a policy. Click **Remove** in the confirmation window. | | ![](images/icon-4.png) | Cancel | Click the cancel button (![](images/icon-cancel.png)) to cancel a policy that is in progress or scheduled to run. You can't cancel a policy if it has finished successfully, reached the "Creating challenger" or "Replacing model" step, failed, or has already been canceled. | ## Retraining history {: #retraining-history } You can view all previous runs of a retraining policy, successful or failed. Each run includes a start time, end time, duration, and&mdash;if the run succeeded&mdash;links to the resulting project and model package. While only the DataRobot-recommended model for each project is added automatically to the deployment, you may want to explore the project's Leaderboard to find or build alternative models. ![](images/retrain-5.png) !!! note Policies cannot be deleted or interrupted while they are running. If sufficient retraining workers are available to the organization, multiple policies on the same deployment can run at once. ## Retraining strategies {: #retraining-strategies } The **Challengers and Retraining** tab allows for simple performance comparison, meaning retraining strategies can be evaluated empirically and customized for different use cases. You may benefit from initial experimentation, using various time frames for the "same-blueprint" and Autopilot strategies. For example, consider running "same-blueprint" retraining strategies using both a nightly and a weekly pattern and comparing the results. Typical strategies for implementing automatic retraining policies in a deployment include: * **High-frequency automatic schedule**: Frequently (e.g., daily) retrain the currently deployed blueprint on the newest data to stabilize the deployed model selection. * **Low-frequency automatic schedule**: Periodically (e.g., weekly, monthly) run Autopilot to explore alternative modeling techniques and potentially optimize performance. You can restrict this process to only Scoring Code-supported models if that is how you deploy. See the **Include only blueprints with Scoring Code support** [advanced option](additional) for more information. * **Drift status trigger**: Monitor data drift and trigger Autopilot to prepare an alternative model when the champion model has shown data drift due to changing situations. * **Accuracy status trigger**: Monitor accuracy drift and trigger Autopilot to search for a better-performing model after the champion model has shown accuracy decay. This strategy is most effective for use cases with fast access to actuals. 
## Retraining availability {: #retraining-availability } Only binary, multiclass, and regression target types support retraining. The **Challengers and Retraining** tab doesn't appear when a deployment's champion has a multilabel target type. ### Unsupported models and projects {: #unsupported-models-and-projects } Retraining is not supported for the following DataRobot models and project types. In those cases, the **Challengers and Retraining** tab doesn't appear when a deployment's champion uses any of the listed functionality: === "SaaS" * [Feature Discovery models](fd-overview) * [Unsupervised learning projects](unsupervised/index) (including anomaly detection and clustering) * [Unstructured custom inference models](#unstructured-custom-models) === "Self-Managed" * [Feature Discovery models](fd-overview) * [Unsupervised learning projects](unsupervised/index) (including anomaly detection and clustering) * [Unstructured custom inference models](#unstructured-custom-models) * [Imported model packages](reg-transfer) ### Partially supported models {: #partially-supported-models } The following model types partially support retraining. For each partially supported model, only the supported (✔) options are available in retraining policies on the **Challengers and Retraining** tab: !!! note Only some retraining policy options are model-dependent. If the support matrix below doesn't include a model type, all options of a retraining policy are available for configuration. |Model type|Same blueprint as champion|Champion model's feature list|Project options from champion model|Custom project options| |------------------|---|---|---|---| | Custom inference | | | | ✔ | | External (agent) | | | | ✔ | | Blender | | | ✔ | ✔ | | Time series | ✔ | ✔ | ✔ | | ## Retraining for time series {: #retraining-for-time-series } Time series deployments support retraining, but there are limitations when configuring policies due to the time series [feature derivation process](feature-eng#feature-reference). This process generates features such as lags and [moving averages](ts-adv-opt#exponentially-weighted-moving-average) and creates a new modeling dataset. ### Time series model selection {: #time-series-model-selection } **Same blueprint as champion**: The retraining policy uses the same engineered features as the champion model's blueprint. The search for newly derived features does not occur because it could potentially generate features that are not captured in the champion's blueprint. **Autopilot**: When using Autopilot instead of the same blueprint, the time series feature derivation process <em>does</em> occur. However, Comprehensive Autopilot mode is not supported. Additionally, time series Autopilot does not support the options to only include Scoring Code blueprints and models with SHAP value support. ![](images/retrain-7.png) ### Time series modeling strategy {: #time-series-modeling-strategy } **Same blueprint as champion**: When creating a "same-blueprint" retraining policy for a time series deployment, you must use the champion model's feature list and advanced modeling options. The only option that you can override is the [calendar used](ts-adv-opt#calendar-files) because, for example, a new holiday or event may be included in an updated calendar that you want to account for during retraining. **Autopilot**: When creating an Autopilot retraining policy for a time series deployment, you must use the informative features modeling strategy. 
This strategy allows Autopilot to derive a new set of feature lists based on the informative features generated by new or different data. You cannot use the model's original feature list because time series Autopilot uses a feature extraction and reduction process by default. You can, however, override additional modeling options from the champion's project: | Option | Description | |------- | ----------| | [Treat as exponential trend](ts-adv-opt#treat-as-exponential-trend) | Apply a log-transformation to the target feature. | | [Exponentially weighted moving average](ts-adv-opt#exponentially-weighted-moving-average) (EWMA)| Set a smoothing factor for EWMA. | | [Apply differencing](ts-adv-opt#apply-differencing) | Set DataRobot to apply differencing to make the target stationary prior to modeling. | | [Add calendar](ts-adv-opt#calendar-files) | Upload, add from the catalog, or generate an event file that specifies dates or events that require additional attention. | ## Time-aware retraining {: #time-aware-retraining } For time-aware retraining, if you choose to reuse options from the champion model or override the champion model's project options, consider the following: * If the champion's project used the holdout start date and end date, the retraining project does not use these settings but instead uses holdout duration, the difference between these two dates. * If the champion project used the holdout duration with either the holdout start date or end date, the holdout start/end date is dropped, and holdout duration is used in the retraining project. A new holdout start date is computed (the end of the retraining dataset minus the holdout duration). Your [customizations to backtests](ts-date-time#edit-individual-backtests) are not retained; however, the *number* of backtests is retained. At retraining time, the training start and end dates will likely differ from the champion's start and end dates. The data used for retraining might have shifted so that it no longer contains all of the data from a specific backtest on the champion model.
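Retraining policies can also be created over the REST API. The sketch below is illustrative only: the endpoint path and payload fields are assumptions based on the settings described above, so confirm them against the DataRobot API reference for your release; the URL, token, and IDs are placeholders.

```python
# Hedged sketch: create a schedule-triggered retraining policy over the REST API.
# The endpoint path and payload field names are assumptions; verify them against
# the API reference before use.
import requests

API = "https://app.datarobot.com/api/v2"
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}
DEPLOYMENT_ID = "YOUR_DEPLOYMENT_ID"

policy = {
    "name": "Weekly Autopilot refresh",
    "description": "Retrain on the newest data snapshot every Monday at 02:00",
    "trigger": {"type": "schedule", "schedule": {"dayOfWeek": [1], "hour": [2], "minute": [0]}},
    "modelSelectionStrategy": "autopilotRecommended",
    "action": "createChallenger",
}

response = requests.post(
    f"{API}/deployments/{DEPLOYMENT_ID}/retrainingPolicies/",
    json=policy,
    headers=HEADERS,
    timeout=60,
)
response.raise_for_status()
print(response.json())
```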
set-up-auto-retraining
--- title: Deployment inventory description: Learn about the deployment inventory, which displays all actively deployed models and lets you monitor deployed model performance and take necessary action. --- # Deployment inventory {: #deployment-inventory } Once models are deployed, the deployment inventory is the central hub for deployment management activity. It serves as a coordination point for all stakeholders involved in operationalizing models. From the inventory, you can monitor deployed model performance and take action as necessary, as it provides an interface to all actively deployed models. ## Deployment lenses {: #deployment-lenses } There are two unique deployment lenses that modify the information displayed in the inventory: * The [Prediction Health lens](#prediction-health-lens) summarizes prediction usage and model status for all active deployments. * The [Governance lens](gov-lens) reports the operational and social aspects of all active deployments. To change deployment lenses, click the active lens in the top right corner and select a lens from the dropdown. ![](images/gov-lens-3.png) ### Prediction Health lens {: #prediction-health-lens } The Prediction Health lens is the default view of the deployment inventory, detailing prediction activity and model health for each deployment. Across the top of the inventory, the page summarizes the usage and status of all active deployments with [color-coded](#color-coded-health-indicators) health indicators. ![](images/deploy-tab-2.png) Beneath the summary is an individual report for each deployment. ![](images/deploy-tab-1.png) The following table describes the information available from the Prediction Health lens: | Category | Description | |----------------------|-------------| | Deployment Name | Name assigned to the deployment at creation, the type of prediction server used, and the project name (DataRobot models only). | | Service | [Service health](service-health) of the individual deployment. The [color-coded status](#color-coded-health-indicators) indicates the presence or absence of errors in the last 24 hours. | | Drift | [Data Drift](data-drift) occurring in the deployment. | | Accuracy | [Model accuracy](deploy-accuracy) evaluated over time. | | Activity | A bar graph indicating the pattern of predictions over the past seven days. The starting point is the same for each deployment in the inventory. For example, a new deployment will plot that day's activity and six (blank) days previous. | | Avg. Predictions/Day | Average number of predictions per day over the last seven days. | | Last Prediction | Elapsed time since the last prediction was made against the model. | | Creation Date | Elapsed time since the deployment was created. | | Actions | Menu of additional [model management activities](actions-menu), including adding data, replacing a model, setting data drift, and sharing and deleting deployments. | Click on any model entry in the table to [view details](dep-overview) about that deployment. Each model-specific page provides the above information in a status banner. ### Color-coded health indicators {: #color-coded-health-indicators } The [**Service Health**](service-health), [**Data Drift**](data-drift), and [**Accuracy**](deploy-accuracy) summaries in the top part of the display provide an at-a-glance indication of health and accuracy for all deployed models. To view this more detailed information for an individual model, click on the model in the inventory list. 
**Service Health Summary** measures the following error types over the last 24 hours. These are the **Data Error Rate** and **System Error Rate** errors recorded for an individual model on the **Service Health** tab. * 4xx errors indicate problems with the prediction request submission * 5xx errors indicate problems with the DataRobot prediction server Interpret the color indicators as follows: | Color | Service Health | Data Drift | Accuracy | Action | |-----------|----------------|-----------|----------------|-------------| | ![](images/icon-green.png) Green / Passing | Zero 4xx or 5xx errors | All attributes' distributions have remained similar since the model was deployed | Accuracy is similar to when the model was deployed. | No action needed. | | ![](images/icon-yellow.png) Yellow / At risk | At least one 4xx error and zero 5xx errors | At least one lower-importance attribute's distribution has shifted since the model was deployed. | Accuracy has declined since the model was deployed. | Concerns found but no immediate action needed; monitor. | | ![](images/icon-red.png) Red / Failing | At least one 5xx error | At least one higher-importance attribute's distribution has shifted since the model was deployed. | Accuracy has severely declined since the model was deployed. | Immediate action needed. | | ![](images/icon-gray.png) Gray / Unknown | No predictions made | Insufficient predictions made (min. 100 required). | Insufficient predictions made (min. 100 required) | [Make predictions](../../predictions/index). | ## Live inventory updates {: #live-inventory-updates } The inventory automatically refreshes every 30 seconds and updates the following information: ### Active Deployments The **Active Deployments** tile indicates the number of deployments currently in use. The legend interprets the bar below the active deployment count: * Your active deployments (blue) * Other active deployments (white) * Available new deployments (gray) !!! note Inactive deployments _do not_ count toward the allocated limit. ![](images/dep-ui-3.png) In the example above, the user's organization is allotted ten deployments. The user has seven active deployments, and there is one other active deployment in the organization. Users within the organization can create two more active deployments before reaching the limit. There are two inactive deployments not counted towards the deployment limit. If you're active in multiple organizations, under **Your active deployments**, you can see how many of those active deployments are in **This organization** or **Other organizations**: !!! note Your deployments in **Other organizations** _do not_ count toward the allocated limit in the current organization. ![](images/multi-organization-available-deployments.png) !!! info "Availability information" The availability information shown on the Active Deployments tile depends on the [MLOps configuration](pricing) for your organization. ### Predictions The Predictions tile indicates the number of predictions made since the last refresh. ![](images/dep-ui-2.png) Individual deployments show the number of predictions made on them during the last 30 seconds. ![](images/dep-ui-1.png) If a deployment's service health, drift, or accuracy status changes to Failing, the individual deployment will flash red to draw attention to it. ![](images/dep-ui-4.png) ### Sort deployments {: #sort-deployments } The deployment inventory is initially sorted by the most recent creation date (reported in the **Creation Date** column). 
You can click a different column title to sort by that metric instead. A blue arrow appears next to the sort column's title, indicating if the order is ascending or descending. !!! note When you sort the deployment inventory, your most recent sort selection persists in your local settings until you clear your browser's local storage data. As a result, the deployment inventory is usually sorted by the column you selected last. ![](images/inventory-1.png) You can sort in ascending or descending order by: * **Deployment Name** (alphabetically) * **Service**, **Drift**, and **Accuracy** (by status) * **Avg. Predictions/Day** (numerically) * **Last Prediction** (by date) * **Build Environment** (alphabetically) * **Creation Date** (by date) !!! note The list is sorted secondarily by the time of deployment creation (unless the primary sort is by **Creation Date**). For example, if you sorted by drift status, all deployments whose status is passing would be ordered from most recent creation to oldest, followed by failing deployments most recent to oldest. ### Filter deployments {: #filter-deployments } To filter the deployment inventory, select **Filters** at the top of the inventory page. ![](images/inventory-2.png) The filter menu opens, allowing you to select the criteria by which deployments are filtered. ![](images/inventory-3.png) | Filter | Description | |------------|-------| | Ownership | Filters by deployment [Owner](roles-permissions#deployment-roles). Select **Owned by me** to display only those deployments for which you have the Owner role. | | Activation Status | Filters by deployment [activity status](actions-menu#activating-a-deployment). Active deployments are able to monitor and return new predictions. Inactive deployments can only show insights and statistics about past predictions. | |Service Status| Filters by deployment [service health](service-health) status. Choose to filter by passing (![](images/icon-green.png)), at risk (![](images/icon-yellow.png)), or failing (![](images/icon-red.png)) status. If a deployment has never had service health enabled, then it will not be included when this filter is applied. | | Drift Status | Filters by deployment [data drift](data-drift) status. Choose to filter by passing (![](images/icon-green.png)), at risk (![](images/icon-yellow.png)), or failing (![](images/icon-red.png)) status. If a deployment previously had data drift enabled and reported a status, then the last-reported status is used for filtering, even if you later disabled data drift for that deployment. If a deployment has never had drift enabled, then it will not be included when this filter is applied. | Accuracy Status | Filters by deployment [accuracy](deploy-accuracy) status. Choose to filter by passing (![](images/icon-green.png)), at risk (![](images/icon-yellow.png)), or failing (![](images/icon-red.png)) status. If a deployment does not have accuracy information available, it is excluded from results when you apply the filter. | | Importance | Filters by the criticality of deployments, based on prediction volume, exposure to regulatory requirements, and financial impact. Choices include Critical, High, Moderate, and Low. | | Build environment | Filters by the environment in which the model was built. | !!! info "Availability information" The deployment inventory filtering options depend on the [MLOps configuration](pricing) for your organization. After selecting the desired filters, click **Apply Filters** to update the deployment inventory. 
The **Filters** link updates to indicate the number of filters applied. ![](images/inventory-4.png) You are notified if no deployments match your filters. To remove your filters, click the **Clear all 3 filters** shortcut, or open the filter dialog again and remove them manually. ![](images/inventory-5.png) ## Self-Managed AI Platform deployments with monitoring disabled {: #self-managed-ai-platform-deployments-with-monitoring-disabled } !!! info "Availability information" This section is only applicable to the Self-Managed AI Platform. If you are a Self-Managed AI Platform administrator interested in enabling model monitoring for deployments by implementing the necessary hardware, contact DataRobot Support. The use of DataRobot's monitoring functionality depends on having hardware with PostgreSQL and rsyslog installed. If you don't have these services, you will still be able to create, manage, and make predictions against deployments, but all monitoring-related functionality will be disabled automatically. When Deployment Monitoring is disabled, the **Deployments** inventory is still accessible, but monitoring tools and statistics are disabled. A notification at the top of the page informs you of the monitoring status. ![](images/disable-deploy-1.png) The [actions menu](actions-menu) on the **Deployments** inventory page still allows you to [share](actions-menu#share-a-deployment) or [delete](actions-menu#delete-a-deployment) a deployment and [replace](deploy-replace) a model. ![](images/disable-deploy-4.png) When you select a deployment, you can still access the predictions code snippet from the [**Predictions**](code-py) tab. ![](images/disable-deploy-3.png)
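If you prefer to work with the inventory programmatically, the DataRobot Python client exposes the same deployment list. The sketch below is illustrative only; the health attributes (`service_health`, `model_health`, `accuracy_health`) are assumptions that may differ by client version, so inspect the returned objects in your environment:

```python
import datarobot as dr

# Endpoint and token are placeholders for your own credentials.
dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_KEY")

# List deployments, mirroring the information shown in the inventory.
for deployment in dr.Deployment.list():
    print(deployment.label, deployment.id)
    # Assumed attribute names; confirm the exact health fields returned by
    # your client version.
    print("  service health:", deployment.service_health)
    print("  drift (model health):", deployment.model_health)
    print("  accuracy:", deployment.accuracy_health)
```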
deploy-inventory
--- title: Manage prediction environments description: On the Prediction Environments page, you can edit, delete, or share external prediction environments. You can also deploy models to external prediction environments. --- # Manage prediction environments On the **Deployments > Prediction Environments** page, you can edit, delete, or share external prediction environments. You can also deploy models to external prediction environments. ## Edit a prediction environment {: #edit-a-prediction-environment } To edit the prediction environment details you set when you created the environment and to assign a **Service Account**, navigate to the **Deployments** > **Prediction Environments** page and click the row containing the prediction environment you want to edit: ![](images/pred-env-edit.png) * **Name**: Update the external prediction environment name you set when creating the environment. * **Description**: Update the external prediction environment description or add one if you haven't already. * **Platform**: Update the external platform you selected when creating the external prediction environment. * **Service Account**: Select the account that should have access to each deployment within this prediction environment. Only owners of the current prediction environment are available in the list of service accounts. !!! note DataRobot recommends using an administrative service account as the account holder (an account that has access to each deployment that uses the configured prediction environment). ## Share a prediction environment {: #share-a-prediction-environment } The sharing capability allows [appropriate user roles](roles-permissions#custom-model-and-environment-roles) to grant permissions for prediction environments. When you have created a prediction environment and want to share it with others, select **Share** (![](images/icon-share.png)) from the dashboard. ![](images/pred-env-4.png) This takes you to the sharing window, which lists each associated user and their role. To remove a user, click the X button to the right of their role. ![](images/pred-env-5.png) To re-assign a user's role, click on the assigned role and assign a new one from the dropdown. ![](images/pred-env-6.png) To add a new user, enter their username in the **Share with** field and choose their role from the dropdown. Then click **Share**. ![](images/pred-env-7.png) This action initiates an email notification. ## Delete a prediction environment {: #delete-a-prediction-environment } To delete a prediction environment, take the following steps: 1. Navigate to the **Deployments > Prediction Environments** page. 2. Next to the prediction environment you want to delete, click the delete icon ![](images/icon-delete.png). 3. In the **Delete** dialog box: * If the prediction environment isn't associated with a deployment, click **Yes**. * If the prediction environment is associated with one or more deployments, click each of the deployments listed to access the **Deployments > Overview** page and remove the related deployment. Once the prediction environment is no longer associated with a deployment, you can delete the environment.
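For scripted housekeeping, recent versions of the DataRobot Python client expose prediction environments as well. The sketch below assumes a client version that provides `PredictionEnvironment` with `list`, `get`, and `delete` methods; the environment ID is a placeholder:

```python
import datarobot as dr

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_KEY")

# List existing external prediction environments (assumed API surface).
for env in dr.PredictionEnvironment.list():
    print(env.id, env.name, env.platform)

# Delete an environment that is no longer associated with any deployment;
# the ID is a placeholder.
stale_env = dr.PredictionEnvironment.get("PREDICTION_ENVIRONMENT_ID")
stale_env.delete()
```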
pred-env-manage
--- title: Add external prediction environments description: You can manage and control user access to environments on the prediction environment dashboard and specify the prediction environment for any deployment. --- # Add external prediction environments {: #add-external-prediction-environments } Models that run on your own infrastructure (outside of DataRobot) may be run in different environments and can have differing deployment permissions and approval processes. For example, while any user may have permission to deploy a model to a test environment, deployment to production may require a strict approval workflow and only be permitted by those authorized to do so. Prediction environments support this deployment governance by grouping deployment environments and supporting grouped deployment permissions and approval workflows. ## Add a new prediction environment {: #add-a-new-prediction-environment } You can create, manage, and share prediction environments across DataRobot. This allows you to specify the prediction environments used for both DataRobot models running on the [Portable Prediction Server](portable-pps) and remote models monitored by the [monitoring agent](mlops-agent/index). To deploy models on external infrastructure, you create a custom external prediction environment: 1. Click **Deployments** > **Prediction Environments** and then click **Add prediction environment**. ![](images/pred-env-3.png) 2. In the **Add prediction environment** dialog box, complete the following fields: ![](images/pred-env-2.png) Field | Description ------------|------------ Name | Enter a descriptive prediction environment name. Description | _Optional_. Enter a description of the external prediction environment. Platform | Select the external platform on which the model is running and making predictions. 3. Under **Supported Model Formats**, select one or more formats to control which models can be deployed to the prediction environment, either manually or using the management agent. The available model formats are [**DataRobot**](dr-model-prep/index) or [**DataRobot Scoring Code**](scoring-code/index), [**External Model**](ext-model-prep/index), and [**Custom Model**](custom-models/index). !!! important You can only select one of **DataRobot** or **DataRobot Scoring Code**. ![](images/pred-env-8.png) 4. _Optional_. If you want to manage your external model with DataRobot MLOps, click **Use Management Agent** to allow the [MLOps Management Agent](mgmt-agent/index) to automate the deployment, replacement, and monitoring of models in this prediction environment. 5. Once you configure the environment settings, click **Add environment**. The environment is now available from the **Prediction Environments** page. ## Select a prediction environment for a deployment {: #select-a-prediction-environment-for-a-deployment } After you add a prediction environment to DataRobot, you can [deploy a model](deploy-methods/index) and use the prediction environment for the deployment. Specify the prediction environment in the **Inference** section: !!! warning After you specify a prediction environment and create the deployment, you *cannot* change the prediction environment. ![](images/pred-env-1.png)
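You can also create an external prediction environment programmatically. The following sketch assumes a DataRobot Python client version that exposes `PredictionEnvironment.create`; the name, description, and platform string are placeholders to adapt to your setup:

```python
import datarobot as dr

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_KEY")

# Name, description, and platform are placeholders; confirm the platform
# values accepted by your client version.
pred_env = dr.PredictionEnvironment.create(
    name="Production Kubernetes cluster",
    platform="other",
    description="Self-hosted environment monitored by the MLOps agent",
)
print(pred_env.id)
```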
pred-env
--- title: Register external models description: Register an external model package in the Model Registry. The Model Registry is an archive of your model packages where you can also deploy and share the packages. --- # Register external models {: #register-external-models } To create a model package for an external model that is monitored by the [monitoring agent](mlops-agent/index), navigate to **Model Registry** > **Model Packages**. Click **Add new package** and select **New external model package**. ![](images/reg-create-3.png) In the resulting dialog box, complete the fields pertaining to the agent-monitored model from which you are retrieving statistics. ![](images/reg-create-4.png) The following table describes the fields: | Field | Description | |-------------------|-----------------| | Package Name | The name of the model package. | | Package Description (optional) | Information to describe the model package. | | Model location (optional) | The location of the model running outside of DataRobot. Describe the location as a filepath, such as folder1/opt/model.tar. | | Build environment | The programming language in which the model was built. | | Training data (optional) | The filename of the training data, uploaded locally or via the **AI Catalog**. Click **Clear selection** to upload and use a different file. | | Holdout data (optional) | The filename of the holdout data, uploaded locally or via the **AI Catalog**. Use holdout data to set an [accuracy baseline](#set-an-accuracy-baseline) and enable support for target drift and challenger models. | | Target | The dataset column name the model will predict on. | | Prediction type | The type of prediction the model is making, either binary classification or regression. For a classification model, you must also provide the positive and negative class labels and a prediction threshold. | | Prediction column | The column name in the holdout dataset containing the prediction result. | If registering a [time series](time/index) model, mark the checkbox **This is a time series model**. You must complete additional fields: ![](images/reg-create-5.png) | Field | Description | |-------------------|-----------------| | Forecast date feature | The column in the training dataset that contains date/time values used by DataRobot to detect the range of dates (the valid forecast range) available for use as the forecast point. | | Date/time format | The format used by the date/time features in the training dataset. | | Forecast point feature | The column in the training dataset that contains the point from which you are making a prediction. | | Forecast unit | The time unit (seconds, days, months, etc.) that comprises the [time step](glossary/index#time-step). | | Forecast distance feature | The column in the training dataset containing a unique time step (a relative position) within the forecast window. A time series model outputs one row for each forecast distance. | | Series identifier (optional, used for [multiseries models](multiseries)) | The column in the training dataset that identifies which series each row belongs to. | Once all fields for the model package are defined, click **Create package**. The package populates in the **Model Registry** and is available for use. ## Set an accuracy baseline {: #set-an-accuracy-baseline } To set an accuracy baseline for external models (which enables target drift and challenger models when deployed), you must provide holdout data.
This is because DataRobot cannot use the model to generate predictions that typically serve as a baseline, as the model is hosted in a remote prediction environment outside of the application. Provide holdout data when [registering](#register-external-models) an external model package and specify the column containing predictions.
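As an illustration, the holdout file might look like the hypothetical example below, with actual target values alongside a column of the external model's own predictions (all column names here are invented for the example; point the **Prediction column** field at the prediction column when registering the package):

```python
import pandas as pd

# Hypothetical holdout file: actual target values plus the external model's
# own predictions in a dedicated column.
holdout = pd.DataFrame(
    {
        "loan_amount": [12000, 4500, 30000],
        "term_months": [36, 24, 60],
        "is_bad": [1, 0, 0],                     # actual target values
        "model_prediction": [0.82, 0.11, 0.35],  # external model's scores
    }
)
holdout.to_csv("external_model_holdout.csv", index=False)
```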
ext-model-reg
--- title: Prepare external models description: Prepare to create deployments from external models --- # Prepare for external model deployment External prediction environments and model packages allow you to deploy external (or remote) models to DataRobot. These models can make predictions on local infrastructure or any other external environment while DataRobot performs monitoring and management through the MLOps agents. Topic | Describes ------|----------- [Add an external prediction environment](pred-env) | How to set up prediction environments on your own infrastructure, group prediction environments, and configure permissions and approval workflows. [Manage external prediction environments](pred-env-manage) | How to edit, delete, and share external prediction environments, or deploy models to external prediction environments. [Register external models](ext-model-reg) | How to register external models in the Model Registry. [Manage external model packages](ext-model-manage) | How to deploy, share, or archive external models from the Model Registry.
index
--- title: Manage external model packages description: The Model Packages Actions menu allows users with appropriate permissions to share or permanently archive model packages. --- # Manage external model packages {% include 'includes/manage-model-packages.md' %}
ext-model-manage
--- title: Prepare custom models description: Prepare to create deployments from custom models --- # Prepare custom models for deployment Custom inference models allow you to bring your own pretrained models to DataRobot. By uploading a model artifact to the Custom Model Workshop, you can create, test, and deploy custom inference models to a centralized deployment hub. DataRobot supports models built with a variety of programming languages, including Python, R, and Java. If you've created a model outside of DataRobot and you want to upload your model to DataRobot, you need to define two components: * **Model content**: The compiled artifact, source code, and additional supporting files related to the model. * **Model environment**: The Docker image where the model will run. Model environments can be either _drop-in_ or _custom_, containing a Dockerfile and any necessary supporting files. DataRobot provides a variety of built-in environments. Custom environments are only required to accommodate very specialized models and use cases. !!! note Custom inference models are _not_ custom DataRobot models. They are _user-defined_ models created outside of DataRobot and assembled in the Custom Model Workshop for deployment, monitoring, and governance. See the associated [feature considerations](#feature-considerations) for additional information. ## Custom Model Workshop Topic | Describes ------|----------- [Custom Model Workshop](custom-model-workshop/index) | How you can bring your own pretrained models into DataRobot as custom inference models and deploy these models to a centralized deployment hub. [Create custom models](custom-inf-model) | How to create custom inference models in the Custom Model Workshop. [Manage custom model dependencies](custom-model-dependencies) | How to manage model dependencies from the workshop and update the base drop-in environments to support your model code. [Manage custom model resource usage](custom-model-resource-mgmt) | How to configure the resources a model consumes to facilitate smooth deployment and minimize potential environment errors in production. [Add custom model versions](custom-model-versions) | How to create a new version of the model and/or environment after updating the file contents with new package versions, different preprocessing steps, updated hyperparameters, and more. [Add training data to a custom model](custom-model-training-data) | How to add training data to a custom inference model for deployment. [Add files from a remote repo to a custom model](custom-model-repos) | How to connect to a remote repository and pull custom model files into the Custom Model Workshop. [Test a custom model in DataRobot](custom-model-test) | How to test custom inference models in the Custom Model Workshop. [Manage custom models](custom-model-actions) | How to delete or share custom models and custom model environments. [Register custom models as model packages](custom-model-reg) | How to register custom inference models in the Model Registry. [Manage custom model packages](custom-model-manage) | How to deploy, share, or archive custom models from the Model Registry. ## Custom model assembly Topic | Describes ------|----------- [Custom model assembly](custom-model-assembly/index) | How to assemble the files required to run custom inference models. [Custom model components](custom-model-components) | How to identify the components required to run custom inference models.
[Assemble structured custom models](structured-custom-models) | How to use DRUM to assemble and validate structured custom models compatible with DataRobot. [Assemble unstructured custom models](unstructured-custom-models) | How to use DRUM to assemble and validate unstructured custom models compatible with DataRobot. [DRUM CLI tool](custom-model-drum) | How to download and install DataRobot user model (DRUM) to work with Python, R, and Java custom models and to quickly test custom models, and custom environments locally before uploading into DataRobot. [Test a custom model locally](custom-local-test) | How to test custom inference models in your local environment using the DataRobot Model Runner (DRUM) tool. ## Custom model environments Topic | Describes ------|----------- [Custom model environments](custom-model-environments/index) | How to select a custom model environment from the drop-in environments or create additional custom environments. [Drop-in environments](drop-in-environments) | How to select the appropriate DataRobot drop-in environment when creating a custom model. [Custom environments](custom-environments) | How to assemble, validate, and upload a custom environment. ## Feature considerations {: #feature-considerations } * The creation of deployments using model images cannot be canceled while in progress. * Inference models receive raw CSV data and must handle all preprocessing themselves. * A model's existing training data can only be _changed_ if the model is not actively deployed. This restriction is not in place when adding training data for the first time. Also, training data cannot be unassigned; it can only be changed once assigned. * The target name can only be changed if a model has no training data and has not been deployed. * There is a per-user limit on the number of custom model deployments (30), custom environments (30), and custom environment versions (30) you can have. * Custom inference model server start-up is limited to 3 minutes. * The file size for training data is limited to 1.5GB. * Dependency management only works with packages in a proper index. Packages from URLs cannot be installed. * Unpinned python dependencies are not updated once the dependency image has been built. To update to a newer version, you will need to create a new requirements file with version constraints. DataRobot recommends always pinning versions. * *SaaS AI Platform only*: Custom inference models have no access to the internet and outside networks.
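As a complement to the Custom Model Workshop topics above, the sketch below shows one way to create a custom inference model and attach a version from a local folder using the DataRobot Python client. The class and parameter names (`CustomInferenceModel.create`, `ExecutionEnvironment.list`, `CustomModelVersion.create_clean`) reflect recent client versions and may vary, so treat this as a starting point rather than a definitive recipe:

```python
import datarobot as dr

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_KEY")

# Create the custom inference model shell; the target and class labels are
# placeholders for your own data.
custom_model = dr.CustomInferenceModel.create(
    name="Churn scorer",
    target_type=dr.TARGET_TYPE.BINARY,
    target_name="churned",
    positive_class_label="yes",
    negative_class_label="no",
)

# Pick a DataRobot drop-in environment to run the model (the search_for
# filter is an assumption; adjust to your client version).
environment = dr.ExecutionEnvironment.list(search_for="Python 3")[0]

# Attach a version built from a local folder containing the model artifact
# and any supporting files (for example, custom.py).
version = dr.CustomModelVersion.create_clean(
    custom_model_id=custom_model.id,
    base_environment_id=environment.id,
    folder_path="path/to/model_folder",
)
print(version.id)
```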
index
--- title: Monitor an external model with the monitoring agent description: How to monitor an external model with the monitoring agent. --- # Monitor an external model with the MLOps agent {: #monitor-an-external-model-with-the-mlops-agent } With DataRobot MLOps you can register an external model, create an external prediction environment, and deploy the model to the external prediction environment you registered. Next, you can install and configure the monitoring agent alongside the external model, establishing a deployment scenario for that external model. Once installed and configured, the monitoring agent allows you to monitor models running externally as an MLOps deployment so that you can take advantage of DataRobot's powerful MLOps model management tools to monitor accuracy and data drift, prediction distribution, latency, and more. To install the MLOps agent to monitor an external model running in a prediction environment outside of DataRobot, follow the workflow outlined below: ``` mermaid graph TB A[Decide to monitor an existing external model] --> B[Register an external model package] B --> C{Create an external prediction environment?} C -->|No|E[Deploy the model to an external prediction environment] C --> |Yes|D[Add an external prediction environment] D --> E E --> F[Obtain the MLOps agent tarball and your API key] F --> G[Install and configure the monitoring agent] G --> H[Configure monitoring agent and MLOps library communication] ``` ## Decide to monitor an existing external model {: #decide-to-monitor-an-external-model } The monitoring agent is a solution for monitoring external models on your infrastructure while reporting statistics to DataRobot MLOps. The API used by the monitoring agent allows you to request specific data to report to a deployment you create in DataRobot. For more information, see the MLOps agent overview. [MLOps agent overview <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](../mlops-agent/monitoring-agent/index){ .md-button } ## Register an external model package {: #register-an-external-model-package } To report predictions metrics to an MLOps deployment in DataRobot, you must first register an external model's details as a model package in the DataRobot Model Registry; then, you can create an external MLOps deployment. [Register an external model package <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](ext-model-reg){ .md-button } ## Add an external prediction environment {: #add-an-external-prediction-environment } To create an external deployment, you need an external prediction environment. Create an external prediction environment if you don't already have one in DataRobot. [Add an external prediction environment <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](pred-env){ .md-button } ## Deploy the model to an external prediction environment {: #deploy-the-model-package-to-an-external-prediction-environment } To associate a model running externally with the external model package registered in the Model Registry, you must deploy the model from the Model Registry to an external prediction environment. After deploying this model externally, you can obtain the Model ID and Deployment ID from the external deployment's [Overview tab](dep-overview#content). The monitoring agent uses the Model ID and Deployment ID to report an external model's data to the deployment in DataRobot MLOps. 
[Deploy a model to an external prediction environment <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](deploy-external-model){ .md-button } ## Obtain the MLOps agent tarball and your API key {: #obtain-the-mlops-agent-tarball-and-your-api-key } The monitoring agent is a solution for monitoring external models on your infrastructure while reporting statistics to DataRobot MLOps. In the monitoring agent's configuration file, you must provide your MLOps URL and an API key. API keys are the preferred method for authenticating requests to the DataRobot API; they replace the legacy API token method. To use the monitoring agent, you must obtain the MLOps agent tarball and an API key from [DataRobot's developer tools](api-key-mgmt). [Get started with the MLOps agent <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](../mlops-agent/monitoring-agent/index#mlops-agent-tarball){ .md-button } ## Install and configure the MLOps agent {: #install-and-configure-the-mlops-agent } To monitor externally deployed models, you must implement the following software components included in the MLOps agent tarball download: * **The MLOps library**: Provides an API to communicate an external model's prediction data to the associated deployment in DataRobot (the [external deployment](#deploy-the-model-package-to-an-external-prediction-environment) of an [external model](#register-an-external-model-package) you created earlier). The function calls provided by the MLOps library allow you to request specific data that you want to report, including prediction time, the number of predictions, and other metrics and deployment statistics. The MLOps library writes this data to a spooler (or buffer) channel, from which the data can then be sent to DataRobot MLOps by either the monitoring agent or other MLOps library method calls. Libraries are available in Python 2, Python 3, and Java. * **The MLOps agent**: Monitors the spooler (or buffer) channel in a location you define when you [configure MLOps agent and library communication](spooler). The MLOps agent reads data from the spooler and reports to the associated deployment in DataRobot (the [external deployment](#deploy-the-model-package-to-an-external-prediction-environment) of an [external model](#register-an-external-model-package) you created earlier). Depending on your configuration, the agent can read and report this data manually or automatically. [Install and configure the MLOps agent <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](agent){ .md-button } ## Configure the MLOps agent and library spooler {: #configure-mlops-agent-and-library-spooler } The MLOps library communicates with the monitoring agent through a spooler, so it's essential that the library and the agent have matching spooler configurations. Some spooler configuration settings are required, and some are optional. You can configure these settings programmatically; settings configured through environment variables take precedence over those defined in configuration files. [Configure the monitoring agent and MLOps library spooler <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](spooler){ .md-button }
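To make the library/agent hand-off concrete, here is a minimal reporting sketch assuming the `datarobot-mlops` Python library and a filesystem spooler; the IDs, spooler path, feature names, and exact method signatures are assumptions to verify against the library version shipped in your agent tarball:

```python
import time
import pandas as pd
from datarobot.mlops.mlops import MLOps  # from the datarobot-mlops package

# The deployment ID, model ID, and spooler directory are placeholders; the
# spooler directory must match the monitoring agent's configuration so the
# agent can pick up and forward these records.
mlops = (
    MLOps()
    .set_deployment_id("EXTERNAL_DEPLOYMENT_ID")
    .set_model_id("EXTERNAL_MODEL_ID")
    .set_filesystem_spooler("/tmp/ta")
    .init()
)

features = pd.DataFrame({"amount": [120.5, 78.0], "region": ["EU", "US"]})

start = time.time()
predictions = [[0.84, 0.16], [0.22, 0.78]]  # stand-in for your model's output
elapsed_ms = (time.time() - start) * 1000

# Report throughput/latency and the prediction data itself.
mlops.report_deployment_stats(len(predictions), elapsed_ms)
mlops.report_predictions_data(features_df=features, predictions=predictions)
mlops.shutdown()
```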
ext-cus-model-ext-env
--- title: Scoring Code in an external environment description: How to deploy an exported DataRobot model's Scoring Code in an external environment. --- # Deploy Scoring Code in an external environment {: #deploy-scoring-code-in-an-external-environment } With DataRobot MLOps you can register a DataRobot model, create a prediction environment, and deploy that model's Scoring Code package to an external prediction environment, establishing a deployment scenario for that model outside DataRobot. You can download the monitoring agent packaged with [Scoring Code](scoring-code/index) from a deployment's Portable Predictions tab or the Deployments inventory. !!! note To access the Scoring Code package, make sure you train your model with Scoring Code enabled. Additionally, this package is only compatible with models running at the command line; it doesn't support models running on the [Portable Prediction Server](portable-pps). The monitoring agent packaged with the Scoring Code JAR file is configured for the deployment, allowing you to quickly integrate the agent to report model monitoring statistics back to DataRobot MLOps. To create and deploy a Scoring Code enabled AutoML model in an external environment, follow the workflow outlined below: ``` mermaid graph TB A[Select a Scoring Code enabled model] --> B[Register the model] B --> C{Create an external prediction environment?} C --> |No|D[Deploy the model to an external prediction environment] C --> |Yes|E[Add an external prediction environment] E --> D D --> F[Download the Java Scoring Code and monitoring agent package] ``` ## Select a Scoring Code enabled model {: #select-a-scoring-code-enabled-model } Only models compatible with scoring code (and trained with Scoring Code enabled) provide Scoring Code download as a Portable Prediction option. Scoring Code allows you to export DataRobot's AutoML-generated models as JAR files that you can use outside the platform. DataRobot automatically runs code generation for qualifying models and indicates Scoring Code availability with an [indicator badge](leaderboard-ref#tags-and-indicators) on the Leaderboard. [Select a Scoring Code enabled model <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](sc-overview){ .md-button } ## Register the model {: #register-the-model } DataRobot AutoML automatically generates models and displays them on the Leaderboard. The [model recommended for deployment](model-rec-process) appears at the top of the page. To obtain the Scoring Code you need for this process, you can register this or any other model from the Leaderboard as long as the model has the Scoring Code indicator. [Register a model <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](dr-model-reg){ .md-button } ## Deploy the model's Scoring Code externally {: #deploy-the-models-scoring-code-externally } To download a model's scoring code with the monitoring agent included and preconfigured, you must create an external MLOps deployment. ### Add an external prediction environment {: #add-an-external-prediction-environment } To create an external deployment, you need an external prediction environment. Create an external prediction environment if you don't already have one in DataRobot. 
[Add an external prediction environment <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](pred-env){ .md-button } ### Deploy the model to an external prediction environment {: #deploy-the-model-to-an-external-prediction-environment } Once you've added an external prediction environment, deploy your Scoring Code enabled model to that external prediction environment. [Deploy a model to an external prediction environment <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](deploy-model#deploy-from-the-model-registry){ .md-button } ### Download the Java Scoring Code and monitoring agent package {: #download-the-java-scoring-code-and-mlops-agent-package } You can download the monitoring agent packaged with [Scoring Code](scoring-code/index) from a deployment's Portable Predictions tab or the Deployments inventory. The monitoring agent that comes packaged with the Scoring Code JAR file is already configured for the deployment, allowing you to quickly integrate the agent from the command line using a snippet provided on the Scoring Code download page. [Download the Scoring Code package <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](agent-sc){ .md-button }
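If you script your workflow with the DataRobot Python client, you can also pull a model's plain Scoring Code JAR directly from the Leaderboard, as sketched below (the IDs are placeholders). Note that this Leaderboard download does not include the preconfigured monitoring agent; for the agent-bundled package, use the deployment's Portable Predictions tab as described above:

```python
import datarobot as dr

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_KEY")

# Project and model IDs are placeholders; the model must carry the
# Scoring Code indicator on the Leaderboard.
model = dr.Model.get(project="PROJECT_ID", model_id="MODEL_ID")
model.download_scoring_code("churn_model.jar")
```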
ext-dr-model-ext-env
--- title: DataRobot model in a DataRobot environment description: How to deploy a DataRobot model in a DataRobot environment. --- # Deploy a DataRobot model in a DataRobot environment {: #deploy-a-datarobot-model-in-a-datarobot-environment } DataRobot AutoML models allow you to deploy to a DataRobot-managed prediction environment. This deployment method is the most direct route to making predictions and monitoring, managing, and governing your model in a centralized deployment hub. To create and deploy an AutoML model on DataRobot, follow the workflow outlined below: ``` mermaid graph TB A{Deployment method?} --> |Leaderboard|B[Deploy a model from the Leaderboard]; A --> |Model registry|C[Register a model] C --> D[Deploy a model from the Model Registry] ``` ## Deploy a model from the Leaderboard {: #deploy-a-model-from-the-leaderboard } DataRobot AutoML automatically generates models and displays them on the Leaderboard. The [model recommended for deployment](model-rec-process) appears at the top of the page. You can deploy this (or any other) model directly from the Leaderboard to start making and monitoring predictions. When you create a deployment from a model, DataRobot automatically creates a model package for the deployed model. You can access the model package at any time in the Model Registry. [Deploy from the Leaderboard <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](deploy-model#deploy-from-the-leaderboard){ .md-button } ## Register a model {: #register-a-model } If you don't want to deploy immediately from the Leaderboard, you can add a model package to the Model Registry to deploy later. !!! note This method allows you to [share a model package](reg-action) or [generate compliance documentation](reg-compliance) before deploying a model. [Register a model <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](dr-model-reg){ .md-button } ## Deploy a model from the Model Registry {: #deploy-a-model-from-the-model-registry } After you've added a model to the Model Registry, you can deploy it at any time to start making and monitoring predictions. [Deploy from the Model Registry <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](deploy-model#deploy-from-the-model-registry){ .md-button }
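The same Leaderboard deployment can be scripted with the DataRobot Python client. The sketch below is illustrative; the model ID is a placeholder, and it assumes your account has at least one dedicated prediction server available:

```python
import datarobot as dr

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_KEY")

# Pick the prediction server the deployment should use.
prediction_server = dr.PredictionServer.list()[0]

# The model ID is a placeholder for a Leaderboard model from your project.
deployment = dr.Deployment.create_from_learning_model(
    model_id="LEADERBOARD_MODEL_ID",
    label="Churn model - production",
    description="Deployed from the Leaderboard",
    default_prediction_server_id=prediction_server.id,
)
print(deployment.id)
```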
dr-model-dr-env
--- title: Deployment workflows description: An overview of the most common DataRobot deployment workflows for various model and environment type combinations. --- # Deployment workflows DataRobot's MLOps monitoring is available for any models deployed in DataRobot prediction environments (including models on your own infrastructure using a Portable Prediction Server). With DataRobot MLOps, you can deploy models written in any open-source language or library and expose a production-quality REST API to support real-time or batch predictions. Custom inference models allow you to bring pre-trained models into DataRobot to make monitored predictions alongside DataRobot's models. In addition, you can configure monitoring for models running in external prediction environments with the MLOps agent. The workflows below provide high-level overviews of the most common deployment scenarios, including links to the relevant documentation for each step. ## Workflow types With the workflows provided for the common model and environment combinations below, you can learn to deploy DataRobot AutoML models and custom inference models to DataRobot prediction environments, either within DataRobot or containerized for external deployment. In addition, with the monitoring agent, you can monitor models deployed in completely external prediction environments: Model Type | Environment Type | Workflow ----------------|----------------------------------|----------------- DataRobot model | DataRobot prediction environment | [How to deploy a DataRobot model in a DataRobot prediction environment.](dr-model-dr-env) DataRobot model | Portable Prediction Server | [How to deploy a DataRobot model in a Portable Prediction Server (PPS).](dr-model-pps-env) Custom model | DataRobot prediction environment | [How to deploy a custom model in a DataRobot prediction environment.](cus-model-dr-env.md) Custom model | Portable Prediction Server | [How to deploy a custom model in a Portable Prediction Server (PPS).](cus-model-pps-env.md) Scoring Code | External prediction environment | [How to deploy exported DataRobot model Scoring Code in an external environment with monitoring agent enabled.](ext-dr-model-ext-env.md) External model | External prediction environment | [How to deploy an external model in an external prediction environment with monitoring agent enabled.](ext-cus-model-ext-env.md) ## Model types The model types referenced in the deployment workflows are defined below: Model type | Description -----------|------------ DataRobot model | A standard DataRobot model. Custom model | An external (Python, Java, or R) model assembled in the Custom Model Workshop. Scoring Code | A method for downloading select DataRobot models from the leaderboard for external deployments. Models downloaded this way are packaged as a Java Archive (JAR) file containing Java prediction calculation logic identical to the DataRobot API's calculation logic. However, Scoring Code predictions are made using a command-line interface (CLI) instead of API calls, allowing you to make low-latency predictions. External (remote) model | A model completely external to DataRobot, making predictions on local infrastructure or any other external environment. These models can be monitored by the MLOps agent, and deployment information can be reported to DataRobot MLOps. 
## Prediction environment types The prediction environments referenced in the deployment workflows are defined below: Prediction environment type | Description | Evaluation ----------------------------|-------------|------------ DataRobot prediction environment | The default DataRobot prediction environment on DataRobot infrastructure. | Provides the most straightforward deployment, prediction, monitoring, and model replacement processes. However, predictions are subject to network performance limitations. Portable Prediction Server | A containerized (with all resources required to run on any infrastructure) DataRobot prediction environment for DataRobot models to make predictions on your infrastructure with MLOps monitoring. | You can make API-based predictions on local infrastructure to improve performance for low-latency predictions. However, the deployment, prediction, monitoring, and model replacement processes are more complex. Custom model Portable Prediction Server | A containerized (with all resources required to run on any infrastructure) DataRobot prediction environment for Custom models to make predictions on your infrastructure with MLOps monitoring. The custom model PPS bundle contains a deployed custom model, a custom environment, and the monitoring agent. | You can make API-based predictions on local infrastructure to improve performance for low-latency predictions. However, the deployment, prediction, monitoring, and model replacement processes are more complex. External prediction environment | A prediction environment completely external to DataRobot and used to make predictions monitored by the monitoring agent and reported to DataRobot MLOps. | External predictions or Scoring Code predictions can be made on local infrastructure to improve performance for low-latency predictions. However, the deployment, prediction, monitoring, and model replacement processes are more complex.
index
--- title: DataRobot model in a PPS description: How to deploy a DataRobot model in a Portable Prediction Server. --- # Deploy a DataRobot model in a Portable Prediction Server {: #deploy-a-datarobot-model-in-a-portable-prediction-server } DataRobot AutoML models can be deployed to a containerized DataRobot prediction environment called a Portable Prediction Server (PPS). To deploy an AutoML model to a PPS, you can build models with AutoML, deploy a chosen model to an external prediction environment, and then deploy the model package in a PPS with monitoring enabled. Once deployed, you can monitor this portable model alongside models deployed in DataRobot prediction environments. To create and deploy an AutoML model in a PPS, follow the workflow outlined below: ``` mermaid graph TB A[Register a model] --> B{Create an external prediction environment?} B --> |No|C[Deploy the model to an external prediction environment] B --> |Yes|D[Add an external prediction environment] D --> C C --> E[Deploy the model package to a PPS] ``` ## Register a model {: #register-a-model } DataRobot AutoML automatically generates models and displays them on the Leaderboard. The [model recommended for deployment](model-rec-process) appears at the top of the page. You can register this (or any other) model to the Model Registry directly from the Leaderboard. [Register a model <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](dr-model-reg){ .md-button } ## Deploy the model externally to a PPS {: #deploy-the-model-to-a-pps } The Portable Prediction Server (PPS) is a solution for deploying a DataRobot model to an external prediction environment. You can download the PPS from the developer tools and use it to deploy a model package from the Model Registry. Once running, the PPS installation serves predictions via the DataRobot API. !!! note Depending on the [MLOps configuration](pricing) for your organization, you may be able to [download the PPS model package from the Leaderboard](portable-pps#leaderboard-download) for external deployment. However, without associating the model package with an external prediction environment, you won't be able to monitor the model's predictions. ### Optional: Add an external prediction environment {: #add-an-external-prediction-environment } To create an MLOps model deployment compatible with the PPS, you must add the model package to an external prediction environment. Create an external prediction environment if you don't already have one in DataRobot. [Add an external prediction environment <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](pred-env){ .md-button } ### Deploy the model package to an external prediction environment {: #deploy-the-model-package-to-an-external-prediction-environment } To create an MLOps deployment with an external prediction environment, deploy a model package to an external prediction environment. [Deploy a model to an external prediction environment <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](deploy-model#deploy-from-the-model-registry){ .md-button } ### Deploy the model package to a PPS {: #deploy-the-model-package-to-a-PPS } The model's PPS model package (`.mlpkg`) file and the command-line snippet used to initiate the PPS with monitoring are provided for any model tagged as having an [external prediction environment](pred-env) in the deployment inventory. 
You can download the model's PPS model package and use the provided docker commands to deploy the model with monitoring enabled. [Deploy a model to a PPS <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](portable-pps){ .md-button }
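Once the PPS container is running, it serves predictions over REST. The sketch below assumes a locally running PPS in single-model mode; the port and route are assumptions, so verify them against the command-line snippet provided with your model package download:

```python
import requests

# Assumes a PPS container running locally in single-model mode; the port and
# route are assumptions, so check them against the snippet provided with your
# model package download.
with open("scoring_data.csv", "rb") as f:
    response = requests.post(
        "http://localhost:8080/predictions",
        data=f,
        headers={"Content-Type": "text/csv; charset=UTF-8"},
    )

response.raise_for_status()
print(response.json())
```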
dr-model-pps-env
--- title: Custom model in a DataRobot environment description: How to deploy a custom model in a DataRobot prediction environment. --- # Deploy a custom model in a DataRobot Environment {: #deploy-a-custom-model-in-a-datarobot-environment } Custom inference models allow you to bring your pre-trained models into DataRobot. To deploy a custom model to a DataRobot prediction environment, you can create a custom model in the Custom Model Workshop. Then, you can prepare, test, and register that model, and deploy it to a centralized deployment hub where you can monitor, manage, and govern it alongside your deployed DataRobot models. DataRobot supports custom models built in various programming languages, including Python, R, and Java. To create and deploy a custom model in DataRobot, follow the workflow outlined below: ``` mermaid graph TB A[Create a custom model] --> B{Use a custom model environment?} B --> |Yes|C[Create a custom model environment] B --> |No|D[Prepare the custom model]; C --> D D --> E{Test locally?} E --> |No|H[Test the custom model in DataRobot] E --> |Yes|F[Install the DataRobot Model Runner] F --> G[Test the custom model locally] G --> H H --> I[Register the custom model] I --> J[Deploy the custom model] ``` ## Create a custom model {: #create-a-custom-model } Custom inference models are user-created, pre-trained models (made up of a collection of files) uploaded to DataRobot via the Custom Model Workshop. You can assemble custom inference models in either of the following ways: * Create a custom model *with* the webserver Scoring Code and `start_server.sh` shell file in the model folder. This type of custom model can be paired with a [custom environment](custom-environments#create-a-custom-environment), a [DataRobot drop-in environment](drop-in-environments), or a custom drop-in environment. * Create a custom model *without* the webserver Scoring Code and `start_server.sh` shell file in the model folder. This type of custom model *requires* a drop-in environment to provide the webserver Scoring Code and `start_server.sh` file used by the model. You can use the [drop-in environments provided by DataRobot](drop-in-environments), or you can [create a custom drop-in environment](custom-environments#create-a-custom-environment). [Create a custom model <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](custom-inf-model){ .md-button } ### Optional: Create a custom model environment {: #optional-create-a-custom-model-environment } If you decide to use a custom environment or a custom drop-in environment, you must create that environment in the Custom Model Workshop. You can reuse these environments for other custom models. You can assemble custom model environments in either of the following ways: * Create a custom drop-in environment *with* the webserver Scoring Code and a `start_server.sh` file for the model. DataRobot provides several [default drop-in environments](#drop-in-environments) in the Custom Model Workshop. * Create a custom environment *without* the webserver Scoring Code and `start_server.sh` file. Instead, you must provide the webserver Scoring Code and a `start_server.sh` file in the model folder for the custom model you intend to use with this environment. 
[Create a custom model environment <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](custom-environments){ .md-button } ## Prepare the custom model {: #prepare-the-custom-model } Before adding custom models and environments to DataRobot, you must prepare and structure the files required to run them successfully. The tools and templates necessary to prepare custom models are hosted in the [Custom Model GitHub repository](https://github.com/datarobot/datarobot-user-models){:target="_blank"} ({% include 'includes/github-sign-in.md' %}). Once you verify the model's files and folder structure, you can proceed to test the model. [Prepare a custom model <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](custom-model-assembly/index){ .md-button } ### Optional: Test locally {: #test-locally } The [DataRobot Model Runner (DRUM)](https://pypi.org/project/datarobot-drum/){:target="_blank"} is a tool you can use to work locally with Python, R, and Java custom models. It can verify that a custom model can run and make predictions before you add it to DataRobot. However, this testing is only for development purposes, and DataRobot recommends that you use the Custom Model Workshop to test any model you intend to deploy. [Test a custom model locally <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](custom-local-test){ .md-button } ### Test in DataRobot {: #test-in-datarobot } Testing the custom model in the Custom Model Workshop ensures that the model is functional before deployment. These tests use the model environment to run the model and make predictions with test data. !!! note While you can deploy your custom inference model without testing, DataRobot strongly recommends that you ensure your model passes testing in the Custom Model Workshop before deployment. [Test a custom model in DataRobot <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](custom-model-test){ .md-button } ## Register the custom model {: #register-the-custom-model } After successfully creating and testing a custom inference model in the Custom Model Workshop, you can add it to the Model Registry as a deployment-ready model package. [Register a custom model <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](custom-model-reg){ .md-button } ## Deploy the custom model {: #deploy-the-custom-model } After you register a custom inference model in the Model Registry, you can deploy it. Deployed custom models make predictions using API calls to a dedicated prediction server managed by DataRobot. [Deploy a custom model <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](deploy-custom-inf-model){ .md-button }.
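If you automate this workflow with the DataRobot Python client, testing and deployment can be scripted roughly as sketched below. The class and method names (`CustomModelTest.create`, `Deployment.create_from_custom_model_version`) and the `overall_status` attribute are assumptions based on recent client versions; all IDs are placeholders:

```python
import datarobot as dr

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_KEY")

# Run workshop testing against a small dataset already uploaded to the
# AI Catalog; all IDs are placeholders.
test = dr.CustomModelTest.create(
    custom_model_id="CUSTOM_MODEL_ID",
    custom_model_version_id="CUSTOM_MODEL_VERSION_ID",
    dataset_id="TEST_DATASET_ID",
)
print(test.overall_status)  # assumed attribute name

# Deploy the tested version to a DataRobot prediction server.
deployment = dr.Deployment.create_from_custom_model_version(
    custom_model_version_id="CUSTOM_MODEL_VERSION_ID",
    label="Custom churn scorer",
    default_prediction_server_id=dr.PredictionServer.list()[0].id,
)
print(deployment.id)
```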
cus-model-dr-env
--- title: Custom model in a PPS description: How to deploy a custom model in a Portable Prediction Server. --- # Deploy a custom model in a Portable Prediction Server {: #deploy-a-custom-model-in-a-portable-prediction-server } Custom inference models allow you to bring your pre-trained models into DataRobot through the Custom Model Workshop. DataRobot supports custom models built in various programming languages, including Python, R, and Java. Once you've created a custom model in DataRobot, you can deploy it to a containerized DataRobot prediction environment called a Portable Prediction Server (PPS). To deploy a custom model to a PPS, you can prepare and test it in the Custom Model Workshop, and then add it to the Model Registry. You can then deploy the custom model using a PPS bundle, which includes everything you need to deploy the model externally while monitoring it alongside models deployed within DataRobot. To create and deploy a custom model in a PPS, follow the workflow outlined below: ``` mermaid graph TB A[Create a custom model] --> B{Custom model environment?} B --> |Yes|C[Create a custom model environment] B --> |No|D[Prepare the custom model]; C --> D D --> E{Test locally?} E --> |No|H[Test the custom model in DataRobot] E --> |Yes|F[Install the DataRobot Model Runner] F --> G[Test the custom model locally] G --> H H --> I[Register the custom model] I --> J{Create an external prediction environment?} J --> |No|L[Deploy the custom model to an external prediction environment] J --> |Yes|K[Add an external prediction environment] K --> L L --> M[Deploy the custom model to a PPS] ``` ## Create a custom model {: #create-a-custom-model } Custom inference models are user-created, pre-trained models (made up of a collection of files) uploaded to DataRobot via the Custom Model Workshop. You can assemble custom inference models in either of the following ways: * Create a custom model *with* the webserver Scoring Code and `start_server.sh` shell file in the model folder. This type of custom model can be paired with a [custom environment](custom-environments#create-a-custom-environment), a [DataRobot drop-in](drop-in-environments) environment, or a custom drop-in environment. * Create a custom model *without* the webserver Scoring Code and `start_server.sh` shell file in the model folder. This type of custom model *requires* a drop-in environment to provide the webserver Scoring Code and `start_server.sh` file used by the model. You can use the [drop-in environments provided by DataRobot](drop-in-environments), or you can [create a custom drop-in environment](custom-environments#create-a-custom-environment). [Create a custom model <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](custom-inf-model){ .md-button } ### Optional: Create a custom model environment {: #optional-create-a-custom-model-environment } If you decide to use a custom environment or a custom drop-in environment, you must create that environment in the Custom Model Workshop. You can reuse any environments you create this way for other custom models. You can assemble custom model environments in either of the following ways: * Create a custom drop-in environment *with* the webserver Scoring Code and a `start_server.sh` file for the model. DataRobot provides several [default drop-in environments](#drop-in-environments) in the Custom Model Workshop. * Create a custom environment *without* the webserver Scoring Code and `start_server.sh` file. 
Instead, you must provide the webserver Scoring Code and a `start_server.sh` file in the model folder for the custom model you intend to use with this environment. [Create a custom model environment <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](custom-environments){ .md-button } ## Prepare the custom model {: #prepare-the-custom-model } Before adding custom models and environments to DataRobot, you must prepare and structure the files required to run them successfully. The tools and templates necessary to prepare custom models are hosted in the [Custom Model GitHub repository](https://github.com/datarobot/datarobot-user-models){:target="_blank"} ({% include 'includes/github-sign-in.md' %}). Once you verify the model's files and folder structure, you can proceed to test the model. [Prepare a custom model <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](custom-model-assembly/index){ .md-button } ### Optional: Test locally {: #test-locally } The [DataRobot Model Runner (DRUM)](https://pypi.org/project/datarobot-drum/){:target="_blank"} is a tool you can use to work locally with Python, R, and Java custom models. It can verify that a custom model can run and make predictions before you add it to DataRobot. However, this testing is only for development purposes, and DataRobot recommends that you use the Custom Model Workshop to test any model you intend to deploy. [Test a custom model locally <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](custom-local-test){ .md-button } ### Test in DataRobot {: #test-in-datarobot } Testing the custom model in the Custom Model Workshop ensures that the model is functional before deployment. These tests use the model environment to run the model and make predictions with test data. !!! note While you can deploy your custom inference model without testing, DataRobot strongly recommends that you ensure your model passes testing in the Custom Model Workshop before deployment. [Test a custom model in DataRobot <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](custom-model-test){ .md-button } ## Register the custom model {: #register-the-custom-model } After successfully creating and testing a custom inference model in the Custom Model Workshop, you can add it to the Model Registry as a deployment-ready model package. [Register a custom model <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](custom-model-reg){ .md-button } ## Deploy the custom model externally to a PPS {: #deploy-the-custom-model-to-a-pps } The custom model Portable Prediction Server (PPS) is a solution for deploying a custom model to an external prediction environment. The PPS is a downloadable tarball containing a deployed custom model, a custom model environment, and the MLOps monitoring agent. Once running, the PPS container serves predictions via the DataRobot API. ### Optional: Add an external prediction environment {: #add-an-external-prediction-environment } To create an MLOps custom model deployment compatible with the PPS bundle, you must add the custom model package to an external prediction environment. Create an external prediction environment if you don't already have one in DataRobot. 
[Add an external prediction environment <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](pred-env){ .md-button } ### Deploy the custom model package to an external prediction environment {: #deploy-the-custom-model-package-to-an-external-prediction-environment } To create an MLOps custom model deployment with an external prediction environment, deploy your custom model package to an external prediction environment. [Deploy a custom model to an external prediction environment <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](deploy-custom-inf-model){ .md-button } ### Deploy the custom model to a PPS {: #deploy-the-custom-model-to-a-PPS } The custom model PPS bundle is provided for any custom model tagged as having an [external prediction environment](pred-env) in the deployment inventory. You can download the custom model PPS bundle to deploy and monitor the custom model. [Deploy a custom model to a PPS <span style="vertical-align: sub">:material-arrow-right-circle:{.lg }</span>](custom-pps){ .md-button }
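Once the PPS container from the bundle is running, you can verify it by sending scoring data with any HTTP client. The following is a minimal sketch using Python's `requests` library; the URL is a placeholder rather than a documented default, so copy the exact prediction endpoint for your container from the PPS bundle instructions or the deployment's **Predictions** tab.

``` python
import requests

# Placeholder endpoint: substitute the prediction URL exposed by your running PPS container.
PPS_PREDICTION_URL = "http://localhost:8080/predictions"

# Send a CSV file of scoring data and print the JSON response.
with open("scoring_data.csv", "rb") as f:
    response = requests.post(
        PPS_PREDICTION_URL,
        data=f,
        headers={"Content-Type": "text/csv; charset=UTF-8"},
    )
response.raise_for_status()
print(response.json())
```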
cus-model-pps-env
--- title: Agent event log description: On a deployment's Service Health tab, you can view Management and Monitoring events. --- # Agent event log On a deployment's **Service Health** tab, under **Recent Activity**, you can view **Management** events (e.g., deployment actions) and **Monitoring** events (e.g., spooler channel and rate limit events). The **Monitoring** events can help you quickly diagnose MLOps agent issues. The spooler channel error events can help you diagnose and fix [spooler configuration](spooler) issues. The rate limit enforcement events can help you identify if service health stats, data drift values, or accuracy values aren't updating because you exceeded the API request rate limit. ## Enable agent event log To view **Monitoring** events, you must provide a `predictionEnvironmentID` in the agent configuration file (`conf\mlops.agent.conf.yaml`) as shown below. If you haven't already installed and configured the MLOps agent, see the [Installation and configuration](agent) guide. ``` yaml linenums="1" hl_lines="23 24 25" # This file contains configuration for the MLOps agent # URL to the DataRobot MLOps service mlopsUrl: "https://<MLOPS_HOST>" # DataRobot API token apiToken: "<MLOPS_API_TOKEN>" # Execute the agent once, then exit runOnce: false # When dryrun mode is true, do not report the metrics to MLOps service dryRun: false # When verifySSL is true, SSL certification validation will be performed when # connecting to MLOps DataRobot. When verifySSL is false, these checks are skipped. # Note: It is highly recommended to keep this config variable as true. verifySSL: true # Path to write agent stats statsPath: "/tmp/tracking-agent-stats.json" # Prediction Environment served by this agent. # Events and errors not specific to a single deployment are reported against this Prediction Environment. predictionEnvironmentId: "<PE_ID_FROM_DATAROBOT_UI>" # Number of times the agent will retry sending a request to the MLOps service on failure. httpRetry: 3 # Http client timeout in milliseconds (30sec timeout) httpTimeout: 30000 # Number of concurrent http request, default=1 -> synchronous mode; > 1 -> asynchronous httpConcurrentRequest: 10 # Number of HTTP Connections to establish with the MLOps service, Default: 1 numMLOpsConnections: 1 # Comment out and configure the lines below for the spooler type(s) you are using. # Note: the spooler configuration must match that used by the MLOps library. # Note: Spoolers must be set up before using them. # - For the filesystem spooler, create the directory that will be used. # - For the SQS spooler, create the queue. # - For the PubSub spooler, create the project and topic. # - For the Kafka spooler, create the topic. channelConfigs: - type: "FS_SPOOL" details: {name: "filesystem", directory: "/tmp/ta"} # - type: "SQS_SPOOL" # details: {name: "sqs", queueUrl: "your SQS queue URL", queueName: "<your AWS SQS queue name>"} # - type: "RABBITMQ_SPOOL" # details: {name: "rabbit", queueName: <your rabbitmq queue name>, queueUrl: "amqp://<ip address>", # caCertificatePath: "<path_to_ca_certificate>", # certificatePath: "<path_to_client_certificate>", # keyfilePath: "<path_to_key_file>"} # - type: "PUBSUB_SPOOL" # details: {name: "pubsub", projectId: <your project ID>, topicName: <your topic name>, subscriptionName: <your sub name>} # - type: "KAFKA_SPOOL" # details: {name: "kafka", topicName: "<your topic name>", bootstrapServers: "<ip address 1>,<ip address 2>,..."} # The number of threads that the agent will launch to process data records. 
agentThreadPoolSize: 4 # The maximum number of records each thread will process per fetchNewDataFreq interval. agentMaxRecordsTask: 100 # Maximum number of records to aggregate before sending to DataRobot MLOps agentMaxAggregatedRecords: 500 # A timeout for pending records before aggregating and submitting agentPendingRecordsTimeoutMs: 5000 ``` ## View agent activity {: #view-monitoring-agent-activity } To view the agent event log, on the **Service Health** tab, navigate to the **Recent Activity** section. The most recent events appear at the top of the list. ### Event information Each event shows the time it occurred, a description, and an icon indicating its status: | Status icon | Description | | ----------- | ----------- | | ![](images/icon-green.png) Green / Passing | No action needed. | | ![](images/icon-red.png) Red / Failing | Immediate action needed. | | ![](images/icon-info.png) Gray / Informational | Details a deployment action (e.g., deployment launch has started). | ### Recent activity log In the **Recent Activity** log, you can filter the activity list and access additional information: ![](images/monitor-mgmt-activity.png) | Element | Description | |---|---| | ![](images/icon-1.png) | Set the **Event Type** filter to limit the list to **Management** events (e.g., deployment actions) or **Monitoring** events (e.g., spooler channel and rate limit events). | | ![](images/icon-2.png) | Click an event in the log to view additional **Event Details** for that event. The **Event Details** include the **Event** name, a **Timestamp**, a **Channel Name**, the event **Type**, the associated **Prediction Environment**, and an event **Message**.| | ![](images/icon-3.png) | Click the **Prediction Environment** name to open the [**Prediction Environments**](pred-env) tab, where you can create, manage, and share prediction environments.| ### Monitoring events **Monitoring** events can help you diagnose and fix MLOps agent issues. Currently, the following events can appear in the **Recent Activity** log: | Event | Description | | ----- | ----------- | | **Monitoring Spooler Channel** | Identify [spooler configuration](spooler) issues so you can resolve them. | | **Rate limit was enforced** | Identify when an operation exceeds API request rate limits, resulting in updates to service health stats, data drift calculations, or accuracy calculations stalling. This event reports how long the affected operation is suspended. Rate limits are applied per deployment, per operation. | ??? note "What are the rate limits for the deployments API?" | Operation | Endpoint (POST) | Limit | | --------- | --------------- | ----- | | Submit Metrics (Service Health) | `api/v2/deployments/<id>/predictionRequests/fromJSON/` | 1M requests / hour | | Submit Prediction Results (Data Drift) | `api/v2/deployments/<id>/predictionInputs/fromJSON/` | 1M requests / hour | | Submit Actuals (Accuracy) | `api/v2/deployments/<id>/actuals/fromJSON/` | 40 requests / second |
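For reference, the rate-limited endpoints in the table above can be called with any HTTP client. The sketch below submits actuals from Python with the `requests` library; the host, deployment ID, token, and payload field names (`associationId`, `actualValue`) are assumptions to confirm against the DataRobot API reference for your release, while the path matches the endpoint listed above.

``` python
import requests

DATAROBOT_HOST = "https://app.datarobot.com"  # placeholder; use your DataRobot host
DEPLOYMENT_ID = "<deployment ID>"
API_TOKEN = "<your API token>"

# Assumed payload shape: one record per prediction, joined to predictions by association ID.
payload = {
    "data": [
        {"associationId": "order-0001", "actualValue": 1},
        {"associationId": "order-0002", "actualValue": 0},
    ]
}

response = requests.post(
    f"{DATAROBOT_HOST}/api/v2/deployments/{DEPLOYMENT_ID}/actuals/fromJSON/",
    json=payload,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
)
response.raise_for_status()
print(response.status_code)  # a 2xx status indicates the actuals were accepted for processing
```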
agent-event-log
--- title: MLOps agents description: Use the MLOPS agents to monitor and manage models running outside of DataRobot MLOps, and report predictions from these models as part of MLOps deployments. Learn about the MLOps agent workflows for DataRobot deployments and for remote deployments. --- # MLOps agents {: #mlops-agents } !!! info "Availability information" The MLOps agent feature is exclusive to DataRobot MLOps. Contact your DataRobot representative for information on enabling it. DataRobot MLOps provides powerful tools for tracking and managing models for prediction. But what if you already have&mdash;or need to have&mdash;deployments running in your own environment? How can you monitor external models that may have intermittent or no connectivity and so may report predictions sporadically? The MLOps agents allow you to monitor and manage external models&mdash;those running outside of DataRobot MLOps. With this functionality, predictions and information from these models can be reported as part of MLOps deployments. You can use the same powerful [model management tools](deploy-inventory) to monitor accuracy, data drift, prediction distribution, latency, and more, regardless of where the model is running. Data provided to DataRobot MLOps provides valuable insight into the performance and health of those externally deployed models. The MLOps agents provide: * The ability to manage, monitor, and get insight from all model deployments in a single system * API and communications constructed to ensure little or no latency when monitoring external models * Support for deployments that are always connected to the network and the MLOps system, as well as partially or never-connected deployments * The MLOps library (available in Python and Java), which can be used to monitor models written natively in those languages or to report the input and output of a model artifact in any language * Configuration with the [Portable Prediction Server](portable-pps) See the associated [feature considerations](#feature-considerations) for additional information. ## Monitoring agent {: #monitoring-agent } Topic | Describes ------|----------- [Installation and configuration](agent) | How to install and configure the monitoring agent. [Examples directory](agent-ex) | How to access and run monitoring agent code examples. [Use cases](agent-use) | How to configure the monitoring agent to support various use cases. [Environment variables](env-var) | How to configure the monitoring agent environment variables, including those required for a containerized configuration. [Library and agent spooler configuration](spooler) | How to configure the MLOps library and agent to communicate through various spoolers (or buffers). [Download Scoring Code](agent-sc) | How to download model Scoring Code packaged with the monitoring agent. [Monitor external multiclass deployments](agent-multi) | How to monitor external multiclass deployments. ## Management agent {: #management-agent } Topic | Describes ------|----------- [Installation and configuration](mgmt-agent-install) | How to install and configure the management agent. [Configure environment plugins](mgmt-agent-plugins) | How to use the example environment plugins as a starting point to configure the management agent for various prediction environments. [Installation for Kubernetes](mgmt-agent-kubernetes) | How to use a Helm chart to aid in the installation and configuration of the management agent and Kubernetes plugin. 
[Deployment status and events](mgmt-agent-events-status) | How to monitor the status and health of management agent deployments from the deployment inventory. [Relaunch deployments](mgmt-agent-relaunch) | How to relaunch management agent deployments. [Force delete deployments](mgmt-agent-delete) | How to delete a management agent deployment without waiting for the resolution of the deployment deletion request sent to the management agent. ## Feature considerations * The MLOps agents run on Linux only; Windows environments are not supported. * The MLOps agents' releases are backward compatible with the last two versions of DataRobot (for example, monitoring agent version 9.2 is compatible with DataRobot 9.0 and above).
index
--- title: Register DataRobot models description: Register a model package in the Model Registry. The Model Registry is an archive of your model packages where you can also deploy and share the packages. --- # Register DataRobot models from the Leaderboard {: #register-datarobot-models-from-the-leaderboard } The Model Registry is an organizational hub for the variety of models used in DataRobot. Models are registered as deployment-ready model packages; the registry lists each package available for use. To add a DataRobot model trained with AutoML to the Model Registry: 1. Select the model from the Leaderboard. 2. Click **Predict > Deploy**, and then click **Add to Model Registry**. ![](images/reg-comp-1.png) You can later [deploy the model from the Model Registry](deploy-model#deploy-from-the-model-registry).
dr-model-reg
--- title: Prepare DataRobot models description: Prepare to create deployments from DataRobot models --- # Prepare DataRobot models for deployment DataRobot AutoML models allow you to deploy to a DataRobot-managed prediction environment. This deployment method is the most direct route to making predictions and monitoring, managing, and governing your model in a centralized deployment hub. In addition, DataRobot AutoML models can be deployed to a containerized DataRobot prediction environment called a Portable Prediction Server (PPS). To prepare to deploy an AutoML model to a PPS, you can build models with AutoML and register the DataRobot model. Topic | Describes ------|----------- [Register DataRobot model packages](dr-model-reg) | How to add a DataRobot model to the Model Registry from the Leaderboard. [Manage model packages](dr-model-manage) | How to deploy, share, or archive DataRobot models from the Model Registry.
index
--- title: Manage DataRobot model packages description: The Model Packages Actions menu allows users with appropriate permissions to share or permanently archive model packages. --- # Manage DataRobot model packages {% include 'includes/manage-model-packages.md' %}
dr-model-manage
--- title: Model Registry description: How model packages are created and added to the Model Registry, manually or automatically. The Model Registry is an archive of your model packages where you can deploy and share packages. --- # Model Registry {: #model-registry } The Model Registry is an organizational hub for the variety of models used in DataRobot. Models are registered as deployment-ready model packages; the registry lists each package available for use. Each package functions the same way, regardless of the origin of its model. In the Model Registry, you can [generate model compliance documentation from model packages](reg-compliance) and [deploy, share, or archive models](reg-action). The Model Registry also contains the [Custom Model Workshop](custom-model-workshop/index), where you can create, deploy, and register custom models. ## Create model packages {: #create-model-packages } Model packages (model artifacts with associated metadata) can be created [manually](#create-model-packages-manually) or [automatically](#create-model-packages-automatically), depending on the type of model. All model packages are created and stored in the Model Registry, where you can deploy and share them. Model packages are available for any models created in AutoML and prepared for deployment. ### Create model packages manually {: #create-model-packages-manually } The following sections describe the steps necessary for manually creating model packages for custom inference models and external models. Custom inference models are [created and tested](custom-inf-model) in the Custom Model Workshop, and external models operate outside of DataRobot. Manual model package creation: * [Register DataRobot models](dr-model-reg) * [Register custom inference models](custom-model-reg) * [Register external models](ext-model-reg) ### Create model packages automatically {: #create-model-packages-automatically } The following sections describe the steps necessary to trigger the automatic creation of model packages for [custom inference models](custom-inf-model) and models provided when [adding a new deployment](deploy-methods/index). Automatic model package creation: * Deploy a custom model * Create a deployment via the “Add deployment” action on the Deployments page #### Deploy a custom model {: #deploy-a-custom-model } When you [deploy a custom model](deploy-custom-inf-model), DataRobot automatically creates a model package, which you can access in the **Model Registry** under the **Model Packages** tab. The deployment you create also uses this model package. #### Deploy from the inventory {: #deploy-from-the-inventory } When you [create a new deployment](deploy-methods/index) with any type of model, DataRobot automatically creates a model package for the model being deployed. You can access it in the **Model Registry** under the **Model Packages** tab.
reg-create
--- title: Generate model compliance documentation description: Generate automated compliance documentation for models from the Model Registry. --- # Generate model compliance documentation {: #generate-model-registry-compliance-documentation } After you [create a model package](reg-create) in the Model Registry (the inventory), you can generate automated compliance documentation for the model. The compliance documentation provides evidence that the components of the model work as intended, the model is appropriate for its intended business purpose, and the model is conceptually sound. This individualized model documentation is especially important for highly regulated industries. For the banking industry, for example, the report can help complete the Federal Reserve System's [SR 11-7: Guidance on Model Risk Management](https://www.federalreserve.gov/supervisionreg/srletters/sr1107.htm){ target=_blank }. !!! tip You can also generate compliance documentation by selecting a model on the Leaderboard and clicking the [Compliance tab](compliance). After you generate the compliance documentation, you can view it or download it as a Microsoft Word (DOCX) file and edit it further. You can also create [specialized templates](template-builder) for your organization. !!! note When model packages are shared with users, all generated compliance documentation for the model is also shared. ## Generate compliance documentation {: #generate-compliance-documentation } 1. Create a model package if it is not yet in the inventory (Model Registry). To create a model package, you can: * [Create a model package from the Leaderboard](reg-create#create-a-model-package-from-the-Leaderboard). * [Add a custom inference model](reg-create#add-a-custom-inference-model). * [Register an external model](reg-create#register-external-model-packages). * [Create a model package automatically during deployment](reg-create#create-model-packages-automatically). 2. Click **Model Registry > Model Packages** and select a model package. ![](images/reg-compliance-model-package.png) 3. Click the **Compliance** tab, select a **Report template**, and then click **Create Report**. ![](images/reg-compliance-tab.png) The default template is the Automated Compliance Document template. You can instead [create a custom report template](template-builder) and select that template. !!! warning "Compliance documentation for custom models without null imputation support" To generate the Sensitivity Analysis section of the default **Automated Compliance Document** template, your custom model must support null imputation (the imputation of NaN values), or compliance documentation generation will fail. If the custom model doesn't support null imputation, you can use a specialized template to generate compliance documentation. In the **Report template** drop-down list, select **Automated Compliance Document (for models that do not impute null values)**. This template excludes the Sensitivity Analysis report and is only available for custom models. If this template option is not available for your version of DataRobot, you can download the <a href="custom-template-for-models-without-null-imputation-regression.json" download>custom template for regression models</a> or the <a href="custom-template-for-models-without-null-imputation-binary.json" download>custom template for binary classification models</a>. Generating Compliance Documentation requires DataRobot to execute many dependent insight tasks. This can take several minutes to complete. 
The documentation appears below when complete: ![](images/reg-compliance-download.png) 4. After the compliance documentation is generated, you can: * Preview the report by clicking the eye (![](images/icon-eye.png)) icon. * Download a Microsoft Word (DOCX) version of the report by clicking the download (![](images/icon-down.png)) icon. * Delete the report by clicking the trash can (![](images/icon-trash.png)) icon. ## Feature considerations Consider the following when generating compliance documentation: * Compliance documentation, in most cases, is not available for managed AI Platform users. Interested users should contact their DataRobot representative for additional information. * Compliance documentation is available for the following project types: * Binary * Multiclass * Regression * Anomaly detection for time series projects with DataRobot models, but _not_ for non-time series unsupervised mode.
reg-compliance
--- title: Manage model packages description: The Model Registry Actions menu allows users with appropriate permissions to share or permanently archive model packages. --- # Manage model packages {: #manage-model-packages } {% include 'includes/manage-model-packages.md' %}
reg-action
--- title: Import model packages into MLOps description: Export a model created with DataRobot AutoML for import as a model package (.mlpkg file) in standalone MLOps environments. section_name: MLOps maturity: public-preview platform: self-managed-only --- # Import model packages into MLOps {: #import-model-packages-into-mlops } !!! info "Availability information" This feature is _only available for Self-Managed AI Platform users_ who require MLOps and AutoML to run in separate environments. The process outlined requires multiple feature preview flags. Contact your DataRobot representative for more information about this configuration. **Feature flags**: Contact your DataRobot representative. Models created with DataRobot AutoML can be [exported](#export-a-model-from-automl) as a model package (.mlpkg file). This allows you to [import](#import-a-model-package-to-a-datarobot-mlops-only-environment) a model package into standalone environments like DataRobot MLOps to make predictions and monitor the model. You can also [create a new deployment in MLOps](#deploy-a-model-package-in-mlops) by importing a model package. ## Export a model from AutoML {: #export-a-model-from-automl } You can export models created with AutoML from the **Deploy** tab on the model's **Predict** page. !!! note The **MLOps Package** option on the **Predict** > **Downloads** tab directs you to **Open the Deploy tab**, where you can deploy the model, add it to the **Model Registry**, or download the model package. To export your model as a model package (.mlpkg) file from DataRobot AutoML, to add it to the **Model Registry**, or to deploy it directly to the **Deployments** inventory, take the following steps: 1. On the **Leaderboard**, click the model you want to export. 2. Click **Predict** > **Deploy**. 3. On the **Deploy** tab, there are three options available: ![](images/model-pkg-6.png) | | Element | Description | |-|---------|-------------| |![](images/icon-1.png) | Deploy model | Deploy your model to the [**Deployments** inventory](deploy-inventory) in DataRobot AutoML. | |![](images/icon-2.png) | Add to Model Registry | Register your model as a model package in the [**Model Registry**](registry/index) to deploy later or to use the model package to replace a model for an existing deployment (if the model package is eligible). | |![](images/icon-3.png) | Download .mlpkg | Generate and download your model package for deployment creation with DataRobot MLOps. | 4. Click **Download .mlpkg** to prepare the model package for export. View your progress in the Worker Queue under **Processing**. ![](images/model-pkg-7.png) After DataRobot finishes generating the model package, the download begins automatically, appearing in the downloads bar when complete. You now have an exported model package, fully capable of deployment to a different environment (such as DataRobot MLOps). ## Import a model package to a DataRobot MLOps-only environment {: #import-a-model-package-to-a-datarobot-mlops-only-environment } To add an exported .mlpkg file to DataRobot MLOps as a [model package](reg-create): 1. Click **Model Registry** and then click **Model Packages**. 2. On the **Model Packages** tab, click **Add new package** and then click **Import model package file (.mlpkg)**. 3. Browse for and upload, or drag-and-drop, the .mlpkg file you exported from DataRobot AutoML. ![](images/reg-transfer-2.png) The model package is uploaded and extracted. 4.
When this process completes, DataRobot adds your model package to the **Model Packages** tab, complete with the metadata for your model package. ![](images/reg-transfer-3.png) ## Deploy a model package in MLOps {: #deploy-a-model-package-in-mlops } To import your model into DataRobot MLOps, you can add it as a new [deployment](deploy-methods/index). 1. Navigate to the **Deployments** page and click **Add deployment**. ![](images/model-pkg-4.png) 2. Under the **Add a model** header, click **Browse** and click **Local file** to upload your model package. You can also drag and drop a model package file into the **Add a model** box. ![](images/model-pkg-3.png) 3. After you upload a model, the **Deployments** tab opens. !!! note The information under the **Model** header appears automatically, as your model package contains that metadata. The model package also supplies the training data; you don't need to provide that information on this page. You can, however, add outcome data after you deploy the model. 4. Configure the [deployment creation settings](add-deploy-info) and decide if you want to allow [data drift](data-drift) tracking or require an [association ID](accuracy-settings.md#association-id) in prediction requests. 5. When you have added information about your data and your model is fully defined, you can click **Deploy model** at the top of the screen.
reg-transfer
--- title: Register models description: The Model Registry organizes all models used in DataRobot as deployment-ready model packages. All packages function the same way, regardless of model origin. --- # Register models {: #register-models } In the Model Registry, models are registered as deployment-ready model packages; the registry lists each package available for use. Each package functions the same way, regardless of the origin of its model. The Model Registry also contains the [Custom Model Workshop](custom-model-workshop/index), where you can create, deploy, and register custom models. Model packages in [the Model Registry](reg-create) can be created manually or automatically, depending on the model type. Custom inference model packages and external model packages are exclusive to MLOps. !!! info "Availability information" Contact your DataRobot representative for information on enabling MLOps-exclusive model package options. === "SaaS" Topic | Describes ------|----------- [Model Registry](reg-create) | How DataRobot AutoML models, custom inference models, and external models are automatically or manually added to the Model Registry. [Register DataRobot Models](dr-model-reg) | How to manually add a DataRobot model to the Model Registry from the Leaderboard. [Register custom inference models](custom-model-reg) <br> _(MLOps only)_ | How to manually register custom inference models in the Model Registry. [Register external models](ext-model-reg) <br> _(MLOps only)_ | How to manually register external models in the Model Registry. [Manage model packages](reg-action) | How to deploy, share, or archive models from the Model Registry. [Generate model compliance documentation](reg-compliance) | How to generate model compliance documentation from model packages in the Model Registry. [Custom Model Workshop](custom-model-workshop/index) | How to bring your own pretrained models into the Model Registry as custom inference models. | === "Self-Managed" Topic | Describes ------|----------- [Model Registry](reg-create) | How DataRobot AutoML models, custom inference models, and external models are automatically or manually added to the Model Registry. [Register DataRobot Models](dr-model-reg) | How to manually add a DataRobot model to the Model Registry from the Leaderboard. [Register custom inference models](custom-model-reg) <br> _(MLOps only)_ | How to manually register custom inference models in the Model Registry. [Register external models](ext-model-reg) <br> _(MLOps only)_ | How to manually register external models in the Model Registry. [Manage model packages](reg-action) | How to deploy, share, or archive models from the Model Registry. [Generate model compliance documentation](reg-compliance) | How to generate model compliance documentation from model packages in the Model Registry. [Custom Model Workshop](custom-model-workshop/index) | How to bring your own pretrained models into the Model Registry as custom inference models. | [Import .mlpkg files exported from DataRobot AutoML](reg-transfer) | How to transfer .mlpkg files from DataRobot AutoML to DataRobot MLOps.
index
--- title: Deploy external models description: How to deploy external models by registering and deploying a model package or by uploading training data for the external model directly. --- # Deploy external models {: #deploy-external-models } You can deploy external (remote) models using either of the following methods: * [Deploy an external model package](#deploy-an-external-model-package). * [Deploy an external model by uploading historical training data](#deploy-an-external-model-by-uploading-training-data). After you deploy, you can use the [monitoring agent](mlops-agent/index) to monitor the external deployment. ## Deploy an external model package {: #deploy-an-external-model-package } This section outlines how to create a deployment with a model package for an external (remote) model. Before proceeding, make sure you have [registered your external model package](reg-create#register-external-model-packages) in the **Model Registry**. !!! note To send predictions, first configure the [monitoring agent](mlops-agent/index). Refer to the agent's documentation for configuration information. 1. Navigate to **Model Registry** > **Model Packages** and select **Deploy** from the action menu for the external model package you wish to deploy. ![](images/deploy-external-model-pkg-deploy.png) 2. Add [deployment information and complete the deployment](add-deploy-info). Once you create an external deployment, there are two options for additional configuration. You can: * [Upload historical prediction data](add-prediction-data-post-deploy) to the deployment to analyze data drift and accuracy in the past. * Instrument the deployment with the [monitoring agent](mlops-agent/index) to monitor future predictions. To do so, navigate to the [Predictions tab](code-py#monitoring-snippet) to access the monitoring snippet. If you add prediction data for scoring in the **Predictions** tab, you must include the required features for time series predictions in the prediction dataset: * `Forecast Distance`: Supplied by DataRobot when you download the .mlpkg file. * `dr_forecast_point`: Supplied by DataRobot when you download the .mlpkg file. * `Datetime_column_name`: Defines the date/time feature to use for time-stamping prediction rows. * `Series_column_name`: Defines the feature (series ID) used for multiseries deployments (if applicable). ## Deploy an external model by uploading training data {: #deploy-an-external-model-by-uploading-training-data } This section explains how to upload the training data for a model that made predictions in the past. Uploading the historical predictions directly to the deployment inventory enables you to analyze data drift and accuracy statistics in the past. Instrument the external deployment with the [monitoring agent](mlops-agent/index) to monitor future predictions and [add additional historical prediction data](add-prediction-data-post-deploy) after deployment. To create a deployment with training data: 1. Navigate to **Deployments** and click the **+ Add deployment** link. ![](images/add-deploy-1.png) 2. Under the **Add a training dataset** header, select **browse**, and then select **Local File** to upload your XLSX, CSV, or TXT formatted training data. You can also select training data from the **AI Catalog**. ![](images/add-deploy-2.png) 3. After selecting your training dataset, provide information about the model that used the training data. Once completed, select **Continue to deployment details** to further configure the deployment.
![](images/add-deploy-13.png) 4. Add [deployment information and complete the deployment](add-deploy-info). Once you create an external deployment, there are two options for additional configuration. You can upload historical prediction data to the deployment to analyze data drift and accuracy in the past. You can also instrument the deployment with the [monitoring agent](mlops-agent/index) to monitor future predictions. To do so, navigate to the [Predictions tab](code-py#monitoring-snippet) to access the monitoring snippet.
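The monitoring snippet on the deployment's **Predictions** tab is generated specifically for your deployment, so treat the following only as a sketch of its general shape. It assumes the `datarobot-mlops` Python library and a filesystem spooler whose directory matches the agent's channel configuration; verify the method names, IDs, and spooler settings against the generated snippet.

``` python
import time

import pandas as pd
from datarobot.mlops.mlops import MLOps  # assumes `pip install datarobot-mlops`

DEPLOYMENT_ID = "<external deployment ID>"  # placeholder
MODEL_ID = "<model package ID>"             # placeholder
SPOOL_DIR = "/tmp/ta"                       # must match the agent's FS_SPOOL directory

# Score with your own model, outside DataRobot.
features = pd.DataFrame({"amount": [120.5, 78.0], "region": ["EU", "US"]})
start = time.time()
predictions = [0.81, 0.12]  # replace with your model's output
elapsed_ms = (time.time() - start) * 1000

# Report service health and prediction data so the monitoring agent can forward them.
mlops = (
    MLOps()
    .set_deployment_id(DEPLOYMENT_ID)
    .set_model_id(MODEL_ID)
    .set_filesystem_spooler(SPOOL_DIR)
    .init()
)
mlops.report_deployment_stats(num_predictions=len(predictions), execution_time_ms=elapsed_ms)
mlops.report_predictions_data(features_df=features, predictions=predictions)
mlops.shutdown()
```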
deploy-external-model
--- title: Deploy DataRobot models description: How to create new deployments, deploy custom models, create deployments with training data, and add data post-deployment. --- # Deploy DataRobot models {: #deploy-datarobot-models } You can deploy models you build with DataRobot AutoML using the following methods: * Deploy the model [from the Leaderboard](#deploy-from-the-leaderboard). * Save the model as a model package in the Model Registry and [deploy from the registry](#deploy-from-the-model-registry). !!! tip In most cases, before deployment, you should unlock holdout and [retrain your model](creating-addl-models#retrain-a-model) at 100% to improve predictive accuracy. DataRobot automatically runs [**Feature Impact**](feature-impact) for the model (this also calculates **Prediction Explanations**, if available). ## Deploy from the Leaderboard {: #deploy-from-the-leaderboard } {% include 'includes/deploy-leaderboard.md' %} ## Deploy from the Model Registry {: #deploy-from-the-model-registry } 1. Navigate to **Model Registry** > **Model Packages**. 2. Select **Deploy** from the action menu for the model package you wish to deploy. ![](images/deploy-from-model-registry.png) 3. Add [deployment information and create the deployment](add-deploy-info). ### Use shared modeling workers {: #use-shared-modeling-workers } If you don't have a dedicated prediction server instance available, you can use a node that shares workers with your model building activities. In this case, the page has a different interface. ![](images/deploy-5.png) Click **Show Example** to generate and display a usage example: ![](images/deploy-6.png) When using the sample code, specify your [API key](api-key-mgmt) (1). The project and model IDs (2) are available in the sample, as is the shared instance endpoint (3). The DataRobot Python client uses the API key for authentication and so no key or username is required. To execute the file, follow the instructions in the commented section of the snippet.
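If you prefer to script the Leaderboard or Model Registry deployment flow described above, the DataRobot Python client provides an equivalent call. The sketch below is a minimal example; the endpoint, API key, model ID, and prediction server choice are placeholders to replace with your own values.

``` python
import datarobot as dr

# Authenticate with your API key (see the API key management documentation).
dr.Client(endpoint="https://app.datarobot.com/api/v2", token="<your API key>")

# Pick a dedicated prediction server and deploy a Leaderboard model by its ID.
prediction_server = dr.PredictionServer.list()[0]
deployment = dr.Deployment.create_from_learning_model(
    model_id="<Leaderboard model ID>",
    label="My model deployment",
    description="Deployed with the Python client",
    default_prediction_server_id=prediction_server.id,
)
print(deployment.id)
```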
deploy-model
--- title: Configure deployment settings description: When you add a deployment, configure the deployment by adding the prediction environment and enabling accuracy and data drift tracking, among other settings. --- # Configure a deployment {: #configure-a-deployment } Regardless of where you create a new deployment (the Leaderboard, the Model Registry, or the deployment inventory) or the type of artifact (DataRobot model, custom inference model, or remote model), you are directed to the deployment information page where you can customize the deployment. The deployment information page outlines the capabilities of your current deployment based on the data provided, for example, [training data](glossary/index#training-data), [prediction data](glossary/index#prediction-data), or [actuals](glossary/index#actuals). It populates fields for you to provide details about the training data, inference data, model, and your outcome data. ## Standard options and information {: #standard-options-and-information } When you initiate model deployment, the **Deployments** tab opens to the **Model Information** and the **Prediction History and Service Health** options: ![](images/deploy-create-settings-1.png) ## Model Information {: #model-information } The **Model Information** section provides information about the model being used to make predictions for your deployment. DataRobot uses the files and information from the deployment to complete these fields, so they aren't editable. Field | Description ------------|------------------ Model name | The name of your model. Prediction type | The type of prediction the model is making. For example: Regression, Classification, Multiclass, Anomaly Detection, Clustering, etc. Threshold | The prediction threshold for binary classification models. Records above the threshold are assigned the positive class label and records below the threshold are assigned the negative class label. This field isn't available for Regression or Multiclass models. Target | The dataset column name the model will predict on. Positive / Negative classes | The positive and negative class values for binary classification models. This field isn't visible for Regression or Multiclass models. Model Package ID | The ID of the model package in the Model Registry. !!! note If you are part of an organization with deployment limits, the **Deployment billing** section notifies you of the number of deployments your organization is using against the [deployment limit](deploy-inventory#live-inventory-updates) and the deployment cost if your organization has exceeded the limit. ![](images/deployment-billing.png) ## Prediction History and Service Health {: #prediction-history-and-service-health } The **Prediction History and Service Health** section provides details about your deployment's inference (also known as scoring) data&mdash;the data that contains prediction requests and results from the model. Setting | Description ------------------------------------|------------------ Configure prediction environment | Environment where predictions are generated. [Prediction environments](pred-env) allow you to establish access controls and approval workflows. Configure prediction timestamp | Determines the method used to time-stamp prediction rows for [Data Drift](data-drift) and [Accuracy](deploy-accuracy) monitoring.
<ul><li>**Use time of prediction request**: Use the time you _submitted_ the prediction request to determine the timestamp.</li><li>**Use value from date/time feature**: Use the date/time provided as a feature with the prediction data (e.g., forecast date) to determine the timestamp. Forecast date time-stamping is set automatically for time series deployments. It allows for a common time axis to be used between training data and the basis of data drift and accuracy statistics.</li></ul> This setting doesn't apply to the [Service Health](service-health) prediction timestamp. The Service Health tab _always_ uses the time the prediction server _received_ the prediction request. For more information, see [Time of Prediction](#time-of-prediction) below.<br> This setting cannot be changed after the deployment is created and predictions are made. Set deployment importance | Determines the importance level of a deployment. These levels&mdash;Critical, High, Moderate, and Low&mdash;determine how a deployment is handled during the [approval process](dep-admin). Importance represents an aggregate of factors relevant to your organization such as the prediction volume of the deployment, level of exposure, potential financial impact, and more. When a deployment is assigned an importance of Moderate or above, the **Reviewers** notification appears (under [**Model Information**](#model-information)) to inform you that DataRobot will automatically notify users assigned as reviewers whenever the deployment requires review. {% include 'includes/service-health-prediction-time.md' %} ## Advanced options {: #advanced-options } If you click **Show advanced options**, you can configure the following deployment settings: * [Data Drift](#data-drift) * [Accuracy](#accuracy) * [Challenger Analysis](#challenger-analysis) * [Segmented Analysis](#segmented-analysis) * [Fairness](#fairness) ![](images/deploy-create-settings-2.png) ### Data Drift {: #data-drift } When deploying a model, there is a chance that the dataset used for training and validation differs from the prediction data. To enable drift tracking you can configure the following settings: Setting | Description ----------------------------------|------------------ Enable feature drift tracking | Configures DataRobot to track feature drift in a deployment. Training data is required for feature drift tracking. Enable target monitoring | Configures DataRobot to track target drift in a deployment. [Actuals](accuracy-settings#add-actuals) are required for target monitoring, and target monitoring is required for [accuracy monitoring](accuracy-settings). Training data | Required to enable feature drift tracking in a deployment. {% include 'includes/how-dr-tracks-drift-include.md' %} DataRobot monitors both target and feature drift information by default and displays results in the [Data Drift dashboard](data-drift). Use the **Enable target monitoring** and **Enable feature drift tracking** toggles to turn off tracking if, for example, you have sensitive data that should not be monitored in the deployment. You can customize how data drift is monitored. See the data drift page for more information on [customizing data drift status](data-drift#customize-data-drift-status) for deployments. !!! note Data drift tracking is only available for deployments using deployment-aware prediction API routes (i.e., `https://example.datarobot.com/predApi/v1.0/deployments/<deploymentId>/predictions`). 
### Accuracy {: #accuracy } Setting | Description ----------------------------------|------------------ Association ID | The column name that contains the association ID in the prediction dataset for your model. Association IDs are required for [setting up accuracy tracking](accuracy-settings#select-an-association-id) in a deployment. The association ID functions as an identifier for your prediction dataset so you can later match up outcome data (also called "actuals") with those predictions. Note that the **Create deployment** button is inactive until you enter an association ID or turn off this toggle. | Require association ID in prediction requests | Requires your prediction dataset to have a column name that matches the name you entered in the Association ID field. When enabled, you will get an error if the column is missing. Enable automatic actuals feedback for time series models | For time series deployments that have indicated an association ID. Enables the automatic submission of actuals, so that you do not need to submit them manually via the UI or API. Once enabled, actuals can be extracted from the data used to generate predictions. As each prediction request is sent, DataRobot can extract an actual value for a given date. This is because when you send prediction rows to forecast, historical data is included. This historical data serves as the actual values for the previous prediction request. ### Challenger Analysis {: #challenger-analysis } DataRobot can securely store prediction request data at the row level for deployments (not supported for external model deployments). This setting must be enabled for any deployment using the [**Challengers**](challengers) tab. In addition to enabling challenger analysis, access to stored prediction request rows enables you to thoroughly audit the predictions and use that data to troubleshoot operational issues. For instance, you can examine the data to understand an anomalous prediction result or why a dataset was malformed. !!! note Contact your DataRobot representative to learn more about data security, privacy, and retention measures or to discuss prediction auditing needs. Setting | Description ------------------|------------------ Enable prediction rows storage for challenger analysis | Enables the use of challenger models, which allow you to compare models post-deployment and replace the champion model if necessary. Once enabled, prediction requests made for the deployment are collected by DataRobot. Prediction explanations are not stored. !!! important Prediction requests are only collected if the prediction data is in a valid data format interpretable by DataRobot, such as CSV or JSON. Failed prediction requests with a valid data format are also collected (i.e., missing input features). ### Segmented Analysis {: #segmented-analysis } [**Segmented Analysis**](deploy-segment) identifies operational issues with training and prediction data requests for a deployment. DataRobot enables the drill-down analysis of data drift and accuracy statistics by filtering them into unique segment attributes and values. Setting | Description ------------------|------------------ Track attributes for segmented analysis of training data and predictions | Enables DataRobot to monitor deployment predictions by segments; for example, by categorical features. This setting requires training data and is required to enable Fairness monitoring. 
### Fairness {: #fairness } The **Fairness** section allows you to define Bias and Fairness settings for your deployment to identify any biases in the model's predictive behavior. If fairness settings are defined prior to deploying a model, the fields are automatically populated. For additional information, see the section on [defining fairness tests](fairness-metrics#configure-metrics-and-mitigation). Setting | Description -----------------------------|------------------ Protected features | The dataset columns to measure fairness of model predictions against; must be categorical. Primary fairness metric | The statistical measure of parity constraints used to assess fairness. Favorable target outcome | The outcome value perceived as favorable for the protected class relative to the target. Fairness threshold | The fairness threshold helps measure if a model performs within appropriate fairness bounds for each protected class. ## Deploy the model {: #deploy-the-model } After you add the available data and your model is fully defined, click **Deploy model** at the top of the screen. !!! note If the **Deploy model** button is inactive, be sure to either specify an association ID (required for [enabling accuracy monitoring](accuracy-settings)) or toggle off **Require association ID in prediction requests**. The **Creating deployment** message appears, indicating that DataRobot is creating the deployment. After the deployment is created, the **Overview** tab opens. ![](images/deploy-overview-1.png) Click the arrow to the left of the deployment name to return to the deployment inventory.
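The settings described above can also be adjusted after deployment creation through the DataRobot Python client. The following is a minimal sketch; the deployment ID, endpoint, and association ID column name are placeholders, and method availability may vary by client version.

``` python
import datarobot as dr

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="<your API key>")
deployment = dr.Deployment.get(deployment_id="<deployment ID>")

# Enable target and feature drift tracking (feature drift requires training data).
deployment.update_drift_tracking_settings(
    target_drift_enabled=True,
    feature_drift_enabled=True,
)

# Set the association ID used to join predictions with actuals for accuracy monitoring.
deployment.update_association_id_settings(
    column_names=["transaction_id"],
    required_in_prediction_requests=True,
)
```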
add-deploy-info
--- title: Deploy custom inference models description: How to deploy custom inference models, your pretrained models that you assemble in the Custom Model Workshop. --- # Deploy custom inference models {: #deploy-custom-inference-models } After you [create a custom inference model](custom-inf-model) using the Custom Model Workshop, you can deploy it to a [custom model environment](custom-environments). !!! note While you can deploy your custom inference model to an environment without testing, DataRobot strongly recommends your model pass testing before deployment. To deploy a custom inference model: 1. Navigate to **Model Registry > Custom Model Workshop > Models** and select the model you want to deploy. 2. In the **Assemble** tab, click the **Deploy** link in the middle of the screen. ![](images/deploy-custom-deploy.png) !!! note If your model is not tested, you are prompted to **Test now** or to **Deploy package without testing**. DataRobot recommends testing that your model can make predictions prior to deploying. After uploading your model, you are directed to the deployment information page. Most information for your custom model is automatically provided. 3. Under the **Model** header, provide functional validation data. This data is a partition of the model's training data and is used to evaluate model performance. ![](images/deploy-custom-model-info.png) 4. Add [deployment information and complete the deployment](add-deploy-info). ### Make predictions {: #make-predictions } Once a custom inference model is deployed, it can make predictions using API calls to a dedicated prediction server managed by DataRobot. You can find more information about [using the prediction API](dr-predapi) in the Predictions documentation. ### Deployment logs {: #deployment-logs } When you deploy a custom model, it generates log reports unique to this type of deployment, allowing you to debug custom code and troubleshoot prediction request failures from within DataRobot. To view the logs for a deployed model, navigate to the deployment, open the actions menu, and select **View Logs**. ![](images/custom-log-1.png) You can access two types of logs: * **Runtime Logs** are used to troubleshoot failed prediction requests (via the **Predictions** tab or the API). The logs are captured from the Docker container running the deployed custom model and contain up to 1 MB of data. The logs are cached for 5 minutes after you make a prediction request. You can re-request the logs by clicking **Refresh**. ![](images/custom-log-2.png) * **Deployment logs** are automatically captured if the custom model fails while deploying. The logs are stored permanently as part of the deployment. ![](images/custom-log-3.png) !!! note DataRobot only provides logs from inside the Docker container from which the custom model runs. Therefore, it is possible in cases where a custom model fails to deploy or fails to execute a prediction request that no logs will be available. This is because the failures occurred outside of the Docker container. Use the **Search** bar to find specific references within the logs. Click **Download Log** to save a local copy of the logs. ![](images/custom-log-4.png)
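As a companion to the **Make predictions** section above, the sketch below shows one way to call the deployment-aware prediction route from Python with the `requests` library. The prediction server host, deployment ID, API key, and `DataRobot-Key` header value are placeholders; the generated snippet on the deployment's **Predictions** tab is the authoritative version for your environment.

``` python
import requests

PREDICTION_SERVER = "https://example.datarobot.com"  # your dedicated prediction server
DEPLOYMENT_ID = "<deployment ID>"
API_KEY = "<your API key>"
DATAROBOT_KEY = "<prediction server key, if your cluster requires one>"

# Deployment-aware prediction route; scoring data is sent as CSV in the request body.
url = f"{PREDICTION_SERVER}/predApi/v1.0/deployments/{DEPLOYMENT_ID}/predictions"
with open("scoring_data.csv", "rb") as f:
    response = requests.post(
        url,
        data=f,
        headers={
            "Content-Type": "text/csv; charset=UTF-8",
            "Authorization": f"Bearer {API_KEY}",
            "DataRobot-Key": DATAROBOT_KEY,
        },
    )
response.raise_for_status()
print(response.json())
```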
deploy-custom-inf-model
--- title: Add prediction data post-deployment description: How to add historical prediction data after a model is deployed. --- # Add prediction data post-deployment {: #add-prediction-data-post-deployment } Users with the [Owner](roles-permissions) role can add historical prediction data to deployments if data drift is enabled. To do so, navigate to the **Settings** tab and select **choose file** under the **Inference** header to upload your prediction data in XLSX, CSV, or TXT format. You can also select prediction data from the **AI Catalog**. ![](images/add-deploy-15.png) Training data is a critical component for calculating data drift. If you did not include training data when you created a deployment, or if there was an error when uploading that data, add it from the [**Data Drift > Settings**](data-drift-settings) tab. You can also check for training data from the [**Data Drift > Summary**](data-drift) tab. The data must meet the following requirements: * Appended prediction data must have the same features as the original prediction dataset. After uploading new data, DataRobot prompts you to confirm the addition because you cannot later remove data from a deployment. To use different prediction data, create a new deployment. * An uploaded training dataset must include the same features as the prediction (scoring) dataset. You cannot replace training data. If you want a deployment to use different training data, create a new deployment with the appropriate data.
add-prediction-data-post-deploy
--- title: Deploy models description: How to create deployments from DataRobot models, custom inference models, and external models. --- # Deploy models {: #deploy-models } In DataRobot, the way you deploy a model to production depends on the type of model you start with and the prediction environment where the model will be used. The following sections describe how to add deployments for different types of artifacts, including models built in DataRobot, custom inference models, and remote models. === "SaaS" Topic | Describes -------|-------------- [Deploy DataRobot models](deploy-model) | How to deploy DataRobot models from the Leaderboard or the Model Registry. [Deploy custom inference models](deploy-custom-inf-model) | How to deploy custom inference models from the Custom Model Workshop. [Deploy external models](deploy-external-model) | How to deploy external (remote) models from the Model Registry or by uploading training data and deploying from the deployment inventory. [MLOps agents](mlops-agent/index) | How to monitor and manage deployments running in an external environment outside of DataRobot MLOps. [Configure a deployment](add-deploy-info) | How to complete deployments by configuring inference options. [Add prediction data post-deployment](add-prediction-data-post-deploy) | How to add historical prediction data to existing deployments. === "Self-Managed" Topic | Describes -------|-------------- [Deploy DataRobot models](deploy-model) | How to deploy DataRobot models from the Leaderboard or the Model Registry. [Deploy custom inference models](deploy-custom-inf-model) | How to deploy custom inference models from the Custom Model Workshop. [Deploy external models](deploy-external-model) | How to deploy external (remote) models from the Model Registry or by uploading training data and deploying from the deployment inventory. [MLOps agents](mlops-agent/index) | How to monitor and manage deployments running in an external environment outside of DataRobot MLOps. [Configure a deployment](add-deploy-info) | How to complete deployments by configuring inference options. [Add prediction data post-deployment](add-prediction-data-post-deploy) | How to add historical prediction data to existing deployments. [Imported `.mlpkg` file](reg-transfer) | How to import and deploy .mlpkg files from the Model Registry.
index
--- title: Add training data to a custom model description: How to assign training data to a custom model in the Custom Model Workshop. --- # Add training data to a custom model {: #add-training-data-to-a-custom-model } To enable feature drift tracking for a model deployment, you must add training data. To do this, assign training data to a model version. The method for providing training and holdout datasets for [*unstructured* custom inference models](unstructured-custom-models) requires you to upload the training and holdout datasets separately. Additionally, these datasets cannot include a partition column. !!! info "Deprecation notice" Currently, you assign training data directly to a custom model, meaning every version of that model uses the same data; however, this assignment method is deprecated and scheduled for removal. It remains the default method during the deprecation period, even for newly created models, to support backward compatibility. !!! warning "File size warning" When adding training data to a custom model, the training data can be subject to a [frozen run](frozen-run) to conserve RAM and CPU resources, limiting the file size of the training dataset to 1.5GB. === "Assign to a model version" 1. In **Model Registry** > **Custom Model Workshop**, in the **Models** list, select the model you want to add training data to. 2. To assign training data to a custom model's versions, you must convert the model. On the **Assemble** tab, locate the **Training data for model versions** alert and click **Permanently convert**: !!! warning Converting a model's training data assignment method is a one-way action. It _cannot_ be reverted. After conversion, you can't assign training data at the model level. This change applies to the UI _and_ the API. If your organization has any automation depending on "per model" training data assignment, before you convert a model, you should update any related automation to support the new workflow. As an alternative, you can create a new custom model to convert to the "per version" training data assignment method and maintain the deprecated "per model" method on the model required for the automation; however, you should update your automation before the deprecation process is complete to avoid gaps in functionality. ![](images/convert-custom-model.png) If the model was already assigned training data, after you convert the model, the **Datasets** section contains information about the existing training dataset. ![](images/after-conversion-info.png) 3. On the **Assemble** tab, next to **Datasets**: * If the model version _doesn't_ have training data assigned, click **Assign**: ![](images/cmodel-12.png) * If the model version _does_ have training data assigned, click the edit icon (![](images/icon-pencil.png)), and, in the **Change Training Data** dialog box, click the delete icon (![](images/icon-delete.png)) to remove the existing training data. 4. In the **Add Training Data** (or **Change Training Data**) dialog box, click and drag a training dataset file into the **Training Data** box, or click **Choose file** and do either of the following: * Click **Local file**, select a file from your local storage, and then click **Open**. * Click **AI Catalog**, select a training dataset you previously uploaded to DataRobot, and click **Use this dataset**. ![](images/cmodel-13.png) 5. _Optional_. **Specify the column name containing partitioning info for your data** (based on training/validation/holdout partitioning). 
If you plan to deploy the custom model and monitor its [data drift](data-drift) and [accuracy](deploy-accuracy), specify the holdout partition in the column to establish an accuracy baseline. !!! important You can track data drift and accuracy without specifying a partition column; however, in that scenario, DataRobot won't have baseline values. The selected partition column should only include the values `T`, `V`, or `H`. 6. When the upload is complete, click **Add Training Data**. ??? note "Training data assignment error" If the training data assignment fails, an error message appears in the new custom model version under **Datasets**. While this error is active, you can't create a model package to deploy the affected version. To resolve the error and deploy the model package, reassign training data to create a new version, or create a new version and _then_ assign training data. === "Assign to a model (deprecated)" If you want to add training data to a custom inference model (which allows you to deploy it), you can do so by selecting a custom model and navigating to the **Model Info** tab. The **Model Info** tab lists custom inference model attributes. Click **Add Training Data**. ![](images/cmodel-12-deprecated.png) A pop-up appears, prompting you to upload training data. ![](images/cmodel-13.png) Click **Choose file** to upload training data. Optionally, you can specify the column name containing the partitioning information for your data (based on training/validation/holdout partitioning). If you plan to deploy the custom model and monitor its [accuracy](deploy-accuracy), specify the holdout partition in the column to establish an accuracy baseline. You can still track accuracy without specifying a partition column; however, there will be no accuracy baseline. When the upload is complete, click **Add Training Data**.
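If you assemble the training dataset programmatically, the partition column described above is simply an extra column whose values are `T`, `V`, or `H`. The sketch below uses pandas; the file name, column name, and partition proportions are illustrative assumptions.

```python
import numpy as np
import pandas as pd

# Illustrative file and column names.
df = pd.read_csv("training_data.csv")

# Assign each row to training (T), validation (V), or holdout (H);
# the selected partition column should only contain these values.
rng = np.random.default_rng(42)
df["partition"] = rng.choice(["T", "V", "H"], size=len(df), p=[0.7, 0.15, 0.15])

# Upload this file as the training data and enter "partition" as the partition column name.
df.to_csv("training_data_with_partition.csv", index=False)
```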
custom-model-training-data
--- title: Add custom model versions description: Update a model's contents to create a new version of the model due to new package versions, different preprocessing steps, hyperparameters, etc. --- # Add custom model versions {: #add-custom-model-versions } If you want to update a model due to new package versions, different preprocessing steps, hyperparameters, and more, you can update the file contents to create a new version of the model. To upload a new version of a custom model _environment_, see [Add an environment version](custom-environments#add-an-environment-version). ## Create a new minor version When you update the contents of a model, the minor version (1.1, 1.2, etc.) of the model automatically updates. To create a minor custom model version, select the model from the Custom Model Workshop and navigate to the **Assemble** tab. Under the **Model** header, click **Add files** and upload the files or folders you updated. The minor version is also updated if you delete a file. ![](images/cus-model-minor-version.png) ## Create a new major version To create a new major version of a model (1.0, 2.0, etc.): 1. Select the model from the Custom Model Workshop and navigate to the **Assemble** tab. 2. Under the **Model** header, click **+ New Version**. ![](images/cus-model-major-version.png) 3. In the **Create new model version** dialog box, select a version creation strategy and configure the new version: ![](images/cmodel-17.png) | Setting | Description | |---------------------------------------|---------------------------| | **Copy contents of previous version** | Add the contents of the current version to the new version of the custom model. | | **Create empty version** | Discard the contents of the current version and add new files for the new version of the custom model. | | **Base Environment** | Select the base environment of the new version. The environment of the current version is selected by default. | | **New version description** | Enter a description of the new version. The version description is optional. | **Keep training data from previous version** | Enable or disable adding the training data from the current version to the new custom model version. This setting is enabled by default. | !!! note The **Keep training data from previous version** option is only available if your custom model assigns training data at the version level, not the model level. For more information, see [Add training data to a custom model](custom-model-training-data). 4. Click **Create new version**. You can now use a new version of the model in addition to its previous versions. Select the iteration of the model that you want to use from the **Version** dropdown. ![](images/cmodel-11.png)
custom-model-versions
--- title: Manage custom model packages description: The Model Packages Actions menu allows users with appropriate permissions to share or permanently archive model packages. --- # Manage custom model packages {% include 'includes/manage-model-packages.md' %}
custom-model-manage
--- title: Register custom models as model packages description: Register a custom model in the Model Registry. The Model Registry is an archive of your model packages where you can also deploy and share the packages. --- # Register custom models as model packages {: #register-custom-models-as-model-packages } You can create a model package for a custom inference model to replace the model package in existing deployments with a new one, or to [share](reg-action#sharing) with another user who wants to deploy your custom model package. When you have successfully created and tested a custom inference model, you have the option to add it to the Model Registry as a model package. To do so, navigate to **Model Registry** > **Custom Model Workshop** and select the custom model you wish to add. Under the **Test and Deploy** tab, click **Add to registry**. ![](images/reg-create-1.png) The custom model is then added to the **Model Registry**. The **Add to registry** link is replaced with **View registry package**, which, when clicked, takes you to your newly created model package in the registry under the **Model Packages** tab. Although a model package can be created without testing the custom model, DataRobot recommends that you confirm the model passes testing before proceeding. Untested custom models trigger a dialog box warning that the model has not been tested. ![](images/reg-create-2.png) For untested custom models, click **Test now** to start a model test, or **Create package without testing** to proceed with the creation of a model package.
custom-model-reg
--- title: Manage custom models description: How to use the Actions menu, which lets you share and delete custom models and environments. --- # Manage custom models {: #manage-custom-models } There are several **Actions** available from the menu on the **Model Registry** > **Custom Model Workshop** page, such as [sharing](#share) and [deleting](#delete-a-model-or-environment) custom models or environments. ## Share {: #share } The sharing capability allows [appropriate user roles](roles-permissions#custom-model-and-environment-roles) to grant permissions on a custom model or environment. This is useful, for example, for allowing others to use your models and environments without requiring them to have the expertise to create them. When you have created a custom model or environment and are ready to share it with others, open the action menu to the right of the **Created** header and select **Share** (![](images/icon-share.png)). ![](images/custom-share-1.png) This takes you to the sharing modal, which lists each associated user and their role. To remove a user, click the X button to the right of their role. ![](images/custom-share-2.png) To re-assign a user's role, click on the assigned role and assign a new one from the dropdown. ![](images/custom-share-3.png) To add a new user, enter their username in the **Share With** field and choose their role from the dropdown. Then click **Share**. ![](images/custom-share-6.png) This action initiates an email notification. ## Delete a model or environment {: #delete-a-model-or-environment } If you have the appropriate [permissions](roles-permissions#custom-model-and-environment-roles), you can delete a custom model or environment from the Model Registry by clicking the trash can icon ![](images/icon-delete.png). This action initiates an email notification to all users with sharing privileges for the model or environment. ![](images/custom-share-5.png)
custom-model-actions
--- title: Manage custom model dependencies description: Describes how to manage these dependencies from the Workshop and update the base drop-in environments to support your model code. --- # Manage custom model dependencies {: #manage-custom-model-dependencies } Custom models can contain various machine learning libraries in the model code, but not every [drop-in environment](drop-in-environments) provided by DataRobot natively supports all libraries. However, you can manage these dependencies from the Workshop and update the base drop-in environments to support your model code. To manage model dependencies, you must include a `requirements.txt` file uploaded as part of your custom model. The text file must indicate the machine learning libraries used in the model code. For example, consider a custom R model that uses Caret and XGBoost libraries. If this model is added to the Workshop and the R drop-in environment is selected, the base environment will only support Caret, not XGBoost. To address this, edit `requirements.txt` to include the Caret and XGBoost dependencies. After editing and re-uploading the requirements file, the base environment includes XGBoost, making the model available within the environment. !!! important Custom model dependencies aren't applied when testing a model locally with [DRUM](custom-model-drum). List the following, depending on the model language, in `requirements.txt`: * For R models, list the machine learning library dependencies. ![](images/depend-3.png) * For Python models, list the dependencies <em>and</em> any version constraints for the libraries. Supported constraint types include `<`, `<=`, `==`, `>=`, `>`, and multiple constraints can be issued in a single entry (for example, `pandas >= 0.24, < 1.0`). ![](images/depend-4.png) Once the requirements file is updated to include dependencies and constraints, navigate to your custom model's **Assemble** tab. Upload the file under the **Model > Content** header. The **Model Dependencies** field updates to display the dependencies and constraints listed in the file. ![](images/depend-1.png) From the **Assemble** tab, select a base drop-in environment under the **Model Environment** header. DataRobot warns you that a new environment must be built to account for the model dependencies. Select **Build environment**, and DataRobot installs the required libraries and constraints to the base environment. ![](images/depend-2.png) Once the base environment is updated, your custom model will be usable with the environment, allowing you to test, deploy, or register it.
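For a Python model, a `requirements.txt` using the constraint syntax described above might look like the following. The packages and version bounds shown are illustrative examples, not a required set.

```
# Illustrative Python requirements.txt; pin or bound versions as your model code requires.
pandas >= 0.24, < 1.0
scikit-learn == 0.24.2
xgboost >= 1.3
```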
custom-model-dependencies
--- title: Custom Model Workshop description: Using custom inference models, you can bring your own pretrained models into DataRobot. DataRobot supports models built with languages like Python, R, and Java. --- # Custom Model Workshop {: #custom-model-workshop } !!! info "Availability information" The Custom Model Workshop is a feature exclusive to DataRobot MLOps. Contact your DataRobot representative for information on enabling it. The Custom Model Workshop allows you to upload model artifacts to create, test, and deploy custom inference models to a centralized model management and deployment hub. Custom inference models are pre-trained, user-defined models that support most of DataRobot's MLOps features. DataRobot supports custom inference models built in a variety of languages, including Python, R, and Java. If you've created a model outside of DataRobot and want to upload it, you need to define the model content and the model environment in the Custom Model Workshop. !!! important Custom inference models are _not_ custom DataRobot models. They are _user-defined_ models created outside of DataRobot and assembled in the Custom Model Workshop for access to deployment, monitoring, and governance. To support the local development of the models you want to bring into DataRobot through the Custom Model Workshop, the [DataRobot Model Runner (or DRUM)](custom-model-drum) provides you with tools to locally assemble, debug, test, and run the inference model before assembly in DataRobot. Before adding a custom model to the workshop, DataRobot recommends reviewing the [custom model assembly guidelines](custom-model-assembly/index). The following topics describe how you can manage custom model artifacts in DataRobot: Topic | Describes ------|----------- [Create custom models](custom-inf-model) | How to create custom inference models in the Custom Model Workshop. [Manage custom model dependencies](custom-model-dependencies) | How to manage model dependencies from the workshop and update the base drop-in environments to support your model code. [Manage custom model resource usage](custom-model-resource-mgmt) | How to configure the resources a model consumes to facilitate smooth deployment and minimize potential environment errors in production. [Add custom model versions](custom-model-versions) | How to create a new version of the model and/or environment after updating the file contents with new package versions, different preprocessing steps, updated hyperparameters, and more. [Add training data to a custom model](custom-model-training-data) | How to add training data to a custom inference model for deployment. [Add files from a remote repo to a custom model](custom-model-repos) | How to connect to a remote repository and pull custom model files into the Custom Model Workshop. [Test a custom model in DataRobot](custom-model-test) | How to test custom inference models in the Custom Model Workshop. [Manage custom models](custom-model-actions) | How to delete or share custom models and custom model environments. [Register custom models as model packages](custom-model-reg) | How to register custom inference models in the Model Registry. [Manage custom model packages](custom-model-manage) | How to deploy, share, or archive custom models from the Model Registry.
Once deployed to a prediction server managed by DataRobot, you can [make predictions via the API](dr-predapi) and [monitor your deployment](monitor/index) with a suite of capabilities.
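As a rough sketch of what a prediction request can look like, the snippet below POSTs a CSV file to a deployment's prediction endpoint with Python's `requests` library. The endpoint URL and headers are placeholders; copy the exact values from your deployment's integration snippet or the prediction API documentation linked above.

```python
import requests

# Placeholder values: take the real endpoint URL, API key, and any additional
# required headers from your deployment's integration snippet.
ENDPOINT = "https://<prediction-server>/<deployment-prediction-path>"
API_KEY = "<your-api-key>"

with open("scoring_data.csv", "rb") as f:
    response = requests.post(
        ENDPOINT,
        data=f,
        headers={
            "Content-Type": "text/csv; charset=UTF-8",
            "Authorization": f"Bearer {API_KEY}",
        },
    )

response.raise_for_status()
print(response.json())
```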
index
--- title: Manage custom model resource usage description: Configure the resources the model consumes to facilitate smooth deployment and minimize potential environment errors in production. --- # Manage model resources {: #manage-model-resources } After creating a custom inference model, you can configure the resources the model consumes to facilitate smooth deployment and minimize potential environment errors in production. You can monitor a custom model's resource allocation from the **Assemble** tab. The resource settings are listed below the deployment status. ![](images/resource-1.png) To edit any resource settings, select the pencil icon (![](images/icon-pencil.png)). Note that users can determine the maximum memory allocated for a model, but only [organization admins](manage-users#set-admin-permissions-for-users) can configure additional resource settings. !!! warning DataRobot recommends configuring resource settings only when necessary. When you configure the **Memory** setting below, you set the Kubernetes memory "limit" (the maximum allowed memory allocation); however, you can't set the memory "request" (the minimum guaranteed memory allocation). For this reason, it is possible to set the "limit" value too far above the default "request" value. An imbalance between the memory "request" and the memory usage allowed by the increased "limit" can result in the custom model exceeding the memory consumption limit. As a result, you may experience unstable custom model execution due to frequent eviction and relaunching of the custom model. If you require an increased **Memory** setting, you can mitigate this issue by increasing the "request" at the Organization level; for more information, contact DataRobot Support. Configure the resource allocations that appear in the modal. === "SaaS" ![](images/resource-3.png) | Resource | Description | |----------|---------------| | Memory | Determines the maximum amount of memory that may be allocated for a custom inference model. If a model exceeds the allocated amount, it is evicted by the system. If this occurs during testing, the test is marked as a failure. If this occurs when the model is deployed, the model is automatically launched again by Kubernetes. | | Replicas | Sets the number of replicas executed in parallel to balance workloads when a custom model is running. Increasing the number of replicas may not result in better performance, depending on the custom model's speed. | === "Self-Managed" ![](images/resource-2.png) | Resource | Description | |----------------|--------------| | Memory | Determines the maximum amount of memory that may be allocated for a custom inference model. If a model allocates more than the configured maximum memory value, it is evicted by the system. If this occurs during testing, the test is marked as a failure. If this occurs when the model is deployed, the model is automatically launched again by Kubernetes.| | Replicas | Sets the number of replicas executed in parallel to balance workloads when a custom model is running. Increasing the number of replicas may not result in better performance, depending on the custom model's speed. | | Network access | Configures the egress traffic of the custom model. Choose between no access or public access. | Once you have fully configured the resource settings for a model, click **Save**. This creates a new version of the custom model with edited resource settings applied.
custom-model-resource-mgmt
--- title: Create custom inference models description: How to build a custom inference model in the Custom Model Workshop. --- # Create custom inference models {: #create-custom-inference-models } Custom inference models are user-created, pretrained models that you can upload to DataRobot (as a collection of files) via the **Custom Model Workshop**. You can then upload a model artifact to create, test, and deploy custom inference models to DataRobot's centralized deployment hub. You can assemble custom inference models in either of the following ways: * Create a custom model and include web server Scoring Code and a `start_server.sh` file in the model's folder. This type of custom model can be paired with a [custom](custom-environments#create-a-custom-environment) or [drop-in](drop-in-environments) environment. * Create a custom model <em>without</em> providing web server Scoring Code and a `start_server.sh` file. This type of custom model **must** use a drop-in environment. Drop-in environments contain the web server Scoring Code and a `start_server.sh` file used by the model. They are [provided by DataRobot](drop-in-environments) in the Workshop. You can also [create your own](custom-environments#create-a-custom-environment) drop-in custom environment. Be sure to review the guidelines for [assembling a custom model](custom-model-assembly/index) before proceeding. If any files overlap between the custom model and the environment folders, the model's files will take priority. !!! note Once a custom model's file contents are assembled, you can [test the contents locally](custom-local-test) for development purposes before uploading it to DataRobot. After you create a custom model in the Workshop, you can run a [testing suite](custom-model-test) from the **Assemble** tab. ## Create a new custom model {: #create-a-new-custom-model } 1. To create a custom model, navigate to **Model Registry** > **Custom Model Workshop** and select the **Models** tab. This tab lists the models you have created. Click **Add new model**. ![](images/cmodel-1.png) 2. In the **Add Custom Inference Model** window, enter the fields described in the table below: ![](images/cmodel-2.png) | | Element | Description | |---|---|---| | ![](images/icon-1.png) | Model name | Name the custom model.| | ![](images/icon-2.png) | Target type / Target name | Select the target type ([binary classification](glossary/index#classification), [regression](glossary/index#regression), [multiclass](glossary/index#multiclass), [anomaly detection](#anomaly-detection), or [unstructured](unstructured-custom-models)) and enter the name of the target feature. | | ![](images/icon-3.png) | Positive class label / Negative class label | These fields only display for binary classification models. Specify the value to be used as the positive class label and the value to be used as the negative class label. <br> For a multiclass classification model, these fields are replaced by a field to enter or upload the target classes in `.csv` or `.txt` format. | 3. Click **Show Optional Fields** and, if necessary, enter a prediction threshold, the coding language used to build the model, and a description. ![](images/cmodel-3.png) 4. After completing the fields, click **Add Custom Model**. 5. In the **Assemble** tab, under **Model Environment** on the right, select a model environment by clicking the **Base Environment** dropdown menu on the right and selecting an environment. 
The model environment is used for [testing](custom-model-test) and [deploying](deploy-custom-inf-model) the custom model. ![](images/cmodel-assemble-add-env.png) !!! note The **Base Environment** pulldown menu includes [drop-in model environments](drop-in-environments), if any exist, as well as [custom environments](custom-environments#create-a-custom-environment) that you can create. 6. Under **Model** on the left, add content by dragging and dropping files or browsing. Alternatively, select a [remote integrated repository](custom-model-repos). ![](images/cmodel-assemble-add-files.png) If you click **Browse local file**, you have the option of adding a **Local Folder**. The local folder is for dependent files and additional assets required by your model, not the model itself. Even if the model file is included in the folder, it will not be accessible to DataRobot unless the file exists at the root level. The root file can then point to the dependencies in the folder. !!! note You must also upload web server Scoring Code and a `start_server.sh` file to your model's folder unless you are pairing the model with a [drop-in environment](drop-in-environments). ### Anomaly detection {: #anomaly-detection } You can create custom inference models that support anomaly detection problems. If you choose to build one, reference the [DRUM template](https://github.com/datarobot/datarobot-user-models/tree/master/model_templates/python3_sklearn_anomaly){ target=_blank }. ({% include 'includes/github-sign-in.md' %}) When deploying custom inference anomaly detection models, note that the following functionality is not supported: * Data drift * Accuracy and association IDs * Challenger models * Humility rules * Prediction intervals
custom-inf-model
--- title: Add files from remote repos to custom models description: Add files from remote repositories, including Bitbucket, GitHub, GitHub Enterprise, S3, GitLab, and GitLab Enterprise to the models you create in the Custom Model Workshop. --- # Add files from remote repos to custom models {: #add-files-from-remote-repos-to-custom-models } If you [add a model](custom-inf-model#create-a-new-custom-model) to the Custom Model Workshop, you can add files to that model from a wide range of repositories, including Bitbucket, GitHub, GitHub Enterprise, S3, GitLab, and GitLab Enterprise. After adding a repository to DataRobot, you can [pull files](#pull-files-from-the-repository) from the repository and include them in the custom model. ## Add a remote repository {: #add-a-remote-repository } The following steps show how to add a remote repository so that you can pull files into a custom model. 1. Select a custom model you wish to add files to, and navigate to **Assemble** > **Add files** > **Remote repository**. ![](images/custom-repo-1.png) 2. Click **add new** to integrate a new remote repository with DataRobot. ![](images/custom-repo-2.png) See the following topics for next steps to register the repositories: * [Bitbucket Server](#bitbucket-server-repository) * [GitHub](#github-repository) * [GitHub Enterprise](#github-enterprise-repository) * [S3](#s3-repository) * [GitLab](#gitlab-cloud-repository) * [GitLab Enterprise](#gitlab-enterprise-repository) ### Bitbucket Server repository {: #bitbucket-server-repository } To register a Bitbucket Server repository: 1. Select **Bitbucket Server** from the list of repositories to be added in step 2 of the [Add a remote repository](#add-a-remote-repository) procedure. 2. Complete the required fields: ![](images/remote-3.png) | Field | Description | |-----------------------|--------------| | Name | The name of the Bitbucket Server repository.| | Repository location | The URL for the Bitbucket Server repository that appears in the browser address bar when accessed. Alternatively, select **Clone** from the Bitbucket Server UI and paste the URL.| | Personal access token | The token used to grant DataRobot access to the Bitbucket Server repository. Generate this token from the Bitbucket Server UI by navigating to **Profile > Manage account > Personal access tokens** and selecting **Create a token**. Name the token, review the permissions, and once created, copy the token string to this field. | | Description | Optional. A description of the Bitbucket Server repository.| 3. Click **Test** to verify connection to the repository. 4. Once you have verified the connection, click **Add repository**. The Bitbucket Server repository can now be used to [pull files](#pull-files-from-the-repository) for custom models. ### GitHub repository {: #github-repository } To register a public GitHub repository: 1. Select **GitHub** from the list of repositories to be added in step 2 of the [Add a remote repository](#add-a-remote-repository) procedure. 2. Authorize the GitHub app by clicking **Authorize GitHub App** and agreeing to grant DataRobot read-only access to your GitHub account's public repositories. ![](images/custom-repo-3.png) !!! note You can also use repositories that are part of any [GitHub organization](#github-organization-repository-access) you belong to. !!! tip At any time you can **Unauthorize** the app. This revokes access from all of your registered GitHub repositories in DataRobot. 
All registered repositories will be preserved, but without access to your GitHub repositories. You can re-authorize the app later. 3. Once authorized, complete the required fields: ![](images/custom-repo-4.png) | Field | Description | |-----------------------|--------------| | Name | The name of the GitHub repository.| | Edit repository permissions | To use a private repository, you need to [grant the GitHub app access](#edit-github-repository-permissions). | | Repository | Enter the GitHub repository URL. Start typing the repository name and repositories will populate in the autocomplete dropdown. Notes: <ul><li> When you [grant access to a private repository](#edit-github-repository-permissions), its URL is added to the **Repository** autocomplete dropdown. </li><li>To use an external public GitHub repository, you must [obtain the URL from the repo](#external-github-repositories).</li></ul> | | Description | Optional. A description of the GitHub repository. | 4. Click **Test** to verify the repository connection. 5. When validated, select **Add repository**. You can now [pull files](#pull-files-from-the-repository) from the repository to add to a custom model. #### Edit GitHub repository permissions {: #edit-github-repository-permissions } To use a private repository, click **Edit repository permissions** in the **Add GitHub repository** window. This gives the GitHub app access to your private repositories. You can give access to: * All current and future private repositories * A selected list of repositories ![](images/custom-repo-9.png) After access is granted, the private repositories appear in the autocomplete dropdown for the **Repository** field. #### External GitHub repositories {: #external-github-repositories } To use an external public GitHub repository that is not owned by you or your organization, navigate to the repository in GitHub and click **Code**. Copy and paste the URL into the **Repository** field of the **Add GitHub repository** window. ![](images/custom-repo-8.png) #### GitHub organization repository access {: #github-organization-repository-access } If you belong to a GitHub organization, you can request access to an organization's repository for use with DataRobot. A request for access notifies the GitHub admin, who then approves or denies your access request. !!! note If your admin approves a single user's access request, access is provided to **all** DataRobot users in that user's organization without any additional configuration. For more information, reference the [GitHub documentation](https://docs.github.com/en/github/setting-up-and-managing-organizations-and-teams/managing-access-to-your-organizations-repositories){ target=_blank }. ### GitHub Enterprise repository {: #github-enterprise-repository } To register a GitHub Enterprise repository: 1. Select **GitHub Enterprise** from the list of repositories to be added in step 2 of the [Add a remote repository](#add-a-remote-repository) procedure. 2. Complete the required fields: ![](images/remote-5.png) | Field | Description | |-----|-------| | Name | The name of the GitHub Enterprise repository. | | Repository location | The URL for the GitHub Enterprise repository that appears in the browser address bar when accessed. Alternatively, select **Clone** from the GitHub UI and paste the URL.| | Personal access token | The token used to grant DataRobot access to the GitHub Enterprise repository.
Generate this token from the GitHub UI by selecting your user icon in the top right and navigating to **Settings > Developer Settings** and selecting **Personal access tokens**. Click **Generate new token**. Name the token and select "repo" for the scope of access. Once created, copy the token string to this field. | | Description | Optional. A description of the GitHub Enterprise repository.| 3. Click **Test** to verify connection to the repository. 4. Once you have verified the connection, click **Add repository**. The GitHub Enterprise repository can now be used to [pull files](#pull-files-from-the-repository) for custom models. #### Git Large File Storage {: #git-large-file-storage } Git Large File Storage (LFS) is supported by default for GitHub integrations. Reference the [Git documentation](https://git-lfs.github.com){ target=_blank } to learn more. Git LFS support for GitHub always requires having the GitHub application installed on the target repository, even if it's a public repository. Any non-authorized requests to the LFS API will fail with an HTTP 403. ### S3 repository {: #s3-repository } To register an S3 repository: 1. Select **S3** from the list of repositories to be added in step 2 of the [Add a remote repository](#add-a-remote-repository) procedure. 2. Complete the required fields. Note that AWS credentials are optional for public buckets. ![](images/custom-repo-5.png) | Field | Description | |-------|--------------| | Name | The name of the S3 repository. | | Bucket name | The name of the S3 bucket. If you are adding a public S3 repository, this is the **only** field you must complete. | | Access key ID | The key used to sign programmatic requests made to AWS. Use with the AWS Secret Access Key to authenticate requests to pull from the S3 repository. Required for private S3 repositories. | | Secret access key | The key used to sign programmatic requests made to AWS. Use with the AWS Access Key ID to authenticate requests to pull from the S3 repository. Required for private S3 repositories. | | Session token | Optional. A <a target="_blank" href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html">token</a> that validates temporary security credentials when making a call to an S3 bucket. | | Description | Optional. A description of the S3 repository. | 3. Click **Test** to verify connection to the repository. 4. Once you have verified the connection, click **Add repository**. The S3 repository can now be used to [pull files](#pull-files-from-the-repository) for custom models. #### AWS S3 access configuration {: #aws-s3-access-configuration } DataRobot requires the AWS S3 `ListBucket` and `GetObject` permissions in order to ingest data. These permissions should be applied as an additional AWS IAM Policy for the AWS user or role the cluster uses for access. For example, to allow ingestion of data from a private bucket named `examplebucket`, apply the following policy: ```json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": ["s3:ListBucket"], "Resource": ["arn:aws:s3:::examplebucket"] }, { "Effect": "Allow", "Action": ["s3:GetObject"], "Resource": ["arn:aws:s3:::examplebucket/*"] } ] } ``` #### Remove S3 credentials {: #remove-s3-credentials } You can remove any S3 credentials by editing the repository connection. Select the connection and click **Clear Credentials**. ### GitLab (cloud) repository {: #gitlab-cloud-repository } To register a GitLab cloud repository: 1. 
Select **GitLab** from the list of repositories to be added in step 2 of the [Add a remote repository](#add-a-remote-repository) procedure. 2. Authorize the DataRobot GitLab app by clicking **Authorize GitLab app**. ![](images/gitlab-1.png) !!! tip At any time you can **Unauthorize** the app. This revokes access from all of your registered GitLab repositories in DataRobot. All registered repositories will be preserved, but without access to your GitLab repositories. You can re-authorize the app later. 3. Once authorized, complete the required fields: ![](images/gitlab-2.png) | Field | Description | |-----------------------|--------------| | Name | The name of the GitLab repository.| | Edit repository permissions | To use a private repository, you need to [grant the GitLab app access](#edit-github-repository-permissions). | | Repository | Enter the GitLab repository URL. Start typing the repository name and repositories will populate in the autocomplete dropdown. | | Description | Optional. A description of the GitLab repository. | 4. Click **Test** to verify the repository connection. 5. When validated, select **Add repository**. You can now [pull files](#pull-files-from-the-repository) from the repository to add to a custom model. ### GitLab Enterprise repository {: #gitlab-enterprise-repository } To register a GitLab Enterprise repository: 1. Select **GitLab Enterprise** from the list of repositories to be added in step 2 of the [Add a remote repository](#add-a-remote-repository) procedure. 2. Authorize the DataRobot GitLab app by clicking **Authorize GitLab app**. ![](images/gitlab-1.png) !!! tip At any time you can **Unauthorize** the app. This revokes access from all of your registered GitLab repositories in DataRobot. All registered repositories will be preserved, but without access to your GitLab repositories. You can re-authorize the app later. 3. Once authorized, complete the required fields: ![](images/gitlab-5.png) | Field | Description | |-----------------------|--------------| | Name | The name of the GitLab repository.| | Edit repository permissions | To use a private repository, you need to [grant the GitLab app access](#edit-github-repository-permissions). | | Repository location | Enter the GitLab repository URL. Start typing the repository name and repositories will populate in the autocomplete dropdown. | | Personal access token | Enter the token used to grant DataRobot access to the GitLab Enterprise repository. [Generate this token](#create-a-personal-access-token-for-GitLab-enterprise) from GitLab. | | Description | Optional. A description of the GitLab repository. | 4. Click **Test** to verify the repository connection. 5. When validated, select **Add repository**. You can now [pull files](#pull-files-from-the-repository) from the repository to add to a custom model. #### Create a personal access token for GitLab Enterprise {: #create-a-personal-access-token-for-GitLab-enterprise } To create a personal access token: 1. [Navigate to GitLab](https://gitlab.com/-/profile/personal_access_tokens){ target=_blank }. 2. Enter a name for the new token, set the mandatory scopes (`read_api` and `read_repository`), and click **Create personal access token**. ![](images/gitlab-3.png) The newly generated token appears at the top of the page. ![](images/gitlab-4.png) 3. Enter the new token into the **Personal access token** field in the **Add GitLab Enterprise repository** window. 
## Pull files from the repository {: #pull-files-from-the-repository } When you have added a repository to DataRobot, you can pull files from it to build custom models. The following example shows how to pull files from a GitHub repository. To do so: 1. Navigate to **Assemble** > **Add files** > **Remote repository**. ![](images/custom-repo-1.png) 2. Click **Select a remote repository** and choose a repository from the list. ![](images/custom-repo-6.png) For a GitHub repository: ![](images/custom-repo-7.png) 3. Enter the tag, branch, or commit hash from which you want to pull files. 4. Specify the path to the files being pulled. 5. Once specified, click **Pull into model**. The files populate under the **Model** header as part of the custom model.
custom-model-repos
--- title: Test custom models description: Follow the custom inference model testing workflow. Understand the types of tests employed and the insights available to verify performance, stability, and predictions. --- # Test custom models {: #test-custom-models } You can test custom models in the **Custom Model Workshop**. Alternatively, you can test custom models prior to uploading them by [testing locally with DRUM](custom-local-test). ## Testing workflow {: #testing-workflow } Testing ensures that the custom model is functional before it is deployed by using the environment to run the model with prediction test data. Note that there are some differences in how predictions are made during testing and for a deployed custom model: * Testing bypasses the prediction servers, but predictions for a deployment are done by using the deployment's prediction server. * For both custom model testing and a custom model deployment, the model's target and partition columns are removed from prediction data before making predictions. * A deployment can be used to make predictions with a dataset containing an association ID. In this case, run custom model testing with a dataset that contains the association ID to make sure that the custom model is functional with the dataset. [Read below](#testing-overview) for more details about the tests run for custom models. 1. To test a custom inference model, navigate to the **Test** tab. ![](images/cmodel-5.png) 2. Select **New test**. ![](images/cmodel-18.png) 3. Confirm the model version and upload the prediction test data. You can also configure the [resource settings](custom-model-resource-mgmt#manage-model-resources), which are only applied to the test (not the model itself). ![](images/ctest-1.png) 4. After configuring the general settings, toggle the tests that you want to run. For more information about a test, reference the [testing overview](#testing-overview) section. When a test is toggled on, an unsuccessful check returns "Error", blocking the deployment of the custom model and aborting all subsequent tests. If toggled off, an unsuccessful check returns "Warning", but still permits deployment and continues the testing suite. Additionally, you can configure the tests' parameters (where applicable): * Maximum response time: The amount of time allotted to receive a prediction response. * Check duration limit: The total allotted time for the model to complete the performance check. * Number of parallel users: The number of users making prediction requests in parallel. ![](images/ctest-2.png) 5. Click **Start Test** to begin testing. As testing commences, you can monitor the progress and view results for individual tests under the **Summary & Deployment** header in the **Test** tab. For more information about a test, hover over the test name in the testing modal (displayed below) or reference the [testing overview](#testing-overview). ![](images/cmodel-20.png) 6. When testing is complete, DataRobot displays the results. If all testing succeeds, the model is ready to be deployed. If you are satisfied with the configured resource settings, you can [apply those changes](custom-model-resource-mgmt#manage-model-resources) from the **Assemble** tab and create a new version of the model. To view any errors that occurred, select **View Full Log** (the log is also available for download by selecting **Download Log**). ![](images/cmodel-21.png) 7. After assessing any issues and fixing them locally for a model, upload the fixed file(s) and update the model version(s).
Run testing again with the new model version. ## Testing overview {: #testing-overview } The following table describes the tests performed on custom models to ensure they are ready for deployment. Note that [unstructured](unstructured-custom-models) custom inference models only perform the "Startup Check" test, and skip all other procedures. | Test name | Description | |---------|-----------------------| | Startup | Ensures that the custom model image can build and launch. If the image cannot build or launch, the test fails and all subsequent tests are aborted. | | Prediction error | Checks that the model can make predictions on the provided test dataset. If the test dataset is not compatible with the model or if the model cannot successfully make predictions, the test fails.| | Null imputation | Verifies that the model can impute null values. Otherwise, the test fails. The model must pass this test in order to support [Feature Impact](feature-impact).| | Side effects | Checks that the batch predictions made on the entire test dataset match predictions made one row at a time for the same dataset. The test fails if the prediction results do not match.| | Prediction verification | Verifies predictions made by the custom model by comparing them to the reference predictions. The reference predictions are taken from the specified column in the selected dataset. | | Performance | Measures the time spent sending a prediction request, scoring, and returning the prediction results. The test creates 7 samples (from 1KB to 50MB), runs 10 prediction requests for each sample, and measures the prediction request latency timings (minimum, mean, error rate, etc.). The check is interrupted and marked as a failure if it takes more than 10 seconds. | | Stability | Verifies model consistency. Specify the payload size (measured by row number), the number of prediction requests to perform as part of the check, and the percentage of requests that must return a 200 response code. You can use these parameters to understand where the model may have issues (for example, if a model responds with non-200 codes most of the time). | | Duration | Measures the time elapsed to complete the testing suite. | ### Testing insights {: #testing-insights } #### Performance and stability checks {: #performance-and-stability-checks } Individual tests offer specific insights. Select **See details** on a completed test. ![](images/ctest-3.png) The performance check insights display a table showing the prediction latency timings at different payload sample sizes. For each sample, you can see the minimum, average, and maximum prediction request time, along with the requests per second (RPS) and error rate. Note that the prediction requests made to the model during testing bypass the prediction server, so the latency numbers will be slightly higher in a production environment as the prediction server will add some latency. ![](images/ctest-4.png) Additionally, both Performance and Stability checks display a memory usage chart. This data requires the model to use a [DRUM-based execution environment](custom-local-test) in order to display. The red line represents the maximum memory allocated for the model. The blue line represents how memory was consumed by the model. Memory usage is gathered from several replicas; the data displayed on the chart comes from a different replica each time. In multi-replica setups, the data displayed on the chart is likely to vary.
For multi-replica setups, the memory usage chart is constructed by periodically pulling the memory usage stats from a random replica. This means that if the load is distributed evenly across all the replicas, the chart shows the approximate memory usage of each replica's model. ![](images/ctest-5.png) Note that the model's usage can slightly exceed the maximum memory allocated because model termination logic depends on an underlying executor. Additionally, a model can be terminated even if the chart shows that its memory usage has not exceeded the limit, because the model is terminated before updated memory usage data is fetched from it. Memory usage data requires the model to use a [DRUM-based execution environment](custom-local-test). #### Prediction verification check {: #prediction-verification-check } The insights for the prediction verification check display a histogram of differences between the model predictions and the reference predictions. ![](images/ctest-6.png) Use the toggle to hide differences that represent matching predictions. ![](images/ctest-7.png) In addition to the histogram, the prediction verification insights include a table containing rows for which model predictions do not match the reference predictions. The table values can be ordered by row number, or by the difference between a model prediction and a reference prediction. ![](images/ctest-8.png)
custom-model-test
--- title: Drop-in environments description: Describes DataRobot's built-in custom model environments. --- # Drop-in environments {: #drop-in-environments } DataRobot provides drop-in environments in the Custom Model Workshop. Drop-in environments contain the web server Scoring Code and a `start_server.sh` file required for a custom model so that you don't need to provide them in the model's folder. The following table details the drop-in environments provided by DataRobot. Each environment is prefaced with **[DataRobot]** in the **Environments** tab of the **Custom Model Workshop**. You can select these drop-in environments when you [create a custom model](custom-inf-model). ![](images/c-env-6.png) | Environment name & example | Model compatibility & artifact file extension | |-------------------------------|----------------------------------------| | [Python 3 ONNX Drop-In](https://github.com/datarobot/datarobot-user-models/blob/master/public_dropin_environments/python3_onnx){ target=_blank } | ONNX models (`.onnx`) | | [Python 3 PMML Drop-In ](https://github.com/datarobot/datarobot-user-models/blob/master/public_dropin_environments/python3_pmml){ target=_blank } | PMML models (`.pmml`) | | [Python 3 PyTorch Drop-In](https://github.com/datarobot/datarobot-user-models/blob/master/public_dropin_environments/python3_pytorch){ target=_blank } | PyTorch models (`.pth`) | | [Python 3 Scikit-Learn Drop-In](https://github.com/datarobot/datarobot-user-models/blob/master/public_dropin_environments/python3_sklearn){ target=_blank } | Scikit-Learn models (`.pkl`) | | [Python 3 XGBoost Drop-In](https://github.com/datarobot/datarobot-user-models/blob/master/public_dropin_environments/python3_xgboost){ target=_blank } | Native XGBoost models (`.pkl`) | | [Python 3 Keras Drop-In](https://github.com/datarobot/datarobot-user-models/blob/master/public_dropin_environments/python3_keras){ target=_blank } | Keras models backed by tensorflow (`.h5`) | | [Java Drop-In](https://github.com/datarobot/datarobot-user-models/blob/master/public_dropin_environments/java_codegen){ target=_blank } | DataRobot Scoring Code models (`.jar`) | | [R Drop-in Environment](https://github.com/datarobot/datarobot-user-models/blob/master/public_dropin_environments/r_lang){ target=_blank } | R models trained using CARET (`.rds`) <br> Due to the time required to install all libraries recommended by CARET, only model types that are also package names are installed (e.g., `brnn`, `glmnet`). Make a copy of this environment and modify the Dockerfile to install the additional, required packages. To decrease build times when you customize this environment, you can also remove unnecessary lines in the `# Install caret models` section, installing only what you need. Review the [CARET documentation](http://topepo.github.io/caret/available-models.html){ target=_blank } to check if your model's method matches its package name. ({% include 'includes/github-sign-in.md' %}) | | [Julia Drop-In](https://github.com/datarobot/datarobot-user-models/blob/master/example_dropin_environments/julia_mlj){ target=_blank }<em>*</em> | Julia models (`.jlso`) <br> <em>* The Julia drop-in environment isn't officially supported; it is provided as an example.</em> | !!! note All Python environments contain Scikit-Learn to help with preprocessing (if necessary), but only Scikit-Learn can make predictions on sklearn models.
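For example, the Python 3 Scikit-Learn drop-in environment in the table above expects a serialized Scikit-Learn estimator as the model artifact (`.pkl`). The sketch below shows one illustrative way to produce such an artifact; the dataset, estimator, and file name are placeholders, and your own serialization approach may differ.

```python
# Illustrative: create a .pkl artifact for upload to the Custom Model Workshop.
import pickle

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Placeholder training data and model; substitute your own pretrained estimator.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# The serialized estimator becomes the model artifact paired with the Scikit-Learn drop-in environment.
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)
```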
drop-in-environments
--- title: Custom environments description: Describes how to build a custom environment when a custom model requires something not contained in one of DataRobot's built-in environments. --- # Custom environments {: #custom-environments } Once uploaded into DataRobot, custom models run inside environments&mdash;Docker containers running in Kubernetes. In other words, DataRobot copies the uploaded files defining the custom task into the image container. In most cases, adding a custom environment is not required because there are a variety of built-in environments available in DataRobot. Python and/or R packages can be easily added to these environments by uploading a `requirements.txt` file with the task’s code. A custom environment is only required when a custom task: * Requires additional Linux packages. * Requires a different operating system. * Uses a language other than Python, R, or Java. This document describes how to build a custom environment for these cases. To assemble and test a custom environment locally, install both Docker Desktop and the [DataRobot user model (DRUM) CLI tool](custom-model-drum) on your machine. ## Custom environment guidelines {: #custom-environment-guidelines } !!! note DataRobot recommends using an environment template and not building your own environment except for specific use cases (for example, when you don't want to use DRUM and instead want to implement your own prediction server). If you'd like to use a tool, language, or framework that is not supported by our template environments, you can make your own. DataRobot recommends modifying the provided environments to suit your needs; however, to make an easy-to-use, re-usable environment, you should adhere to the following guidelines: * Your environment must include a Dockerfile that installs any requirements you may want. * Custom models require a simple webserver to make predictions. DataRobot recommends putting this in your environment so you can reuse it with multiple models. The webserver must listen on port `8080` and implement the following routes: !!! note `URL_PREFIX` is an environment variable that is available at runtime. It must be added to the routes below. Mandatory endpoints | Description --------------------|------------ `GET /URL_PREFIX/` | This route is used to check if your model's server is running. `POST /URL_PREFIX/predict/` | This route is used to make predictions. Optional extension endpoints | Description -----------------------------|------------ `GET /URL_PREFIX/stats/` | This route is used to fetch memory usage data for DataRobot Custom Model Testing. `GET /URL_PREFIX/health/` | This route is used to check if the model is loaded and functioning properly. If model loading fails, an error with a 513 response code should be returned. Failing to handle this case may cause the backend Kubernetes container to crash and enter a restart loop for several minutes. * An executable `start_server.sh` file is required to start the model server. * Any code and `start_server.sh` should be copied to `/opt/code/` by your Dockerfile. !!! note To learn more about the complete API specification, you can review the [DRUM server API `yaml` file](https://github.com/datarobot/datarobot-user-models/blob/master/custom_model_runner/drum_server_api.yaml){ target=_blank }.
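To make the route requirements above concrete, here is a minimal sketch of a web server that satisfies the two mandatory endpoints. It assumes a Flask-based Python server with a placeholder scoring function; it illustrates the contract described above and is not DataRobot's own implementation (DRUM already provides a production-ready server).

```python
# Minimal sketch of a custom-environment web server (hypothetical server.py).
# The route prefix comes from the URL_PREFIX environment variable, as required above;
# the CSV parsing and placeholder scores are illustrative, not a real scoring pipeline.
import io
import os

import pandas as pd
from flask import Flask, jsonify, request

PREFIX = os.environ.get("URL_PREFIX", "").strip("/")
BASE = f"/{PREFIX}" if PREFIX else ""

app = Flask(__name__)


@app.route(f"{BASE}/", methods=["GET"])
def ping():
    # Used to check that the model's server is running.
    return "", 200


@app.route(f"{BASE}/predict/", methods=["POST"])
def predict():
    # Assumes scoring data arrives as a CSV payload; replace with your model's logic.
    data = pd.read_csv(io.BytesIO(request.get_data()))
    predictions = [0.5] * len(data)  # placeholder scores
    return jsonify({"predictions": predictions})


if __name__ == "__main__":
    # The server must listen on port 8080.
    app.run(host="0.0.0.0", port=8080)
```

In a real environment, the executable `start_server.sh` would launch a script like this, and both files would be copied to `/opt/code/` by the Dockerfile, as described in the guidelines above.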
## Create the environment {: #create-the-environment } Once DRUM is installed, begin your environment creation by copying one of the examples from [GitHub](https://github.com/datarobot/datarobot-user-models/tree/master/public_dropin_environments){ target=_blank }. {% include 'includes/github-sign-in.md' %} Make sure: 1. The environment code stays in a single folder. 2. You remove the `env_info.json` file. ### Add Linux packages {: #add-linux-packages } To add Linux packages to an environment, add code at the beginning of `dockerfile`, immediately after the `FROM datarobot…` line. Use `dockerfile` syntax for an Ubuntu base. For example, the following command tells DataRobot which base to use and then to install packages `foo`, `boo`, and `moo` inside the Docker image: ``` FROM datarobot/python3-dropin-env-base RUN apt-get update --fix-missing && apt-get install foo boo moo ``` ### Add Python/R packages {: #add-python-r-packages } In some cases, you might want to include Python/R packages in the environment. To do so, note the following: * List packages to install in `requirements.txt`. For R packages, do not include versions in the list. * Do not mix Python and R packages in the same `requirements.txt` file. Instead, create multiple files and adjust `dockerfile` so DataRobot can find and use them. ## Test the environment locally {: #test-the-environment-locally } The following example illustrates how to quickly test your environment using Docker tools and DRUM. 1. To test a custom task with a custom environment, navigate to the local folder where the task content is stored. 2. Run the following, replacing placeholder names in `< >` brackets with actual names: ``` sh drum fit --code-dir <path_to_task_content> --docker <path_to_a_folder_with_environment_code> --input <path_to_test_data.csv> --target-type <target_type> --target <target_column_name> --verbose ``` ## Add a custom environment to DataRobot {: #add-a-custom-environment-to-datarobot } To add a custom environment, you must upload a compressed folder in `.tar`, `.tar.gz`, or `.zip` format. Be sure to review the guidelines for [preparing a custom environment folder](#custom-environment-guidelines) before proceeding. You may also consider creating a custom [drop-in environment](drop-in-environments) by adding Scoring Code and a `start_server.sh` file to your environment folder. Note the following environment limits and environment version limits: === "SaaS" Next to the **Add new environment** and the **New version** buttons, there is a badge indicating how many environments (or environment versions) you've added and how many environments (or environment versions) you can add in total. With the correct permissions, an administrator can set these limits at a [user](manage-users#manage-execution-environment-limits) or [group](manage-groups#manage-execution-environment-limits) level. The following status categories are available in this badge: === "Self-Managed" Next to the **Add new environment** and the **New version** buttons, there is a badge indicating how many environments (or environment versions) you've added and how many environments (or environment versions) you can add in total. With the correct permissions, an administrator can set these limits at a [user](manage-users#manage-execution-environment-limits), [group](manage-groups#manage-execution-environment-limits), or [organization](manage-orgs#manage-execution-environment-limits) level. 
The following status categories are available in this badge:

Badge | Description
------|------------
![](images/env-limit-badge.png){: style="height:30px; width:auto;"} | The number of environments is less than 75% of the environment limit.
![](images/env-limit-badge-alert.png){: style="height:30px; width:auto;"} | The number of environments is equal to or greater than 75% of the environment limit.
![](images/env-limit-badge-warn.png){: style="height:30px; width:auto;"} | The number of environments is equal to the environment limit. You can't add more environments without removing an environment first.

Navigate to **Model Registry** > **Custom Model Workshop** and select the **Environments** tab. This tab lists the environments provided by DataRobot and those you have created. Click **Add new environment** to configure the environment details and add it to the workshop.

![](images/c-env-1.png)

Complete the fields in the **Add New Environment** dialog box.

![](images/c-env-2.png)

| Field | Description |
| :------------- | :------------- |
| Environment name | The name of the environment. |
| Choose the file you want to upload | The tarball archive containing the Dockerfile and any other relevant files. |
| Programming Language | The language in which the environment was made. |
| Description (optional) | An optional description of the custom environment. |

When all fields are complete, click **Add**. The custom environment is ready for use in the Workshop. After you upload an environment, it is only available to you unless you [share](#share-and-download-an-environment) it with other individuals. To make changes to an existing environment, create a new [version](#add-an-environment-version).

### Add an environment version {: #add-an-environment-version }

Troubleshoot or update a custom environment by adding a new version of it to the Workshop. In the **Versions** tab, select **New version**.

![](images/c-env-4.png)

Upload the file for the new version and provide a brief description, then click **Add**.

![](images/c-env-5.png)

The new version is available in the **Versions** tab; all past environment versions are saved for later use.

### View environment information {: #view-environment-information }

A variety of information is available for each custom and built-in environment. To view it:

1. Navigate to **Model Registry > Custom Model Workshop > Environments**. The resulting list shows all environments available to your account, with summary information.

2. For more information on an individual environment, click it in the list:

    ![](images/cml-env-3.png)

    The **Versions** tab lists a variety of version-specific information and provides a link for downloading that version's environment context file.

3. Click **Current Deployments** to see a list of all deployments in which the current environment has been used.

4. Click **Environment Info** to view information about the general environment, not including version information.

### Share and download an environment {: #share-and-download-an-environment }

You can share custom environments with anyone in your organization from the menu options on the right. These options are not available for built-in environments because all organization members already have access to them and they should not be removed.

!!! note
    An environment is not available in the model registry to other users unless it was explicitly shared. That does not, however, limit users' ability to use blueprints that include tasks that use that environment.
    See the description of [_implicit sharing_](cml-custom-tasks#implicit-sharing) for more information.

From **Model Registry > Custom Model Workshop > Environments**, use the menu to [share and/or delete](custom-model-actions) any custom environment that you have appropriate permissions for. (Note that the link points to custom model actions, but the options are the same for custom tasks and environments.)

![](images/cml-env-4.png)

## Self-Managed AI Platform admins {: #self-managed-ai-platform-admins }

The following is available only on the Self-Managed AI Platform.

### Environment availability {: #environment-availability }

Each custom environment is either public or private (the default availability). Making an environment public allows other users who are part of the same DataRobot installation to use it without the owner explicitly sharing it or users needing to create and upload their own versions. Private environments can only be seen by the owner and the users the environment has been shared with. Contact your DataRobot system administrator to make a custom environment public.
custom-environments
---
title: Custom model environments
description: How to set up an environment for custom inference models created in the Custom Model Workshop.
---

# Custom model environments {: #custom-model-environments }

To [create a custom inference model](custom-inf-model), you must select an environment that the model will use. An environment includes the packages, language libraries, and system libraries that models use. You can select one of two types of environments:

Environment | Description
------------|------------
[Drop-in environments](drop-in-environments) | Contain web server Scoring Code and a `start_server.sh` file for the model to use. They are provided by DataRobot in the Custom Model Workshop.
[Custom environments](custom-environments) | Do _not_ contain the Scoring Code and `start_server.sh` file, which instead must be provided in the folder of the custom model you intend to use with the environment. You can create your own environment in the Custom Model Workshop.

You can also create a custom drop-in environment by including the Scoring Code and `start_server.sh` file in the environment folder.

By providing an environment separate from a custom model, DataRobot can build the environment for you. This allows you to reuse the environment for as many models as you want. It also lets you add a model by uploading a folder containing only its code and model artifacts, without supplying web server Scoring Code and a `start_server.sh` file with every model.
index
--- title: Custom model components description: Describes custom model support and how to structure a custom model's files. --- # Custom model components {: #custom-model-components } To create and upload a custom model, you need to define two components&mdash;the model’s content and an environment where the model’s content will run: * The [model content](#model-content) is code written in Python or R. To be correctly parsed by DataRobot, the code must follow certain criteria. The model artifact's structure should match the library used by the model. In addition, it should use the appropriate [custom hooks](#model-code) for Python, R, and Java models. Optionally, you can add files that will be uploaded and used together with the model’s code (for example, you might want to add a separate file with a dictionary if your custom model contains text preprocessing). * The [model environment](#model-environment) is defined using a Docker file and additional files that will allow DataRobot to build an image where the model will run. There are a variety of built-in environments; you only need to build your own environment when you need to install Linux packages. For more detailed information, see the section on [custom model environments](custom-model-environments/index). At a high level, the steps to define a custom model with these components include: 1. Define and test model content locally (i.e., on your computer). 2. Optionally, create a container environment where the model will run. 3. Upload the model content and environment (if applicable) into DataRobot. ## Model content {: #model-content } To define a custom model, create a local folder containing the files listed in the table below (detailed descriptions follow the table). !!! tip To ensure your assembled custom model folder has the correct contents, you can find examples of these files in the [DataRobot model template repository](https://github.com/datarobot/datarobot-user-models/tree/master/model_templates){ target=_blank } on GitHub. File | Description | Required -----|-------------|--------- Model artifact file<br>_or_<br>`custom.py`/`custom.R` file | Provide a model artifact and/or a custom code file. <ul><li>Model artifact: a serialized model artifact with a file extension corresponding to the chosen environment language.</li><li>Custom code: custom capabilities implemented with hooks (or functions) that enable DataRobot to run the code and integrate it with other capabilities. | Yes `model-metadata.yaml` | A file describing model's metadata, including input/output data requirements. You can supply a schema that can then be used to validate the model when building and training a blueprint. A schema lets you specify whether a custom model supports or outputs: <ul><li>Certain data types</li><li>Missing values</li><li>Sparse data</li><li>A certain number of columns</li> | Required when a custom model outputs non-numeric data. If not provided, a default schema is used. `requirements.txt` | A list of Python or R packages to add to the base environment. This list pre-installs Python or R packages that the custom model is using but are not a part of the base environment | No Additional files | Other files used by the model (for example, a file that defines helper functions used inside `custom.py`). | No === "requirements.txt Python example" For Python, provide a list of packages with their versions (1 package per row). 
For example: ``` txt numpy>=1.16.0, <1.19.0 pandas==1.1.0 scikit-learn==0.23.1 lightgbm==3.0.0 gensim==3.8.3 sagemaker-scikit-learn-extension==1.1.0 ``` === "requirements.txt R example" For R, provide a list of packages without versions (1 package per row). For example: ``` txt dplyr stats ``` ### Model code {: #model-code } To define a custom model using DataRobot’s framework, your custom model should include a model artifact corresponding to the chosen environment language, custom code in a `custom.py` (for Python models) or `custom.R` (for R models) file, or both. If you provide only the custom code (without a model artifact), you must use the `load_model` hook. The following hooks can be used in your custom code: Hook (Function) | Unstructured/Structured | Purpose ---------------------|-------------------------|--------- `init()` | Both | Initialize the model run by loading model libraries and reading model files. This hook is executed only once at the beginning of a run. `load_model()` | Both | Load all supported and trained objects from multiple artifacts, or load a trained object stored in an artifact with a format not natively supported by DataRobot. This hook is executed only once at the beginning of a run. `read_input_data()` | Structured | Customize how the model reads data; for example, with encoding and missing value handling. `transform()` | Structured | Define the logic used by custom transformers and estimators to generate transformed data. `score()` | Structured | Define the logic used by custom estimators to generate predictions. `score_unstructured` | Unstructured | Define the output of a custom estimator and returns predictions on input data. Do not use this hook for transform models. `post_process()` | Structured | Define the post processing steps applied to the model's predictions. !!! note These hooks are executed in the order listed. For more information on defining a custom model's code, see the hooks for [structured custom models](structured-custom-models) or [unstructured custom models](unstructured-custom-models). ### Model metadata {: #model-metadata } To define metadata, create a `model-metadata.yaml` file and put it in the top level of the model/model directory. The file specifies additional information about a custom model. ## Model environment {: #model-environment } There are multiple options for defining the environment where a custom model runs. You can: * Choose from a variety of [drop-in environments](drop-in-environments). * Modify a drop-in environment to include missing Python or R packages by specifying the packages in the model's `requirements.txt` file. If provided, the `requirements.txt` file must be uploaded together with the `custom.py` or `custom.R ` file in the model content. If model content contains subfolders, it must be placed in the top folder. * Build a [custom environment](custom-environments) if you need to install Linux packages. When creating a custom model with a custom environment, the environment used must be compatible with the model contents, as it defines the model's runtime environment. To ensure you follow the compatibility guidelines: * Use or modify the [custom environment templates](https://github.com/datarobot/datarobot-user-models/tree/master/public_dropin_environments){ target=_blank } that are compatible with your custom models. * Reference the [guidelines for building your own environment](custom-environments#custom-environment-guidelines). 
DataRobot recommends using an environment template rather than building your own environment, except for specific use cases (for example, when you don't want to use DRUM but want to implement your own prediction server).
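Putting the model content pieces together, the sketch below shows a minimal `custom.py` for a binary classification model whose folder also contains a pickled scikit-learn classifier. The `model.pkl` filename and the assumption that `predict_proba` returns the positive class in its second column are illustrative choices, not requirements.

``` py
# custom.py -- minimal sketch for a binary classification model folder that
# also contains model.pkl (a pickled scikit-learn classifier).
import os
import pickle
from typing import Any, Dict

import pandas as pd


def load_model(code_dir: str) -> Any:
    # Load the serialized estimator stored next to this file.
    with open(os.path.join(code_dir, "model.pkl"), "rb") as f:
        return pickle.load(f)


def score(data: pd.DataFrame, model: Any, **kwargs: Dict[str, Any]) -> pd.DataFrame:
    # Return one probability column per class, named after the class labels
    # DataRobot passes in as keyword arguments.
    positive = kwargs["positive_class_label"]
    negative = kwargs["negative_class_label"]
    proba = model.predict_proba(data)[:, 1]  # assumed positive-class column
    return pd.DataFrame({positive: proba, negative: 1 - proba})
```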
custom-model-components
--- title: GitHub Actions for custom models description: The custom models action manages custom inference models and deployments in DataRobot via GitHub CI/CD workflows. --- # GitHub Actions for custom models {: #github-actions-for-custom-models } The custom models action manages custom inference models and their associated deployments in DataRobot via GitHub CI/CD workflows. These workflows allow you to create or delete models and deployments and modify settings. Metadata defined in YAML files enables the custom model action's control over models and deployments. Most YAML files for this action can reside in any folder within your custom model's repository. The YAML is searched, collected, and tested against a schema to determine if it contains the entities used in these workflows. For more information, see the [custom-models-action repository](https://github.com/datarobot-oss/custom-models-action){ target=_blank }. ## GitHub Actions quickstart {: #github-actions-quickstart } This quickstart example uses a [Python Scikit-Learn model template](https://github.com/datarobot/datarobot-user-models/tree/master/model_templates/python3_sklearn){ target=_blank } from the [datarobot-user-model repository](https://github.com/datarobot/datarobot-user-models/tree/master/model_templates){ target=_blank }. To set up a custom models action that will create a custom inference model and deployment in DataRobot from a custom model repository in GitHub, take the following steps: 1. In the `.github/workflows` directory of your custom model repository, create a YAML file (with any filename) containing the following: {% raw %} ```yaml linenums="1" hl_lines="5 7 29 30 31" name: Workflow CI/CD on: pull_request: branches: [ master ] push: branches: [ master ] # Allows you to run this workflow manually from the Actions tab workflow_dispatch: jobs: datarobot-custom-models: # Run this job on any action of a PR, but skip the job upon merging to the main branch. This # will be taken care of by the push event. if: ${{ github.event.pull_request.merged != true }} runs-on: ubuntu-latest steps: - uses: actions/checkout@v3 with: fetch-depth: 0 - name: DataRobot Custom Models Step id: datarobot-custom-models-step uses: datarobot-oss/custom-models-action@v1.4.0 with: api-token: ${{ secrets.DATAROBOT_API_TOKEN }} webserver: https://app.datarobot.com/ branch: master allow-model-deletion: true allow-deployment-deletion: true ``` {% endraw %} Configure the following fields: * `branches`: Provide the name of your repository's main branch (usually either `master` or `main`) for `pull_request` and `push`. If you created your repository in GitHub, you likely need to update these fields to `main`. While `master` and `main` are the most common branch names, you can target any branch; for example, you could run the workflow on a `release` branch or a `test` branch. * `api-token`: Provide a value for the `DATAROBOT_API_TOKEN` variable by creating an [encrypted secret for GitHub Actions](https://docs.github.com/en/actions/security-guides/encrypted-secrets#creating-encrypted-secrets-for-a-repository){ target=_blank } containing your [DataRobot API key](api-key-mgmt#api-key-management). Alternatively, you can set the token string directly to this field; however, this method is highly discouraged because your API key is extremely sensitive data. If you use this method, anyone who has access to your repository can access your API key. 
* `webserver`: Provide your DataRobot webserver value here if it isn't the default DataRobot US server (`https://app.datarobot.com/`). * `branch`: Provide the name of your repository's main branch (usually either `master` or `main`). If you created your repository in GitHub, you likely need to update this field to `main`. While `master` and `main` are the most common branch names, you can target any branch; for example, you could run the workflow on a `release` branch or a `test` branch. 2. Commit the workflow YAML file and push it to the remote. After you complete this step, any push to the remote (or merged pull request) triggers the action. 3. In the folder for your DataRobot custom model, add a model definition YAML file (e.g., `model.yaml`) containing the following YAML and update the field values according to your model's characteristics: ```yaml user_provided_model_id: user/model-unique-id-1 target_type: Regression settings: name: My Awesome GitHub Model 1 [GitHub CI/CD] target_name: Grade 2014 version: # Make sure this is the environment ID is in your system. # This one is the '[DataRobot] Python 3 Scikit-Learn Drop-In' environment model_environment_id: 5e8c889607389fe0f466c72d ``` Configure the following fields: * `user_provided_model_id`: Provide any descriptive and unique string value. DataRobot recommends following a naming pattern, such as `<user>/<model-unique-id>`. !!! note By default, this ID will reside in a unique namespace, the GitHub repository ID. Alternatively, you can configure the namespace as an input argument to the custom models action. * `target_type`: Provide the correct target type for your custom model. * `target_name`: Provide the correct target name for your custom model. * `model_environment_id`: Provide the DataRobot execution environment required for your custom model. You can find these environments in the DataRobot application under [**Model Registry** > **Custom Model Workshop** > **Environments**](custom-environments). ![](images/pp-cus-model-github3.png) 4. In any directory in your repository, add a deployment definition YAML file (with any filename) containing the following YAML: ```yaml user_provided_deployment_id: user/my-awesome-deployment-id user_provided_model_id: user/model-unique-id-1 ``` Configure the following fields: * `user_provided_deployment_id`: Provide any descriptive and unique string value. DataRobot recommends following a naming pattern, such as `<user>/<deployment-unique-id>`. !!! note By default, this ID will reside in a unique namespace, the GitHub repository ID. Alternatively, you can configure the namespace as an input argument to the custom models action. * `user_provided_model_id`: Provide the exact `user_provided_model_id` you set in the model definition YAML file. 5. Commit these changes and push to the remote, then: * Navigate to your custom model repository in GitHub and click the `Actions` tab. You'll notice that the action is being executed. * Navigate to the DataRobot application. You'll notice that a new custom model was created along with an associated deployment. This action can take a few minutes. !!! warning Creating two commits (or merging two pull requests) in quick succession can result in a `ResourceNotFoundError`. For example, you add a model definition with a training dataset, make a commit, and push to the remote. Then, you immediately delete the model definition, make a commit, and push to the remote. The training data upload action may begin after model deletion, resulting in an error. 
To avoid this scenario, wait for an action's execution to complete before pushing new commits or merging new pull requests to the remote repository. ## Access commit information in DataRobot {: #access-commit-information-in-DataRobot } After your workflow creates a model and a deployment in DataRobot, you can access the commit information from the model's version info and the deployment's overview: === "Model version info" 1. In the **Model Registry**, click **Custom Model Workshop**. 2. On the **Models** tab, click a GitHub-sourced model from the list and then click the **Versions** tab. 3. Under **Manage Versions**, click the version you want to view the commit for. 4. Under **Version Info**, find the **Git Commit Reference** and then click the commit hash (or commit ID) to open the commit that created the current version. ![](images/pp-cus-model-github2.png) === "Model package info" 1. In the **Model Registry**, click **Model Packages**. 2. On the **Model Packages** tab, click a GitHub-sourced model package from the list. 3. Under **Package Info**, review the model information provided by your workflow, find the **Git Commit Reference**, and then click the commit hash (or commit ID) to open the commit that created the current model package. ![](images/pp-cus-model-github4.png) === "Deployment overview" 1. In the **Deployments** inventory, click a GitHub-sourced deployment from the list. 2. On the deployment's **Overview** tab, review the model and deployment information provided by your workflow. 3. In the **Content** group box, find the **Git Commit Reference** and click the commit hash (or commit ID) to open the commit that created the deployment. ![](images/pp-cus-model-github1.png)
custom-model-github-action
---
title: Assemble structured custom models
description: DataRobot provides built-in support for a variety of libraries to create models that use conventional target types.
---

# Assemble structured custom models

DataRobot provides built-in support for a variety of libraries to create models that use conventional target types. If your model is based on one of these libraries, DataRobot expects your model artifact to have a matching file extension:

=== "Python libraries"

    | Library                      | File Extension | Example                |
    |------------------------------|----------------|------------------------|
    | Scikit-learn                 | *.pkl          | sklearn-regressor.pkl  |
    | Xgboost                      | *.pkl          | xgboost-regressor.pkl  |
    | PyTorch                      | *.pth          | torch-regressor.pth    |
    | tf.keras (tensorflow>=2.2.1) | *.h5           | keras-regressor.h5     |
    | ONNX                         | *.onnx         | onnx-regressor.onnx    |
    | pmml                         | *.pmml         | pmml-regressor.pmml    |

=== "R libraries"

    | Library | File Extension | Example            |
    |---------|----------------|--------------------|
    | Caret   | *.rds          | brnn-regressor.rds |

=== "Java libraries"

    | Library                  | File Extension | Example                                      |
    |--------------------------|----------------|----------------------------------------------|
    | datarobot-prediction     | *.jar          | dr-regressor.jar                             |
    | h2o-genmodel             | *.java         | GBM_model_python_1589382591366_1.java (pojo) |
    | h2o-genmodel             | *.zip          | GBM_model_python_1589382591366_1.zip (mojo)  |
    | h2o-genmodel-ext-xgboost | *.java         | XGBoost_2_AutoML_20201015_144158.java        |
    | h2o-genmodel-ext-xgboost | *.zip          | XGBoost_2_AutoML_20201015_144158.zip         |
    | h2o-ext-mojo-pipeline    | *.mojo         | ...                                          |

!!! note
    * DRUM supports models with DataRobot-generated Scoring Code and models that implement either the `IClassificationPredictor` or `IRegressionPredictor` interface from the <a target="_blank" href="https://mvnrepository.com/artifact/com.datarobot/datarobot-prediction">DataRobot-prediction library</a>. The model artifact must have a `.jar` extension.
    * You can define the `DRUM_JAVA_XMX` environment variable to set JVM maximum heap memory size (`-Xmx` java parameter): `DRUM_JAVA_XMX=512m`.
    * If you export an H2O model as `POJO`, you cannot rename the file; however, this limitation doesn't apply to models exported as `MOJO`&mdash;they may be named in any fashion.
    * The `h2o-ext-mojo-pipeline` requires an h2o driverless AI license.
    * Support for DAI Mojo Pipeline has not been incorporated into tests for the build of `datarobot-drum`.

If your model doesn't use one of these libraries, you must create an [unstructured custom model](unstructured-custom-models).

{% include 'includes/structured-vs-unstructured-cus-models.md' %}

## Structured custom model requirements {: #structured-custom-model-requirements }

If your custom model uses one of the supported libraries, make sure it meets the following requirements:

* Data sent to a model must be usable for predictions without additional pre-processing.
* Regression models must return a single floating point per row of prediction data.
* Binary classification models must return one floating point value <= 1.0 or two floating point values that sum to 1.0 per row of prediction data.
    * Single-value output is assumed to be the positive class probability.
    * For multi-value, it is assumed that the first value is the negative class probability and the second is the positive class probability.
* There must be a single `pkl`/`pth`/`h5` file present.

!!! note "Data format"
    When working with structured models, DataRobot supports data as files of `csv`, `sparse`, or `arrow` format.
DataRobot doesn't sanitize missing or abnormal (containing parentheses, slashes, symbols, etc. ) column names. ## Structured custom model hooks {: #structured-custom-model-hooks } To define a custom model using DataRobot’s framework, your artifact file should contain hooks (or functions) to define how a model is trained and how it scores new data. DataRobot automatically calls each hook and passes the parameters based on the project and blueprint configuration. However, you have full flexibility to define the logic that runs inside each hook. If necessary, you can include these hooks alongside your model artifacts in your model folder in a file called `custom.py` for Python models or `custom.R` for R models. !!! note Training and inference hooks can be defined in the same file. The following sections describe each hook, with examples. ??? note "Type annotations in hook signatures" The following hook signatures are written with Python 3 type annotations. The Python types match the following R types: Python type | R type | Description --------------------|--------------|------------ `DataFrame` | `data.frame` | A numpy `DataFrame` or R `data.frame`. `None` | `NULL` | Nothing `str` | `character` | String `Any` | An R object | The deserialized model. `*args`, `**kwargs` | `...` | These are keyword arguments, not types; they serve as placeholders for additional parameters. ************************************************** ### `init()` {: #init } The `init` hook is executed only once at the beginning of the run to allow the model to load libraries and additional files for use in other hooks. ``` py init(**kwargs) -> None ``` #### `init()` input {: #init-input } Input parameter | Description ----------------|------------ `**kwargs` | An additional keyword argument. `code_dir` provides a link, passed through the `--code_dir` parameter, to the folder where the model code is stored. #### `init()` example {: #init-example } The following provides a brief code snippet using `init()`; see a more complete example [here](https://github.com/datarobot/datarobot-user-models/blob/master/model_templates/2_estimators/5_r_binary_classification/custom.R){ target=_blank }. === "Python" ``` py def init(code_dir): global g_code_dir g_code_dir = code_dir ``` === "R" ``` r init <- function(...) { library(brnn) library(glmnet) } ``` #### `init()` output {: #init-output } The `init()` hook does not return anything. ************************************************** ### `load_model()` {: #load-model } The `load_model()` hook is executed only once at the beginning of the run to load one or more trained objects from multiple artifacts. It is only required when a trained object is stored in an artifact that uses an unsupported format or when multiple artifacts are used. The `load_model()` hook is not required when there is a single artifact in one of the supported formats: * Python: `.pkl`, `.pth`, `.h5`, `.joblib` * Java: `.mojo` * R: `.rds` ``` py load_model(code_dir: str) -> Any ``` #### `load_model()` input {: #load-model-input } Input parameter | Description ----------------|------------ `code_dir` | A link, passed through the `--code_dir` parameter, to the directory where the model artifact and additional code are provided. #### `load_model()` example {: #load-model-example } The following provides a brief code snippet using `load_model()`; see a more complete example [here](https://github.com/datarobot/datarobot-user-models/blob/master/model_templates/3_pipelines/14_python3_keras_joblib/custom.py){ target=_blank }. 
=== "Python" ``` py def load_model(code_dir): model_path = "model.pkl" model = joblib.load(os.path.join(code_dir, model_path)) ``` === "R" ``` r load_model <- function(input_dir) { readRDS(file.path(input_dir, "model_name.rds")) } ``` #### `load_model()` output {: #load-model-output } The `load_model()` hook returns a trained object (of any type). ************************************************** ### `read_input_data()` {: #read-input-data } The `read_input_data` hook customizes how the model reads data; for example, with encoding and missing value handling. ``` py read_input_data(input_binary_data: bytes) -> Any ``` #### `read_input_data()` input {: #read-input-data-input } Input parameter | Description --------------------|------------ `input_binary_data` | Data passed through the `--input` parameter in `drum score` mode, or a payload submitted to the `drum server` `/predict` endpoint. #### `read_input_data()` example {: #read-input-data-example } === "Python" ``` py def read_input_data(input_binary_data): global prediction_value prediction_value += 1 return pd.read_csv(io.BytesIO(input_binary_data)) ``` === "R" ``` r read_input_data <- function(input_binary_data) { input_text_data <- stri_conv(input_binary_data, "utf8") read.csv(text=gsub("\r","", input_text_data, fixed=TRUE)) } ``` #### `read_input_data()` output {: #read-input-data-output } The `read_input_data()` hook must return a pandas `DataFrame` or R `data.frame`; otherwise, you must write your own score method. ************************************************** ### `transform()` {: #transform } The `transform()` hook defines the output of a custom transform and returns transformed data. Do not use this hook for estimator models. This hook can be used in both transformer and estimator tasks: * For transformers, this hook applies transformations to the data provided and passes it to downstream tasks. * For estimators, this hook applies transformations to the prediction data before making predictions. ``` py transform(data: DataFrame, model: Any) -> DataFrame ``` #### `transform()` input {: #transform-input } Input parameter | Description ----------------|------------ `data` | A pandas `DataFrame` (Python) or R `data.frame` containing the data that the custom model should transform. Missing values are indicated with `NaN` in Python and `NA` in R, unless otherwise overridden by the `read_input_data` hook. `model` | A trained object DataRobot loads from the artifact (typically, a trained transformer) or loaded through the `load_model` hook. #### `transform()` example {: #transform-example } The following provides a brief code snippet using `transform()`; see a more complete example [here](https://github.com/datarobot/datarobot-user-models/blob/master/model_templates/1_transforms/1_python_missing_values/custom.py){ target=_blank }. === "Python" ``` py def transform(data, model): data = data.fillna(0) return data ``` === "R" ``` r transform <- function(data, model) { data[is.na(data)] <- 0 data } ``` #### `transform()` output {: #transform-output } The `transform()` hook returns a pandas `DataFrame` or R `data.frame` with transformed data. ************************************************** ### `score()` {: #score } The `score()` hook defines the output of a custom estimator and returns predictions on input data. Do not use this hook for transform models. 
``` py score(data: DataFrame, model: Any, **kwargs: Dict[str, Any]) -> DataFrame ``` #### `score()` input {: #score-input } Input parameter | Description ----------------|------------ `data` | A pandas DataFrame (Python) or R data.frame containing the data the custom model will score. If the `transform` hook is used, `data` will be the transformed data. `model` | A trained object loaded from the artifact by DataRobot or loaded through the `load_model` hook. `**kwargs` | Additional keyword arguments. For a binary classification model, it contains the positive and negative class labels as the following keys:<ul><li>`positive_class_label`</li><li>`negative_class_label`</li></ul> #### `score()` examples {: #score-examples } The following provides a brief code snippet using `score()`; see a more complete example [here](https://github.com/datarobot/datarobot-user-models/blob/master/model_templates/2_estimators/4_python_binary_classification/custom.py){ target=_blank }. === "Python" ``` py def score(data: pd.DataFrame, model: Any, **kwargs: Dict[str, Any]) -> pd.DataFrame: predictions = model.predict(data) predictions_df = pd.DataFrame(predictions, columns=[kwargs["positive_class_label"]]) predictions_df[kwargs["negative_class_label"]] = ( 1 - predictions_df[kwargs["positive_class_label"]] ) return predictions_df ``` === "R" ``` r score <- function(data, model, ...){ scores <- predict(model, newdata = data, type = "prob") names(scores) <- c('0', '1') return(scores) } ``` #### `score()` output {: #score-output } The `score()` hook should return a pandas `DataFrame` (or R `data.frame` or `tibble`) of the following format: * For regression or anomaly detection projects, the output must have a single numeric column named **Predictions**. * For binary or multiclass projects, the output must have one column per class, with class names used as column names. Each cell must contain the probability of the respective class, and each row must sum up to 1.0. ************************************************** ### `post_process()` {: #post-process } The `post_process` hook formats the prediction data returned by DataRobot or the `score` hook when it doesn't match the output format expectations. ``` py post_process(predictions: DataFrame, model: Any) -> DataFrame ``` #### `post_process()` input {: #post-process-input } Input parameter | Description ----------------|------------ `predictions` | A pandas DataFrame (Python) or R data.frame containing the scored data produced by DataRobot or the `score` hook. `model` | A trained object loaded from the artifact by DataRobot or loaded through the `load_model` hook. #### `post_process()` example {: #post-process-example } === "Python" ``` py def post_process(predictions, model): return predictions + 1 ``` === "R" ``` r post_process <- function(predictions, model) { names(predictions) <- c('0', '1') } ``` #### `post_process()` output {: #post-process-output } The `post_process` hook returns a pandas `DataFrame` (or R `data.frame` or `tibble`) of the following format: * For regression or anomaly detection projects, the output must have a single numeric column named **Predictions**. * For binary or multiclass projects, the output must have one column per class, with class names used as column names. Each cell must contain the probability of the respective class, and each row must sum up to 1.0.
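Before packaging a model, it can help to sanity-check the frame returned by `score()` or `post_process()` against the format rules above. The helper below is an illustrative sketch for local use only; it is not part of DRUM or DataRobot.

``` py
# Illustrative local sanity check for the prediction output formats described above.
import numpy as np
import pandas as pd


def check_predictions(df: pd.DataFrame, class_labels=None) -> None:
    if class_labels:
        # Binary/multiclass: one probability column per class, rows sum to 1.0.
        assert sorted(df.columns) == sorted(class_labels), "columns must match class labels"
        assert np.allclose(df.sum(axis=1), 1.0), "each row must sum to 1.0"
    else:
        # Regression/anomaly detection: a single numeric Predictions column.
        assert list(df.columns) == ["Predictions"], "expected a single Predictions column"
        assert pd.api.types.is_numeric_dtype(df["Predictions"]), "predictions must be numeric"


# Example: a valid binary classification output for class labels "yes"/"no".
check_predictions(pd.DataFrame({"yes": [0.8, 0.1], "no": [0.2, 0.9]}), ["yes", "no"])
```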
structured-custom-models
--- title: DRUM CLI tool description: DataRobot Model Runner (DRUM) is a tool that allows you to work with and test Python, R, and Java custom models and custom tasks. --- {% include 'includes/drum-tool.md' %} {% include 'includes/drum-for-ubuntu.md' %} {% include 'includes/drum-for-mac.md' %} {% include 'includes/drum-for-windows.md' %}
custom-model-drum
--- title: Custom model assembly description: Describes how to assemble custom models and environments. --- # Custom model assembly {: #custom-model-assembly } While DataRobot provides hundreds of built-in models, there are situations where you need preprocessing or modeling methods that are not currently supported out of the box. To create a custom inference model, you must provide a serialized model artifact with a file extension corresponding to the chosen environment language and any additional custom code required to use the model. Before adding custom models and environments to DataRobot, you must prepare and structure the files required to run them successfully. The tools and templates necessary to prepare custom models are hosted in the <a target="_blank" href="https://github.com/datarobot/datarobot-user-models">Custom Model GitHub repository</a>. ({% include 'includes/github-sign-in.md' %}) DataRobot recommends understanding the following requirements to prepare your custom model for upload to the Custom Model Workshop. Topic | Describes ------|----------- [Custom model components](custom-model-components) | How to identify the components required to run custom inference models. [Assemble structured custom models](structured-custom-models) | How to assemble and validate structured custom models compatible with DataRobot. [Assemble unstructured custom models](unstructured-custom-models) | How to assemble and validate unstructured custom models compatible with DataRobot. [DRUM CLI tool](custom-model-drum) | How to download and install the DataRobot user model (DRUM) to work with and test custom models and custom environments locally before uploading to DataRobot. [Test a custom model locally](custom-local-test) | How to test custom inference models in your local environment using the DataRobot Model Runner (DRUM) tool. [GitHub Actions for custom models](custom-model-github-action) | The custom models action manages custom inference models and deployments in DataRobot via GitHub CI/CD workflows.
index
--- title: Assemble unstructured custom models description: Unstructured models can use arbitrary data for input and output, allowing you to deploy and monitor models regardless of the target type. --- # Assemble unstructured custom models If your custom model doesn't use a target type supported by DataRobot, you can create an unstructured model. Unstructured models can use arbitrary (_i.e., unstructured_) data for input and output, allowing you to deploy and monitor models regardless of the target type. This characteristic of unstructured models gives you more control over how you read the data from a prediction request and response; however, it requires precise coding to assemble correctly. You must implement [custom hooks to process the unstructured input data](#unstructured-custom-model-hooks) and generate a valid response. {% include 'includes/structured-vs-unstructured-cus-models.md' %} Inference models support unstructured mode, where input and output are not verified and can be almost anything. This is your responsibility to verify correctness. For assembly instructions specific to unstructured custom inference models, reference the model templates for <a target="_blank" href="https://github.com/datarobot/datarobot-user-models/tree/master/model_templates/python3_unstructured">Python</a> and <a target="_blank" href="https://github.com/datarobot/datarobot-user-models/tree/master/model_templates/r_unstructured">R</a> provided in the DRUM documentation. !!! note "Data format" When working with unstructured models DataRobot supports data as a text or binary file. ## Unstructured custom model hooks {: #unstructured-custom-model-hooks } Include any necessary hooks in a file called `custom.py` for Python models or `custom.R` for R models alongside your model artifacts in your model folder: ??? note "Type annotations in hook signatures" The following hook signatures are written with Python 3 type annotations. The Python types match the following R types: Python type | R type | Description --------------------|--------------|------------ `None` | `NULL` | Nothing `str` | `character` | String `bytes` | `raw` | Raw bytes `dict` | `list` | A list of key/value pairs. `tuple` | `list` | A list of data. `Any` | An R object | The deserialized model. `*args`, `**kwargs` | `...` | These are keyword arguments, not types; they serve as placeholders for additional parameters. ************************************************** ### `init()` {: #init } The `init` hook is executed only once at the beginning of the run to allow the model to load libraries and additional files for use in other hooks. ``` py init(**kwargs) -> None ``` #### `init()` input {: #init-input } Input parameter | Description ----------------|------------ `**kwargs` | An additional keyword argument. `code_dir` provides a link, passed through the `--code_dir` parameter, to the folder where the model code is stored. #### `init()` example {: #init-example } === "Python" ``` py def init(code_dir): global g_code_dir g_code_dir = code_dir ``` === "R" ``` r init <- function(...) { library(brnn) library(glmnet) } ``` #### `init()` output {: #init-output } The `init()` hook does not return anything. ************************************************** ### `load_model()` {: #load-model } The `load_model()` hook is executed only once at the beginning of the run to load one or more trained objects from multiple artifacts. 
It is only required when a trained object is stored in an artifact that uses an unsupported format or when multiple artifacts are used. The `load_model()` hook is not required when there is a single artifact in one of the supported formats: * Python: `.pkl`, `.pth`, `.h5`, `.joblib` * Java: `.mojo` * R: `.rds` ``` py load_model(code_dir: str) -> Any ``` #### `load_model()` input {: #load-model-input } Input parameter | Description ----------------|------------ `code_dir` | A link, passed through the `--code_dir` parameter, to the directory where the model artifact and additional code are provided. #### `load_model()` example {: #load-model-example } === "Python" ``` py def load_model(code_dir): model_path = "model.pkl" model = joblib.load(os.path.join(code_dir, model_path)) ``` === "R" ``` r load_model <- function(input_dir) { readRDS(file.path(input_dir, "model_name.rds")) } ``` #### `load_model()` output {: #load-model-output } The `load_model()` hook returns a trained object (of any type). ************************************************** ### `score_unstructured()` {: #score } The `score_unstructured()` hook defines the output of a custom estimator and returns predictions on input data. Do not use this hook for transform models. ``` py score_unstructured(model: Any, data: str/bytes, **kwargs: Dict[str, Any]) -> str/bytes [, Dict[str, str]] ``` #### `score_unstructured()` input {: #score-input } Input parameter | Description ----------------|------------ `data` | Data represented as `str` or `bytes`, depending on the provided `mimetype`. `model` | A trained object loaded from the artifact by DataRobot or loaded through the `load_model` hook. `**kwargs` | Additional keyword arguments. For a binary classification model, it contains the positive and negative class labels as the following keys:<ul><li>`mimetype: str`: Indicates the nature and format of the data, taken from request `Content-Type` header or `--content-type` CLI argument in batch mode.</li><li>`charset: str`: Indicates the encoding for text data, taken from request `Content-Type` header or `--content-type` CLI argument in batch mode.</li><li>`query: dict`: Parameters passed as query params in a http request or the `--query` CLI argument in batch mode.</li><li>`headers: dict`: Request headers passed in http request.</li></ul> #### `score_unstructured()` examples {: #score-unstructured-examples } === "Python" ``` py def score_unstructured(model, data, query, **kwargs): text_data = data.decode("utf8") if isinstance(data, bytes) else data text_data = text_data.strip() words_count = model.predict(text_data) return str(words_count) ``` === "R" ``` r score_unstructured <- function(model, data, query, ...) { kwargs <- list(...) if (is.raw(data)) { data_text <- stri_conv(data, "utf8") } else { data_text <- data } count <- str_count(data_text, " ") + 1 ret = toString(count) ret } ``` #### `score_unstructured()` output {: #score-unstructured-output } The `score_unstructured()` hook should return: * A single value `return data: str/bytes`. * A tuple `return data: str/bytes, kwargs: dict[str, str]` where `kwargs = {"mimetype": "users/mimetype", "charset": "users/charset"}` can be used to return `mimetype` and `charset` for the `Content-Type` response header. 
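For example, a hook that uses the tuple form to set the response `Content-Type` explicitly might look like the following minimal sketch; the word-count logic and the `text/plain` mimetype are illustrative only.

``` py
# Sketch of a score_unstructured hook that returns data plus Content-Type details.
from typing import Any


def score_unstructured(model: Any, data, **kwargs):
    # Text payloads arrive as str; binary payloads arrive as bytes.
    charset = kwargs.get("charset") or "utf8"
    text = data.decode(charset) if isinstance(data, bytes) else data
    word_count = str(len(text.split()))
    # The second element sets mimetype/charset for the Content-Type response header.
    return word_count, {"mimetype": "text/plain", "charset": charset}
```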
************************************************** ## Unstructured model considerations {: #unstructured-model-considerations } ### Incoming data type resolution {: #incoming-data-type-resolution } The `score_unstructured` hook receives a `data` parameter, which can be of either `str` or `bytes` type. You can use type-checking methods to verify types: * Python: `isinstance(data, str)` or `isinstance(data, bytes)` * R: `is.character(data)` or `is.raw(data)` DataRobot uses the `Content-Type` header to determine a type to cast `data` to. The `Content-Type` header can be provided in a request or in `--content-type` CLI argument. The `Content-Type` header format is `type/subtype;parameter` (e.g., `text/plain;charset=utf8`). The following rules apply: * If `charset` is not defined, default `utf8` charset is used, otherwise provided charset is used to decode data. * If `Content-Type` is not defined, then incoming `kwargs={"mimetype": "text/plain", "charset":"utf8"}`, so data is treated as text, decoded using `utf8` charset and passed as `str`. * If `mimetype` starts with `text/` or `application/json`, data is treated as text, decoded using provided charset and passed as `str`. * For all other `mimetype` values, data is treated as binary and passed as `bytes`. ### Outgoing data and kwargs parameters {: #outgoing-data-and-kwargs-parameters } As mentioned above, `score_unstructured` can return: * A single data value: `return data`. * A tuple (data and additional parameters: `return data, {"mimetype": "some/type", "charset": "some_charset"}`). #### Server mode {: #server-mode } In server mode, the following rules apply: * `return data: str`: The data is treated as text, the default `Content-Type="text/plain;charset=utf8"` header is set in response, and data is encoded and sent using the `utf8` `charset`. * `return data: bytes`: The data is treated as binary, the default `Content-Type="application/octet-stream;charset=utf8"` header is set in response, and data is sent as-is. * `return data, kwargs`: If `mimetype` value is missing in `kwargs`, the default `mimetype` is set according to the data type `str`/`bytes` -> `text/plain`/`application/octet-stream`. If `charset` value is missing, the default `utf8` charset is set; then, if the data is of type `str`, it will be encoded using resolved `charset` and sent. #### Batch mode {: #batch-mode } The best way to debug in batch mode is to provide `--output` file. The returned data is written to a file according to the type of data returned: * `str` data is written to a text file using default `utf8` or returned in `kwargs` `charset`. * `bytes` data is written to a binary file. The returned `kwargs` are not shown in batch mode, but you can still print them during debugging. ### Auxiliaries {: #auxiliaries } You may use the `datarobot_drum.RuntimeParameters` in your code (e.g. `custom.py`) to read runtime parameters delivered to the executed custom model. The runtime parameters should be defined in the DataRobot UI. Below is a simple example of how to read a string of credential runtime parameters: ``` py from datarobot_drum import RuntimeParameters def load_model(code_dir): target_url = RuntimeParameters.get("TARGET_URL") s3_creds = RuntimeParameters.get("AWS_CREDENIAL") ... ```
unstructured-custom-models
---
title: Test custom models locally
description: Use the DataRobot Model Runner tool (DRUM) to test and verify a Python, R, or Java custom model locally, before you upload it to DataRobot.
---

# Test custom models locally {: #test-custom-models-locally }

!!! info "Availability information"
    To access the DataRobot Model Runner tool, contact your DataRobot representative.

The DataRobot Model Runner (DRUM) tool allows you to test Python, R, and Java custom models locally. The test verifies that a custom model can successfully run and make predictions before you [upload it to DataRobot](custom-inf-model#create-cmodel). However, this testing is only for development purposes. DataRobot recommends also testing any custom model you wish to deploy in the [Custom Model Workshop](custom-inf-model#test-a-custom-inference-model) after uploading it.

Before proceeding, reference the guidelines for [setting up a custom model or environment folder](custom-model-assembly/index).

!!! note
    The DataRobot Model Runner tool supports Python, R, and Java custom models. Reference the <a target="_blank" href="https://pypi.org/project/datarobot-drum/">DRUM readme</a> for details about additional functionality, including:

    * Autocompletion
    * Custom hooks
    * Performance tests
    * Running models with a prediction server
    * Running models inside a Docker container

### Model requirements {: #model-requirements }

In addition to the required folder contents, DRUM requires the following for your serialized model:

* Regression models must return a single floating point per row of prediction data.
* Binary classification models must return two floating point values that sum to 1.0 per row of prediction data.
    * The first value must be the positive class probability, and the second the negative class probability.
* There must be a single `pkl`/`pth`/`h5` file present.

## Run tests with DRUM {: #run-tests-with-the-datarobot-cm-runner }

Use the following commands to execute local tests for your custom model:

``` sh title="List all possible arguments"
drum --help
```

<hr>

``` sh title="Test a custom binary classification model"
drum score -m ~/custom_model/ --input <input-dataset-filename.csv> [--positive-class-label <labelname>] [--negative-class-label <labelname>] [--output <output-filename.csv>] [--verbose]

# Use --verbose for a more detailed output.
# Make batch predictions with a custom binary classification model. Optionally, specify an
# output file. Otherwise, predictions are returned to the command line.
```

``` sh title="Example: Test a custom binary classification model"
drum score -m ~/custom_model/ --input 10k.csv --positive-class-label yes --negative-class-label no --output 10k-results.csv --verbose
```

<hr>

``` sh title="Test a custom regression model"
drum score -m ~/custom_model/ --input <input-dataset-filename.csv> [--output <output-filename.csv>] [--verbose]

# Use --verbose for a more detailed output.
# Make batch predictions with a custom regression model. Optionally, specify an output file.
# Otherwise, predictions are returned to the command line.
```

``` sh title="Example: Test a custom regression model"
drum score -m ~/custom_model/ --input fast-iron.csv --verbose

# This example does not include an output command, so the prediction results return in the command line.
```
custom-local-test
---
title: Relaunch deployments
description: Relaunch an MLOps management agent deployment without changes to the deployment's metadata.
---

# Relaunch management agent deployments

To manually relaunch a management agent deployment without changes to the deployment's metadata, trigger a relaunch from the deployment's [Actions menu](actions-menu) by taking the following steps:

1. On the [**Deployments**](deploy-inventory) page or any tab within a deployment, next to the name of the deployment you want to relaunch, click the [Actions menu](actions-menu) and click **Relaunch**.

    ![](images/mgmt-agent-relaunch.png)

2. In the **Relaunch deployment** dialog box, click **Relaunch**.

    ![](images/mgmt-agent-relaunch-confirm.png)
mgmt-agent-relaunch
--- title: Management agent deployment status and events description: Monitor the status and health of MLOps management agent deployments. --- # Management agent deployment status and events {: #management-agent-deployment-status-and-events } To monitor the status and health of management agent deployments, you can view the overall deployment status and specific deployment service health events. ## Deployment status {: #deployment-status } When the management agent is performing an action on an external deployment that it is managing, a badge appears under the deployment name in the [deployment inventory](deploy-inventory), and on any tab within the deployment, to indicate the deployment status. The following four deployment status values are possible when an action is being taken on a deployment managed by the management agent: | Status | Badge | |----------------|---------------| LAUNCHING | ![](images/mgmt-agent-launch.png){: style="height:20px; width:auto"} STOPPING | ![](images/mgmt-agent-stop.png){: style="height:20px; width:auto"} REPLACING MODEL | ![](images/mgmt-agent-replace.png){: style="height:20px; width:auto"} ERRORED | ![](images/mgmt-agent-error.png){: style="height:20px; width:auto"} ## Deployment events {: #deployment-events } The management agent sends periodic updates about deployment health and status via the API. These are reported as MLOps events and are listed on the [Service Health](service-health) page. DataRobot allows you to monitor and work with deployment events for external deployments once set up with the management agent. From one place, you can: | Action | Example use case | |--------|------------------| Record and persist deployment-related events | Recording deployment actions, health changes, state changes, etc. | View all related events | Auditing deployment events. | Filter and search events | Viewing all model changes. | Extract data | Reporting and offline storage. | Receive notification of certain incidents | Receiving a Slack message for an outage. | Enforce a retention policy | Ensuring that a log-retention policy is followed (90 days of retention guaranteed; older events may be purged). | To view an overview of deployment events, select the deployment from the inventory and navigate to the [**Service Health**](service-health) tab. All events are recorded under the **Recent Management Agent Activity** section. ![](images/mgmt-agent-4.png) The most recent events are listed at the top of the list. Each event shows the time it occurred, a description, and an icon indicating its status: | Icon | Description | |------|-------------| | ![](images/icon-green.png) Green / Passing | No action needed. | | ![](images/icon-yellow.png) Yellow / At risk | Concerns found but no immediate action needed; continue monitoring. | | ![](images/icon-red.png) Red / Failing | Immediate action needed. | | ![](images/icon-gray.png) Gray / Unknown | Unknown | | ![](images/icon-info.png) Informational | Details a deployment action (e.g., the deployment has launched). | !!! note The management agent's most recently reported service health status is prioritized. For example, if data drift is green and passing on a deployment, but the management agent delivers an inferior status (red and failing), the list updates to reflect that condition. Select an event row to view its details on the right-side panel. ![](images/mgmt-agent-5.png)
mgmt-agent-events-status
--- title: Configure environment plugins description: Configure prediction environment plugins for the MLOps management agent. --- # Configure management agent environment plugins Management agent plugins deploy and manage models in a given prediction environment. The management agent submits commands to the plugin, and the plugin executes them and returns the status of the command to the management agent. To facilitate this interaction, you provide prediction environment details during plugin configuration, allowing the plugin to execute commands in that environment. For example, a Kubernetes plugin can launch a deployment (container) in a Kubernetes cluster, replace a model in the deployment, stop the container, etc. The MLOps management agent contains the following example plugins: * Filesystem plugin. * Docker plugin. * Kubernetes plugin. * Test plugin. !!! note These example plugins are installed as part of the `datarobot_bosun-*-py3-none-any.whl` wheel file. ## Configure example plugins {: #configure-example-plugins } The following example plugins require additional configuration for use with the management agent: === "Filesystem" To enable communication between the management agent and the deployment, the filesystem plugin creates one directory per deployment in the local filesystem, and downloads each deployment's model package and configuration `.yaml` file into the deployment's local directory. These artifacts can then be used to serve predictions from a PPS container. ``` yaml title="plugin.filesystem.conf.yaml" # The top-level directory that will be used to store each deployment directory baseDir: "." # Each deployment directory will be prefixed with the following string deploymentDirPrefix: "deployment_" # The name of the deployment config file to create inside the deployment directory. # Note: If working with the PPS, DO NOT change this name; the PPS expects this filename. deploymentInfoFile: "config.yml" # If defined, this string will be prefixed to the predictions URL for this deployment, # and the URL will be returned, with the deployment id suffixed to the end with the # /predict endpoint. deploymentPredictionBaseUrl: "http://localhost:8080" # If defined, create a yaml file with the kv of the deployment. # If the name of the file is the same as the deploymentInfoFile, # the key values are added to the same file as the other config. # deploymentKVFile: "kv.yaml" ``` === "Docker" The Docker plugin can deploy native DataRobot models and custom models on a Docker server. In addition, the plugin automatically runs the monitoring agent to monitor deployed models and uses the `traefik` reverse proxy to provide a single prediction endpoint for each deployment. The management agent's Docker plugin supports the use of the [Portable Prediction Server](portable-pps), allowing a single Docker container to serve multiple models. It enables you to configure the PPS to indicate where models for each deployment are located and gives you the ability to start, stop, and manage deployments. The Docker plugin can: * Retrieve a model package from DataRobot for a deployment. * Launch the DataRobot model within the Docker container. * Shut down and clean up the Docker container. * Report status back via events. * Monitor predictions using the monitoring agent. To configure the Docker plugin, take the following steps: 1. Set up the environment required for the Docker plugin: ``` bash docker pull rabbitmq:3-management docker pull traefik:2.3.3 docker network create bosun ``` 2. 
Build the monitoring agent container image: ``` bash cd datarobot_mlops_package-*/ cd tools/agent_docker make build ``` 3. Download the [Portable Prediction Server](portable-pps) from the DataRobot UI. If you are planning to use a custom model image, make sure the image is built and accessible to the Docker service. 4. Configure the Docker plugin configuration file: ```yaml title="plugin.docker.conf.yaml" # Docker network on which to run all containers. # This network must be created prior to running # the agent (i.e., `docker network create <NAME>`) dockerNetwork: "bosun" # Traefik image to use traefikImage: "traefik:2.3.3" # Address that will be reported to DataRobot outfacingPredictionURLPrefix: "http://10.10.12.22:81" # MLOps Agent image to use for monitoring agentImage: "datarobot/mlops-tracking-agent:latest" # RabbitMQ image to use for building a channel rabbitmqImage: "rabbitmq:3-management" # PPS base image ppsBaseImage: "datarobot/datarobot-portable-prediction-api:latest" # Prefix for generated images generatedImagePrefix: "mlops_" # Prefix for running containers containerNamePrefix: "mlops_" # Mapping of traefik proxy ports (not mandatory) traefikPortMapping: 80: 81 8080: 8081 # Mapping of RabbitMQ ports (not mandatory) rabbitmqPortMapping: 15672: 15673 5672: 5673 ``` === "Kubernetes" DataRobot provides a plugin to deploy and manage models in your Kubernetes cluster without writing any additional code. For configuration information, see the README file in the `tools/charts/datarobot-management-agent` folder in the tarball. ``` yaml title="plugin.k8s.conf.yaml" ## The following settings are related to connecting to your Kubernetes cluster # # The name of the kube-config context to use (similar to --context argument of kubectl). There is a special # `IN_CLUSTER` string to be used if you are running the plugin inside a cluster. The default is "IN_CLUSTER" # kubeConfigContext: IN_CLUSTER # The namespace in which you want to create and manage external deployments (similar to --namespace argument of kubectl). You # can leave this as `null` to use the "default" namespace, the namespace defined in your context, or (if running `IN_CLUSTER`) # manage resources in the same namespace the plugin is executing in. # kubeNamespace: ## The following settings are related to whether or not MLOps monitoring is enabled # # We need to know the location of the dockerized agent image that can be launched into your Kubernetes cluster. # You can build the image by running `make build` in the tools/agent_docker/ directory and retagging the image # and pushing it to your registry. # agentImage: "<FILL-IN-DOCKER-REGISTRY>/mlops-tracking-agent:latest" ## The following settings are all related to accessing the model from outside the Kubernetes cluster # # The URL prefix used to access the deployed model, e.g., https://example.com/deployments/ # The model will be accessible via <outfacingPredictionURLPrefix>/<model_id>/predict outfacingPredictionURLPrefix: "<FILL-CORRECT-URL-FOR-K8S-INGRESS>" # We are still using the beta Ingress resource API, so a class must be provided. If your cluster # doesn't have a default ingress class, please provide one. # ingressClass: ## The following settings are all related to building the finalized model image (base image + mlpkg) # # The location of the Portable Prediction Server base image. You can download it from DataRobot's developer # tools section, retag it, and push it to your registry.
ppsBaseImage: "<FILL-IN-DOCKER-REGISTRY>/datarobot-portable-prediction-api:latest" # The Docker repo to which this plugin can push finalized models. The built images will be tagged # as follows: <generatedImageRepo>:m-<model_pkg_id> generatedImageRepo: "<FILL-IN-DOCKER-REGISTRY>/mlops-model" # We use Kaniko to build our finalized image. See https://github.com/GoogleContainerTools/kaniko#readme. # The default is to use the image below. # kanikoImage: "gcr.io/kaniko-project/executor:v1.5.2" # The name of the Kaniko ConfigMap to use. This provides the settings Kaniko will need to be able to push to # your registry type. See https://github.com/GoogleContainerTools/kaniko#pushing-to-different-registries. # The default is to not use any additional configuration. # kanikoConfigmapName: "docker-config" # The name of the Kaniko Secret to use. This provides the settings Kaniko will need to be able to push to # your registry type. See https://github.com/GoogleContainerTools/kaniko#pushing-to-different-registries. # The default is to not use any additional secrets. The secret must be of the type: kubernetes.io/dockerconfigjson # kanikoSecretName: "registry-credentials" # The name of a service account to use for running Kaniko if you want to run it in a more secure fashion. # See https://github.com/GoogleContainerTools/kaniko#security. # The default is to use the "default" service account in the namespace in which the pod runs. # kanikoServiceAccount: default ``` === "Test" To configure the test plugin, use the `--plugin test` option and set the temporary directory and sleep time (in seconds) for each action executed by the test plugin. For example, the deployment `launch_time_sec` set in the test plugin configuration below creates a temporary file for the deployment, sleeps for 1 second, and then returns. ``` yaml title="plugin.test.conf.yaml" tmp_dir: "/tmp" launch_time_sec: 1 stop_time_sec: 1 replace_model_time_sec: 1 pe_status_time_sec: 1 deployment_status_time_sec: 1 deployment_list_time_sec: 1 plugin_start_time: 1 plugin_stop_time: 1 ``` ## Create a custom plugin {: #create-a-custom-plugin } The management agent's plugin framework is flexible enough to accommodate custom plugins. This flexibility is helpful when you have a custom prediction environment (different from, for example, the standard Docker or Kubernetes environment) in which you deploy your models. You can implement a plugin for such a prediction environment either by modifying the existing plugin or by implementing one from scratch. You can use the filesystem plugin as a reference when creating a custom Python plugin. !!! note Currently, custom Java plugins are not supported. If you decide to write a custom plugin, the following section describes the interface definition provided to write a Python plugin. ### Implement the plugin interface {: #implement-the-plugin-interface } The management agent Python package defines the [abstract base class](https://docs.python.org/3/library/abc.html) `BosunPluginBase`. Each management agent plugin *must* inherit and implement the interface defined by this base class. To start implementing a custom plugin (`SamplePlugin` below), inherit the `BosunPluginBase` base class. 
As an example, implement the plugin under `sample_plugin` directory in the file `sample_plugin.py`: ``` python class SamplePlugin(BosunPluginBase): def __init__(self, plugin_config, private_config_file=None, pe_info=None, dry_run=False): ``` #### Python plugin arguments {: #python-plugin-arguments } The constructor is invoked with the following arguments: Argument | Definition ---------|----------- `plugin_config` | A dictionary containing general information about the plugin. We will go over the details in the following section. `private_config_file` | Path to the private configuration file for the plugin as passed in by the `--private-config` flag when calling the `bosun-plugin-runner` script. This file is optional and the contents are fully at the discretion of your custom plugin. `pe_info` | An instance of `PEInfo`, which contains information about the prediction environment. This parameter is unset for certain actions. `dry_run` | The invocation for dry run (development) or the actual run. #### Python plugin methods {: #python-plugin-methods } This class implements the following methods: !!! note The return type for each of the following functions must be `ActionStatusInfo`. ``` python def plugin_start(self): ``` This method initializes the plugin; for example, it can check if the plugin can connect with the prediction environment (e.g., Docker, Kubernetes). In the case of the filesystem plugin, this method checks if the `baseDir` exists on the filesystem. Management agent invokes this method typically only once during the startup process. This method is guaranteed to be called before any deployment-specific action can be invoked. <hr> ``` python def plugin_stop(self): ``` This method implements any tear-down process, for example, close client connections to the prediction environment. The management agent invokes this method typically only once during the shutdown process. This plugin method is guaranteed to be called after all deployment-specific actions are done. <hr> ``` python def deployment_list(self): ``` This method returns the list of deployments already running in the given prediction environment. The management agent typically invokes this method during the startup to determine which deployments are already running in the prediction environment. The list of deployments is returned as a map of `deployment_id` -> Deployment Information, using the `data` field in the `ActionStatusInfo` (described below) <hr> ``` python def deployment_start(self, deployment_info): ``` This method implements a deployment launch process. Management Agent invokes this method when deployment is created or activated in DataRobot. For example, this method can launch the container in the Kubernetes or Docker service. In the case of the filesystem plugin, this method creates a directory with the name `deployment_<deployment_id>`. It then places the deployment's model and a YAML configuration file under the new directory. The plugin should ensure that the deployment in the prediction environment is uniquely identifiable by the deployment id and, ideally, by the paired deployment id and model id. For example, the built-in Docker plugin launches the container with the following name: `deployment_<deployment_id>_<model-id>` <hr> ``` python def deployment_stop(self, deployment_info): ``` This method implements a deployment stop process. Management Agent invokes this method when deployment is deactivated or deleted in DataRobot. 
For example, this method can stop the container in the Kubernetes or Docker service. The deployment id and model id from the `deployment_info` uniquely identifies the container that needs to be stopped. In the case of the filesystem plugin, this method removes the directory created for that deployment by the `deployment_start` method. <hr> ``` python def deployment_replace_model(self, deployment_info): ``` This method implements a model replacement process in the deployment. The management agent invokes this method when a model is replaced in a deployment in DataRobot. `modelArtifact` contains the path to the new model, and `newModelId` contains the id of the new model to use for replacement. In the case of the Docker or Kubernetes plugin, a potential implementation of this method could stop the container with the old model id and then start a new container with the new model. In the case of filesystem plugin, it removes the old deployment directory and creates a new one with the new model. <hr> ``` python def pe_status(self): ``` This method queries for the status of the prediction environment, for example, whether the Kubernetes or Docker service is still reachable. The management agent periodically invokes this method to ensure the prediction environment is in a good state. In order to improve the experience, the plugin can support queries for the status of the deployments running in the prediction environment in addition to the status of the prediction environment itself. In this case, the IDs of the deployments are included in the `deployments` field of the `peInfo` structure (described below), and the status of each deployment is returned using `data` field in the `ActionStatusInfo` object (described below). The deployment status is returned as a map of `deployment_id` to Deployment Information. <hr> ``` python def deployment_status(self): ``` This method queries the status of the deployment deployed in a prediction environment, for example, whether the container corresponding to the deployment is still up and running. The management agent periodically invokes this method to ensure that the deployment is in a good state. <hr> ``` python def deployment_relaunch(self, deployment_info): ``` This method implements the process of relaunching (stopping + starting) the deployment. The management agent Python package already provides a **default implementation** of this method by invoking `deployment_stop` followed by `deployment_start`; however, the plugin can implement its own relaunch mechanism if there is an optimal way to relaunch a deployment. <hr> #### Python plugin return value {: #python-plugin-return-value } The return value for all these operations is an `ActionStatusInfo` object providing the status of the action: ```python class ActionStatusInfo: def __init__(self, status, msg=None, state=None, duration=None, data=None): ``` This object contains the following fields: Field | Definition ------|----------- `status` | Indicates the status of the action. <br> **Values**: `ActionStatus.OK`, `ActionStatus.WARN`, `ActionStatus.ERROR`, and `ActionStatus.UNKNOWN` `msg` | Returns a `string` type message that the plugin can forward to the management agent, which in turn, will forward the message to the MLOps service (DataRobot). `state` | Indicates the state of the deployment after the execution of action. <br> **Values**: `ready`, `stopped`, and `errored`. `duration` | Indicates the time the action took to execute. 
`data` | Returns information that the plugin can forward to the management agent. Currently, the `deployment_list` method uses this field to list the deployments in the form of a dictionary of `deployment_id` to Deployment Information. This field can also be used by the `pe_status` method to report the status of deployments running in the prediction environment in addition to the prediction environment status. !!! note The base class automatically adds the `timestamp` to the object to keep track of different action status values. ### Use the bosun-plugin-runner {: #use-the-bosun-plugin-runner } The management agent Python package provides the `bosun-plugin-runner` CLI tool, which allows you to invoke the custom plugin class and run a specific action. Using this tool, you can run your plugin in standalone mode while developing and debugging your plugin. For example: ``` shell bosun-plugin-runner \ --plugin sample_plugin/sample_plugin \ --action pe_status \ --config sample_configs/action_config_pe_status_only.yaml \ --private-config sample_configs/sample_plugin_config.yaml \ --status-file /tmp/status.yaml \ --show-status ``` The `bosun-plugin-runner` accepts the following arguments: Argument | Definition ---------|----------- `--plugin` | Specifies the module containing the plugin class. In this case, we use `sample_plugin/sample_plugin` because the plugin class is in the `sample_plugin.py` file inside the `sample_plugin` directory. `--action` | Specifies the action to run. Here we use the `pe_status` action. Other supported actions are listed below. `--config` | Provides the configuration file to use for the specified action. We describe this in more detail in the next section. When your plugin runs as part of the management agent service, this file is generated for you; however, when testing specific actions manually via the `bosun-plugin-runner`, you must generate the configuration file yourself. `--private-config` | Provides a plugin-specific configuration file used only by the plugin. `--status-file` | Provides a path for saving the plugin status that results from the action. `--show-status` | Shows the contents of the `--status-file` on stdout. To view the list of actions supported by `bosun-plugin-runner`, use the `--list-actions` option: ``` shell bosun-plugin-runner --list-actions # plugin_start # plugin_stop # deployment_start # deployment_stop # deployment_replace_model # deployment_status # pe_status # deployment_list ``` ### Create the action config file {: #create-the-action-config-file } The `--config` flag is used to pass a YAML configuration file to the plugin. This file has the same structure as the configuration the management agent prepares when it invokes a plugin action; however, during plugin development, you may need to write this configuration file yourself.
The typical contents of such a config file are shown below: ``` yaml pluginConfig: name: "ExternalCommand-1" type: "ExternalCommand" platform: "os" commandPrefix: "python3 sample_plugin.py" mlopsUrl: "https://app.datarobot.com" peInfo: id: "0x2345" name: "Sample-PE" description: "some description" createdOn: "iso formatted date" createdBy: "some username" deployments: ["deployment-1", "deployment-2"] keyValueConfig: max_models: 5 deploymentInfo: id: "deployment-1" name: "deployment-1" description: "Deployment 1 for testing" modelId: "model-A" modelArtifact: "/tmp/model-A.txt" modelExecutionType: "dedicated" keyValueConfig: key1: "some-value-for-key-1" ``` The action configuration file contains three sections: `pluginConfig`, `peInfo`, and `deploymentInfo`. The `pluginConfig` section contains general information about the plugin, for example, the ID of the prediction environment, its type, and the platform. It may also contain `mlopsUrl`, the address of the MLOps service (DataRobot), in case the plugin needs to connect to it. This is the section that translates to the `pluginConfig` dictionary passed as a constructor argument. The `peInfo` section contains information about the prediction environment this action refers to. Typically, this information is used for the `pe_status` action. If the `deployments` key contains valid deployment IDs, the plugin is expected to return not only the status of the prediction environment but also the status of the deployments listed under `deployments`. The `deploymentInfo` section contains information about the deployment in the prediction environment this action refers to. All the deployment-related actions use this section to identify which deployment and model to work on. As this is a particularly important section of the config, let us go over some of the important fields: * `id`, `name`, and `description`: Provide information about the deployment as set in DataRobot. * `modelId`, `modelArtifact`: Indicate the ID of the model and the path where the model can be found. Note that the management agent places the correct model at this path before invoking `deployment_start` or `deployment_replace_model`. * `keyValueConfig`: Lists the additional configuration for the deployment. Note that this additional config can be set on the deployment in DataRobot. For example, it can be used to specify how much memory the container corresponding to this deployment should use. ### Run actions with bosun-plugin-runner {: #run-actions-with-bosun-plugin-runner } As covered above, during plugin development, you can use the `bosun-plugin-runner` to invoke the actions. For example, here is how a `deployment_start` action can be invoked. Use the same config described in the previous section and save it to a file named `sample_configs/action_config_deployment_1_model_A.yaml` (the filename referenced in the command below).
``` shell bosun-plugin-runner \ --plugin sample_plugin/sample_plugin \ --config sample_configs/action_config_deployment_1_model_A.yaml \ --private-config sample_configs/sample_plugin_config.yaml \ --action deployment_start \ --status-file /tmp/status.yaml \ --show-status ``` The status of this `deployment_start` action is captured in the file `/tmp/status.yaml`. ### Configure the command prefix {: #configure-the-command-prefix } Now that your plugin is ready for the management agent, you can configure the `command` prefix in the management agent configuration file as follows: ```yaml command: "<BOSUN_VENV_PATH>/bin/bosun-plugin-runner --plugin sample_plugin --private-config <CONF_PATH>/plugin.sample_plugin.conf.yaml" ``` You must install the sample plugin in the same virtual environment as the management agent Python package. Ensure the private configuration file path for the plugin is set correctly.
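For reference, a minimal custom plugin might look like the following sketch. The method names and `ActionStatusInfo` usage follow the interface described above; the import paths and module layout are assumptions, so verify them against the `datarobot_bosun` wheel installed in your virtual environment before using this as a starting point.

``` python
# sample_plugin/sample_plugin.py -- minimal sketch of a custom management agent plugin.
# NOTE: the import paths below are assumptions; check the datarobot_bosun package
# installed in your virtual environment for the exact module names.
from bosun.plugin.action_status import ActionStatus, ActionStatusInfo
from bosun.plugin.bosun_plugin_base import BosunPluginBase


class SamplePlugin(BosunPluginBase):
    def plugin_start(self):
        # Verify that the prediction environment is reachable here.
        return ActionStatusInfo(ActionStatus.OK, msg="Sample plugin started")

    def plugin_stop(self):
        # Close any client connections opened in plugin_start.
        return ActionStatusInfo(ActionStatus.OK, msg="Sample plugin stopped")

    def deployment_list(self):
        # Return the deployments already running in the prediction environment
        # as a map of deployment_id -> deployment information in the data field.
        return ActionStatusInfo(ActionStatus.OK, data={})

    def deployment_start(self, deployment_info):
        # Launch the model described by deployment_info in your environment.
        return ActionStatusInfo(ActionStatus.OK, state="ready")

    def deployment_stop(self, deployment_info):
        # Tear down whatever deployment_start created for this deployment.
        return ActionStatusInfo(ActionStatus.OK, state="stopped")

    def deployment_replace_model(self, deployment_info):
        # Swap in the new model artifact referenced by deployment_info.
        return ActionStatusInfo(ActionStatus.OK, state="ready")

    def pe_status(self):
        # Report whether the prediction environment itself is healthy.
        return ActionStatusInfo(ActionStatus.OK)

    def deployment_status(self):
        # Report whether the deployment is still up and running.
        return ActionStatusInfo(ActionStatus.OK, state="ready")
```

You can exercise each method with `bosun-plugin-runner` as shown above (for example, `--action pe_status`) before wiring the plugin into the management agent configuration.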
mgmt-agent-plugins
--- title: Install the management agent for Kubernetes description: Install and configure the MLOps management agent to use a Kubernetes Namespace as a Prediction Environment --- # Management agent Helm installation for Kubernetes This process provides an example of a management agent use case, using a Helm chart to aid in the installation and configuration of the management agent and the Kubernetes plugin. !!! important The Kubernetes plugin and Helm chart used in this process are examples; they may need to be modified to suit your needs. ## Overview {: #overview } The MLOps management agent provides a mechanism to automate model deployment to any infrastructure. Kubernetes is a popular solution for deploying and monitoring models outside DataRobot, orchestrated by the management and monitoring agents. To streamline the installation and configuration of the management agent and the Kubernetes plugin, you can use the contents of the `/tools/charts/datarobot-management-agent` directory in the agent tarball. The `/tools/charts/datarobot-management-agent` directory contains the files required for a [Helm chart](https://helm.sh/docs/topics/charts/) that you can modify to install and configure the management agent and its Kubernetes plugin for your preferred cloud environment: Amazon Web Services, Azure, Google Cloud Platform, or OpenShift. It also supports standard Docker Hub installation and configuration. This directory includes the default `values.yaml` file (located at `/tools/charts/datarobot-management-agent/values.yaml` in the agent tarball) and customizable example `values.yaml` files for each environment (located in the `/tools/charts/datarobot-management-agent/examples` directory of the agent tarball). You can copy and update the environment-specific `values.yaml` file you need and use `--values <filename>` to overlay the default values. ### Architecture overviews {: #architecture-overviews } === "General overview" ![](images/mgmt-agent-helm-arch.png) === "Detailed overview" ![](images/mgmt-agent-helm-arch2.png) === "Monitoring overview" ![](images/mgmt-agent-helm-arch3.png) The diagram above shows a detailed view of how the management agent deploys models into Kubernetes and enables model monitoring. === "Docker image build overview" ![](images/mgmt-agent-helm-arch4.png) The diagram above shows the specifics of how DataRobot models are packaged into a deployable image for Kubernetes. This architecture leverages an open-source tool maintained by Google called [Kaniko](https://github.com/GoogleContainerTools/kaniko), designed to build Docker images inside a Kubernetes cluster securely. ## Prerequisites {: #prerequisites } Before you begin, you must build and push the management agent Docker image to a registry accessible by your Kubernetes cluster. If you haven't done this, see the [MLOps management agent](mgmt-agent/index.md) overview. Once you have a management agent Docker image, set up a Kubernetes cluster with the following requirements: === "Software Requirements" * Kubernetes clusters (version v1.21+) * Nginx Ingress * Docker Registry === "Hardware Requirements" * 2+ CPUs * 40+ GB of instance storage (image cache) * 6+ GB of memory !!! important All requirements are for the latest version of the management agent. ## Configure software requirements {: #configure-software-requirements } To install and configure the required software resources, follow the processes outlined below: === "Kubernetes" Any Kubernetes cluster running version 1.21 or higher is supported. 
Follow the documentation for your chosen distribution to create a new cluster. This process also supports OpenShift version 4.8 and above. === "Nginx Ingress" !!! important If you are using OpenShift, you should skip this prerequisite. OpenShift uses the built-in [Ingress Controller](https://docs.openshift.com/container-platform/latest/networking/ingress-operator.html). Currently, the only supported ingress controller is the open-source [Nginx-Ingress](https://kubernetes.github.io/ingress-nginx/) controller (>=4.0.0). To install Nginx Ingress in your environment, see the Nginx Ingress documentation or try the example script below: ``` sh # Create a namespace for your ingress resources kubectl create namespace ingress-mlops # Add the ingress-nginx repository helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx helm repo update # Use Helm to deploy an NGINX ingress controller # # These settings should be considered sane defaults to help quickly get you started. # You should consult the official documentation to determine the best settings for # your expected prediction load. With Helm, it is trivial to change any of these # settings down the road. helm install nginx-ingress ingress-nginx/ingress-nginx \ --namespace ingress-mlops \ --set controller.ingressClassResource.name=mlops \ --set controller.autoscaling.enabled=true \ --set controller.autoscaling.minReplicas=2 \ --set controller.autoscaling.maxReplicas=3 \ --set controller.config.proxy-body-size=51m \ --set controller.config.proxy-read-timeout=605s \ --set controller.config.proxy-send-timeout=605s \ --set controller.config.proxy-connect-timeout=65s \ --set controller.metrics.enabled=true ``` === "Docker Registry" This process supports the major cloud vendors' managed registries (ECR, ACR, GCR) in addition to Docker Hub or any standard V2 Docker registry. If your registry requires pre-created repositories (e.g., ECR), you should create the following repositories: * `datarobot/mlops-management-agent` * `datarobot/mlops-tracking-agent` * `datarobot/datarobot-portable-prediction-api` * `mlops/frozen-models` !!! important You must provide the management agent push access to the `mlops/frozen-models` repo. Examples of several common registry types are provided [below](#configure-registry-credentials). If you are using GCR or OpenShift, the path for each Docker repository above must be modified to suit your environment. ## Configure registry credentials {: #configure-registry-credentials } To configure the Docker Registry for your cloud solution, follow the relevant process outlined below.
This section provides examples for the following registries: * Amazon Elastic Container Registry (ECR) * Microsoft Azure Container Registry (ACR) * Google Cloud Platform Container Registry (GCR) * OpenShift Integrated Registry * Generic Registry (Docker Hub) === "ECR" First, create all required repositories listed above using the ECR UI or the following command: ```sh repos="datarobot/mlops-management-agent datarobot/mlops-tracking-agent datarobot/datarobot-portable-prediction-api mlops/frozen-models" for repo in $repos; do aws ecr create-repository --repository-name $repo done ``` To provide push credentials to the agent, use an [IAM role for the service account](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html): ```sh eksctl create iamserviceaccount --approve \ --cluster <your-cluster-name> \ --namespace datarobot-mlops \ --name datarobot-management-agent-image-builder \ --attach-policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPowerUser ``` Next, create a file called `config.json` with the following contents: ```json { "credsStore": "ecr-login" } ``` Use that JSON file to create a `ConfigMap`: ```sh kubectl create configmap docker-config \ --namespace datarobot-mlops \ --from-file=<path to config.json> ``` Update the `imageBuilder` section of the `values.yaml` file (located at `/tools/charts/datarobot-management-agent/values.yaml` in the agent tarball) to use the `configMap` you created and to configure `serviceAccount` with the IAM role you created: ```yaml imageBuilder: ... configMap: "docker-config" serviceAccount: create: false name: "datarobot-management-agent-image-builder" ``` === "ACR" First, in your ACR registry, under **Settings** > **Access keys**, enable the [**Admin user** setting](https://docs.microsoft.com/en-us/azure/container-registry/container-registry-authentication?tabs=azure-cli#admin-account). Then, use one of the generated passwords to create a new secret: ```sh kubectl create secret docker-registry registry-creds \ --namespace datarobot-mlops \ --docker-server=<container-registry-name>.azurecr.io \ --docker-username=<admin-username> \ --docker-password=<admin-password> ``` !!! note This process assumes you already created the `datarobot-mlops` namespace. Next, update the `imageBuilder` section of the `values.yaml` file (located at `/tools/charts/datarobot-management-agent/values.yaml` in the agent tarball) to use the `secretName` for the secret you created: ```yaml imageBuilder: ... secretName: "registry-creds" ``` === "GCR" You should use Workload Identity in your GKE cluster to provide GCR push credentials to the Docker image building service. This process consists of the following steps: * [Enable Workload Identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity#enable_on_cluster) * [Migrate existing node pools](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity#migrate_applications_to) * [Authenticate with Workload Identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity#authenticating_to) In this section, you can find the minimal configuration required to complete this guide.
First, enable Workload Identity on your cluster and **all** of your node groups: ```sh # Enable workload identity on your existing cluster gcloud container clusters update <CLUSTER-NAME> \ --workload-pool=<PROJECT-NAME>.svc.id.goog # Enable workload identity on an existing node pool gcloud container node-pools update <NODE-POOL-NAME> \ --cluster=<CLUSTER-NAME> \ --workload-metadata=GKE_METADATA ``` When the cluster is ready, create a new IAM Service Account and attach a role that provides all necessary permissions to the image builder service. The image builder service must be able to push new images into GCR, and the IAM Service Account must be able to bind to the GKE ServiceAccount created upon installation: ```sh # Create Service Account gcloud iam service-accounts create gcr-push-user # Give user push access to GCR gcloud projects add-iam-policy-binding <PROJECT-NAME> \ --member=serviceAccount:[gcr-push-user]@<PROJECT-NAME>.iam.gserviceaccount.com \ --role=roles/cloudbuild.builds.builder # Link GKE ServiceAccount with the IAM Service Account gcloud iam service-accounts add-iam-policy-binding \ --role roles/iam.workloadIdentityUser \ --member "serviceAccount:<PROJECT-NAME>.svc.id.goog[datarobot-mlops/datarobot-management-agent-image-builder]" \ gcr-push-user@<PROJECT-NAME>.iam.gserviceaccount.com ``` Finally, update the `imageBuilder` section of the `values.yaml` file (located at `/tools/charts/datarobot-management-agent/values.yaml` in the agent tarball) to create a `serviceAccount` with the `annotations` and `name` created in previous steps: ```yaml imageBuilder: ... serviceAccount: create: true annotations: { iam.gke.io/gcp-service-account: gcr-push-user@<PROJECT-NAME>.iam.gserviceaccount.com } name: datarobot-management-agent-image-builder ``` === "OpenShift Integrated Registry" OpenShift provides a [built-in registry solution](https://docs.openshift.com/container-platform/4.8/registry/index.html). This is the recommended container registry if you are using OpenShift. Later in this guide, you are required to push images built locally *into* the registry. To make this easier, use the following command to expose the registry externally: ```sh oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge ``` See the [OpenShift documentation](https://docs.openshift.com/container-platform/4.8/registry/securing-exposing-registry.html) to learn to log in to this registry to push images to it. In addition, you should create a dedicated Image Builder service account with [permission](https://docs.openshift.com/container-platform/4.8/authentication/managing-security-context-constraints.html) to run as `root` and to [push](https://docs.openshift.com/container-platform/4.8/registry/accessing-the-registry.html) to the integrated Docker registry: ```sh oc new-project datarobot-mlops oc create sa datarobot-management-agent-image-builder # Allows the SA to push to the registry oc policy add-role-to-user registry-editor -z datarobot-management-agent-image-builder # Our Docker builds require the ability to run as `root` to build our images oc adm policy add-scc-to-user anyuid -z datarobot-management-agent-image-builder ``` When OpenShift created a Docker registry authentication secret, it created it in the incorrect format (`kubernetes.io/dockercfg` instead of `kubernetes.io/dockerconfigjson`). To fix this, create a secret using the appropriate token. 
To do this, find the existing `Image pull secrets` assigned to the `datarobot-management-agent-image-builder` ServiceAccount: ```sh $ oc describe sa/datarobot-management-agent-image-builder Name: datarobot-management-agent-image-builder Namespace: datarobot-mlops Labels: <none> Annotations: <none> Image pull secrets: datarobot-management-agent-image-builder-dockercfg-p6p5b Mountable secrets: datarobot-management-agent-image-builder-dockercfg-p6p5b datarobot-management-agent-image-builder-token-pj9ks Tokens: datarobot-management-agent-image-builder-token-p6dnc datarobot-management-agent-image-builder-token-pj9ks Events: <none> ``` Next, track back from the pull secret back to the raw token: ```sh $ oc describe secret $(oc get secret/datarobot-management-agent-image-builder-dockercfg-p6p5b -o jsonpath='{.metadata.annotations.openshift\.io/token-secret\.name}') Name: datarobot-management-agent-image-builder-token-p6dnc Namespace: datarobot-mlops Labels: <none> Annotations: kubernetes.io/created-by: openshift.io/create-dockercfg-secrets kubernetes.io/service-account.name: datarobot-management-agent-image-builder kubernetes.io/service-account.uid: 34101931-d402-49bf-83df-7a60b31cdf44 Type: kubernetes.io/service-account-token Data ==== ca.crt: 11253 bytes namespace: 10 bytes service-ca.crt: 12466 bytes token: eyJhbGciOiJSUzI1NiIsImtpZCI6InJqcEx5LTFjOElpM2FKRzdOdDNMY... ``` ```sh oc create secret docker-registry registry-creds \ --docker-server=image-registry.openshift-image-registry.svc:5000 \ --docker-username=imagebuilder \ --docker-password=eyJhbGciOiJSUzI1NiIsImtpZCI6InJqcEx5LTFjOElpM2FKRzdOdDNMY... ``` Update the `imageBuilder` section of the `values.yaml` file (located at `/tools/charts/datarobot-management-agent/values.yaml` in the agent tarball) to reference the `serviceAccount` created above: ```yaml imageBuilder: ... secretName: registry-creds rbac: create: false serviceAccount: create: false name: datarobot-management-agent-image-builder ``` It's common for the internal registry to be signed by an internal CA. To avoid this, skip TLS verification in the `values.yaml` configuration: ```yaml imageBuilder: ... skipSslVerifyRegistries: - image-registry.openshift-image-registry.svc:5000 ``` If you have the CA certificate, a more secure option would be to mount it as a `secret` or a `configMap` and then configure the `imageBuilder` to use it. Below we will show a third option of how you can obtain the CA directly from the underlying node: ```yaml imageBuilder: ... extraVolumes: - name: cacert hostPath: path: /etc/docker/certs.d extraVolumeMounts: - name: cacert mountPath: /certs/ readOnly: true extraArguments: - --registry-certificate=image-registry.openshift-image-registry.svc:5000=/certs/image-registry.openshift-image-registry.svc:5000/ca.crt ``` !!! note The example above requires elevated SCC privileges. ```sh oc adm policy add-scc-to-user hostmount-anyuid -z datarobot-management-agent-image-builder ``` === "Docker Hub" If you have a generic registry that uses a simple Docker username/password to log in, you can use the following procedure. 
Create a secret containing your Docker registry credentials: ```sh kubectl create secret docker-registry registry-creds \ --namespace datarobot-mlops \ --docker-server=<container-registry-name>.your-company.com \ --docker-username=<push-username> \ --docker-password=<push-password> ``` Update the `imageBuilder` section of the `values.yaml` file (located at `/tools/charts/datarobot-management-agent/values.yaml` in the agent tarball) to use the new secret you created: ```yaml imageBuilder: ... secretName: "registry-creds" ``` If your registry is running on HTTP, you will need to add the following to the above example: ```yaml imageBuilder: ... secretName: "registry-creds" insecureRegistries: - <container-registry-name>.your-company.com ``` ## Install the management agent with Helm {: #install-the-management-agent-with-helm } After the prerequisites are configured, install the MLOps management agent. These steps build and push large Docker images to your remote registry; DataRobot recommends working on other steps in parallel while those downloads and uploads complete. ### Fetch the Portable Prediction Server image {: #fetch-the-portable-prediction-server-image } The first step is to download the latest version of the [Portable Prediction Server Docker Image](api-key-mgmt#portable-prediction-server-docker-image) from DataRobot's Developer Tools. When the download completes, run the following commands: 1. Load the PPS Docker image: ``` sh docker load < datarobot-portable-prediction-api-<VERSION>.tar.gz ``` 2. Tag the PPS Docker image with an image name: !!! note Don't use `latest` as the `<VERSION>` tag. ``` sh docker tag datarobot/datarobot-portable-prediction-api:<VERSION> registry.your-company.com/datarobot/datarobot-portable-prediction-api:<VERSION> ``` 3. Push the PPS Docker image to your remote registry: ``` sh docker push registry.your-company.com/datarobot/datarobot-portable-prediction-api:<VERSION> ``` ### Build the required Docker images {: #build-the-required-docker-images } First, build the management agent image with a single command: ``` sh make -C tools/bosun_docker REGISTRY=registry.your-company.com push ``` Next, build the monitoring agent with a similar command: !!! note If you don't plan on enabling model monitoring, you can skip this step. ``` sh make -C tools/agent_docker REGISTRY=registry.your-company.com push ``` ### Create a new Prediction Environment {: #create-a-new-prediction-environment } To create a new prediction environment, see the [Prediction environments](pred-env) documentation. Record the **Prediction Environment ID** for later use. !!! note Only the `DataRobot` and `Custom Model` model formats are currently supported. ### Install the Helm chart {: #install-the-helm-chart } DataRobot recommends installing the agent into its own namespace. To do so, pre-create it and install the MLOps API key in it.
``` sh # Create a namespace to contain the agent and all the models it deploys kubectl create namespace datarobot-mlops # You can use an existing key or we recommend creating a key dedicated to the agent # by browsing here: # https://app.datarobot.com/account/developer-tools kubectl -n datarobot-mlops create secret generic mlops-api-key --from-literal=secret=<YOUR-API-TOKEN> ``` You can modify one of several common examples for the various cloud environments (located in the `/tools/charts/datarobot-management-agent/examples` directory of the agent tarball) to suit your account; then you can install the agent with the appropriate version of the following command: ``` sh helm upgrade --install bosun . \ --namespace datarobot-mlops \ --values ./examples/AKE_values.yaml ``` If none of the provided examples suit your needs, the *minimum* command to install the agent is as follows: ``` sh helm upgrade --install bosun . \ --namespace datarobot-mlops \ --set predictionServer.ingressClassName=mlops \ --set predictionServer.outfacingUrlRoot=http://your-company.com/deployments/ \ --set datarobot.apiSecretName=mlops-api-key \ --set datarobot.predictionEnvId=<PRED ENV ID> \ --set managementAgent.repository=registry.your-company.com/datarobot/mlops-management-agent \ --set trackingAgent.image=registry.your-company.com/datarobot/mlops-tracking-agent:latest \ --set imageBuilder.ppsImage=registry.your-company.com/datarobot/datarobot-portable-prediction-api:<VERSION> \ --set imageBuilder.generatedImageRepository=registry.your-company.com/mlops/frozen-models ``` There are several additional configurations to review in the `values.yaml` file (located at `/tools/charts/datarobot-management-agent/values.yaml` in the agent tarball) or using the following command: ``` sh helm show values . ```
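If you would rather keep the minimum settings in a file than pass them as `--set` flags, the same configuration can be captured in a small overlay file. The nesting below is a sketch that simply mirrors the `--set` keys shown above; confirm the exact key names against the chart's default `values.yaml` before using it.

``` yaml title="my_values.yaml"
# Minimal overlay assumed to mirror the --set flags above; verify key names
# against the chart's default values.yaml.
predictionServer:
  ingressClassName: mlops
  outfacingUrlRoot: http://your-company.com/deployments/
datarobot:
  apiSecretName: mlops-api-key
  predictionEnvId: "<PRED ENV ID>"
managementAgent:
  repository: registry.your-company.com/datarobot/mlops-management-agent
trackingAgent:
  image: registry.your-company.com/datarobot/mlops-tracking-agent:latest
imageBuilder:
  ppsImage: registry.your-company.com/datarobot/datarobot-portable-prediction-api:<VERSION>
  generatedImageRepository: registry.your-company.com/mlops/frozen-models
```

You would then install with `helm upgrade --install bosun . --namespace datarobot-mlops --values ./my_values.yaml`.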
mgmt-agent-kubernetes
--- title: Management agent description: Automate model deployment to any type of infrastructure and monitor deployment events. --- # Management agent The MLOps management agent provides a standard mechanism to automate model deployment to any type of infrastructure. It pairs automated deployment with automated monitoring to ease the burden on remote models in production, especially with critical MLOps features such as challenger models and retraining. The agent, accessed from the DataRobot application, ships with an assortment of plugins that support custom configuration. ![](images/mgmt-agent-1.png) ## Management agent setup {: #management-agent-setup } To configure the management agent, you must prepare its various components, detailed below: * Register a prediction environment. * Download the agent tarball. * Configure an environment plugin. * Configure the management agent. * Create a deployment. ### Register a prediction environment {: #register-a-prediction-environment } You can use the management agent to automate the deployment, replacement, and monitoring of models in an external prediction environment. Management agent setup begins with configuring a prediction environment to use with deployments. Before proceeding, register the [prediction environment](pred-env) with DataRobot. Once registered, navigate to **Deployments > Prediction Environments**. Select the prediction environment to use from the list and toggle on **Use Management Agent**. ![](images/mgmt-agent-6.png) Once enabled, you must indicate the email address of the management agent service account holder. DataRobot recommends using an administrative service account as the account holder (an account that has access to each deployment that uses the configured prediction environment). ### Download the agent {: #download-the-agent } Access the management agent by downloading the MLOps agent tarball and installing it on the remote environment from which you are hosting models to make predictions. You can download it directly from the DataRobot application by clicking on your user icon and navigating to **Developer Tools**. Under the **External Monitoring Agent** header, click the download icon. The tarball appears in your browser's downloads bar when complete. ![](images/api-key-6.png) ### Configure an environment plugin {: #configure-an-environment-plugin } The management agent translates deployment events (model replacement, deployment launch, etc.) into processes for an environment plugin to run in response to that event. The tarball includes configurable, example environment plugins. These plugins can support various types of infrastructure used by remote models as is; however, you may need to modify these plugins to fully support your particular environment. Initially, you can choose, configure, and potentially modify the plugin that best supports your infrastructure. Advanced users can create new plugins, either completely customized or by using the provided plugins as a starting point. !!! note The provided management agent plugins are examples. They are not intended to work for all use-cases; however, you can modify them to suit your needs. The MLOps management agent contains the following example plugins: * Docker plugin. * Filesystem plugin. * Kubernetes plugin. * Test plugin. !!! tip The tarball includes README files to help with the installation and configuration of the plugins. For more information, see [Configure management agent environment plugins](mgmt-agent-plugins). 
### Configure the agent {: #configure-the-agent } After downloading the tarball and configuring an agent plugin, edit the agent's config file: * Provide your DataRobot service URL and API key so the management agent can authenticate to DataRobot. * Provide the prediction environment id so the management agent can access it and any associated deployments. * Indicate which management agent plugin to use. For more information, see [Management agent installation and configuration](mgmt-agent-install). ### Create a deployment {: #create-a-deployment } After configuring the prediction environment and the management agent for use, you can create an external deployment with events monitored by the agent. The deployment must use the prediction environment configured in the steps above in order to support the agent's monitoring functionality. To do so, DataRobot recommends [registering an external model package](reg-create#register-external-model-packages) and [deploying it](deploy-external-model#deploy-an-external-model-package) from the **Model Registry**. ![](images/mgmt-agent-3.png) Once deployed, you have a deployment fully configured with the management agent, capable of monitoring deployment events and automating actions in response to those events.
index
--- title: Force delete deployments description: Delete a deployment without waiting for the resolution of the deployment deletion request sent to the management agent. --- # Force delete management agent deployments If the management agent is not running or has errored, you can delete a deployment without waiting for the resolution of the deployment deletion request sent to the management agent. !!! warning This will remove the deployment from the deployments area for all users. This action cannot be undone. To force the deletion of a management agent deployment without waiting for the resolution of the deletion request sent to the agent, take the following steps: 1. On the [**Deployments**](deploy-inventory) page or any tab within a deployment, next to the name of the deployment you want to delete, click the [Actions menu](actions-menu) and click **Delete**. ![](images/mgmt-agent-delete.png) 2. In the **Delete deployment** dialog box, click **Ignore Management Agent**, and then click **Delete deployment**. ![](images/mgmt-agent-delete-confirm.png)
mgmt-agent-delete
--- title: Installation and configuration description: Install and configure the MLOps management agent. --- # Management agent installation and configuration The MLOps agent `.tar` file contains all artifacts required to run the management agent. You can run the management agent in either of the following configurations: * Inside a container. * On a host machine, as a standalone process. === "Run in a container" 1. To build and install the management agent container, run the following commands to unpack the tarball in a suitable location and build the container image: ``` bash tar -zxf datarobot_mlops_package-*.tar.gz cd datarobot_mlops_package-*/ cd tools/bosun_docker/ make build ``` This tags the management agent image with the appropriate `version` tag and the `latest` tag. 2. To build the management agent image and run the container such that the management agent is configurable from the command line, run the following: ``` bash tar -zxf datarobot_mlops_package-*.tar.gz cd datarobot_mlops_package-*/ cd tools/bosun_docker/ make run ``` 3. Enter the `mlopsUrl`, the `apiToken`, and the ID of the prediction environment to monitor: ``` bash Generate MLOps Management-Agent configuration file. Enter DataRobot App URL (e.g. https://app.datarobot.com): <https://<MLOPS_HOST>> Enter DataRobot API Token: <MLOPS_API_TOKEN> Enter DataRobot Prediction Environment ID: <MLOPS_PREDICTION_ENVIRONMENT_ID> ``` By default, the management agent uses the filesystem plugin. If you want to use a different plugin, you can configure the management agent configuration file to use that plugin and then map it to the container. For example, you can use the following commands to run the management agent with the Kubernetes plugin: ``` bash cd datarobot_mlops_package-*/ docker run -it \ -v conf/mlops.bosun.conf.yaml:/opt/datarobot/mlops/bosun/conf/mlops.bosun.conf.yaml \ -v conf/plugin.k8s.conf.yaml:/opt/datarobot/mlops/bosun/conf/plugin.k8s.conf.yaml \ datarobot/mlops-management-agent ``` === "Run on a host machine" 1. To install and run the management agent on the host machine, Python 3.7+ and Java 11 must be installed on the system. Then, you can create a Python virtual environment to install the management agent plugins: ``` bash mkdir /opt/management-agent-demo cd /opt/management-agent-demo python3 -m venv .venv source .venv/bin/activate tar -zxf datarobot_mlops_package-*.tar.gz cd datarobot_mlops_package-*/ pip install lib/datarobot_mlops-*-py2.py3-none-any.whl pip install lib/datarobot_mlops_connected_client-*-py3-none-any.whl pip install lib/datarobot_bosun-*-py3-none-any.whl ``` 2. Configure the management agent by modifying the configuration file: ``` bash <your-chosen-editor> ./conf/mlops.bosun.conf.yaml ``` 3. Start the management agent: ``` bash ./bin/start-bosun.sh ``` 4. To configure the management agent on the host machine, edit the management agent configuration file, `conf/mlops.bosun.conf.yaml`: * Update the values for `mlopsUrl` and `apiToken`. * Verify that `<BOSUN_VENV_PATH>` points to the virtual environment created during installation (e.g., `/opt/management-agent-demo/bin`). * Specify the Prediction Environment ID at `<MLOPS_PREDICTION_ENVIRONMENT_ID>`. * Uncomment the appropriate `command:` line in the `predictionEnvironments` section to use the correct plugin. Ensure you comment out the `command:` line for any unused plugins. * Optionally, you may need to configure the configuration file for the plugin you're using. 
For more information, see [Configure management agent plugins](mgmt-agent-plugins). ``` yaml title="mlops.bosun.conf.yaml" # This file contains configuration for the Management Agent # Items marked "Required" must be set. Other settings can use the defaults set below. # Required. URL to the DataRobot MLOps service. mlopsUrl: "https://<MLOPS_HOST>" # Required. DataRobot API token. apiToken: "<MLOPS_API_TOKEN>" # When true, verify SSL certificates when connecting to DR app. When false, SSL verification will not be # performed. It is highly recommended to keep this config variable as true. verifySSL: true # Whether to run management agent as the workload coordinator. The default value is true. isCoordinator: true # Whether to run management agent as worker. The default value is true. isWorker: true # When true, start a REST server. This will provide several API endpoints (worker health check enables) serverMode: false # The port to use for the above REST server serverPort: "12345" # The url where to reach REST server, will be use by external configuration services serverAddress: "http://localhost" # Specify the configuration service. This is 'internal' by default and the # workload coordinator and worker are expected to run in the same JVM. # When run in high availability mode, the configuration needs to be provided by # a service such as Consul. configurationService: tag: "tag" type: "internal" connectionDetail: "" # Path to write Bosun stats statsPath: "/tmp/management-agent-stats.json" # HTTP client timeout in milliseconds (30sec timeout). httpTimeout: 30000 # Number of times the agent will retry sending a request to the MLOps service after it receives a failure. httpRetry: 3 # Number of active workers to process management agent commands numActionWorkers: 2 # Timeout in seconds processing active commands, eg. launch, stop, replaceModel actionWorkerTimeoutSec: 300 # Timeout in seconds for requesting status of PE and the deployment statusWorkerTimeoutSec: 300 # How often (in seconds) status worker should update DR MLOps about the status of PE and deployments statusUpdateIntervalSec: 120 # How often (in seconds) to poll MLOps service for new deployment / PE Actions mlopsPollIntervalSec: 60 # Optional: Plugins directory in which all required plugin jars can be found. # If you are only using external commands to run plugin actions then there is # no need to use this option. # pluginsDir: "../plugins/" # Model Connector configuration modelConnector: type: "native" # Scratch place to work on, default "/tmp" scratchDir: "/tmp" # Config file for private / secret configuration, management agent will not read this file, just # forward the filename in configuration, optional secretsConfigFile: "/tmp/secrets.conf" # Python command that implements model connector. # mcrunner is installed as part the bosun python package. You should either # set your PATH to include the location of mcrunner, or provide the full path. command: "<BOSUN_VENV_PATH>/bin/mcrunner" # prediction environments this service will monitor predictionEnvironments: # This Prediction Environment ID matches the one in DR MLOps service - id: "<MLOPS_PREDICTION_ENVIRONMENT_ID>" type: "ExternalCommand" platform: "os" # Enable monitoring for this plugin, so that the MLOps information # (viz, url and token) can be forwarded to plugin, default: False # enableMonitoring: true # Provide the command to run the plugin: # You can either fix PATH to point to where bosun-plugin-runner is located, or # you can provide the full path below. 
# The filesystem plugin used in the example below is one of the built-in plugins provided # by the bosun-plugin-runner command: "<BOSUN_VENV_PATH>/bin/bosun-plugin-runner --plugin filesystem --private-config <CONF_PATH>/plugin.filesystem.conf.yaml" # The following example will run the docker plugin # (one of the built-in plugins provided by bosun-plugin-runner) # command: "<BOSUN_VENV_PATH>/bin/bosun-plugin-runner --plugin docker --private-config <CONF_PATH>/plugin.docker.conf.yaml" # The following example will run the kubernetes plugin # (one of the built-in plugins provided by bosun-plugin-runner) # WARNING: this plugin is currently considered ALPHA maturity; please consult your account representative if you # are interested in trying it. # command: "<BOSUN_VENV_PATH>/bin/bosun-plugin-runner --plugin k8s --private-config <CONF_PATH>/plugin.k8s.conf.yaml" # If your plugin was installed as a python module (using pip), you can provide the name # of the module that contains the plugin class. For example: --plugin sample_plugin.my_plugin # command: "<BOSUN_VENV_PATH>/bin/bosun-plugin-runner --plugin sample_plugin.my_plugin --private-config <CONF_PATH>/my_config.yaml" # If your plugin is in a directory, you can provide the name of the plugin as the path to the # file that contains your plugin. For example: --plugin sample_plugin/my_plugin.py # command: "<BOSUN_VENV_PATH>/bin/bosun-plugin-runner --plugin sample_plugin/my_plugin.py --private-config <CONF_PATH>/my_config.yaml" # Note: you can control the plugin logging via the --log-config option of bosun-plugin-runner ``` === "Run natively in Docker" 1. To run the management agent natively in Docker, first build the `datarobot/mlops-management-agent` image from the MLOps agent tarball: ``` make build -C tools/bosun_docker ``` 2. Configure the management agent in Docker, mounted to the default directory or a custom location: * To run the management agent with the filesystem plugin and with the configuration mounted to the default directory: ``` docker run \ -v /path/to/mlops.bosun.conf.yaml:/opt/datarobot/mlops/bosun/conf/mlops.bosun.conf.yaml \ -v /path/to/plugin.filesystem.conf.yaml:/opt/datarobot/mlops/bosun/conf/plugin.filesystem.conf.yaml \ datarobot/mlops-management-agent ``` * To run the management agent with the filesystem plugin and with agent configuration mounted to a custom location: ``` docker run \ -v /path/to/mlops.bosun.conf.yaml:/var/tmp/mlops.bosun.conf.yaml \ -v /path/to/plugin.filesystem.conf.yaml:/opt/datarobot/mlops/bosun/conf/plugin.filesystem.conf.yaml \ -e MLOPS_AGENT_CONFIG_YAML=/var/tmp/mlops.bosun.conf.yaml \ datarobot/mlops-management-agent ``` * To use the Docker-based plugin while _also_ running the management agent in a Docker container, you will need to include a few extra options, and you will need to mount in the entire config directory since there are multiple files to modify: ``` $ docker run \ -v ${PWD}/conf/:/opt/datarobot/mlops/bosun/conf/ \ -v /tmp:/tmp \ -v /var/run/docker.sock:/var/run/docker.sock \ --user root \ --network bosun \ datarobot/mlops-management-agent:latest ```
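Once the container is up, a quick sanity check is to confirm that the agent started and connected to DataRobot by inspecting the container logs (the filter and placeholder container ID below are illustrative):

``` sh
# Find the running management agent container and follow its logs.
docker ps --filter "ancestor=datarobot/mlops-management-agent"
docker logs -f <container-id>
```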
mgmt-agent-install
--- title: Installation and configuration description: How to install and configure the monitoring agent to forward buffered messages from the MLOps library to DataRobot MLOps. --- # Monitoring agent installation and configuration {: #monitoring-agent-installation-and-configuration } When the monitoring agent is running, it looks for buffered messages in the configured directory or a message queuing system and forwards them. To forward buffered messages from the MLOps library to DataRobot MLOps, install and configure the monitoring agent as indicated below. === "Run on a host machine" 1. Unpack the MLOps .tar file: ``` shell tar -xvf datarobot_mlops_package-*.tar.gz ``` 2. Update the configuration file: ``` shell cd datarobot_mlops_package-*; <your-favorite-editor> ./conf/mlops.agent.conf.yaml ``` 3. Configure the monitoring agent: In the agent configuration file, `conf\mlops.agent.conf.yaml`, you must update the values for `mlopsUrl` and `apiToken`. By default, the agent will use the `filesystem` channel. If you use the `filesystem` channel, make sure you create the spooler directory (by default, this is `/tmp/ta`). !!! important For the `filesystem` spooler channel, the `directory` path you provide _must_ be an absolute path (containing the complete directory list) for the agent to access the `/tmp/ta` directory (or a custom directory you create). If you want to use a different channel, follow the comments in the agent configuration file to update the path. ``` yaml title="mlops.agent.conf.yaml" # This file contains configuration for the MLOps agent # URL to the DataRobot MLOps service mlopsUrl: "https://<MLOPS_HOST>" # DataRobot API token apiToken: "<MLOPS_API_TOKEN>" # Execute the agent once, then exit runOnce: false # When dryrun mode is true, do not report the metrics to MLOps service dryRun: false # When verifySSL is true, SSL certification validation will be performed when # connecting to MLOps DataRobot. When verifySSL is false, these checks are skipped. # Note: It is highly recommended to keep this config variable as true. verifySSL: true # Path to write agent stats statsPath: "/tmp/tracking-agent-stats.json" # Prediction Environment served by this agent. # Events and errors not specific to a single deployment are reported against this Prediction Environment. # predictionEnvironmentId: "<PE_ID_FROM_DATAROBOT_UI>" # Number of times the agent will retry sending a request to the MLOps service on failure. httpRetry: 3 # Http client timeout in milliseconds (30sec timeout) httpTimeout: 30000 # Number of concurrent http request, default=1 -> synchronous mode; > 1 -> asynchronous httpConcurrentRequest: 10 # Number of HTTP Connections to establish with the MLOps service, Default: 1 numMLOpsConnections: 1 # Comment out and configure the lines below for the spooler type(s) you are using. # Note: The spooler configuration must match that used by the MLOps library. # Note: The filesystem spooler directory must be an absolute path to the "/tmp/ta" directory. # Note: Spoolers must be set up before using them. # - For the filesystem spooler, create the directory that will be used. # - For the SQS spooler, create the queue. # - For the PubSub spooler, create the project and topic. # - For the Kafka spooler, create the topic. 
channelConfigs: - type: "FS_SPOOL" details: {name: "filesystem", directory: "<path_to_spooler_directory>/tmp/ta"} # - type: "SQS_SPOOL" # details: {name: "sqs", queueUrl: "your SQS queue URL", queueName: "<your AWS SQS queue name>"} # - type: "RABBITMQ_SPOOL" # details: {name: "rabbit", queueName: <your rabbitmq queue name>, queueUrl: "amqp://<ip address>", # caCertificatePath: "<path_to_ca_certificate>", # certificatePath: "<path_to_client_certificate>", # keyfilePath: "<path_to_key_file>"} # - type: "PUBSUB_SPOOL" # details: {name: "pubsub", projectId: <your project ID>, topicName: <your topic name>, subscriptionName: <your sub name>} # - type: "KAFKA_SPOOL" # details: {name: "kafka", topicName: "<your topic name>", bootstrapServers: "<ip address 1>,<ip address 2>,..."} # The number of threads that the agent will launch to process data records. agentThreadPoolSize: 4 # The maximum number of records each thread will process per fetchNewDataFreq interval. agentMaxRecordsTask: 100 # Maximum number of records to aggregate before sending to DataRobot MLOps agentMaxAggregatedRecords: 500 # A timeout for pending records before aggregating and submitting agentPendingRecordsTimeoutMs: 5000 ``` === "Run natively in Docker" 1. To run the monitoring agent natively in Docker, first build the `datarobot/mlops-tracking-agent` image from the MLOps agent tarball: ``` shell make build -C tools/agent_docker ``` 2. Configure the monitoring agent in Docker, mounted to the default directory or a custom location: * To run the monitoring agent with the configuration mounted to the default directory: ``` shell docker run \ -v /path/to/mlops.agent.conf.yaml:/opt/datarobot/mlops/agent/conf/mlops.agent.conf.yaml \ datarobot/mlops-tracking-agent ``` * To run the monitoring agent with the configuration mounted to a custom location: ``` shell docker run \ -v /path/to/mlops.agent.conf.yaml:/var/tmp/mlops.agent.conf.yaml \ -e MLOPS_AGENT_CONFIG_YAML=/var/tmp/mlops.agent.conf.yaml \ datarobot/mlops-tracking-agent ``` ## Use the monitoring agent {: #use-the-monitoring-agent } Once the monitoring agent is configured, you can run the agent, check the agent status, and shut down the agent. ### Run the monitoring agent {: #run-the-monitoring-agent } Start the agent using the config file: ``` shell cd datarobot_mlops_package-*; ./bin/start-agent.sh ``` Alternatively, start the agent using environment variables: ``` shell export AGENT_CONFIG_YAML=<path/to/conf/mlops.agent.conf.yaml> export AGENT_LOG_PROPERTIES=<path/to/conf/mlops.log4j2.properties> export AGENT_JVM_OPT=-Xmx4G export AGENT_JAR_PATH=<path/to/bin/mlops-agent-ver.jar> ./bin/start-agent.sh ``` For a complete reference of the available environment variables, see [MLOps agent environment variables](env-var). ### Check the agent's status {: #check-the-agents-status } To check the agent's status: ``` shell title="Check status" ./bin/status-agent.sh ``` ``` shell title="Check status with real-time resource usage" ./bin/status-agent.sh --verbose ``` ### Shut down the agent {: #shut-down-the-agent } To shut down the agent: ``` shell ./bin/stop-agent.sh ```
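### Verify metric reporting through the spooler {: #verify-metric-reporting-through-the-spooler }

With the agent running against the `filesystem` channel configured above, you can confirm the end-to-end path by reporting a test record from the MLOps library into the same spooler directory the agent polls. The following sketch is illustrative only: it assumes the `datarobot-mlops` Python package is installed, that the import path and reporting call match your installed version, and it uses placeholder deployment and model IDs.

``` python
# Illustrative sketch; verify the import path and method signatures against your
# installed version of the datarobot-mlops package.
from datarobot.mlops.mlops import MLOps

DEPLOYMENT_ID = "<MLOPS_DEPLOYMENT_ID>"  # placeholder: copy from the deployment's Integrations tab
MODEL_ID = "<MLOPS_MODEL_ID>"            # placeholder

# Point the library at the same spooler directory the agent polls (/tmp/ta by default);
# the directory must already exist.
mlops = MLOps() \
    .set_deployment_id(DEPLOYMENT_ID) \
    .set_model_id(MODEL_ID) \
    .set_filesystem_spooler("/tmp/ta") \
    .init()

# Report deployment statistics: the number of predictions made and the execution time in ms.
mlops.report_deployment_stats(10, 42)

# Flush buffered records and release the spooler channel.
mlops.shutdown()
```

If the agent is running, the record should be forwarded to DataRobot MLOps and reflected on the deployment shortly afterward.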
agent
--- title: Monitoring agent use cases description: Investigate MLOPs reporting and monitoring use cases, including how to report metrics when the prediction environment isn't connected to DataRobot and how to monitor Spark environments. --- # Monitoring agent use cases {: #monitoring-agent-use-cases } Reference the use cases below for examples of how to apply the monitoring agent: * [Enable large-scale monitoring](#enable-large-scale-monitoring) * [Perform advanced agent memory tuning for large workloads](#perform-advanced-agent-memory-tuning-for-large-workloads) * [Report metrics](#report-metrics) * [Monitor a Spark environment](#monitor-a-spark-environment) * [Monitor using the MLOps CLI](#monitor-using-the-mlops-cli) ## Enable large-scale monitoring {: #enable-large-scale-monitoring } To support large-scale monitoring, the MLOps library provides a way to calculate statistics from raw data on the client side. Then, instead of reporting raw features and predictions to the DataRobot MLOps service, the client can report anonymized statistics without the feature and prediction data. Reporting prediction data statistics calculated on the client side is the optimal (and highly performant) method compared to reporting raw data, especially at scale (billions of rows of features and predictions). In addition, because client-side aggregation only sends aggregates of feature values, it is suitable for environments where you don't want to disclose the actual feature values. To enable the large-scale monitoring functionality, you must set one of the feature type settings. These settings provide the dataset's feature types and can be configured programmatically in your code (using setters) or by defining environment variables. !!! note If you configure these settings programmatically in your code _and_ by defining environment variables, the environment variables take precedence. === "Environment variables" The following environment variables are specific to large-scale monitoring: | Variable | Description | |----------|--------------| | `MLOPS_FEATURE_TYPES_FILENAME` | The path to the file containing the dataset's feature types in JSON format. <br> **Example**: `"/tmp/feature_types.json"` | | `MLOPS_FEATURE_TYPES_JSON` | The JSON containing the dataset's feature types. <br> **Example**: `[{"name": "feature_name_f1","feature_type": "date", "format": "%m-%d-%y",}]` | | **Optional configuration** | :~~: | | `MLOPS_STATS_AGGREGATION_MAX_RECORDS` | The maximum number of records in a dataset to aggregate. <br> **Example**: `10000` | | `MLOPS_STATS_AGGREGATION_PREDICTION_TS_COLUMN_NAME` | The name of the prediction timestamp column in the dataset you want to aggregate on the client side. <br> **Example**: `"ts"` | | `MLOPS_STATS_AGGREGATION_PREDICTION_TS_COLUMN_FORMAT` | The format of the prediction timestamp values in the dataset. <br> **Example**: `"%Y-%m-%d %H:%M:%S.%f"` | | `MLOPS_STATS_AGGREGATION_SEGMENT_ATTRIBUTES` | The custom attribute used to segment the dataset for segmented analysis of data drift and accuracy. 
<br> **Example**: `"country"` | === "Setters" The following code snippets show how you can configure large-scale monitoring settings programmatically: ``` title="Provide feature types as a file" mlops = MLOps() \ .set_stats_aggregation_feature_types_filename("/tmp/feature_types.json") \ .set_aggregation_max_records(10000) \ .set_prediction_timestamp_column("ts", "yyyy-MM-dd HH:mm:ss") \ .set_segment_attributes("country") \ .init() ``` ``` title="Provide feature types as JSON" mlops = MLOps() \ .set_stats_aggregation_feature_types_json([{"name": "feature_name_f1","feature_type": "date", "format": "%m-%d-%y",}]) \ .set_aggregation_max_records(10000) \ .set_prediction_timestamp_column("ts", "yyyy-MM-dd HH:mm:ss") \ .set_segment_attributes("country") \ .init() ``` !!! note If you don't provide the `MLOPS_STATS_AGGREGATION_PREDICTION_TS_COLUMN_NAME` and `MLOPS_STATS_AGGREGATION_PREDICTION_TS_COLUMN_FORMAT` environment variables, the timestamp is generated based on the current local time. The large-scale monitoring functionality is available for Python, the Java Software Development Kit (SDK), and the MLOps Spark Utils Library: === "Python" Replace calls to `report_predictions_data()` with calls to: ``` python report_aggregated_predictions_data( self, features_df, predictions, class_names, deployment_id, model_id ) ``` === "Java SDK" Replace calls to `reportPredictionsData()` with calls to: ``` java reportAggregatePredictionsData( Map<String, List<Object>> featureData, List<?> predictions, List<String> classNames ) ``` === "MLOps Spark Utils Library" Replace calls to `reportPredictions()` with calls to `predictionStatisticsParameters.report()`. The `predictionStatisticsParameters.report()` function has the following builder constructor: ``` java PredictionStatisticsParameters.Builder() .setChannelConfig(channelConfig) .setFeatureTypes(featureTypes) .setDataFrame(df) .build(); predictionStatisticsParameters.report(); ``` !!! tip You can find an example of this use-case in the agent `.tar` file in `examples/java/PredictionStatsSparkUtilsExample`. !!! note To support the use of challenger models, you must send raw features. For large datasets, you can report a small sample of raw feature and prediction data to support challengers and reporting; then, you can send the remaining data in aggregate format. ### Map supported feature types {: #map-supported-feature-types } Currently, large-scale monitoring supports numeric and categorical features. When configuring this monitoring method, you must map each feature name to the corresponding feature type (either numeric or categorical). When mapping feature types to feature names, there is a method for Scoring Code models and a method for all other models. === "Non Scoring Code" Often, a model can output the feature name and the feature type using an existing access method; however, if access is not available, you may have to manually categorize each feature you want to aggregate as `Numeric` or `Categorical`. Map a feature type (`Numeric` or `Categorical`) to each feature name using the `setFeatureTypes` method on `predictionStatisticsParameters`. === "Scoring Code" Map a feature type (`Numeric` or `Categorical`) to each feature name after using the `getFeatures` query on the `Predictor` object to obtain the features. You can find an example of this use-case in the agent `.tar` file in `examples/java/PredictionStatsSparkUtilsExample/src/main/scala/com/datarobot/dr_mlops_spark/Main.scala`. 
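For Python deployments, the same feature-type mapping can be supplied through the aggregation setters shown earlier. The sketch below is illustrative: the feature names are invented, and the exact `feature_type` strings (`"numeric"`, `"categorical"`) are assumptions based on the supported types described above, so verify them against your installed version of the library.

``` python
# Illustrative sketch; feature names and type strings are assumptions to verify
# against your installed version of the datarobot-mlops package.
from datarobot.mlops.mlops import MLOps  # import path is an assumption

feature_types = [
    {"name": "annual_income", "feature_type": "numeric"},
    {"name": "home_state", "feature_type": "categorical"},
]

mlops = MLOps() \
    .set_stats_aggregation_feature_types_json(feature_types) \
    .set_aggregation_max_records(10000) \
    .init()
```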
## Perform advanced agent memory tuning for large workloads {: #perform-advanced-agent-memory-tuning-for-large-workloads } The monitoring agent's default configuration is tuned to perform well for an average workload; however, as you increase the number of records the agent groups together for forwarding to DataRobot MLOps, the agent's total memory usage increases steadily to support the increased workload. To ensure the agent can support the workload for your use case, you can estimate the agent's total memory use and then set the agent's memory allocation or configure the maximum record group size. ### Estimate agent memory use {: #estimate-agent-memory-use } When estimating the monitoring agent's approximate memory usage (in bytes), assume that each feature reported requires an average of `10 bytes` of memory. Then, you can estimate the memory use of each message containing raw prediction data from the number of features (represented by `num_features`) and the number of samples (represented by `num_samples`) reported. Each message uses approximately `10 bytes × num_features × num_samples` of memory. !!! note Consider that the estimate of 10 bytes of memory per feature reported is most applicable to datasets containing a balanced mix of features. Text features tend to be larger, so datasets with an above-average amount of text features tend to use more memory per feature. When grouping many records at one time, consider that the agent groups messages together until reaching the limit set by the `agentMaxAggregatedRecords` setting. In addition, at that time, the agent will keep messages in memory up to the limit set by the `httpConcurrentRequest` setting. Combining the calculations above, you can estimate the agent's memory usage (and the necessary memory allocation) with the following formula: `memory_allocation = 10 bytes × num_features × num_samples × max_group_size × max_concurrency` Where the variables are defined as: * `num_features`: The number of features (columns) in the dataset. * `num_samples`: The number of rows reported in a single call to the MLOPS reporting function. * `max_group_size`: The number of records aggregated into each HTTP request, set by `agentMaxAggregatedRecords` in the [agent config file](agent). * `max_concurrency`: The number of concurrent HTTP requests, set by `httpConcurrentRequest` in the [agent config file](agent). Once you use the dataset and agent configuration information above to calculate the required agent memory allocation, this information can help you fine-tune the agent configuration to optimize the balance between performance and memory use. ### Set agent memory allocation {: #set-agent-memory-allocation } Once you know the agent's memory requirement for your use case, you can increase the agent’s Java Virtual Machine (JVM) memory allocation using the `MLOPS_AGENT_JVM_OPT` [environment variable](env-var): ``` MLOPS_AGENT_JVM_OPT=-Xmx2G ``` !!! important When running the agent in a container or VM, you should configure the system with _at least_ 25% more memory than the `-Xmx` setting. ### Set the maximum group size {: #set-the-maximum-group-size } Alternatively, to reduce the agent's memory requirement for your use case, you can decrease the agent's maximum group size limit set by `agentMaxAggregatedRecords` in the [agent config file](agent): ```yaml # Maximum number of records to group together before sending to DataRobot MLOps agentMaxAggregatedRecords: 10 ``` Lowering this setting to `1` disables record grouping by the agent. 
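To make the formula concrete, the following sketch estimates the allocation for an illustrative workload; the numbers are examples, not recommendations for any particular deployment.

``` python
# Worked example of the memory estimate above (illustrative values only).
BYTES_PER_FEATURE = 10      # average bytes per reported feature, per the guidance above

num_features = 50           # columns in the dataset
num_samples = 1_000         # rows reported in a single call to the MLOps reporting function
max_group_size = 500        # agentMaxAggregatedRecords (value from the sample agent configuration)
max_concurrency = 10        # httpConcurrentRequest (value from the sample agent configuration)

memory_bytes = BYTES_PER_FEATURE * num_features * num_samples * max_group_size * max_concurrency
print(f"Estimated agent memory: {memory_bytes / 1024**3:.1f} GiB")  # ~2.3 GiB
```

In this example, a setting such as `MLOPS_AGENT_JVM_OPT=-Xmx3G` (plus at least 25% additional memory for the container or VM) would cover the estimated usage; alternatively, lowering `agentMaxAggregatedRecords` reduces the estimate proportionally.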
## Report metrics {: #report-metrics }

If your prediction environment cannot be network-connected to DataRobot, you can instead use monitoring agent reporting in a disconnected manner.

1. In the prediction environment, configure the MLOps library to use the `filesystem` spooler type. The MLOps library reports metrics into its configured directory, e.g., `/disconnected/predictions_dir`.

2. Run the monitoring agent on a machine that _is_ network-connected to DataRobot.

3. Configure the agent to use the `filesystem` spooler type and receive its input from a local directory, e.g., `/connected/predictions_dir`.

4. Migrate the contents of the directory `/disconnected/predictions_dir` to the connected environment's `/connected/predictions_dir`.

### Reports for Scoring Code models {: #reports-for-scoring-code-models }

You can also use monitoring agent reporting to send monitoring metrics to DataRobot for downloaded Scoring Code models. Reference an example of this use case in the MLOps agent tarball at `examples/java/CodeGenExample`.

## Monitor a Spark environment {: #monitor-a-spark-environment }

A common use case for the monitoring agent is monitoring scoring in Spark environments, where scoring happens in Spark and you want to report the predictions and features to DataRobot. Because Spark usually uses a multi-node setup, it is difficult to use the agent's `filesystem` spooler channel: a shared, consistent file system is uncommon in Spark installations.

To work around this, use a network-based channel like RabbitMQ or AWS SQS. These channels can work with multiple writers and a single reader (or multiple readers).

The following example outlines how to set up agent monitoring on a Spark system using the MLOps Spark Utils module, which provides a way to report scoring results on the Spark framework. Reference the documentation for the `MLOpsSparkUtils` module in the MLOps Java examples directory at `examples/java/SparkUtilsExample/`.

The Spark example's source code performs three steps:

1. Scores data using a provided scoring JAR file and delivers the results in a DataFrame.
2. Merges the features DataFrame and the prediction results into a single DataFrame.
3. Calls the `mlops_spark_utils.MLOpsSparkUtils.reportPredictions` helper to report the predictions using the merged DataFrame.

You can use `mlops_spark_utils.MLOpsSparkUtils.reportPredictions` to report predictions generated by any model as long as the function retrieves the data via a DataFrame.

This example uses RabbitMQ as the communication channel and includes channel setup. Since Spark is a distributed framework, DataRobot requires a network-based channel like RabbitMQ or AWS SQS so that the Spark workers can send the monitoring data to the same channel regardless of the node the worker is running on.

### Spark prerequisites {: #spark-prerequisites }

The following steps outline the prerequisites necessary to execute the Spark monitoring use case.

1. Run a spooler (RabbitMQ in this example) in a container:

    * This Docker command also runs the management console for RabbitMQ.
    * You can access the console via your browser at http://localhost:15672 (username=`guest`, password=`guest`).
    * In the console, you can view the message queue in action when you run the `./run_example.sh` script below.

    ``` sh
    docker run -d -p 15672:15672 -p 5672:5672 --name rabbitmq-spark-example rabbitmq:3-management
    ```

2. Configure and start the monitoring agent.

    * Follow the quickstart guide provided in the agent tarball.
    * Set up the agent to communicate with RabbitMQ.
    * Edit the agent channel config to match the following:

    ``` yaml
    - type: "RABBITMQ_SPOOL"
      details: {name: "rabbit", queueUrl: "amqp://localhost:5672", queueName: "spark_example"}
    ```

3. If you are using mvn, install the `datarobot-mlops` JAR into your local mvn repository before testing the examples by running:

    ``` sh
    ./examples/java/install_jar_into_maven.sh
    ```

    This command executes a shell script to install either the `mlops-utils-for-spark_2-<version>.jar` or `mlops-utils-for-spark_3-<version>.jar` file, depending on the Spark version you're using (where `<version>` represents the agent version).

4. Create the example JAR files by setting the `JAVA_HOME` environment variable and then running `make` to compile.

    * For Spark2/Java8: `export JAVA_HOME=$(/usr/libexec/java_home -v 1.8)`
    * For Spark3/Java11: `export JAVA_HOME=$(/usr/libexec/java_home -v 11)`

5. Install and run Spark locally.

    * Download the latest version of Spark 2 (2.x.x) or Spark 3 (3.x.x) built for Hadoop 3.3+. To download the latest Spark 3 version, see the [Apache Spark downloads page](http://spark.apache.org/downloads.html){ target=_blank }.

        !!! note
            Replace the `<version>` placeholders in the command and directory below with the versions of Spark and Hadoop you're using.

    * Unarchive the tarball: `tar xvf ~/Downloads/spark-<version>-bin-hadoop<version>.tgz`.
    * In the `spark-<version>-bin-hadoop<version>` directory, start the Spark cluster:

        ``` sh
        sbin/start-master.sh -i localhost
        sbin/start-slave.sh -i localhost -c 8 -m 2G spark://localhost:7077
        ```

    * Ensure your installation is successful:

        ``` sh
        bin/spark-submit --class org.apache.spark.examples.SparkPi --master spark://localhost:7077 --num-executors 1 --driver-memory 512m --executor-memory 512m --executor-cores 1 examples/jars/spark-examples_*.jar 10
        ```

    * Make the Spark `bin` directory available to the example script (via the `SPARK_BIN` variable) when you run it:

        ``` sh
        env SPARK_BIN=/opt/ml/spark-3.2.1-bin-hadoop3.*/bin ./run_example.sh
        ```

    !!! note
        The monitoring agent also supports Spark2.

### Spark use case {: #spark-use-case }

After meeting the prerequisites outlined above, run the Spark example.

1. Create the model package and initialize the deployment:

    ``` sh
    ./create_deployment.sh
    ```

    Alternatively, [use the DataRobot UI](deploy-external-model) to create an external model package and deploy it.

2. Set the environment variables for the deployment and the model returned from creating the deployment by copying and pasting them into your shell:

    ``` sh
    export MLOPS_DEPLOYMENT_ID=<deployment_id>
    export MLOPS_MODEL_ID=<model_id>
    ```

3. Generate predictions and report statistics to DataRobot:

    ``` sh
    ./run_example.sh
    ```

4. If you want to change the spooler type (the communication channel between the Spark job and the monitoring agent):

    * Edit the Scala code under `src/main/scala/com/datarobot/dr_mlops_spark/Main.scala`.
    * Modify the following line to contain the required channel configuration:

        ``` scala
        val channelConfig = "output_type=rabbitmq;rabbitmq_url=amqp://localhost;rabbitmq_queue_name=spark_example"
        ```

    * Recompile the code by running `make`.

## Monitor using the MLOps CLI {: #monitor-using-the-mlops-cli }

MLOps supports a command line interface (CLI) for interacting with the MLOps application. You can use the CLI for most MLOps actions, including creating deployments and model packages, uploading datasets, reporting metrics on predictions and actuals, and more.

Use the MLOps CLI help page for a list of available operations and syntax examples: `mlops-cli [-h]`

!!! info "Monitoring agent vs. MLOps CLI"
    Like the monitoring agent, the MLOps CLI can post prediction data to the MLOps service, but its usage is slightly different. The MLOps CLI is a Python application that sends an HTTP request to the MLOps service with the current contents of the spool file. It does not run the monitoring agent or call any Java process internally, and it does not continuously poll or wait for new spool data; once the existing spool data is consumed, it exits. The monitoring agent, on the other hand, is a Java process that continuously polls for new data in the spool file as long as it is running, and posts that data in an optimized form to the MLOps service.
agent-use
---
title: Environment variables
description: Describes the environment variables specific to operating the monitoring agent.
---

# Monitoring agent environment variables {: #monitoring-agent-environment-variables }

In addition to the environment variables used to configure the attached [spooler](spooler), you can configure the monitoring agent with the environment variables documented below.

### General configuration {: #general-configuration }

When you run the agent using the provided `start-agent.sh` script from the `bin/` directory, the following environment configuration options are available:

| Variable | Description |
|----------|-------------|
| `MLOPS_AGENT_CONFIG_YAML` | The full path to a custom configuration YAML file. |
| `MLOPS_AGENT_LOG_DIR` | The directory for writing the agent log file and stdout/error. |
| `JAVA_HOME` | The Java Virtual Machine (JVM) to run the agent code. If you don't provide a JVM, Java should be included in the system PATH. |

### Containerized configuration {: #container-configuration }

When you run the agent using the provided `Dockerfile` from the `tools/agent_docker/` directory, the following environment configuration options are available:

| Variable | Description |
|----------|-------------|
| `MLOPS_AGENT_CONFIG_YAML` | The full path to a custom configuration YAML file. |
| `MLOPS_AGENT_LOG_DIR` | The directory for writing the agent log file and stdout/error. |
| `MLOPS_AGENT_TMP_DIR` | The directory for writing temporary files (a useful override if the container runs with a read-only root filesystem). |
| `MLOPS_SERVICE_URL` | Specify the service URL to access MLOps via this environment variable instead of specifying it in the YAML configuration file. |
| `MLOPS_API_TOKEN` | Provide your API token through this environment variable instead of specifying it in the YAML configuration file. |
| **Advanced configuration** | :~~: |
| `MLOPS_AGENT_JVM_OPT` | Configure to override the default JVM option `-Xmx1G`. |
| `MLOPS_AGENT_LOGGING_CONFIG` | Specify a full path to a completely custom Log4J2 configuration file for the MLOps monitoring agent. |
| `MLOPS_AGENT_LOGGING_FORMAT` | If using our default logging configuration, you can set the logging output format to either `plain` or `json`. |
| `MLOPS_AGENT_LOGGING_LEVEL` | If using our default logging configuration, set the overall logging level for the agent (e.g., `trace`, `debug`, `info`, `warning`, `error`). |
| `MLOPS_AGENT_LOG_PROPERTIES` | Configure to override the default path to `mlops.log4j2.properties`. |
| `MLOPS_AGENT_SERVER_PORT` | Set a free port number to activate the embedded HTTP server; this is useful for health and metric monitoring. |
env-var
--- title: Examples directory description: Use sample code available in the MLOps agent tarball as a starting point for creating and managing deployments. Examples include model configuration, data, and scripts used to create deployments and run the examples. --- # Examples directory {: #examples-directory } The `examples` directory in the MLOps agent tarball contains both sample code (snippets for manual inspection) and example code (self-contained examples that you can run) in Python and Java. Navigate to the subdirectory for the language you wish to use and reference the respective `README` for further instruction. The examples directory includes model configuration, data, and scripts used to create deployments and run the examples, using Python to create the model package and deployment programmatically. Therefore, you must install the Python version of the MLOps library (described below). These examples also use the [MLOps Command Line Interface (mlops-cli)](agent-use#monitor-using-the-mlops-cli) to set up deployments and perform deployment actions. You must provide the `MLOPS_SERVICE_URL` and `MLOPS_API_TOKEN` environment variables to use the `mlops-cli`. In addition, most examples use the `mlops-cli` to upload monitoring data for faster setup; however, while the `mlops-cli` tool is suitable for simple use cases, you should use the agent for production scenarios. ## Run code examples with Python {: #run-code-examples-with-python } To run the Python code examples, you must install the dependencies used by the examples: ``` sh pip install -r examples/python/requirements.txt ``` See the `README` file in each example directory for further example-specific configuration requirements. In general, to run an example: 1. Initialize the model package and deployment: ``` sh ./create_deployment.sh ``` 2. Generate predictions and report statistics to DataRobot: ``` sh ./run_example.sh ``` 3. Verify that metrics were sent successfully: ``` sh ./verify_example.sh ``` 4. Delete resources created in the example: ``` sh ./cleanup.sh ```
agent-ex
--- title: Monitoring agent description: Set up a remote environment so you can use the monitoring agent to monitor external models. --- # Monitoring agent When you enable the monitoring agent feature, you have access to the agent installation and MLOps components, all packaged within a single tarball. The image below illustrates the roles of these components in enabling DataRobot MLOps to monitor external models. ![](images/agent-highlevel-componentdetails.png) | | Component | Description | |---|---|---| | ![](images/icon-1.png) | External model | External models are machine learning models running outside of DataRobot, within your environment. The deployments (running in Python or Java) score data and generate predictions along with other information, such as the number of predictions generated and the length of time to generate each. | | ![](images/icon-2.png) | DataRobot MLOps library | The MLOps library, available in Python (v2 and v3) and Java, provides APIs to report prediction data and information from a specified deployment (identified by deployment ID and model ID). Supported library calls for the MLOps client let you specify which data to report to the MLOps service, including prediction time, number of predictions, and other metrics and deployment statistics. | | ![](images/icon-3.png) | Spooler (Buffer) | The library-provided APIs pass messages to a configured spooler (or buffer). | | ![](images/icon-4.png) | Monitoring agent | The monitoring agent detects data written to the target buffer location and reports it to the MLOps service. | | ![](images/icon-5.png) | DataRobot MLOps service | If the monitoring agent is running as a service, it retrieves the data as soon as it’s available; otherwise, it retrieves prediction data when it is run manually. | If models are running in isolation and disconnected from the network, the MLOps library will not have networked access from the buffer directory. For these deployments, you can manually copy prediction data from the buffer location via USB drive as needed. The agent then accesses that data as configured and reports it to the MLOps service. Additional monitoring agent configuration settings specify where to read data from and report data to, how frequently to report the data, and so forth. The flexible monitoring agent design ensures support for a variety of deployment and environment requirements. Finally, from the [deployment inventory](deploy-inventory) you can view your deployments and view and manage prediction statistics and metrics. ![](images/agent-deploy-mmm.png) ## Monitoring agent requirements To use the monitoring agent with a remote deployment environment, you must provide: * The URL of DataRobot MLOps. (For *Self-Managed AI Platform* installations, this is typically of the form `https://10.0.0.1` or `https://my-server-name`.) * An API key from DataRobot. You can configure this through the UI by going to the [**Developer Tools** tab](api-key-mgmt) under account settings and finding the **API Keys** section. Additionally, reference the documentation for [creating](reg-create#register-external-model-packages) and [deploying](deploy-external-model#deploy-an-external-model-package) a model package. 
## MLOps agent tarball

You can download the MLOps agent tarball from two locations:

* The [**Developer Tools**](api-key-mgmt#mlops-agent-tarball) page
* The [**Predictions > Monitoring**](code-py#monitoring-snippet) tab of a deployment configured to monitor an external model

The MLOps agent tarball contains the MLOps libraries for you to install. See [monitoring agent and prediction reporting setup](monitoring-agent/index#monitoring-agent-and-prediction-reporting-setup) to configure the monitoring agent.

!!! note "Python library public download"
    You can download the MLOps Python libraries from the public [Python Package Index site](https://pypi.org){ target=_blank }. Download and install the [DataRobot MLOps metrics reporting library](https://pypi.org/project/datarobot-mlops){ target=_blank } and the [DataRobot MLOps Connected Client](https://pypi.org/project/datarobot-mlops-connected-client){ target=_blank }. These pages include instructions for installing the libraries.

!!! note "Java library public download"
    You can download the MLOps Java library and agent from the public [Maven Repository](https://mvnrepository.com/){ target=_blank } with a `groupId` of `com.datarobot` and an `artifactId` of `datarobot-mlops` (library) and `mlops-agent` (agent). In addition, you can access the [DataRobot MLOps Library](https://mvnrepository.com/artifact/com.datarobot/datarobot-mlops){ target=_blank } and [DataRobot MLOps Agent](https://mvnrepository.com/artifact/com.datarobot/mlops-agent){ target=_blank } artifacts in the Maven Repository to view all versions and download and install the JAR file.

In addition to the MLOps library, the tarball includes Python and Java API examples and accompanying datasets to:

* Create a deployment that generates (example) predictions for both regression and classification models.
* Report metrics from deployments using the MLOps library.

The tarball also includes scripts to:

* Start and stop the agent, as well as retrieve the current agent status.
* Create a remote deployment that uploads a training dataset and returns the deployment ID and model ID for the deployment.

## How the agent works {: #how-the-agent-works }

This section outlines the basic workflow for using the monitoring agent from different environments.

Using DataRobot MLOps:

1. Use the [**Model Registry**](reg-create) to create a model package with information about your model's metadata.
2. [Deploy the model package](deploy-external-model#deploy-an-external-model-package). Create a deployment to display metrics about the running model.
3. Use the deployment **Predictions** tab to view a [code snippet](code-py#monitoring-snippet) demonstrating how to instrument your prediction code with the monitoring agent to report metrics.

Using a remote deployment environment:

1. Install the monitoring agent.
2. Use the MLOps library to report metrics from your prediction code, as demonstrated by the snippet.
3. The MLOps library buffers the metrics in a [spooler](spooler) (e.g., filesystem, RabbitMQ, or Kafka), which enables high throughput without slowing down the deployment.
4. The monitoring agent forwards the metrics to DataRobot MLOps.
5. You can view the reported metrics via the DataRobot MLOps [**Deployment** inventory](deploy-inventory).
![](images/HowItWorks.png) ## Monitoring agent and prediction reporting setup {: #monitoring-agent-and-prediction-reporting-setup } The following sections outline how to configure both the machine using the monitoring agent to upload data, and the machine using the MLOps library to report predictions. ### Monitoring agent configuration {: #monitoring-agent-configuration } Complete the following workflow for each machine using the monitoring agent to upload data to DataRobot MLOps. This setup only needs to be performed once for each deployment environment. 1. Ensure that Java (version 8) is installed. 2. Download the MLOps agent tarball, available through the [**Developer Tools**](api-key-mgmt) tab. The tarball includes the monitoring agent and library software, example code, and associated scripts. 3. Change the directory to the unpacked directory. 4. [Install the monitoring agent](agent#install-the-monitoring-agent). 5. [Configure the monitoring agent](agent#configure-the-monitoring-agent). 6. [Run the agent service](agent#run-the-monitoring-agent). ### Host predictions {: #host-predictions } For each machine using the MLOps library to report predictions, ensure that appropriate libraries and requirements are installed. There are two locations where you can obtain the libraries: === "MLOps agent tarball (for Java and Python)" Download the MLOps agent tarball and install the libraries: * **Java**: The Java library is included in the .tar file in `lib/datarobot-mlops-<version>.jar`. * **Python**: The Python version of the library is included in the .tar file in `lib\datarobot_mlops-*-py2.py3-none-any.whl`. This works for both Python2 and Python3. You can install it using: * `pip install lib\datarobot_mlops-*-py2.py3-none-any.whl` === "Python Package Index (for Python)" Download the MLOps Python libraries from the [Python Package Index site](https://pypi.org){ target=_blank }: * DataRobot [MLOps metrics reporting library](https://pypi.org/project/datarobot-mlops){ target=_blank } * Download and then install: `pip install datarobot-mlops` * DataRobot [MLOps Connected client](https://pypi.org/project/datarobot-mlops-connected-client){ target=_blank } (mlops-cli) * Download and then install: `pip install datarobot-mlops-connected-client` The MLOps agent `.tar` file includes [several end-to-end examples](agent-ex) in various languages. ### Create and deploy a model package {: #create-and-deploy-a-model-package } A model package stores metadata about your external model: the problem type (e.g., regression), the training data used, and more. You can create a model package using the [**Model Registry**](reg-create) and deploy it. In the deployment's **Integrations** tab, you can view example code as well as the values for the `MLOPS_DEPLOYMENT_ID` and `MLOPS_MODEL_ID` that are necessary to report statistics from your deployment. If you wish to instead create a model package using the API, you can follow the pattern used in the [helper scripts](agent-ex) in the examples directory for creating model packages and deployments. Each example has its own `create_deployment.sh` script to create the related model package and deployment. This script interacts with DataRobot MLOps directly and so must be run on a machine with connectivity to it. When run, each script outputs a deployment ID and model ID that are then used by the `run_example.sh` script, in which the model inference and subsequent metrics reporting actually happens. 
When creating and deploying an external model package, you can upload the data used to train the model: the training dataset, the holdout dataset, or both. When you upload this data, it is used to monitor the model. The datasets you upload provide the following functionality:

* Training dataset: Provides the baseline for feature drift monitoring.
* Holdout dataset: Provides the predictions used as a baseline for accuracy monitoring.

You can find examples of the expected data format for holdout datasets in the `examples/data` folder of the agent tar file:

* `mlops-example-lending-club-holdout.csv`: Demonstrates the holdout data format for regression models.
* `mlops-example-iris-holdout.csv`: Demonstrates the holdout data format for classification models.

!!! note
    For classification models, you must provide the predictions for all classes.

### Instrument deployments with the monitoring agent {: #instrument-deployments-with-the-monitoring-agent }

To configure the monitoring agent with each deployment:

1. Locate the MLOps library and sample code. These are included within the MLOps `.tar` file distribution.
2. Configure the deployment ID and model ID in your environment.
3. Instrument your code with MLOps calls as shown in the [sample code](agent-ex) provided for your programming language.
4. To report results to DataRobot MLOps, configure the library to use the same channel that is configured for the agent. For testing, you can configure the library to output to stdout, though these calls are not forwarded to the agent or DataRobot MLOps (see the example after this list). Configure the library via the [MLOps API](mlops-lib).
5. You can view your deployment in the DataRobot MLOps UI under the **Deployments** tab.
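For the stdout-based testing mentioned in step 4, one option is to select the spooler through the environment variables documented on the [spooler](spooler) configuration page. The sketch below is illustrative: the deployment and model IDs are placeholders, the import path is an assumption, and it relies on the documented behavior that environment variables take precedence over programmatic configuration.

``` python
import os

# Select the STDOUT spooler for local testing; reported metrics are printed
# rather than forwarded to the agent or DataRobot MLOps.
os.environ["MLOPS_SPOOLER_TYPE"] = "STDOUT"
os.environ["MLOPS_DEPLOYMENT_ID"] = "<deployment_id>"  # placeholder
os.environ["MLOPS_MODEL_ID"] = "<model_id>"            # placeholder

from datarobot.mlops.mlops import MLOps  # import path is an assumption

mlops = MLOps().init()                # picks up the environment variables set above
mlops.report_deployment_stats(3, 25)  # 3 predictions, ~25 ms execution time
mlops.shutdown()
```

When you are ready to report real metrics, switch `MLOPS_SPOOLER_TYPE` to the channel your agent is configured to poll (for example, `FILESYSTEM`), as described in the spooler configuration documentation.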
index
---
title: Download Scoring Code
description: How to download a deployment's Scoring Code packaged with the monitoring agent.
---

# Download Scoring Code {: #download-scoring-code }

You can download the monitoring agent packaged with [Scoring Code](scoring-code/index) directly from a deployment.

!!! note
    The deployed model must be trained with Scoring Code enabled in order to access the package. Additionally, this package is only compatible with models running at the command line; it does not support models running on the [Portable Prediction Server](portable-pps).

After deploying your Scoring Code model, select **Get Scoring Code** from the [**Actions**](actions-menu) menu to download the Scoring Code JAR file. In addition to Scoring Code, you can also download a monitoring agent package preconfigured for the deployment. This allows you to quickly integrate the monitoring agent and report model monitoring statistics back to DataRobot. Reference the quickstart guide in the agent tarball for instructions on the initial setup after downloading the package.

If you do not want to integrate the monitoring agent, you can instead download just the Scoring Code, available in Source and Binary formats.

![](images/agent-sc-2.png)
agent-sc
--- title: Library and agent spooler configuration description: How to configure the MLOps library and monitoring agent spooler so that the library can communicate with the agent through the spooler. --- # MLOps library and agent spooler configuration {: #mlops-library-and-agent-spooler-configuration } The MLOps library communicates to the agent through a spooler, so it is important that the [agent](#mlops-agent-configuration) and [library](#mlops-library-configuration) spooler configurations match. When configuring the MLOps agent and library's spooler settings, some settings are required, and some are optional (optional settings are identified in each table under **Optional configuration**). The required settings can be configured programmatically or through the environment variables documented in the [General configuration](#general-configuration) and [Spooler-specific configurations](#spooler-specific-configurations) sections. If you configure any settings programmatically *and* by defining an environment variable, the environment variable takes precedence. MLOps agent and library communication can be configured to use any of the following spoolers: * [Filesystem](#filesystem) * [Amazon SQS](#amazon-sqs) * [RabbitMQ](#rabbitmq) * [Google Cloud Pub/Sub](#google-cloud-pubsub) * [Apache Kafka](#apache-kafka) * [Azure Event Hubs](#azure-event-hubs) ## MLOps agent configuration {: #mlops-agent-configuration } When running the monitoring agent as a separate service, specify the spooler configuration in `mlops.agent.conf.yaml` by uncommenting the `channelConfigs` section and entering the required configs. For more information on setting the `channelConfigs` see [Configure the monitoring agent](agent#configure-the-monitoring-agent). ## MLOps library configuration {: #mlops-library-configuration } The MLOps library can be configured programmatically or by using environment variables. To configure the spooler programmatically, specify the spooler during the MLOps `init` call; for example, to configure the filesystem spooler using the Python library: ``` mlops = MLOps().set_filesystem_spooler("your_spooler_directory").init() ``` !!! note You must create the directory specified in the code above; the program will not create it for you. Equivalent interfaces exist for other spooler types. <!--private start--> See the [MLOps API documentation](https://app.datarobot.com/apidocs/entities/mlops.html){ target=_blank } for details. <!--private end--> To configure the MLOps library and agent using environment variables, see the [General configuration](#general-configuration) and [Spooler-specific configurations](#spooler-specific-configurations) sections. ## General configuration {: #general-configuration } Use the following environment variables to configure the MLOps agent and library and to select a spooler type: | Variable | Description | |-------------|----------| | `MLOPS_DEPLOYMENT_ID` | The deployment ID of the DataRobot deployment that should receive metrics from the MLOps library. | | `MLOPS_MODEL_ID` | The model ID of the DataRobot model that should be reported on by the MLOps library. | | `MLOPS_SPOOLER_TYPE` | The spooler type that the MLOps library will use to communicate with the monitoring agent. 
The following are valid spooler types: <ul><li>`FILESYSTEM`: Enable local filesystem spooler.</li><li>`SQS`: Enable Amazon SQS spooler.</li><li>`RABBITMQ`: Enable RabbitMQ spooler.</li><li>`KAFKA`: Enable Apache Kafka or Azure Event Hubs spooler.</li><li>`PUBSUB`: Enable Google Cloud Pub/Sub spooler.</li><li>`NONE`: Disable MLOps library reporting.</li><li>`STDOUT`: Print the reported metrics to stdout rather than forward them to the agent</li></ul>. | | **Optional configuration** | :~~: | | `MLOPS_SPOOLER_DEQUEUE_ACK_RECORDS` | Ensure that the monitoring agent does not dequeue a record until processing is complete. Set this option to `true` to ensure records are not dropped due to connection errors. Enabling this option is highly recommended. The dequeuing operation behaves as follows for the spooler channels: <ul><li>`SQS`: Deletes a message.</li><li>`RABBITMQ` and `PUBSUB`: Acknowledges the message as complete.</li><li>`KAFKA` and `FILESYSTEM`: Moves the offset.</li></ul> | | `MLOPS_ASYNC_REPORTING` | Enable the MLOps library to asynchronously report metrics to the spooler. | | `MLOPS_FEATURE_DATA_ROWS_IN_ONE_MESSAGE` | The number of feature rows that will be in a single message to the spooler. | | `MLOPS_SPOOLER_CONFIG_RECORD_DELIMITER` | The delimiter to replace the default value of `;` between key-value pairs in a spooler configuration string (e.g., `key1=value1;key2=value2` to `key1=value1:key2=value2`). | | `MLOPS_SPOOLER_CONFIG_KEY_VALUE_SEPARATOR` | The separator to replace the default value of `=` between keys and values in a spooler configuration string (e.g., `key1=value1` to `key1:value1`). | !!! note Setting the environment variable here takes precedence over variables definitions specified in the configuration file or configured programmatically. After setting a spooler type, you can configure the spooler-specific environment variables. ## Spooler-specific configurations {: #spooler-specific-configurations } Depending on the `MLOPS_SPOOLER_TYPE` you set, you can provide configuration information as environment variables unique to the supported spoolers. ### Filesystem {: #filesystem } Use the following environment variable to configure the `FILESYSTEM` spooler: | Variable | Description | |----------|-------------| | `MLOPS_FILESYSTEM_DIRECTORY` | The directory to store the metrics to report to DataRobot. You must create this directory; the program will not create it for you. | | **Optional configuration** | :~~: | | `MLOPS_FILESYSTEM_MAX_FILE_SIZE` | Override the default maximum file size (in bytes).<br> **Default**: 1 GB | | `MLOPS_FILESYSTEM_MAX_NUM_FILE` | Override the default maximum number of files.<br> **Default**: 10 files | !!! note You can also [programmatically configure the filesystem spooler for the MLOps library](#mlops-library-configuration). ### Amazon SQS {: #amazon-sqs } When using Amazon `SQS` as a spooler, you can provide your credential set in either of two ways: * Set your credentials in the `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_REGION` or `AWS_DEFAULT_REGION` environment variables. Only AWS software packages use these credentials; DataRobot doesn't access them. * If you are in an AWS environment, create an [AWS IAM (Identity and Access Management) role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html){ target=_blank } for credential authentication. 
Use *one* of the following environment variables to configure the `SQS` spooler: | Variable | Description | |----------|-------------| | `MLOPS_SQS_QUEUE_URL` | The URL of the SQS queue used for the spooler. | | `MLOPS_SQS_QUEUE_NAME` | The queue name of the SQS queue used for the spooler. | !!! note When using the `SQS` spooler type, only provide the spooler name *or* the URL. ### RabbitMQ {: #rabbitmq } Use the following environment variables to configure the `RABBITMQ` spooler: | Variable | Description | |----------|-------------| | `MLOPS_RABBITMQ_QUEUE_URL` | The URL of the RabbitMQ queue used for the spooler. | | `MLOPS_RABBITMQ_QUEUE_NAME`| The queue name of the RabbitMQ queue used for the spooler. | | **Optional configuration** | :~~: | | `MLOPS_RABBITMQ_SSL_CA_CERTIFICATE_PATH` | The path to the CA certificate file (`.pem` file). | | `MLOPS_RABBITMQ_SSL_CERTIFICATE_PATH` | The path to the client certificate (`.pem` file). | | `MLOPS_RABBITMQ_SSL_KEYFILE_PATH` | The path to the client key (`.pem` file). | | `MLOPS_RABBITMQ_SSL_TLS_VERSION` | The TLS version used for the client. The TLS version must match server version. | !!! note RabbitMQ configuration requires keys in RSA format without a password. You can convert keys from PKCS8 to RSA as follows: `openssl rsa -in mykey_pkcs8_format.pem -text > mykey_rsa_format.pem` To generate keys, see [RabbitMQ TLS Support](https://www.rabbitmq.com/ssl.html#automated-certificate-generation){ target=_blank }. ### Google Cloud Pub/Sub {: #google-cloud-pubsub } When using Google Cloud `PUBSUB` as a spooler, you must provide the appropriate credentials in the `GOOGLE_APPLICATION_CREDENTIALS` environment variable. Only Google Cloud software packages use these credentials; DataRobot doesn't access them. Use the following environment variables to configure the `PUBSUB` spooler: | Variable | Description | |----------|-------------| | `MLOPS_PUBSUB_PROJECT_ID` | The Pub/Sub project ID of the project used by the spooler; this should be the full path of the project ID. | | `MLOPS_PUBSUB_TOPIC_NAME` | The Pub/Sub topic name of the topic used by the spooler; this should be the topic name within the project, *not* the fully qualified topic name path that includes the project ID. | | `MLOPS_PUBSUB_SUBSCRIPTION_NAME` | The Pub/Sub subscription name of the subscription used by the spooler. | ### Apache Kafka {: #apache-kafka } Use the following environment variables to configure the Apache `KAFKA` spooler: | Variable | Description | |----------|-------------| | `MLOPS_KAFKA_TOPIC_NAME` | The name of the specific Kafka topic to produce to or consume from. <br> **Apache Kafka Reference**: [Main Concepts and Terminology](https://kafka.apache.org/documentation/#intro_concepts_and_terms){ target=_blank }| | `MLOPS_KAFKA_BOOTSTRAP_SERVERS` | The list of servers that the agent connects to. Use the same syntax as the `bootstrap.servers` config used upstream. <br> **Apache Kafka Reference**: [`bootstrap.servers`](https://kafka.apache.org/documentation/#connectconfigs_bootstrap.servers){ target=_blank }| | **Optional configuration** | :~~: | | `MLOPS_KAFKA_CONSUMER_POLL_TIMEOUT_MS` | The amount of time to wait while consuming messages before processing them and sending them to DataRobot <br> **Default value**: 3000 ms.| | `MLOPS_KAFKA_CONSUMER_GROUP_ID` | A unique string that identifies the consumer group this consumer belongs to. <br> **Default value**: `tracking-agent`. 
<br> **Apache Kafka Reference**: [`group.id`](https://kafka.apache.org/documentation/#consumerconfigs_group.id){ target=_blank }| | `MLOPS_KAFKA_CONSUMER_MAX_NUM_MESSAGES` | The maximum number of messages to consume at one time before processing them and sending the results to DataRobot MLOps. <br> **Default value**: 500 <br> **Apache Kafka Reference**: [`max.poll.records`](https://kafka.apache.org/documentation/#consumerconfigs_max.poll.records){ target=_blank }| | `MLOPS_KAFKA_SESSION_TIMEOUT_MS` | The timeout used to detect client failures in the consumer group. <br> **Apache Kafka Reference**: [`session-timeout.ms`](https://kafka.apache.org/documentation/#consumerconfigs_session.timeout.ms){ target=_blank }| | `MLOPS_KAFKA_MESSAGE_BYTE_SIZE_LIMIT` | The maximum chunk size when producing events to the channel. <br> **Default value**: 1000000 bytes| | `MLOPS_KAFKA_DELIVERY_TIMEOUT_MS` | The absolute upper bound amount of time to send messages before considering it permanently failed. <br> **Apache Kafka Reference**: [`delivery.timeout.ms`](https://kafka.apache.org/documentation/#producerconfigs_delivery.timeout.ms){ target=_blank }| | `MLOPS_KAFKA_REQUEST_TIMEOUT_MS` | The maximum amount of time a client will wait for a response to a request before retrying. <br> **Apache Kafka Reference**: [`request.timeout.ms`](https://kafka.apache.org/documentation/#producerconfigs_request.timeout.ms){ target=_blank }| | `MLOPS_KAFKA_METADATA_MAX_AGE_MS` | The maximum amount of time (in ms) the client will wait before refreshing its cluster metadata. <br> **Apache Kafka Reference**: [`metadata.max.age.ms`](https://kafka.apache.org/documentation/#connectconfigs_metadata.max.age.ms){ target=_blank }| | `MLOPS_KAFKA_SECURITY_PROTOCOL` | Protocols used to connect to the brokers. <br> **Apache Kafka Reference**: [`security.protocol`](https://kafka.apache.org/documentation/#connectconfigs_security.protocol){ target=_blank } valid values. | | `MLOPS_KAFKA_SASL_MECHANISM` | The mechanism clients use to authenticate with the broker. <br> **Apache Kafka Reference**: [`sasl.mechanism`](https://kafka.apache.org/documentation/#connectconfigs_sasl.mechanism){ target=_blank }| | `MLOPS_KAFKA_SASL_JAAS_CONFIG` *(Java only)* | Connection settings in a format used by JAAS configuration files. <br> **Apache Kafka Reference**: [`sasl.jaas.config`](https://kafka.apache.org/documentation/#connectconfigs_sasl.jaas.config){ target=_blank }| | `MLOPS_KAFKA_SASL_LOGIN_CALLBACK_CLASS` *(Java only)* | A custom login handler class. <br> **Apache Kafka Reference**: [`sasl.login.callback.handler.class`](https://kafka.apache.org/documentation/#connectconfigs_sasl.login.callback.handler.class){ target=_blank }| | `MLOPS_KAFKA_CONNECTIONS_MAX_IDLE_MS` *(Java only)* | The maximum amount of time (in ms) before the client closes an inactive connection. This value should be set *lower* than any timeouts your network infrastructure may impose. <br> **Apache Kafka Reference**: [`connections.max.idle.ms`](https://kafka.apache.org/documentation/#connectconfigs_connections.max.idle.ms){ target=_blank }| | `MLOPS_KAFKA_SASL_USERNAME` *(Python only)* | SASL username for use with the PLAIN and SASL-SCRAM-\* mechanisms. <br> **Reference**: See the `sasl.username` setting in [`librdkafka`](https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md){ target=_blank }. | | `MLOPS_KAFKA_SASL_PASSWORD` *(Python only)* | SASL password for use with the PLAIN and SASL-SCRAM-\* mechanisms. 
<br> **Reference**: See the `sasl.password` setting in [`librdkafka`](https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md){ target=_blank }| | `MLOPS_KAFKA_SASL_OAUTHBEARER_CONFIG` *(Python only)* | Custom configuration to pass the OAuth login callback. <br> **Reference**: See the `sasl.oauthbearer.config` setting in [`librdkafka`](https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md){ target=_blank }| | `MLOPS_KAFKA_SOCKET_KEEPALIVE` *(Python only)* | Enable TCP keep-alive on network connections, sending packets over those connections periodically to prevent the required connections from being closed due to inactivity. <br> **Reference**: See the `socket.keepalive.enable` setting in [`librdkafka`](https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md){ target=_blank }| ### Azure Event Hubs {: #azure-event-hubs } DataRobot allows you to use Microsoft Azure Event Hubs as a monitoring agent spooler by leveraging the existing [Kafka spooler type](#kafka). To set this up, see [Using Azure Event Hubs from Apache Kafka applications](https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-for-kafka-ecosystem-overview){ target=_blank }. !!! note Azure supports the Kafka protocol for Event Hubs only for the Standard and Premium pricing tiers. The Basic tier does not offer Kafka API support, so it is not supported as a spooler for the monitoring agent. See [Azure Event Hubs quotas and limits](https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-quotas){ target=_blank } for details. To use Azure Event Hubs as a spooler, you need to set up authentication for the monitoring agent and MLOps library using one of these methods: * [SAS-based authentication](#sas-based-authentication-for-event-hubs) * [Azure Active Directory OAuth 2.0](#azure-active-directory-oauth-20-for-event-hubs) #### SAS-based authentication for Event Hubs {: #sas-based-authentication-for-event-hubs } To use Event Hubs SAS-based authentication for the monitoring agent and MLOps library, set the following environment variables using the example shell fragment below: ``` shell title="Sample environment variables script for SAS-based authentication" # Azure recommends setting the following values; see: # https://docs.microsoft.com/en-us/azure/event-hubs/apache-kafka-configurations export MLOPS_KAFKA_REQUEST_TIMEOUT_MS='60000' export MLOPS_KAFKA_SESSION_TIMEOUT_MS='30000' export MLOPS_KAFKA_METADATA_MAX_AGE_MS='180000' # Common configuration variables for both Java- and Python-based libraries. export MLOPS_KAFKA_BOOTSTRAP_SERVERS='XXXX.servicebus.windows.net:9093' export MLOPS_KAFKA_SECURITY_PROTOCOL='SASL_SSL' export MLOPS_KAFKA_SASL_MECHANISM='PLAIN' # The following setting is specific to the Java SDK (and the monitoring agent daemon) export MLOPS_KAFKA_SASL_JAAS_CONFIG='org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="Endpoint=sb://XXXX.servicebus.windows.net/;SharedAccessKeyName=XXXX;SharedAccessKey=XXXX";' # For the Python SDK, you will need the following settings (in addition to the common ones above) export MLOPS_KAFKA_SASL_USERNAME='$ConnectionString' export MLOPS_KAFKA_SASL_PASSWORD='Endpoint=sb://XXXX.servicebus.windows.net/;SharedAccessKeyName=XXX;SharedAccessKey=XXXX' ``` !!! note The environment variable values above use single-quotes (`'`) to ensure that the special characters `$` and `"` are not interpreted by the shell when setting variables. 
If you are setting environment variables via DataBricks, you should follow their [guidelines](https://docs.microsoft.com/en-us/azure/databricks/kb/clusters/validate-environment-variable-behavior) on escaping special characters for the version of the platform you are using. #### Azure Active Directory OAuth 2.0 for Event Hubs {: #azure-active-directory-oauth-20-for-event-hubs } DataRobot supports Azure Active Directory OAuth 2.0 for Event Hubs authentication. To use this authentication method, you must create a new Application Registration with the necessary permissions over your Event Hubs Namespace (i.e., Azure Event Hubs Data Owner). See [Authenticate an application with Azure AD to access Event Hubs resources](https://docs.microsoft.com/en-us/azure/event-hubs/authenticate-application){ target=_blank } for details. To use Event Hubs Azure Active Directory OAuth 2.0 authentication, set the following environment variables using the example shell fragment below: ``` shell title="Sample environment variables script for Azure AD OAuth 2.0 authentication" # Azure recommends setting the following values; see: # https://docs.microsoft.com/en-us/azure/event-hubs/apache-kafka-configurations export MLOPS_KAFKA_REQUEST_TIMEOUT_MS='60000' export MLOPS_KAFKA_SESSION_TIMEOUT_MS='30000' export MLOPS_KAFKA_METADATA_MAX_AGE_MS='180000' # Common configuration variables for both Java- and Python-based libraries. export MLOPS_KAFKA_BOOTSTRAP_SERVERS='XXXX.servicebus.windows.net:9093' export MLOPS_KAFKA_SECURITY_PROTOCOL='SASL_SSL' export MLOPS_KAFKA_SASL_MECHANISM='OAUTHBEARER' # The following setting is specific to the Java SDK (and the tracking-agent daemon) export MLOPS_KAFKA_SASL_JAAS_CONFIG='org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required aad.tenant.id="XXXX" aad.client.id="XXXX" aad.client.secret="XXXX";' export MLOPS_KAFKA_SASL_LOGIN_CALLBACK_CLASS='com.datarobot.mlops.spooler.kafka.ActiveDirectoryAuthenticateCallbackHandler' # For the Python SDK, you will need the following settings (in addition to the common ones above) export MLOPS_KAFKA_SASL_OAUTHBEARER_CONFIG='aad.tenant.id=XXXX-XXXX-XXXX-XXXX-XXXX, aad.client.id=XXXX-XXXX-XXXX-XXXX-XXXX, aad.client.secret=XXXX' ``` !!! note Some environment variable values contain double quotes (`"`). Take care when setting environment variables that include this special character (or others). ## Dynamically load required spoolers in a Java application {: #dynamically-load-required-spoolers-in-a-java-application } To configure Monitoring Agent spoolers using third-party code, you can dynamically load a separate JAR file for the required spooler. This configuration is required for the [Amazon SQS](#amazon-sqs), [RabbitMQ](#rabbitmq), [Google Cloud Pub/Sub](#google-cloud-pubsub), and [Apache Kafka](#apache-kafka) spoolers. The natively supported file system spooler is configurable without loading a JAR file. !!! note Previously, the `datarobot-mlops` and `mlops-agent` packages included all spooler types by default; however, that configuration meant the code was always present, even if it was unused. 
### Include spooler dependencies in the project object model {: #include-spooler-dependencies-in-the-project-object-model } To use a third-party spooler in your MLOps Java application, you must include the required spoolers as dependencies in your POM (Project Object Model) file, along with `datarobot-mlops`: ``` xml title="Dependencies in a POM file" <properties> <mlops.version>8.3.0</mlops.version> </properties> <dependency> <groupId>com.datarobot</groupId> <artifactId>datarobot-mlops</artifactId> <version>${mlops.version}</version> </dependency> <dependency> <groupId>com.datarobot</groupId> <artifactId>spooler-sqs</artifactId> <version>${mlops.version}</version> </dependency> <dependency> <groupId>com.datarobot</groupId> <artifactId>spooler-rabbitmq</artifactId> <version>${mlops.version}</version> </dependency> <dependency> <groupId>com.datarobot</groupId> <artifactId>spooler-pubsub</artifactId> <version>${mlops.version}</version> </dependency> <dependency> <groupId>com.datarobot</groupId> <artifactId>spooler-kafka</artifactId> <version>${mlops.version}</version> </dependency> ``` ### Provide an executable JAR file for the spooler {: #provide-an-executable-jar-file-for-the-spooler } The spooler JAR files are included in the [MLOps agent tarball](monitoring-agent/index#mlops-agent-tarball). They are also available individually as downloadable JAR files in the public Maven repository for the [DataRobot MLOps Agent](https://mvnrepository.com/artifact/com.datarobot/mlops-agent){ target=_blank }. To use a third-party spooler with the executable agent JAR file, add the path to the spooler to the classpath: ``` shell title="Classpath without spooler" java ... -cp path/to/mlops-agent-8.2.0.jar com.datarobot.mlops.agent.Agent ``` ``` shell title="Classpath with Kafka spooler" java ... -cp path/to/mlops-agent-8.3.0.jar:path/to/spooler-kafka-8.3.0.jar com.datarobot.mlops.agent.Agent ``` The `start-agent.sh` script provided as an example automatically performs this task, adding any spooler JAR files found in the `lib` directory to the classpath. If your spooler JAR files are in a different directory, set the `MLOPS_SPOOLER_JAR_PATH` environment variable. === "Troubleshoot MLOps applications" * If a dynamic spooler is loaded successfully, the Monitoring Agent logs an `INFO` message: `Creating spooler type <type>: success.` * If loading a dynamic spooler fails, the Monitoring Agent logs an `ERROR` message: `Creating spooler type <type>: failed`, followed by the reason (a `class not found` error, indicating a missing dependency) or more details (a system exception message, helping you diagnose the issue). If the class was not found, ensure the dependency for the spooler is included in the application's POM. Missing dependencies will not be discovered until runtime. === "Troubleshooting the Monitoring Agent" * If a dynamic spooler is loaded successfully, the Monitoring Agent logs an `INFO` message: `Creating spooler type <type>: success.` * If loading a dynamic spooler fails, the Monitoring Agent logs an `ERROR` message: `Creating spooler type <type>: failed`, followed by the reason (a `class not found` error, indicating a missing JAR file) or more details (a system exception message, helping you diagnose the issue). If the class was not found, ensure the matching JAR file for that spooler is included in the classpath of the `java` command that starts the agent. !!! 
tip If the agent is configured with a `predictionEnvironmentId` and can connect to DataRobot, the agent sends an `MLOps Spooler Channel Failed` event to DataRobot MLOps with information from the log message. These events appear in the [event log on the Service Health page of any deployment](agent-event-log) associated with that prediction environment. You can also create a notification channel and policy to be notified (by email, Slack, or webhook) of these errors.
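To quickly verify that a dynamically loaded spooler was picked up, you can set `MLOPS_SPOOLER_JAR_PATH`, start the agent with the provided example script, and search the agent log for the messages described above. This is a minimal sketch; the install path, spooler directory, and log file location are assumptions and will differ in your environment.

``` shell title="Check dynamic spooler loading"
# Point the example start script at a directory containing the spooler JARs
# (only needed when the JARs are not already in the agent's lib directory).
# The paths below are assumptions—adjust them to your installation.
export MLOPS_SPOOLER_JAR_PATH=/opt/mlops-agent/extra-spoolers

# Start the agent using the example script shipped with the agent tarball
# (run from the agent installation directory).
./bin/start-agent.sh

# Look for the success or failure message; the log path is an assumption—use
# the location configured for your agent.
grep "Creating spooler type" logs/mlops.agent.log
```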
spooler
--- title: Monitoring external multiclass deployments description: How to configure the monitoring agent to monitor multiclass models deployed to external prediction environments. --- # Monitor external multiclass deployments {: #monitor-external-multiclass-deployments } If you have multiclass models deployed to external prediction environments, you can configure the monitoring agent to monitor those deployments. To do so, create a deployment in DataRobot with an [external model package](reg-create#register-external-model-packages) for the multiclass model. When creating an external model package, set the prediction type for the model to multiclass: ![](images/multi-dep-8.png) After indicating the prediction type, provide the target classes for your model (one class per line) in the **Target classes** field, either by direct text input or by uploading a CSV or TXT file (a sketch of such a file appears at the end of this page): ![](images/multi-dep-9.png) Once all fields for the model package are defined, click **Create package**. The package appears in the **Model Registry** and is available for use. When you have configured an external model package for your multiclass model, follow the workflow to [create a deployment](deploy-external-model#deploy-an-external-model-package) with the external model package and [configure it with the monitoring agent](monitoring-agent/index).
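For reference, the **Target classes** file mentioned above is simply a plain-text list with one class name per line. A minimal sketch is shown below; the class names are hypothetical.

``` shell title="Example target classes TXT file"
# Create a TXT file with one target class per line (class names are hypothetical).
cat <<'EOF' > target_classes.txt
low_risk
medium_risk
high_risk
EOF
```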
agent-multi
--- title: Algorithmia description: Algorithmia is an MLOps platform where you can deploy, govern, and monitor your models as microservices. --- # Algorithmia {: #algorithmia } DataRobot Algorithmia is an MLOps platform where you can deploy, govern, and monitor your models as microservices. The platform lets you connect models to data sources and deploy them quickly to production. See the [Algorithmia Developer Center](https://algorithmia.com/developers){ target=_blank } for details.
algorithmia
--- title: Companion tools description: This section includes links to user documentation for DataRobot companion tools, including Algorithmia and Data Prep (Paxata). --- # Companion tools {: #companion-tools } The information in this section provides user documentation for Paxata Data Prep and Algorithmia. Topic | Describes... ----- | ------ Paxata Data Prep | Use the Paxata Data Prep application to visually explore, profile, clean, enrich, and shape diverse data into AI assets.<br>[Download PDF](pdf/Paxata-Data-Prep-EN-v2021.2.pdf){:target="_blank"} [Algorithmia](algorithmia) | Use Algorithmia, an MLOps platform, to deploy, govern, and monitor your models as microservices.
index
--- title: MLOps compatibility description: DataRobot's MLOps feature availability varies by plan. This page explains which features are accessible to each plan. --- # MLOps compatibility {: #mlops-compatibility } DataRobot offers various plans to users, each with its own set of features. Reference this page to understand what features are accessible to each plan. ## Compatibility considerations {: #compatibility-considerations } !!! info "Availability information" Contact your DataRobot representative to discuss trials and updates to the Pricing 5.0 plan, which offers all of the capabilities listed below. DataRobot users who subscribed before the introduction of MLOps in March 2020 and have not purchased MLOps experience different behavior in several aspects of predictions, monitoring, and model management. The table below outlines the features available for MLOps users compared to legacy users: | Feature | MLOps users | Legacy users | | :------------- | :------------- | :------------- | | Access to the Deployments page, including alerts and notifications. | ✔ | ✔ | | Model monitoring of service health, data drift, and accuracy. | ✔ | ✔ | | Deployments only support DataRobot models on DataRobot prediction servers. | ✔ | ✔ | | [Batch prediction jobs](batch-pred-jobs) | ✔ | ✔ | | [Portable Prediction Servers](portable-pps) (PPS) | ✔ | | | Monitor exported DataRobot Scoring Code or PPS | ✔ | | | Monitor remote custom models | ✔ | | | Host, serve, and monitor custom models | ✔ | | | [Governance workflows](governance/index) | ✔ | | | [Automated Retraining](set-up-auto-retraining) and [Challenger models](challengers) | ✔ | | | [Humble AI](humble) | ✔ | | Contact your DataRobot representative to discuss trials and updates to the Pricing 5.0 plan, which offers all of the capabilities listed above. ## Pricing 5.0 {: #pricing-50 } Pricing 5.0 is the newest plan available to DataRobot users. With this plan, a number of capabilities supporting DataRobot MLOps are introduced: * Each user or organization can have a set number of active deployments at one time. The limit is displayed in the [Deployment page status tiles](deploy-inventory#live-inventory-updates). Pricing 5.0 users can filter the deployment inventory by active or inactive deployments. When users create a new deployment, DataRobot indicates whether their organization has space available to create it. ![](images/pricing-1.png) Additionally, at the end of the deployment workflow, users are notified of the activity and billing status of their deployment. ![](images/pricing-2.png) * Users who built models in AutoML can download model packages (.mlpkgs) to use the [Portable Prediction Server](portable-pps) directly from the [model Leaderboard](portable-pps#leaderboard-download) without engaging in the deployment workflow. * Users who built models in AutoML can download Scoring Code for the model via the [model Leaderboard](code-py#leaderboard-download) without engaging in the deployment workflow. Previously, downloading Scoring Code made the associated deployment a permanent fixture. Now, these deployments can be deactivated or deleted. Additionally, users can choose to include prediction explanations with their Scoring Code download.
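For example, a Scoring Code JAR downloaded from the Leaderboard can typically be used to score a CSV file locally from the command line. This is a sketch only&mdash;the JAR and file names are hypothetical, and the available options depend on the Scoring Code version.

``` shell title="Score a CSV locally with a downloaded Scoring Code JAR"
# File names are hypothetical; run from the directory containing the JAR and input file.
java -jar 5f3c62ab_scoring_code.jar csv --input=records_to_score.csv --output=predictions.csv
```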
pricing
--- title: Value Tracker description: Helps you to measure success with using DataRobot by defining the value you expect to get and tracking the actual value you receive in real time. --- ## Value Tracker {: #value-tracker } The Value Tracker allows you to specify what you expect to accomplish by using DataRobot. You can measure success by defining the value you expect to get and tracking the actual value you receive in real time. The Value Tracker also provides a platform for you to collect the various DataRobot assets you are using to achieve your goals and collaborate with others. Navigate to **Value Tracker** to view the dashboard. ![](images/use-case-1.png) The dashboard provides details for existing value trackers and allows you to navigate to any value tracker you currently have access to. You can search existing value trackers and create new ones with the actions in the upper left. ## Create a value tracker {: #create-a-value-tracker } From the dashboard, select **+ Add new value tracker**. This brings you to the **Overview** tab for the new value tracker. Complete the fields to provide general information about what you want to track. ![](images/use-case-6.png) | Category | Description | |----------|-------------| | Name | The name of the value tracker. | | Description | A basic description of what the value tracker is designed to accomplish. | | Number(s) or event(s) to predict | Targets or metrics that you measure in the value tracker (used for information purposes only.) | | Owner | The user who owns the value tracker. | | Target dates | The dates that outline when the value tracker is expected to reach individual project stages. You can only set one target date for each stage. | The **Overview** tab displays a stage bar, visible on every tab, to track the current progress of a value tracker. Update the stage at any time by clicking on one of the radio buttons. ![](images/use-case-7.png) After completing the fields, click **Save** in the top right corner. To undo all changes, click **Cancel**. Once saved, your value tracker is created and is accessible from the dashboard. ## Value tab {: #value-tab } The **Value** tab offers tools to estimate the monetary value generated by using AI. This data helps you prioritize decisions that are more likely to be successful and provide greater value. ![](images/use-case-8.png) ### Prioritization assessment {: #prioritization-assessment } Use the snap grid to estimate the business impact (Y-axis) and feasibility (X-axis) of a value tracker. Each axis has 5 data points: None, Low, Medium, Med-High, High. The grid also displays built-in warnings, described in the table below: | Warning | Cause | Description | | ---------- | ----------- | ---- | | Try to simplify | Feasibility = None | There is value in pursuing the value tracker, but DataRobot recommends simplifying the problem. | | Do not attempt | Business impact = None | There is no business impact, and DataRobot does not recommend pursuing the value tracker. | To change a value tracker's location on the grid, click on the desired section and the dot snaps to the closest datapoint. After making any changes, click **Save**. ### Potential value estimation {: #potential-value-estimation } The **Potential value estimation** modal assists you in estimating the expected value to receive (in actual currency) when implementing a value tracker. Select **Raw value** to manually provide an estimation and **Value calculator** to calculate an estimation with the interface. 
### Value calculator {: #value-calculator } To use the value calculator, you must choose from either the Binary Classification or Regression template. Provide numeric values to answer the questions that accompany each template. The example below displays the template for a Binary Classification value tracker. ![](images/use-case-9.png) After completing the fields for either template, DataRobot calculates two values based on your inputs: * **Value saved annually**: The expected value that can be produced by this value tracker each year. * **Average value per decision**: The value produced by the value tracker each time a decision is made. After making any calculations, click **Save**. ### Raw value {: #raw-value } Use the raw value method to provide an estimated value manually. Specify a currency and expected annual value for the value tracker. Provide details for how you calculated the value in the text box. ![](images/use-case-10.png) ### Production value metrics {: #production-value-metrics } When a value tracker is moved to the **In Production** stage, the **Value** tab updates from estimating potential value to tracking realized value. The tab displays new metrics: ![](images/use-case-11.png) | Category | Description | |----------|-------------| | Potential value | The estimated annual value for the value tracker. | | Realized value | The total realized value of the value tracker. Realized value is calculated by multiplying "Average value per decision” by the number of predictions made in the history of deployments tied to the value tracker. | | Predictions | The total number of predictions made for all deployments attached to the value tracker. | | Deployment status | The aggregated health statuses of deployments attached to this value tracker. The icons represent service health, data drift, and accuracy. If multiple deployments are attached to a value tracker, the most severe status per health type is shown. If the status shows an issue, you can view the status of individual deployments on the **Attachments** page. | All predictions from an attached deployment contribute to realized value, including predictions made before the deployment was attached to the value tracker. If a deployment is removed from a value tracker, realized value derived from that deployment will also be removed from the value tracker. If the average value per decision is updated, the new value will apply to all realized value calculations, including value from previous predictions. If a value calculator was not used to define potential value, then realized value _cannot_ be calculated because the value tracker lacks an “Average value per decision” value. However, you can still track deployment health and predictions over time. Two new charts appear on the value tab when you move a value tracker to the "In Production" stage: * A realized value chart, measuring the realized value (Y-axis) over time (X-axis). * Number of predictions, measuring the number of predictions made (Y-axis) over time (X-axis). ![](images/use-case-12.png) Modify the time range in the upper right corner of each chart. To review or edit information displayed in the **Value** tab before the value tracker was moved to "In Production" (snap grid, value calculator, etc.), click **Show potential value information** below the charts. ## Assets tab {: #assets-tab } The **Assets** tab provides a collection of all of the other DataRobot objects&mdash;datasets, deployments, and more&mdash;contributing to the value tracker. 
![](images/use-case-13.png) Six types of objects can be attached to value trackers: * Datasets * Projects * Deployments * Model packages * Custom models * Applications To attach an object, click the plus sign for the type of object you want to attach. The example below outlines the workflow for adding a modeling project. ![](images/use-case-14.png) You are directed to the attachment modal, listing all objects of the chosen type that you currently have access to. In order to attach objects, click **Select** next to the object. You can select multiple objects at once. When you have selected the projects to attach, click **Add**. ![](images/use-case-15.png) When assets are added to a value tracker, they will be listed on the **Assets** tab with some metadata. Click on an individual asset name to view it in detail. Click the orange "X" to remove it. ![](images/use-case-16.png) ### Assets access {: #assets-access } Sharing a value tracker does not automatically grant access to its datasets, modeling projects, deployments, model packages, custom models, and applications. Each of these assets could have different owners as well as collaborators with different permissions. If you need access to assets in a shared value tracker, hover over the asset and click **Request Access**. ![](images/use-case-20.png) The Owner of the asset will receive an email notification of your request. While you are awaiting approval, you will see the following message when you hover over the asset: ![](images/use-case-21.png) ## Activity tab {: #activity-tab } The **Activity** tab provides a way to track changes to a value tracker over time. Any action taken on a value tracker is recorded, along with who took the action and when it was taken. Navigate pages with the arrows in the top right corner. ![](images/use-case-18.png) ## Value Tracker actions {: #value-tracker-actions } Each value tracker has three actions available: edit, share, and delete. ![](images/use-case-19.png) * Select the edit icon to modify the overview page for the value tracker. * Select the share icon to allow other users access to the value tracker and assign them a [role](roles-permissions). * Select the trash icon to permanently delete a value tracker. Once deleted, the value tracker cannot be recovered, but associated objects will persist (datasets, deployments, etc.) In addition to the three action icons, each value tracker has a **Comments** sections, where all users who have a value tracker can host discussions. The comments support tagging users and sending email notifications to those users. Comments can be edited or deleted. ![](images/use-case-17.png) ## Filtering {: #filtering } Select the **Stage** dropdown above the list to filter the list by the stage value trackers are in. When filtering by stage, the contents of the chart will change based on what stage you filter by. ![](images/use-case-2.png) When filtering by stage, the contents of the chart will change based on what stage you filter by. See the table below for a description of the filtering options: Filter | Description ---------- | ----------- All | View basic information for each value tracker and perform any [actions](#value-tracker-actions). Ideation | View the value and business impact of the value tracker. In Production | View the performance of deployed value trackers.
value-tracker
--- title: Account and project management description: This section introduces the management toolbar and includes links to information on how you can manage account settings. --- # Account and project management {: #account-and-project-management } This section explains how to manage your DataRobot account using the management toolbar&mdash;the navigation elements in the upper right&mdash;which provides access to many account and project management tools. ![](images/help-toolbar.png) &nbsp; | Option | Description ------ | --------------------------------- | ---------------------------------- ![](images/icon-1.png) | [Manage projects](manage-projects) | Opens the page for starting a new project and provides a link to your existing project repository. ![](images/icon-2.png) | [Resources](getting-help) | Provides access to DataRobot UI and API documentation, Enterprise Support, and the DataRobot Community. Additionally, it allows you to send suggestions and issues to DataRobot. ![](images/icon-3.png) | [Notifications](user-notif-center) | Opens a modal that lists notifications sent from the DataRobot platform. ![](images/icon-4.png) | [Account settings](acct-settings/index) | Provides access to profile information, [two-factor authentication](2fa) and other settings, data sources, and your [membership assignments](view-memberships). ![](images/icon-5.png) | [Value Tracker](value-tracker) | Specify what you expect to accomplish by using DataRobot and share progress with others. ![](images/icon-6.png) | [Pricing](pricing) | Learn which DataRobot features are accessible to each pricing plan.
index
--- title: Notification center --- # Notification center {: #notification-center } The alert icon (![](images/icon-alert.png)) provides access to notifications sent from the DataRobot platform. A numeric indicator on top of the alert icon indicates that you have unread notifications. ![](images/note-alert.png) Click the icon to see a list of notifications. Note that once you click on the icon, the indicator disappears. To remove a notification, hover on it and click the trash can icon ![](images/icon-delete.png). If you do not delete them, the notification center lists up to the last 100 events. Notifications expire and are removed after one month. ![](images/note-alert-1.png) Notifications are delivered for the following events. Click the notification to open the project or the model in the deployment area that is related to the event. | Event name | Description | |-----------------------------|------------------------| | Autopilot has finished | Reports that Autopilot—either Quick or full mode—has completed. | | Project shared | Reports that a project has been shared with you. DataRobot also delivers an email notification with a link to the project. | | New comment | Alerts that a comment has been added to a project you own, and displays the comment. | | New mention | Reports that you have been mentioned in a project. | | Data drift detected | Indicates that a deployed model has experienced [data drift](data-drift) with a status of failing (red). | | Deployment is unhealthy | Indicates that [service health](service-health) for a deployed model—its ability to respond to prediction requests quickly and reliably—has severely declined since the model was deployed. | | Deployed model accuracy decreased | Indicates that model [accuracy](deploy-accuracy) for a deployed model has severely declined since the model was deployed. |
user-notif-center
--- title: Project control center description: Use the project control center to quickly access recently used projects and the Manage Projects inventory. --- # Manage projects {: #manage-projects } Each DataRobot project includes a dataset, which is the source used for training, and any models built from that dataset. Use the **Projects** dropdown to view information about the current project, to quickly switch between recently accessed projects, and to view the [Manage Projects](#manage-projects-control-center) page, which provides a complete listing of projects and tools to work with them. ## Projects dropdown {: #projects-dropdown } When you click the folder icon, DataRobot displays a dropdown of the 10 most recently accessed projects. ![](images/manage-projects-interface.png) Listed projects are either active or inactive. An <em>active</em> project is either the current project or any project that has models in progress. <em>Inactive</em> projects have no workers assigned to them (and you cannot change the number of workers for inactive models from this interface). No project status is reported. DataRobot displays up to nine inactive projects (based on most recent activity) in the **Projects** dropdown; to see a complete list of inactive projects, click the **Manage Projects** link. Note that projects that failed to complete are not included in the **Manage Projects** dropdown but are included in the full inventory. The **Projects** dropdown provides the following information, as well as [worker usage](#control-worker-usage-from-the-projects-dropdown) information: | &nbsp; | Component | Description | |---| ----------|-----------------| | ![](images/icon-1.png) | [Create New Project](#create-a-new-project) | Opens the data ingest page, the first step in building a DataRobot project. | | ![](images/icon-2.png) | [Manage Projects control center](#manage-projects-control-center) | Opens the projects inventory page, which lists all projects created by or shared with the logged in user. By default, projects are listed by creation date, but click a column header to change the display. From this page you can rename, share, copy, and tag projects, as well as unlock a project's holdout. | | ![](images/icon-3.png) | Current Project | Displays details for the current project. This is the project with content displayed on the **Data** page and **Leaderboard**, as applicable. The dropdown displays [summary information](#project-summaries) about the current project. | | ![](images/icon-4.png) | [Edit, share, duplicate, and delete](#project-actions-menu) | Provides input for editing the project name in place, sharing the project with others in your organization, duplicating a project, or deleting a project. | ![](images/icon-5.png) | Recent Projects | Lists the last nine most recently visited projects. Clicking a project makes it the current project. | ## Project summaries {: #project-summaries } The project summary report is useful for providing an at-a-glance picture of the current project, including: * General and dataset information (the number of features, data points, and models built). * Project settings. * Model statistics. * User and permissions settings. Use the **Show more** or **Show less** arrows to control the display. Prior to building models, the summary only reports general information about the project, dataset, and user. After you have built models for a project, the summary reports additional information, including project settings and statistics. 
![](images/project-summary-after.png) ### Control worker usage from the Projects dropdown {: #control-worker-usage-from-the-projects-dropdown } The project dropdown reports the number of workers, both in use and available, for the: * Current project. * Most recent projects. * Total across all projects. When there is no activity, you will see: ![](images/no-active-workers.png) As EDA2 completes, you can see status as models queue: ![](images/active-workers-queue.png) And as they start to build: ![](images/active-workers-build.png) If you were to start a project build and then switch to another project&mdash;making the destination project current and the building project "recent"&mdash;you may see something like the following. The controls allow you to increase or decrease the number of workers assigned to the project and pause model building: ![](images/active-workers-change.png) The bottom of the dropdown interface provides a worker summary: ![](images/manage-projects-worker-summary.png) The summary indicates the number of workers DataRobot is using across all active projects. The values displayed report: * The number of workers actually being used by (not just assigned to) your models in progress. * The total number of workers you are configured for, across all projects. ## Create a new project {: #create-a-new-project } There are two ways to create a new DataRobot project from **Manage Projects**. Click the DataRobot logo in the upper left corner and then the folder icon in the upper right corner to open the **Projects** dropdown: * Click the **Create New Project** link. ![](images/create-new-project.png) * Click **Manage Projects** link and click the **Create New Project** link. ![](images/create-new-project-1.png) Once the data ingest page is open, you can either drag a data file onto it or select the appropriate button to import from an [external data source](data-conn), a [URL](import-to-dr#import-a-dataset-from-a-url), [HDFS](import-to-dr#import-a-dataset-from-hdfs), or a local file. ## Manage Projects control center {: #manage-projects-control-center } The project management control center provides many new features to help identify and classify projects. This is particularly useful when you have many projects that use the same dataset (or datasets with the same name). The new page not only annotates each project with a variety of metadata (dataset name, model type, target, number of models built, and more) but also extends filtering capabilities by allowing you to filter by the newly surfaced metadata. Access the control center by clicking the **Manage Projects** link from the **Projects** dropdown: ![](images/project-center-1.png) The following table lists the functions available from the **Manage Projects** control center. | Component | Description | |-----------------------------------|-------------------| | [Batch delete or share](#batch-deletion-and-sharing) (1) | Delete or share multiple projects at once via the Menu dropdown. | | | Search (2) | Search the page for text matching the entered text string. DataRobot redisplays the page showing only those projects with metadata matching the string. | | Create New Project (3) | Open the data ingest page. From there you can drag a data file onto the page, import from an external data source, URL, HDFS, or a local file, or access the [**AI Catalog**](catalog) to start a project. 
| | Tags (4) | Search, create, or filter by [tags.](#tag-a-project) | | [Filter Projects](#filter-projects) (5) | Filter project display by job status, model type, time-aware-status, and/or owner. | | Sort (6) | Click the **Dataset** header to sort project listings alphabetically based on the header. Click **Created On** to sort by time stamp. Click again to reverse the order. By default, projects are listed by creation date. | | Page View (7) | Click the right and left arrows to page through the list of projects. | | [Actions Menu](#project-actions-menu) (8) | Take action on an individual project. | ### Batch deletion and sharing {: #batch-deletion-and-sharing } Simplify batch deletion and [sharing](#share-a-project) using the Menu dropdown options. You can: * Individually select projects by checking the box to the left of the project name. * Use the menu to select or deselect all projects. * Click in the **Project Name** box to select all, or deselect all selected, projects. Once projects are selected, use the menu dropdown to delete or [share](#share-a-project) the selected projects: ![](images/share-project5.png) Alternatively, you can use the [**Delete**](#project-actions-menu) or [**Share**](#project-actions-menu) methods in the **Actions** menu to modify projects individually. !!! warning On the managed AI Platform, deleted projects cannot be recovered. For Self-Managed AI Platform deployments, if you delete a project it can only be recovered by the system administrator. ### Filter Projects {: #filter-projects } Use the **Filter Projects** link to modify the listing so that it only shows those projects matching the selected criteria. You can apply multiple filters. ![](images/project-center-3.png) The following table describes the filter options: Filter | When selected... | When none selected ----------|------------------|-------------- JOBS STATUS | Displays running or queued. Helps to identify which projects are using worker resources. | Displays running, queued, and completed projects. MODEL TYPE | Displays only the selected model type(s). | Displays regression, binary classification, multiclass classification, and unsupervised models. [TIME-AWARE](whatis-time) | Displays only projects containing models of the selected (mutually exclusive) type.| Displays non-time-aware, time series, and out-of-time validation (OTV) models. ## Tag a project {: #tag-a-project } You can assign a tag name and color to specific projects so that you can later filter your project list. To assign a tag: 1. From the project listing, select all projects you want tagged together by checking the box to the left of the project name. 2. From the top bar, select **Tags**: ![](images/tag-project.png) 3. Enter a tag name and select a color, then click the plus sign or press Enter. 4. Mouse over the tag name and then click **Apply All**. ![](images/add-tag.png) Once assigned, you can filter the projects list by tag name. Alternatively, filter by projects with no tags. ![](images/add-tag-2.png) The new tag displays next to the project name in the project list. To remove tags, select **Tags** and select the trash can icon (![](images/icon-delete.png)): * **Remove**: Removes the tag from the selected model. * **Remove All**: Removes the tag from all selected models. * **Delete tag**: Deletes the configured tag from the project and all tagged models. To edit a tag's title or color, select the pencil icon (![](images/icon-pencil.png)). After making any changes, click **Save**. 
## Project actions menu {: #project-actions-menu } The project actions menu provides access to a variety of actions for an individual project. ![](images/project-center-4.png) From the menu you can: | Menu item | Description | |-----------|-------------| | Edit Info| Opens an editing box for the current project name, allowing you to enter a new name (up to a total of 100 characters), provide a description, and manage the associated tags. | | [Duplicate Project](#duplicate-a-project) | Duplicate the dataset of the original project into a new project. Copying a project is a faster way to work with your dataset as there is no need to re-upload the data. | | [Share Project](#share-a-project) | Invite other users, user groups, and organizations to view or collaborate on your project(s). | | Leave Project| Change your [role](roles-permissions#project-roles) on the project so that you no longer are a participant. DataRobot removes the selected project from your project center inventory. | | Delete Project | Remove the project from the project control center and make the data unavailable. On the managed AI Platform, deleted projects cannot be recovered. For Self-Managed AI Platform deployments, if you delete a project it can only be recovered by the system administrator. | !!! note Be certain to read and [understand the implications](unlocking-holdout) of unlocking holdout before answering yes to the <b>Are you sure?</b> prompt. ### Duplicate a project {: #duplicate-a-project } You can duplicate the dataset of a project into a new project as a faster method to work with your data than re-uploading it. 1. Click the **Actions** menu and select **Duplicate Project**. 2. In the resulting dialog, enter a project name and select whether to copy only the dataset or to copy the dataset, the target, _and_ [advanced settings](adv-opt/index) and custom feature lists of the original project. For time-aware projects, duplication also: * Clones the feature derivation and forecast window values. * Clones any selected calendars, KA features, and series IDs. * If cloning a segmented modeling project created from a clustering project, it clones the clustering model package. * If you used the data prep tool to address irregular time step issues, cloning uses the modified dataset (which is the dataset used for model building in the parent project.) ![](images/duplicate-1.png) 3. When complete, DataRobot opens to the target selection page so that you can begin the model building process. See also the [**AI Catalog**](catalog) for efficient ways to reuse your data. ### Share a project {: #share-a-project } You can invite other users, user groups, and organizations to view or collaborate on your project. When you share a project, DataRobot assigns the default role of User to each selected target. You can [change project access roles](roles-permissions#project-roles) to the selected targets to control how they can use that project. 1. Click the Actions menu icon and select **Share Project**. 2. In the resulting dialog, type the name of the user, group, or organization you would like to share the project with. As you type, names with similar characters are displayed for your selection. DataRobot returns names of users (1), user groups (2), and organizations (3) that contain the characters you type. ![](images/share-project2.png) 3. Select the users, user groups, and/or organizations to share the project with. If you share with multiple targets at the same time, all will have the same role. 
(After sharing the project, you can [modify the role assignments](roles-permissions#project-roles).) ![](images/share-project2-groupsorgs.png) 4. Assign a role to the selection (or leave the default) and, optionally, include a custom note with the email invitation. Then click **Share**. When successful you see the message "Shared Successfully" and the dialog shows all targets for the project. ![](images/share-project2b.png) DataRobot sends email invitations to join the project to any individual users selected; members of user groups or organizations selected can find the shared project in the **Manage Projects** page. Alternatively, you can share a project by clicking the **Share** icon (![](images/icon-share.png)) in the top menu. ![](images/share-project4.png) Once shared, you can change the role (1) or remove the user from the project (2): ![](images/share-project2c.png) See the [role and permissions page](roles-permissions#project-roles) for help in determining the best role to assign and to make sure your project role allows sharing.
manage-projects
--- title: Help resources --- # Help resources {: #help-resources } Click the question mark icon in the upper right navigation. ![](images/help-1.png) From this menu you can access: Option | Description ---------- | ----------- **Documentation** | :~~: UI Documentation | Documentation for UI-based DataRobot use. API Documentation | Documentation for code-based DataRobot use. Support | :~~: **Customer Support Portal** | Open the DataRobot Support portal to send a [question, bug, or suggestion](#send-comments-to-datarobot-support), with or without screenshots. Report a Bug | Report a product bug to the DataRobot support team. Optionally, use your screen capture software to highlight the area of focus and include a PNG, JPEG, or JPG image that provides additional information. Note that you may want to block areas of the screen that you do not want sent as part of the communication (sensitive data or user names, for example). Ask a Question | Send a question to the DataRobot support team. **Community** | :~~: Ask the Community | Go to the <a target="_blank" href="https://community.datarobot.com/">Community</a> where you can talk to other DataRobot users. **Customer Delight** | :~~: Send Us Feedback | Send DataRobot product feedback or communicate your DataRobot experience. Contact sales | Contact the sales team at DataRobot about pricing information or a general inquiry. !!! tip For Self-Managed AI Platform deployments you can only report a bug through the application if your SMTP server was configured to allow it during DataRobot installation. If you do not have SMTP configured, see how to [report a bug without SMTP](#report-a-bug-without-smtp). ## Send comments to DataRobot Support {: #send-comments-to-datarobot-support } You can include a screenshot with the **Ask a Question**, **Report a Bug**, or **Suggest a Feature** links. To communicate with DataRobot Support: 1. From the question mark (![](images/icon-question.png)) dropdown, click on the desired link to open the corresponding modal. ![](images/feedback-modal.png) 2. Complete the fields&mdash;a category and sub-category and a description. If you selected "Other" as a sub-category, you are also prompted for a classification for the issue. 3. Optionally, use your screen capture software to highlight the area of focus and include a PNG, JPEG, or JPG image that provides additional information. Note that you may want to block areas of the screen that you do not want sent as part of the communication (sensitive data or user names, for example). 4. Click **Send** to send your message to DataRobot Support. Click **Close** to cancel. ## Report a bug without SMTP {: #report-a-bug-without-smtp } If your deployment does not have SMTP configured, you cannot report a bug or comment through the application. Instead, you can click the provided link to open your email and report, with screenshots, via email. ![](images/send-email.png) To report without SMTP: 1. Click the link to open an email addressed to support@datarobot.com. Any comments you entered are transferred to the email. 2. If you included a screenshot, copy or drag the image to your email. 3. Enter any additional comments or images and send the email. ## Self-Managed AI Platform admins {: #self-managed-ai-platform-admins } The following is available only on the Self-Managed AI Platform. 
### DataRobot license {: #datarobot-license } At times you may see messages in the banner across the top of the application window, indicating the DataRobot license is close to expiring or even that the license has expired. If you see these messages, we recommend that you make sure your system administrator is aware of the license status. If the license expiration warning message appears (as shown below), you can click the link to stop it from appearing for 4 days: ![](images/license-banner-expiring.png) If the license expires, you can still access DataRobot and get predictions from existing models, but you cannot build new models or use features like compliance documentation, feature effects, etc. If model building ([EDA2](eda-explained)) is running for a project when the license expires, the current round finishes. The banner shows a message when a new license is applied successfully; at that point, you can again create new models and use all features. Any previously stopped projects (EDA1/EDA2) run to completion.
getting-help
--- title: Workflow overview description: Overview of typical admin workflow for creating user accounts, defining groups, assigning access roles, monitoring and managing worker allocation, and more. --- # DataRobot overview and administrator workflow {: #datarobot-overview-and-administrator-workflow } DataRobot sets up the basic deployment configuration, which defines the available system-wide features and resource allocations. The following describes the typical admin workflow for setting up users on DataRobot: === "SaaS" 1. Log in using the default administrator account and create an Admin account. 2. [Create user accounts](manage-users), starting with your own. 3. Set [user permissions](manage-users#set-admin-permissions-for-users) and [user roles](manage-users#rbac-for-users). 4. *Optional*. [Create groups](manage-groups) and add [multiple users](manage-groups#add-users-to-a-group) or add users [individually](manage-users#manage-groups-and-organization-membership). 5. *Optional*. [Manage personal worker allocation](#change-personal-worker-allocation), which determines the maximum number of workers users can allocate to their project. See also: * [Common administrator tasks](admin-guide/index) to troubleshoot or prevent issues. * [Monitoring and managing users](main-uam-overview) === "Self-Managed" 1. Log in using the default administrator account and create an Admin account. 2. [Create user accounts](manage-users), starting with your own. 3. Set [user permissions](manage-users#set-admin-permissions-for-users) and [user roles](manage-users#rbac-for-users). 4. *Optional*. [Create groups](manage-groups) and add [multiple users](manage-groups#add-users-to-a-group) or add users [individually](manage-users#manage-groups-and-organization-membership). 5. *Optional*. To control and allocate resources, [create organizations](manage-orgs) and add [multiple users](manage-orgs#orgs-addusers) or add users [individually](manage-users#manage-groups-and-organization-membership). 6. *Optional*. Configure SAML SSO. 7. *Optional*. [Manage personal worker allocation](#change-personal-worker-allocation), which determines the maximum number of workers users can allocate to their project. See also: * [Common administrator tasks](admin-guide/index) to troubleshoot or prevent issues. * [Monitoring and managing the cluster and users](manage-cluster/index) * [Monitoring and managing users](main-uam-overview) ## Important concepts {: #important-concepts } The following sections explain concepts that are an important part of the Self-Managed AI Platform setup and configuration. Later sections assume you understand these elements: * [Workers and worker allocation](#what-are-workers) * [About groups](#what-are-groups) * [About organizations](#what-are-organizations) A DataRobot project is the combination of the dataset used for model training and the models built from that dataset. DataRobot builds a project through several distinct phases. During the first phase, DataRobot imports the specified dataset, reads the raw data, and performs EDA1 (Exploratory Data Analysis) to understand the data. The next phase, EDA2, begins when the user selects a target feature and starts the model building process. Once EDA2 completes, DataRobot ranks the resulting models by score on the model Leaderboard. ## What are workers? {: #what-are-workers } *Workers* are the processing power behind the DataRobot platform, used for creating projects, training models, and making predictions. 
They represent the portion of processing power allocated to a task. DataRobot uses different types of workers for different phases of the project workflow, including DSS workers (Dataset Service workers), EDA workers, secure modeling workers, and quick workers. All workers, with the exception of modeling workers, are based on system and license settings. They are available to the installation's users on a first come, first served basis. Refer to the *Installation and Configuration* guide (provided with your release) for information about those worker types. This guide explains how to monitor and manage *modeling workers*. During EDA2, modeling workers train data on the target feature and build models. Modeling worker allocation is key to building models quickly; more modeling workers means faster build time. Because model development is time and resource intensive, the more models that are training at one time, the greater the chances for resource contention. ### Modeling worker allocation {: #modeling-worker-allocation } The admin and users each have some ability to modify modeling worker allocation. The admin [sets a total allocation](#change-personal-worker-allocation) and the user has the ability to set per-project allocations, up to their assigned limit. Note that modeling worker allocation is independent of hardware resources in the cluster. Each user is allocated four workers by default. This "personal worker allocation" means, at any one time, no more than four workers (if left to the default) are processing a user's tasks. This task count applies across all projects in the cluster&mdash;multiple browser windows building models are all a part of the personal worker count, more windows does **not** provide more workers. The number of workers allocated when a project is created is the "project worker allocation." While this allocation stays with the project if it is shared, any user participating on the project is still restricted to their personal worker allocation. For example, a project owner may have 12 personal workers allocated to a project and share it with a user who has a four-worker personal allocation. The person invited to the project is still limited by their personal allocation, even if the project reflects a higher worker count. ### Change worker allocation {: #change-worker-allocation } The workers used during EDA1 (EDA workers) are set and controlled by the system; neither the admin or user can increase allocation of these workers. Increasing the displayed workers count during EDA does not affect how quickly data is analyzed or processed during this phase. During model development (EDA2), a user can increase the workers count as long as there are workers available to that user (based on personal worker allocation). Adjusting the worker toggle in the worker usage panel causes more workers to participate in processing. Users can read full details about [the Worker Queue](worker-queue) for a better understanding of how it works. ![](images/admin-worker-toggle1.png) !!! note If the user's personal worker allocation is changed (increased or decreased), existing projects are not affected. ### Monitor and manage worker counts (Self-Managed only) {: #monitor-and-manage-worker-counts-self-managed-only } For admins, the [**Resource Monitor**](resource-monitor) provides a dashboard showing modeling worker use across the cluster. 
This helps to monitor worker usage, ensure that workers are being shared as needed between DataRobot users, and determine when to make changes to a user's worker counts. To prevent resource contention and restrict worker access, the admin can add users to [organizations](manage-orgs). ### Change personal worker allocation {: #change-personal-worker-allocation } Admins can set the maximum number of workers each user can allocate to their project. 1. Expand the profile icon located in the upper right and click **APP ADMIN > Users** from the dropdown menu. ![](images/admin-create-user-2.png) 2. Locate and select the user to open their profile page. 3. Click **Permissions** and scroll down to **Modeling workers limit**. ![](images/admin-personal-worker-1.png) 4. Enter the worker limit in the field and click **Save changes**. ![](images/admin-worker-limit.png) ## What are groups? {: #what-are-groups } You can create groups as a way to manage users, control project sharing, apply actions across user populations, and more. A user can be a member of up to ten groups, or not a member of any group. A group can be associated with a single organization, and a single organization can be the "parent organization" for multiple groups. Project owners can share their projects with a group; this makes the project accessible to all members of that group. Essentially, groups are a container of users that you can take bulk actions on. For example, if you share a project with a group, all users in the group can see the project (and work with it depending on the permission granted when sharing). Or, you can apply bulk permissions so that all users in a group have a permission set. See the section on [creating groups](manage-groups) for information on setting up groups in your installation. ## What are organizations? {: #what-are-organizations } To ensure workers are available as needed and prevent resource contention, an admin can add users to organizations. Organizations provide a way to help administrators manage DataRobot users and [groups](#what-are-groups?). === "SaaS" * You can use organizations to control [worker allocation](#modeling-worker-allocation) for groups of users to restrict the total number of workers available to all members of the organization. * Project owners can share their projects with an organization; this makes the project accessible to all members of that organization. * All users are required to belong to one and only one organization. Most commonly, organizations are used to set a cap on the total number of shared modeling workers the members of an organization can use in parallel. For example, you can create an organization of five users that has an allocation of ten workers. For this organization, the 5 users can collectively use up to 10 workers at one time, regardless of their personal worker allocations. *This is not a cascading allocation; each user in the organization does not receive that allocation, they all share it.* If a user with a personal allocation of 4 workers is defined in an organization with a worker allocation of 10 workers, the user can use no more than his personal allocation, i.e., 4 workers in this example. Organization membership is not a requirement for DataRobot users and users can be defined in only one organization at a time. The *system admin* manages the type of organization described above. 
Additionally there is a *Organization User Admin*, which is a user that has access to manage all users within their own organization and create groups for the organization. This type of admin does not have access to view other organizations or view users/groups outside of their organization. === "Self-Managed" * You can use organizations to control [worker allocation](#modeling-worker-allocation) for groups of users to restrict the total number of workers available to all members of the organization. * Project owners can share their projects with an organization; this makes the project accessible to all members of that organization. * You can create organizations configured with a "restricted sharing" setting. When set, organization members cannot share projects with users and groups outside of their organizations. There is one exception: members of a "restricted sharing" organization can share projects with users outside of the organization who have the admin setting "Enable support of restricted organizations." This is useful, for example, when a member needs to share a project with a customer support user who is outside of the organization. Most commonly, organizations are used to set a cap on the total number of shared modeling workers the members of an organization can use in parallel. For example, you can create an organization of five users that has an allocation of ten workers. For this organization, the 5 users can collectively use up to 10 workers at one time, regardless of their personal worker allocations. *This is not a cascading allocation; each user in the organization does not receive that allocation, they all share it.* If a user with a personal allocation of 4 workers is defined in an organization with a worker allocation of 10 workers, the user can use no more than his personal allocation, i.e., 4 workers in this example. Organization membership is not a requirement for DataRobot users and users can be defined in only one organization at a time. The *system admin* manages the type of organization described above. Additionally there is a *Organization User Admin*, which is a user that has access to manage all users within their own organization and create groups for the organization. This type of admin does not have access to view other organizations or view users/groups outside of their organization. See the section on [creating organizations](manage-orgs) for information on setting up organizations in your installation.
admin-overview
--- title: Manage feature settings description: With the proper access and permissions, you can view and manage feature settings and permissions for your account and for other users. --- # Manage feature settings {: #manage-feature-settings } With the proper access and permissions, you can view and manage feature settings for your account and for other users. !!! info "Availability information" The ability to manage feature settings is off for most users by default; however, **Org Admin**s may have access to a limited selection of feature settings as defined by your DataRobot configuration. Contact your DataRobot representative or administrator for information on enabling features. **Required permission**: Can manage users ## Manage your feature settings {: #manage-your-feature-settings } To manage feature settings for your account, click your profile avatar (or the default avatar ![](images/icon-gen-settings.png)) in the upper-right corner of DataRobot and click **Settings**. On the **Settings** page, you can enable or disable features for a user and see which features are already enabled. Some features may not be available for pre-existing projects, in which case you could rebuild the project, or some models, to apply the new feature. ## Manage feature settings for users {: manage-feature-settings-for-users} To manage feature settings for other users, click your profile avatar (or the default avatar ![](images/icon-gen-settings.png)) in the upper-right corner of DataRobot and then, under **APP ADMIN**, click **Users**. On the **All Users** page, you can click an individual user in the list and click the **Permissions** tab to enable or disable features for their account. Some features may not be available for pre-existing projects, in which case the user could rebuild the project, or some models, to apply the new feature. If they are unsure, suggest that they recreate the project. ## Settings page sections {: #settings-page-sections} The **Settings** (or **Permissions**) page is divided into product sections (i.e., **MLDev**, **MLOps**) and then maturity levels for the features within each product (i.e. **GA**, **Public Preview**). To navigate to a product, click the associated tab. The **Platform** section also includes an **Admin Controls** tab (for system and organization admins only). On the **Settings** page, you can do the following: * Search for a feature or permission by clicking the search box (or press **Ctrl**/**Cmd** + **F**), and typing the feature name or label. * Display a tooltip describing a feature by hovering over the feature name; contact DataRobot Support if you need more detail. * Enable a feature or permission by turning on the associated toggle. !!! warning When you search for features or permissions, the list is filtered. Be sure to clear the search bar to view all of the features and permissions again. ## Premium and enabled features {: #premium-and-enabled-features } === "SaaS" On the **Settings** page, you can see which **Premium** products and **Enabled** features are included in your DataRobot license. Contact DataRobot support if you need more information on premium and enabled features. === "Self-Managed" The **Premium** products available in a deployed cluster depend on your organization's DataRobot contract. When premium products are defined in the cluster configuration file (`config.yaml`), you can enable them for any user in that cluster. If you need to change the available premium products, contact customer support. 
The **Enabled** features in a deployed cluster are defined in the cluster configuration file (`config.yaml`). These features are available to all users, are not configurable via the UI, and cannot be set for individual users. If you need to change the enabled features, contact Customer Support. ## Self-Managed AI Platform admins {: #self-managed-ai-platform-admins } The following is available only on the Self-Managed AI Platform. ### Cluster-wide features {: #cluster-wide-features } On the **Settings** page, you can see which **Premium** products and **Enabled** features are defined in your cluster configuration. !!! note For more information on changing the cluster configuration in the `config.yaml` file, see the *DataRobot Installation and Configuration Guide*. ### Grant permissions to users {: #grant-permissions-to-users } If you are a system administrator, you can grant the **Can manage users** permission to another user, allowing them to manage their feature settings *and* those of other users. !!! note Consider and control how you provide these permissions to non-administrator users. One way to do this is to add permissions on an "as-needed" basis and remove those permissions after the user completes the related tasks.
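As a rough, purely illustrative sketch (not a DataRobot API) of how these settings interact: cluster-wide **Enabled** features apply to everyone and cannot be toggled per user, while **Premium** products must be defined in `config.yaml` before they can be toggled for an individual user. The feature labels and resolution logic below are assumptions made for illustration only.

```python
def feature_is_on(feature: str,
                  cluster_enabled: set[str],
                  premium_in_config: set[str],
                  user_premium_toggles: set[str]) -> bool:
    """Illustrative resolution of the rules described above."""
    if feature in cluster_enabled:
        # Enabled features are available to all users and are not configurable in the UI.
        return True
    if feature in premium_in_config:
        # Premium products can be enabled per user once defined in config.yaml.
        return feature in user_premium_toggles
    return False

# Hypothetical feature labels, for illustration only.
print(feature_is_on("SOME_PREMIUM_PRODUCT",
                    cluster_enabled={"SOME_ENABLED_FEATURE"},
                    premium_in_config={"SOME_PREMIUM_PRODUCT"},
                    user_premium_toggles=set()))  # -> False until toggled for the user
```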
user-settings
--- title: Manage groups description: Learn about creating and deleting groups, adding users to groups, setting group permissions, and role-based access control (RBAC) for groups. --- # Manage groups {: #manage-groups } === "SaaS" !!! info "Availability information" **Required permission:** Org Admin === "Self-Managed" !!! info "Availability information" **Required permission:** Can manage users ## Create a group {: #create-a-group } Follow these steps to create a group: === "Saas" 1. Expand the profile icon located in the upper right and click **APP ADMIN > Groups**. The displayed page lists all existing groups. From here you create, search, delete, and perform actions on one or more groups. 2. Click **Create a group**. 3. On the resulting page, enter a name and, optionally, description for the group. 4. When the information is complete, click **Create a group**. The new group appears in the **Manage Groups** listing, including parent organization (enterprise only), number of members, and group description. === "Self-Managed" 1. Expand the profile icon located in the upper right and click **APP ADMIN > Groups**. The displayed page lists all existing groups. From here you create, search, delete, and perform actions on one or more groups. 2. Click **Create a group**. 3. On the resulting page, enter a name and, optionally, description for the group. If you want to associate this group with an organization, select that organization. This will be the "parent organization" for the group. !!! note If a group is assigned a parent organization, the assignment cannot be removed. ![](images/admin-create-group.png) 4. When the information is complete, click **Create a group**. The new group appears in the **Manage Groups** listing, including parent organization (enterprise only), number of members, and group description. ## Add users to a group {: #add-users-to-a-group } === "SaaS" Once a group is created, you add and remove one or more users from the **Manage Groups** page. (After you configure a user's profile, you can also add or remove individuals from groups through their profile [**Membership**](manage-users#manage-groups-and-organization-membership) page.) === "Self-Managed" Once a group is created, you add and remove one or more users from the **Manage Groups** page. (After you configure a user's profile, you can also add or remove individuals from groups through their profile [**Membership**](manage-users#manage-groups-and-organization-membership) page.) In LDAP-authenticated deployments, users can be added to LDAP groups. When a user logs into DataRobot, if there is a DataRobot user group whose name matches the LDAP group they are a part of, that user is automatically added to the DataRobot user group. !!! note If the group has a parent organization, you are [restricted](manage-orgs#restrict-orgs) to adding only users already associated with that organization. To add users through **Manage Groups**: 1. Open the **Manage Groups > All Groups** page. From the list of configured groups, click to select the group to add the user to. 2. When the profile opens, select **Members** to see all members in this group. ![](images/admin-groups-members.png) 3. Click **Add a user**. 4. From the displayed page, type any part of the name of a user to add. As you type, DataRobot shows usernames containing those characters: ![](images/admin-group-addusertogroup.png) 5. Select one or more users to add to the group. When done, click **Add users**. 
The **Members** list for the group updates to show all members and information for each, including first name, last name, organization (if any), and status ([active/inactive](manage-users#deactivate-user-accounts) in DataRobot). ## Delete groups {: #delete-groups } You can delete one or more groups at a time. This removes the group profile only; members of that group continue to have access to DataRobot. Any projects shared with the deleted group are no longer shared with any users for that group. (If needed, re-share projects with individual users.) 1. View the **Groups** page and do one of the following: * If you want to delete a single group&mdash;locate the group, select the **Actions** menu, and click **Delete**. * If you want to delete multiple groups&mdash;select the check boxes for those groups (or select the check box next to the **Groups** heading), click **Menu**, and then select **Delete selected**. ![](images/admin-groups-deletegroupsmenu.png) 2. In the displayed **Confirm Delete** window, click **Delete group(s)**. Deleted groups are removed from the list of groups. ## Configure group permissions {: #configure-group-permissions } You can configure permissions and apply them to all users in an existing group in addition to managing each user individually. This allows for easier tracking and management of permissions for larger groups. !!! note Note that a user's effective permissions are the union of those granted to their organization, any user groups they are a member of, and permissions granted to the user directly. A user who is added to a group obtains all permissions of that group. A user who is removed from a group does not maintain any permissions that were granted to them by that group. However, a user may still have the permissions granted by that group if the organization they belong to also grants those permissions. 1. To configure permissions for a group, select it from the **Groups** page and navigate to the **Permissions** tab. ![](images/group-permission-1.png) 2. The **Permissions** tab displays the premium products, optional products, public preview features, and admin settings available for configuration for your group. Select the desired permissions to grant your group from the list. 3. When you have finished selecting the desired permissions for your group, click **Save Changes**. Once saved, all existing users in your group and those added to it in the future are granted the configured permissions. A user can view their permissions on the **Settings** page. Hover over a permission to see what group(s) or organization(s) enabled it. ![](images/group-permission-3.png) ## RBAC for groups {: #rbac-for-groups } Role-based access (RBAC) controls access to the DataRobot application by assigning users roles with designated privileges. The assigned role controls both what the user sees when using the application and which objects they have access to. RBAC is additive, so a user's permissions will be the sum of all permissions set at the user and group level. Assign a default role for group members in a group's **Group Permissions** page: ![](images/rbac-group-1.png) Review the [role and access definitions](rbac-ref) to understand the permissions enabled for each role. !!! note RBAC overrides [sharing-based role permissions](roles-permissions). For example, consider a group member is assigned the Viewer role via RBAC, which only has *Read* access to objects. 
If this group member has a project shared with them that grants Owner permissions (which offers *Read and Write* access), the Viewer role takes priority and denies the user *Write* access. ## Manage execution environment limits {: #manage-execution-environment-limits } {% include 'includes/ex-env-limits.md' %} 1. Click your profile avatar (or the default avatar ![](images/icon-gen-settings.png)) in the upper-right corner of DataRobot, and then, under **APP ADMIN**, click **Groups**. 2. To configure permissions for a group, select it from the **Groups** page and then click **Permissions**. ![](images/group-permission-1.png) !!! important If you access the **Members** tab to set these permissions, you are setting the permissions on an individual user level, not the group level. 3. On the **Permissions** tab, click **Platform**, and then click **Admin Controls**. ![](images/platform-admin-controls.png) 4. Under **Admin Controls**, set either or both of the following settings: * **Execution Environments limit**: The maximum number of custom model execution environments users in this group can add. This limit setting can't exceed 999. * **Execution Environments versions limit**: The maximum number of versions users in this group can add to each custom model execution environment. This limit setting can't exceed 999. ![](images/execution-env-controls.png) 5. Click **Save changes**.
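The union rule in the note under **Configure group permissions** is easy to express directly. The following is a minimal, illustrative Python sketch (not a DataRobot API); the permission labels are hypothetical.

```python
def effective_permissions(org_permissions: set[str],
                          group_permissions: list[set[str]],
                          user_permissions: set[str]) -> set[str]:
    """A user's effective permissions are the union of those granted to their
    organization, to any groups they belong to, and to the user directly."""
    effective = set(org_permissions) | set(user_permissions)
    for grp in group_permissions:
        effective |= grp
    return effective

# Removing a user from a group simply drops that group's set from the union;
# a permission survives only if the organization or another source still grants it.
print(effective_permissions(org_permissions={"CAN_VIEW"},
                            group_permissions=[{"CAN_MANAGE_USERS"}],
                            user_permissions=set()))
```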
manage-groups
--- title: Manage organizations description: How to create an organization if you wish to restrict user access to workers. You can prevent members from sharing the organization's projects to outside users or groups. --- # Manage organizations {: #manage-organizations } !!! info "Required permission" “Can manage users” Set an organization for users if you wish to restrict access to workers. See the [overview](admin-overview#what-are-organizations) for a description of the organization functionality in DataRobot. Some notes on using organizations: * Organization membership is not a requirement for DataRobot users. * Users can belong to only one organization at a time; if you add a user to a new organization, they are automatically removed from the existing organization. * Assigning a parent organization to a group limits <em>group</em> membership to members of the parent organization (i.e., you are restricted to adding only users who are members of that parent organization). * You cannot change a group's parent organization once assigned. ## Understand restricted sharing {: #understand-restricted-sharing } You can use a "restricted sharing" setting to prevent organization members from sharing projects *from* the organization to users or groups outside of it. This setting does not prevent users outside the organization from sharing projects *to* the organization. Users outside of a "restricted sharing" organization can share projects with any members and groups. After creating an organization, you can change its restricted sharing policy from the organization profile. If you set **Restrict sharing to organization members** after the organization was created, any project sharing that already exists is not affected; making this change only prevents the ability to start new project sharing with users or groups outside of the organization. ## Create organizations {: #create-organizations } Follow these steps to create an organization: 1. Expand the profile icon located in the upper right and click **APP ADMIN > Organizations**. The displayed page lists all existing organizations. From here you create and manage organizations. 2. Click **Create new organization**. 3. In the displayed dialog, enter a name and the [number of workers](admin-overview#what-workers) to assign to the organization. 4. If you want to [restrict users](#restrict-orgs) in this organization to sharing projects only with other users and groups within this organization, select **Restrict sharing to organization members**. ![](images/admin-add-org.png) 5. Click **Create Organization**. The **All Organizations** profile lists the new organization. 6. Click an organization to see its profile. In the image below, for example, the new ACME3 organization is allocated a maximum of 14 workers and members of the organization can share projects with users and groups outside of the ACME3 organization. ![](images/admin-orgprofile.png) !!! note The worker resource allocation ("Workers limit") is independent of your cluster configuration. To avoid worker resource issues, don’t oversubscribe the physical capacity of the cluster. ## Add users to organizations {: #add-users-to-organizations } You can add one or more users to (or remove users from) any defined organization using the **Organizations** pages. You can also add or remove individual users from the [**User Profile > Membership**](manage-users#manage-groups-and-organization-membership) page. 1.
Open the **Organizations** page and select the organization to which you want to add a user. The profile page for that organization opens. 2. Click **Members** to list all members of that organization. ![](images/admin-orgs-members.png) 3. Click **Add a user** and from the displayed dialog, start typing the name of a user to add. As you type, DataRobot shows usernames containing those characters. 4. Select the intended user and repeat for each user you want to add. When done, click **Add users**. The displayed list shows all members of the organization and information for each, including first name, last name, and status ([active/inactive](manage-users#deactivate-user-accounts) in DataRobot). ## Delete organizations {: #delete-organizations } You can only delete an organization once all the members have been removed. Any projects previously shared with the organization (and therefore, with each user in the organization) are no longer shared with those former members. If needed, re-share projects with individual users. To delete an organization: 1. Expand the profile icon located in the upper right and click **APP ADMIN > Manage Organizations**. In the displayed list of organizations, check the **Total Members** column for the organization you want to delete to determine whether there are members assigned to it. If the organization has members, you must remove them before you can delete the organization. 2. If the organization has members, do the following: * Select that organization to open its profile and click **Members**. * From **Menu** click **Select All**, then click **Delete selected**. ![](images/admin-deleteselected-menu.png) * In the displayed **Confirm Removal** window, click **Remove members**. Removed members are removed from the organization's member list only; their user accounts remain intact. 3. When the organization has no members, view the **Organizations** page and do one of the following: * If you want to delete a single organization&mdash;locate the organization, select the **Actions** menu, and click **Delete organization**. (You could also delete the organization from the organization profile, using **Actions** > **Delete**.) * If you want to delete multiple organizations&mdash;select the check boxes for those organizations (or select the check box next to the **Organization** heading), click **Menu**, and then select **Delete selected**. 4. In the displayed **Confirm Delete** window, click **Delete organization** (or **Delete organizations** if shown). Deleted organizations are removed from the list of organizations. ## Configure organization permissions {: #configure-organization-permissions } You can configure permissions and apply them to all users in an existing organization in addition to managing each user individually. This allows for easier tracking and management of permissions for larger organizations. !!! note Note that a user's effective permissions are the union of those granted to their organization, any user groups they are a member of, and permissions granted to the user directly. A user who is added to an organization obtains all permissions of that organization. A user who is removed from an organization does not maintain any permissions that were granted to them by the organization. However, a user may still have the permissions granted by that organization if a group they belong to also grants those permissions. 1. To configure permissions for an organization, select it from the **Organizations** page and navigate to the **Permissions** tab. ![](images/org-permission-1.png) 2.
The **Permissions** tab displays the premium products, optional products, public preview features, and admin settings available for configuration for your organization. Select the desired permissions to grant your organization from the list. 3. When you have finished selecting the desired permissions for your organization, click **Save Changes**. Once saved, all existing users in your organization and those added to it in the future are granted the configured permissions. A user can view their permissions on the **Settings** page. Hover over a permission to see what group(s) or organization(s) enabled it. ![](images/group-permission-3.png) ## Manage custom model resource allocation {: #manage-custom-model-resource-allocation } For DataRobot MLOps users, you can determine the [resources allocated for each custom inference model](custom-inf-model#manage-custom) within an organization. Configuring these resources facilitates smooth deployment and minimizes potential environment errors in production. To manage these resources for an organization, navigate to the organization's **Profile** page and find **Custom model resource allocation settings**. Configure the fields: ![](images/resource-4.png) | Resource | Description | |----------------|---------------------------| | Desired memory | Determines the minimum reserved memory for the Docker container used by the custom inference model.| | Maximum memory | Determines the maximum amount of memory that may be allocated for a custom inference model. Note that if a model allocates more than the configured maximum memory value, it is evicted by the system. If this occurs during testing, the test is marked as a failure. If this occurs when the model is deployed, the model is automatically launched again by kubernetes. | | Maximum replicas | Sets the maximum number of replicas executed in parallel to balance workloads when a custom model is running. | When you have fully configured the resource settings, click **Save**. ## Manage execution environment limits {: #manage-execution-environment-limits } {% include 'includes/ex-env-limits.md' %} 1. Click your profile avatar (or the default avatar ![](images/icon-gen-settings.png)) in the upper-right corner of DataRobot, and then, under **APP ADMIN**, click **Organizations**. 2. To configure permissions for an organization, select it from the **Organizations** page and then click **Permissions**. ![](images/org-permission-1.png) !!! important If you access the **Members** tab to set these permissions, you are setting the permissions on an individual user level, not the organization level. 3. On the **Permissions** tab, click **Platform**, and then click **Admin Controls**. ![](images/platform-admin-controls.png) 4. Under **Admin Controls**, set either or both of the following settings: * **Execution Environments limit**: The maximum number of custom model execution environments a user in this organization can add. This limit setting can't exceed 999. * **Execution Environments versions limit**: The maximum number of versions a user in this organization can add to each custom model execution environment. This limit setting can't exceed 999. ![](images/execution-env-controls.png) 5. Click **Save changes**.
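As a rough mental model of the organization-level settings described above, the sketch below (illustrative Python, not a DataRobot API) validates a hypothetical settings object. The 999 cap on the execution environment limits is stated above; the memory check (desired not exceeding maximum) is an assumption added for illustration.

```python
from dataclasses import dataclass

@dataclass
class OrgCustomModelSettings:
    desired_memory_mb: int        # minimum reserved memory for the model's container
    maximum_memory_mb: int        # model is evicted if it allocates more than this
    maximum_replicas: int         # parallel replicas used to balance workloads
    exec_env_limit: int           # max execution environments per user (<= 999)
    exec_env_versions_limit: int  # max versions per execution environment (<= 999)

def validate(settings: OrgCustomModelSettings) -> list[str]:
    problems = []
    if settings.exec_env_limit > 999 or settings.exec_env_versions_limit > 999:
        problems.append("Execution environment limits cannot exceed 999.")
    # Assumption for illustration: desired memory should not exceed the maximum.
    if settings.desired_memory_mb > settings.maximum_memory_mb:
        problems.append("Desired memory exceeds maximum memory.")
    if settings.maximum_replicas < 1:
        problems.append("At least one replica is required.")
    return problems

print(validate(OrgCustomModelSettings(2048, 4096, 2, 10, 10)))  # -> []
```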
manage-orgs
--- title: Administrator's guide description: Help for system administrators in managing DataRobot Self-Managed AI Platform deployments. --- # Administrator's guide {: #administrators-guide } === "SaaS" The _DataRobot Administrator's Guide_ is intended to help administrators manage their DataRobot application. Before starting to set up your users and monitoring tools, you may want to review the [overview](admin-overview) for a description of a typical admin workflow as well as important concepts for managing users in DataRobot. You will work with some or all of the actions described in the following sections: Topic | Describes... ------|------------- **Admin** | :~~: [Workflow overview](admin-overview) | Preview the workflow and learn about important admin concepts. [Manage user accounts](manage-users) | Learn about setting permissions, RBAC, passwords, and user activity. [Manage groups](manage-groups) | Create, assign, and manage group memberships. [Monitor activity](main-uam-overview) | Monitor user and system activity. [Approval policies for deployments](deploy-approval) | Configure approval policies for governance and control. [Feature settings](user-settings) | View and change feature settings. [Notification service](webhooks/index) | Integrate centralized notifications for change and incident management. **Reference** | :~~: [SSO](sso-ref) | Configure DataRobot and an external Identity Provider (IdP) for user authentication via single sign-on (SSO). [Role-based access control (RBAC)](rbac-ref) | Assign roles with designated privileges. [User Activity Monitor (UAM)](uam-ref) | View Admin, App, and Prediction usage reports, as well as system information. === "Self-Managed" The *DataRobot Administrator's Guide* helps system administrators manage their DataRobot Self-Managed AI Platform deployments. Before setting up your users and monitoring tools, you may want to review the [overview](admin-overview) for a description of a typical admin workflow, as well as important concepts for managing users in DataRobot: Topic | Describes... ------|-------------- [Workflow overview](admin-overview) | Preview the workflow and learn about important admin concepts. [Manage user accounts](manage-users) | Learn about setting permissions, role-based access controls (RBAC), passwords, and user activity. [Manage groups](manage-groups) | Create, assign, and manage group memberships. [Manage organizations](manage-orgs) | Create and assign resources to organizations. [Monitor activity](main-uam-overview) | Monitor user and system activity. After setting up your users, continue to [Managing the cluster](manage-cluster/index). The following describes common ongoing admin activities: Topic | Describes... ------|------------- **Admin** | :~~: [Manage the cluster](manage-cluster/index) | Manage the user agreement, the application, deleted projects, and JDBC drivers; monitor system resources and user activity. [Approval policies for deployments](deploy-approval) | Configure approval policies for governance and control. [Feature settings](user-settings) | View and change feature settings. [Notification service](webhooks/index) | Integrate centralized notifications for change and incident management. [Worker resource allocation](manage-users#additional-permissions-options) | Allocate worker resources, whether for individual users or across groups of users via organizations. [Activity monitoring](main-uam-overview) | Monitor user and system activity with the User Activity Monitor.
JDBC drivers | [Create and manage JDBC drivers](manage-drivers), and for locked-down systems, [restrict access](manage-drivers#restrict-access-to-jdbc-data-stores) to JDBC data stores. [Worker allocation monitoring](resource-monitor) | Monitor worker allocation with the Resource Monitor. [User management](manage-users#deactivate-user-accounts) | Deactivate or reactivate users. [Project management](delete-restore) | Delete or restore projects. [Licenses](manage-access) | Apply a new license. **Reference** | :~~: [SSO](sso-ref) | Configure DataRobot and an external Identity Provider (IdP) for user authentication via single sign-on (SSO). [Role-based access control (RBAC)](rbac-ref) | Assign roles with designated privileges. [User Activity Monitor (UAM)](uam-ref) | View Admin, App, and Prediction usage reports, as well as system information. **Other related documentation** * For detailed explanations of all DataRobot features, see the DataRobot in-app documentation included with the installation (`domain-name/docs/`). * For information on installing and configuring your DataRobot cluster deployment (including `config.yaml` deployment settings), see the <em>DataRobot Installation and Configuration Guide</em> provided for your release. !!! note `config.yaml` is the master configuration file for the DataRobot cluster. To modify the configuration, or better understand how your cluster is configured, contact DataRobot Customer Support.
index
--- title: Manage user accounts description: How to create LDAP or local authentication user accounts, set permissions, and manage membership of groups and organizations. --- # Manage user accounts {: #manage-user-accounts } The DataRobot deployment provides support for local authentication users. These are user accounts you create manually (through **APP ADMIN > Manage Users**). DataRobot provides restrictions for login and password settings. The login credentials for these locally authenticated users are stored as fully qualified domain names. === "SaaS" !!! info "Availability information" **Required permission:** Org Admin === "Self-Managed: LDAP" !!! info "Availability information" **Required permission:** Can manage users The DataRobot deployment provides support for three types of user accounts: | User Account Type | Description | |--------------------|--------------------------| | Internal | This is the default DataRobot administrator account, which authenticates using admin@datarobot.com. This account has full administrator access to the deployed cluster. You cannot revoke administrator privileges; the only change you can make to this account is password updates. | | Local authentication | These are user accounts you create manually (through **APP ADMIN > Manage Users**). DataRobot provides restrictions for login and password settings. The login credentials for these locally authenticated users are stored as fully qualified domain names. | | LDAP authentication configuration | These user accounts are created through an authentication integration with a defined LDAP directory service; you do not use the DataRobot UI to create these user accounts. | **LDAP accounts** When LDAP users sign into DataRobot for the first time, their user profiles are created and saved in DataRobot but their passwords are not. Usernames for these LDAP-authenticated users are simple usernames and not fully qualified domain names. Passwords cannot be changed. Note that if a user is removed from the LDAP Directory server or group, they are not able to access DataRobot. The user account, however, remains intact. !!! note [Local authentication](#create-user-accounts) is not supported when LDAP is enabled (i.e., no "mixed mode"). See the instructions below for creating local authentication accounts. ## Create user accounts {: #create-user-accounts } As an administrator, you create and add new users to your DataRobot installation. The first user account you should create is one for yourself, so that you can access DataRobot as a user in addition to using the default administrator account. Use the following steps to create your own user account, and then repeat them for each additional user. === "SaaS" 1. Expand the profile icon located in the upper right and click **APP ADMIN > Users** from the dropdown menu. ![](images/admin-create-user-2.png) 2. Click **Create a user** at the top of the displayed page. ![](images/admin-add-new-user.png) 3. In the displayed dialog, enter the username (i.e., email address), first name, and password for the new user (other account settings are optional at this point). ![](images/admin-create-user-3.png) 4. Click **Create user**. If successful, you see the message "Account created successfully" and the username for the new account. 5. Click **View user profile** to view and configure user settings for this user, or click **Close**. === "Self-Managed" 1. Expand the profile icon located in the upper right and click **APP ADMIN > Users** from the dropdown menu.
![](images/admin-create-user-2.png) 2. Click **Create a user** at the top of the displayed page. ![](images/admin-add-new-user.png) 3. In the displayed dialog, enter the username (i.e., email address), first name, and password for the new user (other account settings are optional at this point). If shown, selecting [Require Clickthrough Agreement](manage-access#create-a-user-agreement) may be necessary for your cluster deployment. 4. Click **Create user**. If successful, you see the message "Account created successfully" and the username for the new account. 5. Click **View user profile** to view and configure user settings for this user, or click **Close**. The new user will now be listed in the Users table. You can open the User Profile to see some important information including the user's application-assigned ID. ## Set admin permissions for users {: #set-admin-permissions-for-users } === "SaaS" As an admin, you can set organization admin permissions for other DataRobot users within the application, including your personal user account. These permissions allow the recipient to enable or disable features per user, as needed. Visit the **Settings** page to see a list of available features; hover over a feature name for a brief description. Below are the steps to enable administrator access for any user. This user will have administrator access to all DataRobot functionality configured for the application. !!! note Consider and control how you provide admin settings to non-administrator users. One way to do this is to add settings only on an as-needed basis and then remove those settings when related tasks are completed. 1. From the **Users** page, locate the user and select to open the user's profile page. 2. Click **Membership** to display the organization and groups that the user is a member of. 3. Under the **Organization** header, check the box in the **Org Admin** column to enable organization admin permissions for the user. ![](images/org-admin-2.png) This user can now modify settings for other users. At any point, if you want to disable these permissions for the user, uncheck the box; the user will no longer have administrator capabilities. === "Self-Managed" As an admin, you can set admin permissions for other DataRobot users within the application, including your personal user account. These permissions allow the recipient to enable or disable features per user, as needed. Visit the **Settings** page to see a list of available features; hover over a feature name for a brief description. Below are the steps to enable administrator access for any user. This user will have administrator access to all DataRobot functionality configured for the application. !!! note Consider and control how you provide admin settings to non-administrator users. One way to do this is to add settings only on an as-needed basis and then remove those settings when related tasks are completed. 1. From the **Users** page, locate the user and select to open the user's profile page. 2. On **User Profile**, click **Change Permissions** to display the **User Permissions > Manage Settings** page for the user ![](images/user-profile-admin-guide.png) 3. Select the Admin setting “Can manage users” and click **Save**. This user now can modify settings for other users. At any point, if you want to disable the “Can manage users” setting for this user, uncheck the box and click **Save**; the user will no longer have administrator capabilities. 
## Self-Managed AI Platform admins {: #self-managed-ai-platform-admins } The following is available only on the Self-Managed AI Platform. ### Additional permissions options {: #additional-permissions-options } To set permissions and supported features for users, repeat the previous process, selecting the desired permissions from those listed in the user's **User Permissions > Manage Settings** page. See the settings and features description for information on the available admin settings and optional features. For each user you can also: * Set their maximum [personal worker allocation](admin-overview#define-workers). * Set their RAM usage limit. * Set their file upload size limit. * Set the rate at which the [Deployment page](deploy-inventory#inventory-update) refreshes (three-second minimum). * Assign them to an organization (you must create the organization first). ![](images/admin-additional-settings.png) ## RBAC for users {: #rbac-for-users } Role-based access (RBAC) controls access to the DataRobot application by assigning users roles with designated privileges. The assigned role controls both what the user sees when using the application and which objects they have access to. RBAC is additive, so a user's permissions will be the sum of all permissions set at the user and group level. To assign a user role: 1. From the **Users** page, locate and select the user to open their profile page. ![](images/admin-create-user-2.png) 2. Click the **Permissions** tab to view a list of settings and permissions. ![](images/admin-personal-worker-1.png) 3. Open the **User roles** dropdown menu and select the appropriate role(s) for the user. ![](images/rbac-1.png) 4. When you're done, click **Save changes**. Review the [role and access definitions](rbac-ref) to understand the permissions enabled for each role. !!! tip Avoid granting access to specific features by assigning roles at the user level because this makes managing permissions more difficult&mdash;causing you to have to modify several users, rather than a few groups, as well as increasing the possibility of having users with non-standardized levels of access. Make sure access to features required to complete work is defined at the group or org level, and that the user is a member. !!! note Note that RBAC overrides [sharing-based role permissions](roles-permissions). For example, consider a user who is assigned the Viewer role via RBAC, which only has <em>Read</em> access to objects. If this user has a project shared with them that grants Owner permissions (which offers <em>Read and Write</em> access), the Viewer role takes priority and denies the user <em>Write</em> access. ## Manage execution environment limits {: #manage-execution-environment-limits } {% include 'includes/ex-env-limits.md' %} 1. Click your profile avatar (or the default avatar ![](images/icon-gen-settings.png)) in the upper-right corner of DataRobot, and then, under **APP ADMIN**, click **Users**. ![](images/admin-create-user-2.png) 2. From the **Users** page, locate and select the user to open their profile page. 3. Click the **Permissions** tab to view a list of settings and permissions. ![](images/admin-personal-worker-1.png) 4. On the **Permissions** tab, click **Platform**, and then click **Admin Controls**. ![](images/platform-admin-controls.png) 5. Under **Admin Controls**, set either or both of the following settings: * **Execution Environments limit**: The maximum number of custom model execution environments a user can add. This limit setting can't exceed 999.
* **Execution Environments versions limit**: The maximum number of versions a user can add to each custom model execution environment. This limit setting can't exceed 999. ![](images/execution-env-controls.png) 6. Click **Save changes**. ## Change passwords {: #change-passwords } You can change passwords for internal and local authentication user accounts. If your cluster uses LDAP authentication, you cannot change the password for any of the user types (individual users or the `admin@datarobot.com` account). If you need help generating a new password for the default administrator, contact Customer Support. ### Change your own password {: #change-your-own-password } To change your own password: 1. Expand the profile icon located in the upper right and click **Settings**. 2. In the displayed page, enter your current password and then the new password twice (to create and confirm). Click **Change Password**. DataRobot enforces the following password policy: - Only printable ASCII characters - Minimum one capital letter - Minimum one number - Minimum 8 characters - Maximum 512 characters - Username and password cannot be the same ### Change a user's password {: #change-a-users-password } 1. From the **APP ADMIN > Manage Users** page, locate the user and click to open their profile. 2. Click **Change Password**. ![](images/admin-change-users-password.png) 3. In the displayed page, enter and confirm the new password. 4. When finished, click **Change Password**. ## Manage groups and organization membership {: #manage-groups-and-organization-membership } SaaS admins can manage groups; Self-Managed admins can manage groups and organizations. === "SaaS" Configuring groups helps you to manage users across the DataRobot platform. For more information, see: * [Group overview](admin-overview#what-are-groups) * [Creating groups](manage-groups#create-a-group) Once created, you can add one or more users as members from the group creation page. To add users individually, follow the steps below. !!! note Note that *users* can see which groups they belong to from the **Membership** page, but they do not have permissions to make changes to those memberships. Browse to the **Users** page, select the user, and in **User Profile** click **Membership**. The **User Membership** page shows the currently configured groups for this user. ![](images/org-admin-1.png) Work with the page as follows: | Field | Description | Notes | |---------------------|-------------|-----------------| | **Join groups** (1) | Click to open the Add User to Groups dialog and enter the group name(s). | Users can have membership in up to ten groups. | | Select Groups | Saves group assignments. | When successful, the dialog closes and the list updates to show the user's group membership (2). | === "Self-Managed" Configuring groups and organizations helps you to manage users and resources across the DataRobot platform. For more information, see: * [Group overview](admin-overview#what-are-groups) * [Creating groups](manage-groups#create-a-group) * [Organization overview](admin-overview#what-are-organizations) * [Creating organizations](manage-orgs#create-organizations) Once created, you can add one or more users as members from the group and organization creation pages. To add users individually, follow the steps below. !!! note Note that *users* can see which organization and groups they belong to from the **Membership** page, but they do not have permissions to make changes to those memberships.
Browse to the Users page, select the user, and in **User Profile** click **Membership**. The **User Membership** page shows the currently configured organization and any groups for this user. ![](images/admin-usermembership-unconfigured.png) Work with the page as follows: | Field | Description | Notes | |--------------|---------------------------|----------------------| | Organization (1) | Enter the name for the organization. | Each user can be a member of only one organization. | | Go to org profile (2) | Click to view information about that organization. | If you do not see the organization you want, you must first create it. | | Add user to groups (3) | Click to open the **Add User to Groups** dialog and enter the group name(s). | • If the user is a member of an organization, only groups also part of the same organization, or part of no organization, are available for selection. • Users can have membership in up to ten groups. | | Select groups | Saves group assignments. | When successful, the dialog closes and the list updates to show the user's group membership (4). | When you next look at this user's profile, you see the organization for the user. ![](images/admin-org-orgprofile.png) ## Deactivate user accounts {: #deactivate-user-accounts } You cannot delete a user account from DataRobot&mdash;this ensures that your company's data is not lost, regardless of employee movement. However, the admin can block a user's access to DataRobot while ensuring the data and projects they worked on remain intact. From **APP ADMIN > Manage Users**, locate the user: * To deactivate, click the padlock icon next to their name, changing it to locked ![](images/icon-lock.png). * To restore access, click the padlock icon to open ![](images/icon-unlock.png). You can also change user account access from **Users > User Profile** by clicking **Enable User** or **Disable User**. ![](images/enable-disable-user-profile.png) ## View latest user activity {: #view-latest-user-activity } From the **User Profile**, you can quickly access the most recent app usage activities for the user. * Click **Recent activity** (near the bottom of the page) to see the last five app activities recorded for this user. Clicking the refresh link updates the list of activities: ![](images/useractivity-userprofile-recentactivity.png) * Click **View Activity** to see the [user activity monitor](main-uam-overview#view-activity-and-events) showing all app activities recorded for this user. ![](images/admin-view-users-activity.png)
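The password policy listed under **Change passwords** above is simple enough to check locally. The following is a minimal, illustrative Python sketch of those rules; it is not part of DataRobot and is only meant to make the policy concrete.

```python
def password_meets_policy(username: str, password: str) -> bool:
    """Checks the documented policy: printable ASCII only, at least one capital
    letter and one number, 8-512 characters, and not equal to the username."""
    return (
        all(32 <= ord(ch) <= 126 for ch in password)  # printable ASCII only
        and any(ch.isupper() for ch in password)      # at least one capital letter
        and any(ch.isdigit() for ch in password)      # at least one number
        and 8 <= len(password) <= 512                 # length bounds
        and password != username                      # cannot match the username
    )

# Hypothetical credentials, for illustration only.
print(password_meets_policy("jane@example.com", "Sunny-Day-42"))  # -> True
print(password_meets_policy("jane@example.com", "short"))         # -> False
```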
manage-users
--- title: Deployment approval policies description: To enable effective governance and control across DataRobot, admins can create and modify global approval policies for deployment-related activities. --- # Deployment approval policies {: #deployment-approval-policies } !!! info "Availability information" **Required permission:** "Can Manage Approval Policies" To enable effective governance and control across DataRobot, you can create approval policies for deployment-related activities. Approval policies help ensure that [DataRobot deployments](deployment/index) are being produced and used safely with the necessary guardrails in place. Administrators can [create policies](#create-an-approval-policy) from the **Approval Policies** page, or [use an existing policy as a template](#use-templates-to-create-policies). Once created, you can perform a variety of [actions](#approval-policy-actions) to manage approval policies. Note that the policies an administrator sees are specific to their organization&mdash;each organization may have its own approval policies configured. ## Create an approval policy {: #create-an-approval-policy } To create a new deployment approval policy, click on your user icon and navigate to the **Approval Policies** page. This page is also accessible from the app administrator page. ![](images/approval-1.png) ### Create a new policy {: #create-a-new-policy } To create a new approval policy, select **Create new policy**. ![](images/approval-2.png) Begin completing the required fields for the new approval policy: 1. Select a policy trigger. This is the deployment event that triggers the approval workflow for [reviewers](dep-admin): deployment creation, deletion, importance changes, and model replacements. You must also indicate the importance level required for the event to trigger the approval policy (critical, high, moderate, low, any, or all levels above moderate). ![](images/approval-3.png) 2. Add the user groups that can trigger the approval policy (optional). Start typing a group name and select the groups you want to include. If no groups are specified, the policy applies to all users in the organization. ![](images/approval-4.png) 3. Assign the reviewers (optional). These are the users (or groups) that, when the policy is triggered, can review the deployment event to approve it or request changes. Once a user is added as a reviewer, they gain access to each deployment that triggers the policy and are notified when a review is requested. All MLOps Admins in the organization have reviewer permissions by default, and serve as the reviewers if none are specified. ![](images/approval-5.png) 4. Configure the reminder settings for reviewers. ![](images/approval-6.png) Use the toggles to: * Automatically send reminders to reviewers. Set the reminder frequency (every 12 hours, 24 hours, 3 days, or 7 days). * Assign an automatic action if a deployment event is not reviewed. Choose the action and when to apply it. For example, you can cancel a model replacement if the event was not reviewed within 7 days of the request for review. 5. Name the policy. Once you have fully configured the settings for your new approval policy, click **Save Policy**. ![](images/approval-7.png) Once saved, approval policies can be viewed from the **Approval Policies** page. ![](images/approval-8.png) ### Use templates to create policies {: #use-templates-to-create-policies } Any approval policy can serve as a template for a new policy. To use a template, select **Add policies from templates**. 
![](images/approval-16.png) The following policies are provided by default, covering the four deployment events available as policy triggers. ![](images/approval-14.png) 1. Choose the policy you want to serve as a template and click the copy icon under the **Actions** header. ![](images/approval-9.png) 2. This brings you to the [policy creation](#create-an-approval-policy) page. The user groups that trigger the policy, the reviewers, and the reminder settings are carried over from the existing template policy. You must provide a new policy trigger and optionally list the groups for which you want to apply the policy. ![](images/approval-15.png) 3. Complete the fields and click **Save Policy**. The new policy created with the template can be viewed from the **Approval Policies** page. ## Approval policy actions {: #approval-policy-actions } After creating policies, there are multiple activities available for managing them: [editing](#edit-existing-policies), [pausing](#pause-policies), and [deletion](#delete-policies). ### Edit existing policies {: #edit-existing-policies } To edit the configuration of an existing policy, access it from the **Approval Policies** page. Hover on the field you want to change and select **Edit**. ![](images/approval-10.png) After editing the field, click **Save Change** to apply the edits. ### Pause policies {: #pause-policies } If you want to temporarily disable a policy, you can do so from the **Approval Policies** page. Under the **Actions** header, select the pause icon (![](images/icon-pause.png)). Once enabled, the pause icon is replaced with a play icon (![](images/icon-play.png)). Select it to re-enable the policy. ### Delete policies {: #delete-policies } You can permanently delete an approval policy by selecting the trash can icon under the **Actions** header. ![](images/approval-13.png) A modal prompts you to confirm the deletion, warning that the automated action set up for the policy will automatically be applied to any deployments awaiting approval at the time of deletion. If no automated action was configured, the deployments need to be manually resolved. Confirm deletion by selecting **Yes, remove policy**. The approval policy will no longer appear on the **Approval policies** page. ![](images/approval-14.png)
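To summarize the pieces that make up an approval policy (trigger event, importance levels, optional triggering groups, optional reviewers, and reminder and automatic-action settings), here is an illustrative Python sketch. The field and value names are hypothetical and do not correspond to a DataRobot API.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ApprovalPolicy:
    name: str
    trigger_event: str              # e.g., "deployment_creation", "model_replacement"
    importance_levels: list[str]    # e.g., ["critical", "high"] or ["any"]
    triggering_groups: list[str] = field(default_factory=list)  # empty = all users in the org
    reviewers: list[str] = field(default_factory=list)          # empty = MLOps Admins by default
    reminder_frequency_hours: Optional[int] = 24                 # None = no automatic reminders
    auto_action: Optional[str] = None                            # e.g., "cancel_model_replacement"
    auto_action_after_days: Optional[int] = None                 # e.g., 7

# Example mirroring the text: cancel an unreviewed model replacement after 7 days.
policy = ApprovalPolicy(
    name="Review critical model replacements",
    trigger_event="model_replacement",
    importance_levels=["critical", "high"],
    auto_action="cancel_model_replacement",
    auto_action_after_days=7,
)
print(policy)
```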
deploy-approval
--- title: User Activity Monitor description: The User Activity Monitor (UAM) provides a means for accessing and analyzing various usage data and prediction statistics as online reports or via export. --- # User Activity Monitor {: #user-activity-monitor } !!! info "Availability information" For Managed Cloud AI users, the User Activity Monitor feature requires an upgraded package. If you do not see the monitor, and would like access, contact your DataRobot representative. === "SaaS" !!! info "Availability information" **Required permission:** Enable organization level UAM === "Self-Managed" !!! info "Availability information" **Required permission:** Enable Activity Monitoring DataRobot continuously collects user and system data and makes it available to you through the User Activity Monitor (UAM). The tool provides a means for accessing and analyzing various usage data and prediction statistics. You can view reports online or export the data as CSV files. System information about the deployed cluster is available as well. You can use this information to understand how DataRobot is being used, troubleshoot model or prediction errors, monitor user activities, and more. User activity data is available for review online and can be downloaded for offline access. Filters enable you to access and limit data records to specified time frames, users, and projects. The information provided in these reports proves invaluable to DataRobot Support when understanding your deployed system and resolving issues. You can also exclude sensitive "identifying information" like usernames, IP addresses, project names, etc. from generated reports. ## User activity types {: #user-activity-types } Three types of user activity reports are available: Admin, App, and Prediction. See the [report reference](uam-ref) for fields and accompanying descriptions. === "SaaS" | Report Type | Description | |-------------|-------------| | [Admin Usage](uam-ref#admin-usage-activity-report) | Provides a report of all administrator-initiated audited events. Information provided by this report can identify who modified an organization or an account and what changes were made.| | [App Usage](uam-ref#app-usage-activity-report) | Provides information related to model development. This report can show models by user and identify the most commonly created types of models and projects, average time spent fitting each type of model, etc. | | [Prediction Usage](uam-ref#prediction-usage-activity-report) | Provides a report with data around predictions and deployments. Information provided by this report can show how many models a user deployed, how predictions are being used, error codes generated for prediction requests, which model types generate the most predictions, and more. | === "Self-Managed" Self-Managed AI Platform admins, you can also download a report with system information (no online preview available). | Report Type | Description | |-------------|-------------| | [Admin Usage](uam-ref#admin-usage-activity-report) | Provides a report of all administrator-initiated audited events. Information provided by this report can identify who modified an organization or an account and what changes were made.| | [App Usage](uam-ref#app-usage-activity-report) | Provides information related to model development. This report can show models by user and identify the most commonly created types of models and projects, average time spent fitting each type of model, etc. 
| | [Prediction Usage](uam-ref#prediction-usage-activity-report) | Provides a report with data around predictions and deployments. Information provided by this report can show how many models a user deployed, how predictions are being used, error codes generated for prediction requests, which model types generate the most predictions, and more. | | [System Information](uam-ref#system-information-report) | Provides a report with system information for the deployed cluster, such as installation type, operating system version, Python version, etc. Only accessible by download.| ## Access the UAM {: #access-the-uam } Some ways to access the User Activity Monitor: * From the profile icon located in the upper right, click **User Activity Monitor** to access all data in all reports for any users. * From the **User Profile** page for a specific user, click **View Activity** to view that user's events. * Once on an individual's activity page, remove the value in the **User ID** field and click **Search** to once again view all users. * From the **User Profile** you can quickly view the [last five app events](manage-users#view-latest-user-activity) for the user. ## View activity and events {: #view-activity-and-events } When you open the User Activity Monitor, you see the 50 most recently recorded application events. (By default, the User Activity Monitor displays data in descending timestamp order). You can change the displayed report and view different report data. ![](images/useractivitymonitor-1.png) | Component | Description of use | |--------------------|--------------------------| | Report view (1) | Selects the report view: App usage, Admin usage, or Prediction usage. (The System information report is not available to preview.) | | Timestamp—UTC (2) | Sorts all records, in all pages, in ascending or descending timestamp order. | | << or >> (3) | Pages forward or backward through the records. | | **Export CSV** (4) | Exports and downloads the user events and system data to CSV files.| ### Search report preview {: #search-report-preview } Use values in the Search dialog to filter the data returned for the selected online report view. Specify the filter values and click **Search** to apply the filter(s); the User Activity Monitor preview updates to show all records matching the filters. !!! note Search values apply to the online report preview only. Click **Reset** to remove filters. !!! note The [System Information report](uam-ref#system-information-report) is not available for online report preview. ![](images/uam-search.png) | Component | Description of use | |--------------------|--------------------------| | Username or User ID (1) | Filters by username or user ID. For App Usage and Prediction Usage reports, you can additionally apply Project ID. If needed, you can copy the username, UID, or project ID values from the report preview and paste in the related search field. | | Project ID (2) | Filters by project; for App Usage and Prediction Usage reports, you can additionally apply Username or User ID. The Project ID field is disabled for the Admin Usage report. | | [Include identifying fields](uam-ref) (3) | Uncheck to hide "sensitive" information (columns display in the report without values). | | Time Range (4) | Limits the number of records shown in the preview (previous year of records by default). You can select one of the predefined time ranges or specify a custom time range using the date picker\*. 
(See [restrictions](#prediction-usage-preview) when previewing the Prediction Usage activity report.) | | Search or Reset | Generates the online preview of the report using the selected search filters or clears filters to view all available records. | \* To specify a custom range, use the calendar controls to select the start and/or end dates for the records. All time values use the UTC standard. ### Prediction Usage preview {: #prediction-usage-preview } DataRobot can display up to 24 hours of data for the Prediction Usage online report preview. When applying a time range search filter for this report, select **Last day** or **Custom range** (and select a specific day). Note that this applies only when previewing the Prediction Usage activity report online; when downloading Prediction Usage activity report data, you can select any of the **Time Range** values provided in the **Export reports as CSV** dialog. ## Download activity data {: #download-activity-data } Clicking **Export CSV** to generate and download selected usage activity reports prompts you to filter which records to include when downloading reports. You can apply the same filters you created for online report preview or set new filters. === "SaaS" ![](images/uam-exportcsv-saas.png) === "Self-Managed" ![](images/useractivitymonitor-exportcsv.png) The **Export reports as CSV** dialog prompts you to configure and download reports of user activity data. | Component | Description| |------------|-------------| | Data Selection* (1) | Select **Queried Data** to apply the same time range filters you set when previewing reports or **All Data** to ignore any time range filters set for online preview. If you select **All Data**, you can then set new time range filters for downloading data. | | Reports to include (2) | Select one or more reports to download. | | Time Range* (3) | If you select **All Data**, you can set this filter: Specify the time range of records you want to include in the reports. The end time for each of these ranges is the current day. For example, **Last day** creates reports with data recorded starting 24 hours ago and ending at the current time. The default selection downloads records generated over the past year. | | Include identifying fields (4) | If checked, downloaded reports include [identifying information](uam-ref). | | Download CSV (5) | Click **Download CSV** to save the selected report(s) to your local machine. The file is named with a randomly generated hash value and the current date (year-month-day).The filename for each report includes the DataRobot Platform version number, current date, and type of data for the report. | \* Fields do not apply to the System Information report. When DataRobot indicates the selected report(s) are ready, click the link at the top of the application window to download a ZIP archive of the usage report CSV files to your local machine. ![](images/useractivitymonitor-notificationdownload.png) !!! note The time to create and download reports depends on the time range for the data and number of reports. DataRobot creates the reports for export in the background and notifies you when the reports are ready for download.
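Once the ZIP archive of usage report CSVs has been downloaded, it can be inspected with standard tooling. The sketch below uses only the Python standard library; the archive and member file names are hypothetical, since the actual names include a generated hash, the platform version, and the report date.

```python
import csv
import io
import zipfile

# Hypothetical archive name; the real name includes a generated hash and the date.
with zipfile.ZipFile("uam_reports_2024-01-31.zip") as archive:
    for member in archive.namelist():
        if not member.endswith(".csv"):
            continue
        with archive.open(member) as fh:
            rows = list(csv.DictReader(io.TextIOWrapper(fh, encoding="utf-8")))
        print(f"{member}: {len(rows)} records")
```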
main-uam-overview
--- title: Roles and permissions description: Describes the many layers of security DataRobot employs to help protect customer data through controlled user-assigned access levels. --- # Roles and permissions {: #roles-and-permissions } DataRobot employs many layers of security to help protect customer data&mdash;at the architecture, entity access, and [authentication](authentication/index) levels. The sections on this page provide details for roles and permissions at each level. ## General access guidance {: #general-access-guidance } Access comprises roles and permissions. *Roles* categorize a user's access; *permissions* specify the function-based privileges associated with the role. ### Role definitions {: #role-definitions } In general, role types have the following access: | Role | Access | |---------------------|------------------------| | Consumer/Observer | Read-only | | Editor/User | Read/Write | | Owner | Read/Write/Administer | ### Role priority and sharing {: #role-priority-and-sharing } Role-based access control (RBAC) controls access to the DataRobot application and is managed by organization administrators. The RBAC roles are named differently but convey the same read/write/admin permissions. The assigned role controls both what you can see when using the application and which objects you can access. RBAC overrides sharing-based role permissions. For example, let's say you share an asset with another user who was assigned the RBAC Viewer role (Read-only access) by the admin. You grant them User permissions (Read/Write access). However, because the Viewer role takes priority, the user is denied Write access. A user can have multiple roles assigned for a single entity&mdash;the most permissive role takes precedence and is then updated according to RBAC. Consider: * A dataset is shared with an organization, with members assigned the consumer role. The dataset is then shared with a user in that organization who is assigned the editor role. The user will have editor capabilities. Other organization members will be consumers. * A dataset is shared with a group, with members given owner permissions. You want one user in the group to have consumer access only. Remove that user from the group and reassign them individually to restrict their permissions. ## Project roles {: #project-roles } The following table describes the general capabilities allowed by each role. See also specific roles and privileges below. | Capability | Owner | User | Consumer | |-------------------------------|-------|------|----------| | View everything | ✔ | ✔ | ✔ | | Launch IDEs | ✔ | ✔ | | | Make predictions | ✔ | ✔ | | | Create and edit feature lists | ✔ | ✔ | | | Set target | ✔ | ✔ | | | Delete jobs from queue | ✔ | ✔ | | | Run Autopilot | ✔ | ✔ | | | Share a project with others | ✔ | ✔ | | | Rename project | ✔ | ✔ | | | Delete project | ✔ | | | | Unlock holdout | ✔ | | | | Clone project | ✔ | ✔ | | ## Shared data connection and data asset roles {: #shared-data-connection-and-data-asset-roles } The user roles below represent three levels of permissions to support nuanced access across collaborative data connections and data sources (entities). When you share entities, you must assign a role to the user(s) you share with: !!! note Only an administrator can add database drivers. | User role | Description | |-----------|-------------| | Editor | An active user of an entity. This role has limitations based on the entity (read and write). | | Consumer | A passive user of an entity (read-only).
| | Owner | The creator or assigned administrator of an entity. This role has the highest access and ability (read, write, administer). | The following table indicates which role is required for tasks associated with the **AI Catalog**. The table refers to the following roles: | User role | Code | |-------------------------|------| | Consumer | C | | Consumer w/ data access | CA | | Editor | E | | Editor w/ data access | EA | | Owner | O | | Task | Permission | |-------|------------| | **Data store/Data connections** | :~~: | | View data connections | C, CA, E, EA, O | | Test connections | C, CA, E, EA, O | | Create new data sources from a data connection | E, EA, O | | List schemas and tables | E, EA, O | | Edit and rename data connection | E, EA, O | | Delete data connection | O | | **Dataset/Data asset** | :~~: | | View metadata and collaborators | C, CA, E, EA, O | | Share | Collaborators can share with others, assigning a role as high as their own role. For example, a Consumer can share and assign the Consumer role but not the Editor role. The Owner role can assign any available roles. | | Download data sample | CA, EA, O | | Download dataset | CA, EA, O | | View sample data | CA, EA, O | | Use dataset for project creation | CA, EA, O | | Use dataset for custom model training | CA, EA, O | | Use dataset for predictions | CA, EA, O | | Modify metadata | E, EA, O | | Create a new version (remote or snapshot)* | EA, O | | Reload** | EA, O | | Delete dataset | O | \* "Remote" refers to information on where to find data (e.g., a URL link); "snapshot" is actual data. \** If the dataset is "remote," it is converted to a snapshot. ## Deployment roles {: #deployment-roles } The following table defines the permissions for each deployment role: | Capability | Owner | User | Consumer | |------------------------------------------------|--------|------|----------| | Consume predictions* | ✔ | ✔ | ✔ | | Get data via API | ✔ | ✔ | | | View deployment in inventory | ✔ | ✔ | | | View batch prediction jobs and job definitions | | | | | Edit batch prediction job definitions | | | | | Replace model | ✔ | | | | Edit deployment metadata | ✔ | | | | Delete deployment | ✔ | | | | Add user to deployment | ✔ | ✔ | | | Change permission levels of users | ✔ | ✔** | | | Remove users from shared deployment | ✔*** | ✔ | | \* Consumers can make predictions using the deploy API route, but the deployment will not be part of their deployment inventory. \** To Consumer or User only. \*** Can remove self only if there is another user with the Owner role. ## Model Registry roles {: #model-registry-roles } The following table defines the permissions for each model package role: | Option | Description | Availability | |--------|---------------|--------------| | View a model package | View the metadata for a model package, including the model target, prediction type, creation date, and more. | Owner, User, Consumer | | [Deploy](deploy-model#deploy-from-the-model-registry) a model package | Creates a new deployment with the selected model package. | Owner, User, Consumer | | [Share](reg-action#sharing) a model package | Provides sharing capabilities independent of project permissions. | Owner, User, Consumer | | [Permanently archive](reg-action#permanent-archiving) a model package | Permanently archives a model package. | Owner | ## Custom Model and Environment roles {: #custom-model-and-environment-roles } The following tables define the permissions for each custom model or environment role: !!! note
There isn't an editor role for custom environments, only for custom models. #### Environment Roles and Permissions {: #environment-roles-and-permissions } | Capability | Owner | Consumer | |---------------------------------------------------------|-------|----------| | Use and view the environment | ✔ | ✔ | | Update metadata and add new versions of the environment | ✔ | | | Delete the environment | ✔ | | #### Model Roles and Permissions {: #model-roles-and-permissions } | Capability | Owner | Editor | Consumer | |---------------------------------------------------|-------|--------|----------| | Use and view the model | ✔ | ✔ | ✔ | | Update metadata and add new versions of the model | ✔ | ✔ | | | Delete the model | ✔ | ✔ | | ## No-Code AI App roles {: #no-code-ai-app-roles } The following table defines the permissions for each role supported for [Automated Applications](app-builder/index). | Capability | Owner | Editor | Consumer | |--------------------------------------------------------|-------|--------|----------| | Make predictions | ✔ | ✔ | ✔ | | Deactivate an application | ✔ | ✔ | | | Share an application with other DataRobot licensed users* | ✔ | | | | Delete an application | ✔ | | | | Upgrade an application | ✔ | ✔ | | | Update an application's settings | ✔ | ✔ | | \*All roles can share an application by sharing the application link with an embedded authorization token.
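The role-priority behavior described on this page can be summarized as: the most permissive role granted through sharing wins, but the result is capped by the user's RBAC role. The snippet below is a conceptual sketch of that rule, not DataRobot code; the role names and numeric levels are illustrative only.

```python
# Conceptual sketch of the role-priority rules above; not DataRobot code.
# Higher numbers mean more permissive access.
ACCESS_LEVEL = {"consumer": 1, "observer": 1, "viewer": 1, "editor": 2, "user": 2, "owner": 3}

def effective_access(shared_roles, rbac_role):
    """Most permissive sharing role, capped by the RBAC role set by the admin."""
    most_permissive = max(ACCESS_LEVEL[role] for role in shared_roles)
    return min(most_permissive, ACCESS_LEVEL[rbac_role])

# Shared as consumer via the organization and editor directly, but the admin
# assigned a read-only RBAC role, so the effective access stays read-only.
print(effective_access(["consumer", "editor"], rbac_role="viewer"))  # 1 (read-only)
```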
roles-permissions
--- title: Data and sharing description: This section introduces sharing within DataRobot, including how roles and permissions affect sharing. --- # Sharing {: #sharing } This section describes data and sharing requirements throughout DataRobot. Topic | Describes... ----- | ------------ [Sharing](sharing) | How to share assets in DataRobot, including datasets, projects, and deployments. [Roles and permissions](roles-permissions) | Roles and permissions at the architecture, entity, and authentication levels.
index
--- title: Sharing description: How to share DataRobot datasets, projects, and deployments. --- # Sharing {: #sharing } You can share datasets, projects, and deployments ("assets") within your organization. You may want to do this, for example, to get the assistance of an in-house data scientist who has offered to help optimize your data and models. Or, perhaps a colleague in a different group would benefit from your model's predictions. When you invite a user, user group, or organization to share a project, DataRobot assigns the default role of User or Editor to each selected target (see [Roles and permissions](roles-permissions) for more information). Note the following when sharing or removing access: * Not all entities allow sharing beyond a specific user. * You can only share with active accounts. * You can only share up to your own access level (a consumer cannot grant an editor role) and you cannot downgrade the access of a collaborator with a higher access level than your own. * Every entity must have at least one owner (the entity creator by default). To remove the creator, that user must assign the owner role to one or more additional collaborators and then remove themselves. * Data connection and data asset entities must have at least one user who can share (based on the “Allow sharing” setting). !!! note You can also [share custom models and environments](custom-model-actions) as part of the MLOps workflow. ## Share assets {: #share-assets } To increase collaboration, you can share assets with other DataRobot users and with members of your groups and organization. === "Share datasets" You can share any dataset that you have stored in the [**AI Catalog**](catalog). 1. Navigate to the **AI Catalog** (top menu) and select the dataset to share. ![](images/tut-share-select-dataset.png) 2. On the **Dataset Info** page, click **Share**. ![](images/tut-share-dataset.png) 3. Complete the fields of the [**Share** dialog](#share-dialog) and click **Share**. ![](images/tut-share-dataset-dialog.png) If you encounter an error, review the [data asset roles](roles-permissions#shared-data-connection-and-data-asset-roles). === "Share projects" You can share either a single project or multiple projects at one time, as described below. **To share a single project:** 1. From an active project, click the share icon (![](images/icon-share.png)) in the top right. ![](images/tut-top-rt-icons-share.png) 2. Complete the fields of the [**Share** dialog](#share-dialog) and click **Share**. ![](images/tut-share-proj-projmgr-blur2.png) **To share multiple projects:** Use the [**Projects** control center](manage-projects) to share multiple projects, with the same recipients, at one time. !!! note You can only share with users&mdash;not groups and organizations&mdash;from the Project control center. 1. Click the folder icon (![](images/icon-folder.png)) and then **Manage Projects** to open the project control center. ![](images/tut-share-sel-mng-button.png) The page lists all projects you have created or that have been shared with you. 2. To select projects to share, check the boxes to the left of the project names. ![](images/tut-share-sel-multi-proj.png) 3. From the **Menu**, click **Share Selected Projects**. (You can also click **Menu > Share Multiple Projects** without first selecting projects and instead enter all project names manually.) You must use this menu, not the menu to the right of the project name, to share multiple projects.
![](images/tut-share-selected-proj.png) The **Selected projects** field is prepopulated with the projects you checked in the project listing. To include additional projects, begin typing the name and DataRobot will string match your entry. Click to add. 4. Complete the fields of the [**Share** dialog](#share-dialog) and click **Share**. ![](images/tut-share-multi-proj.png) If you encounter an error, review the [project roles](roles-permissions#project-roles). === "Share deployments" You can share deployments with users, groups, and organizations. 1. Click the [**Deployments** tab](deploy-inventory) (top menu) to list all deployments to which you are subscribed. ![](images/tut-share-list-deps.png) 2. For the desired deployment, expand the **Actions** menu and click **Share**. ![](images/tut-share-deplo-action-menu.png) 3. Complete the fields of the [**Share** dialog](#share-dialog) and click **Share**. ![](images/tut-share-deployment.png) If you encounter an error, review the [deployment roles](roles-permissions#deployment-roles). ## Share dialog modal {: #share-dialog-modal } The fields of the **Share** dialog differ slightly by asset type. Generally: Field | Description | Dataset | Project | Deployment ----- | ----------- | ------- | ------- | ---------- Share with | Identifies the recipient(s). This can be any combination of [user email, group, or organization](admin-overview) (if supported). | ✔ | ✔ | ✔ [Role](roles-permissions) | Specifies the recipient's access level to the asset. You can grant access at the same or a lower level than your own access. | Owner, Consumer, Editor | Owner, User, Observer | Owner, User, Consumer Shared with | Lists the recipients and their assigned roles (and, for datasets, additional privileges). Use the dropdown to change a role, or click the **x** (![](images/icon-revoke.png)) to remove a recipient. | ✔ | ✔ | ✔ Allow sharing | Provides the recipient with permission to re-share the asset with others (up to their level of access). | ✔ | | | Can use data | Allows the recipient to use the dataset for certain operations (e.g., download data, create project from dataset). | ✔ | | | Add note | Allows you to add a note to be included in the notification sent to the recipient. | | ✔ | ## More info... {: #more-info } ??? tip "Share dialog quick reference" Recipients can be: * _User_: Any individual in your organization. * _Group_: A selection of users, assigned by your admin, to which permissions are applied in bulk. * _Organization_: A larger collection of users, also configured by the admin. Role selections are dependent on the access type and can be: * Owner (read/write/administer) * Editor or User (read/write) * Observer or Consumer (read-only) ??? tip "How is the recipient alerted?" When sharing is successful, DataRobot sends the recipient(s) a notification email containing a link to the shared asset. Once clicked, the asset (or a login prompt) opens in DataRobot. ![](images/tut-share-email.png) Additionally, DataRobot sends an alert to the recipient's [**Notification center**](getting-help#notification-center). Click the notification icon (![](images/icon-alert.png)) for details. ![](images/tut-share-notifications.png) ??? tip "Why can't I share my asset?" If you get an error when you try to share, it may be because: - Your recipient is not a DataRobot user or is a user but is outside your DataRobot organization. - There is a problem with roles and/or permissions.
Both your role and access level and the recipient's role influence whether and how you can share an asset with that user or group of users. **Effect of your role and access level** When assigning roles, you can assign the same access level as your own, or a lower one. For example, if you have _User_ (read/write) access to a project, you can grant another user _User_ or _Consumer_ (read-only) access, but you cannot grant them _Owner_ (read/write/administer) access. **Effect of the recipient's role and access level** The share recipient's access to DataRobot assets is controlled by role-based access control (RBAC) roles, assigned by the organization's DataRobot administrator. Those designated privileges override your assignments. If you attempt to share an asset at a role above the user's RBAC privileges, sharing is prevented. Try assigning a more restrictive role. The following pages provide additional documentation related to sharing: * [Roles and permissions](roles-permissions) * [Role priority and sharing](roles-permissions#role-priority-and-sharing) * [AI Catalog](catalog) * [Notification center](getting-help#notification-center) * [Project control center](manage-projects)
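As a rule of thumb, the grantor-side check can be thought of as "you can grant at most your own level." The snippet below is only an illustration of that rule, using made-up level numbers; it is not how DataRobot implements the check.

```python
# Illustrative only; not DataRobot code.
LEVEL = {"consumer": 1, "observer": 1, "editor": 2, "user": 2, "owner": 3}

def can_grant(your_role: str, granted_role: str) -> bool:
    """You can share an asset at your own access level or lower."""
    return LEVEL[granted_role] <= LEVEL[your_role]

print(can_grant("user", "consumer"))  # True: read/write access can grant read-only
print(can_grant("user", "owner"))     # False: you cannot grant above your own level
```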
sharing
--- title: Personal data detection description: DataRobot automates the detection of specific types of personal data to provide a layer of protection against the inadvertent inclusion of this information in modeling and predictions. section_name: Administrator maturity: public-preview platform: self-managed-only --- # Personal data detection {: #personal-data-detection } !!! info "Availability information: Self-Managed only" This public preview feature is only available for US-based Self-Managed AI Platform deployments. Contact your DataRobot representative or administrator for information on enabling the feature. In some regulated and other specific use cases, the use of personal data as a feature in a model is forbidden. DataRobot automates the detection of specific types of personal data to provide a layer of protection against the inadvertent inclusion of this information in a dataset and prevent its usage at modeling and prediction time. After a dataset is ingested through the [**AI Catalog**](catalog), you have the option to check each feature for the presence of personal data. This process checks every cell in the dataset against patterns that DataRobot has developed for identifying this type of information. If personal data is found, a warning message is displayed on the **AI Catalog**'s **Info** and **Profile** pages, informing you of the type of personal data detected for each feature and providing sample values to help you make an informed decision on how to move forward. Additionally, DataRobot creates a new feature list&mdash;the equivalent of <em>Informative Features</em> but with all features containing any personal data removed. The new list is named <em>Informative Features - Personal Data Removed</em>. !!! warning There is no guarantee that this tool has identified all instances of personal data. It is intended to supplement your own personal data detection controls. DataRobot currently supports detection of the following fields: * Email address * IPv4 address * US telephone number * Social security number To run personal data detection on a dataset in the **AI Catalog**, go to the **Info** page and click **Run Personal Data Detection** on the banner that indicates successful dataset publishing: ![](images/pii-1.png) If DataRobot detects personal data in the dataset, a warning message displays. Click **Details** to view more information about the personal data detected; click **Dismiss** to remove the warning and prevent it from being shown again. ![](images/pii-2.png) Warnings are also highlighted by column on the **Profile** tab: ![](images/pii-3.png)
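To supplement the built-in check, you can run a similar scan on a local copy of a dataset before uploading it. The patterns below are simplified illustrations of the four supported types; they are not DataRobot's actual detection patterns, and the filename is a placeholder.

```python
import re

import pandas as pd

# Simplified, illustrative patterns; these will not catch every valid format.
PATTERNS = {
    "email": re.compile(r"[^@\s]+@[^@\s]+\.[A-Za-z]{2,}"),
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "us_phone": re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_for_personal_data(df: pd.DataFrame) -> dict:
    """Return, per column, the personal data types found and one sample value."""
    findings = {}
    for column in df.columns:
        values = df[column].dropna().astype(str)
        for kind, pattern in PATTERNS.items():
            matches = values[values.str.contains(pattern)]
            if not matches.empty:
                findings.setdefault(column, []).append((kind, matches.iloc[0]))
    return findings

# "training_data.csv" is a placeholder for a local copy of your dataset.
print(scan_for_personal_data(pd.read_csv("training_data.csv")))
```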
pii-detect
--- title: Administrator public preview features description: Read preliminary documentation for administration features currently in the DataRobot public preview pipeline. section_name: Administrator maturity: public-preview platform: self-managed-only --- # Administrator public preview features {: #administrator-public-preview-features } {% include 'includes/pub-preview-notice-include.md' %} ## Available administrator public preview documentation {: #available-administrator-public-preview-documentation } Public preview for... | Describes... ----- | ------ [Personal data detection](pii-detect) | Run personal data detection on a dataset in the **AI Catalog** and provide a layer of protection against the inadvertent inclusion of this information.
index
--- title: Two-factor authentication (2FA) description: How to set up two-factor authentication (2FA), an opt-in feature that provides additional security for DataRobot users. --- # Two-factor authentication {: #two-factor-authentication } Two-factor authentication (2FA) is an opt-in feature that provides additional security for DataRobot users. DataRobot's 2FA is based on the Time-based One-Time Password algorithm (<a target="_blank" href="https://en.wikipedia.org/wiki/Time-based_One-time_Password_algorithm">TOTP</a>), the IETF RFC 6238 standard for many two-factor authentication systems. It works by generating a temporary, one-time password that must be manually entered into the app to authenticate access. To work with 2FA, you use an authentication app on your mobile device (for example, <a target="_blank" href="https://support.google.com/accounts/answer/1066447?co=GENIE.Platform%3DAndroid&hl=en">Google Authenticator</a>). If you haven't already done so, install and register an app on your device. You will use the app to scan a DataRobot-provided QR code, which will, in turn, generate authentication and recovery codes. DataRobot provides a series of recovery codes for use if you lose access to your default authentication method. !!! warning Before completing two-factor authentication, download, copy, or print these codes and save them to a secure location. !!! tip When you enable 2FA, all API endpoints that validate username and password require secondary authentication. See the [troubleshooting](2fa-help){ target=_blank } section for additional information. ## Set up 2FA {: #set-up-2fa } To enable 2FA: 1. From the [**Profile**](profile-settings) page, on the **Security** tab, switch the **Two-Factor Authentication** toggle to on: ![](images/2fa-1.png) A dialog box opens to the first step of the setup process: ![](images/2fa-2.png) 2. Open the authenticator app on your device and select the option that allows you to scan a barcode. (On Google Authenticator, click the `+` sign and choose "Scan barcode.") 3. Scan the QR code shown in the dialog box; your device displays a 6-digit code. (If you have trouble scanning, see the [alternate option](#non-qr-code-method).) Or, if you receive an error, see the [troubleshooting](2fa-help){ target=_blank } section. 4. Enter the code (no spaces) into the box at the bottom of the screen and click **Verify**. <a name="step-4"></a> ![](images/2fa-3.png) 5. Once verified, DataRobot returns 20 recovery codes for your use if you lose access to your default authentication method. ***Save these codes in a secure place.*** ![](images/2fa-5.png) 6. Select a method for saving your codes and click **Complete**. DataRobot briefly displays a notice that two-factor authentication is enabled. ### Non-QR code method {: #non-qr-code-method } If you could not scan the QR code: 1. From the dialog box, choose **Try this instead**: ![](images/2fa-4.png) DataRobot displays your registered email address and a code for use with your app. 2. In your authenticator app, manually generate a code. For example, in Google Authenticator, click the `+` sign and choose "Manual entry." 3. Enter the credentials displayed in the dialog box. Note: * the code is not case-sensitive * spaces are optional, as most apps remove them when you enter the characters. The authenticator app returns a 6-digit code. 4. Enter the code (no spaces) into the box at the bottom of the screen and click **Verify**. Return to [step 4 of "Set up 2FA"](#set-up-2fa), above. 
Or, if you receive an error, see the [troubleshooting](2fa-help){ target=_blank } section. ## Use 2FA {: #use-2fa } After you enable and set up 2FA, you will be prompted for a code each time you log into DataRobot. (You are also prompted for an authentication code when requesting a password reset from the login page.) Open DataRobot and enter your email and password, or sign in with Google. You are prompted for an authentication code: ![](images/2fa-6.png) If you have your mobile device available, open the authenticator app and enter the 6-digit code displayed. If you do not have your device, click **Switch to recovery code** and enter one of the codes from your saved list of codes. When you've entered the code, click **Verify**. DataRobot validates your account and opens the application.
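For context on what the authenticator app is doing behind the scenes, the short sketch below generates and verifies a time-based code with the third-party `pyotp` library (`pip install pyotp`). It only illustrates the TOTP mechanism; the secret shown is generated locally, not issued by DataRobot.

```python
import pyotp

# A shared secret is normally delivered to the authenticator app via the QR code;
# here we generate one locally purely for illustration.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code = totp.now()         # 6-digit code, valid for a 30-second window
print(code)
print(totp.verify(code))  # True while the code is still within its window
```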
2fa
--- title: SAML SSO description: How to configure DataRobot and an external Identity Provider (IdP) for user authentication via single sign-on (SSO). DataRobot supports the SAML 2.0 protocol. --- # SAML SSO {: #saml-sso } DataRobot allows you to use external services (Identity Providers, known as IdPs) for user authentication through single sign-on (SSO) technology. DataRobot's SSO support is based on the SAML 2.0 protocol. To use SAML SSO in DataRobot, you must make changes to both the [IdP](#identity-provider-idp-configuration) and [service provider](#datarobot-configuration) (DataRobot) configurations. === "SaaS" !!! info "Availability information" Availability of single sign-on (SSO) is dependent on your DataRobot package. If it is not enabled for your organization, contact your DataRobot representative. **Required permission:** Enable SAML SSO The basic workflow for configuring SAML SSO is as follows: 1. Review and complete the [prerequisites](#prerequisites). 2. [Configure SSO in your identity provider](#identity-provider-idp-configuration) and identify DataRobot as the service provider. 3. Configure SSO in DataRobot: * Choose a [configuration option](#datarobot-configuration) to set up the Entity ID and IdP metadata. * Use [mapping](#mapping) to define how attributes, groups, and roles are synchronized between DataRobot and the IdP. * Set [SSO requirements](#set-sso-requirements), including making SSO optional or required for all users. === "Self-Managed" The Self-Managed AI Platform provides enhanced SAML SSO : !!! info "Availability information" **Required permission:** Enable Enhanced Global SAML SSO The basic workflow for configuring SAML SSO is as follows: 1. Review and complete the [prerequisites](#prerequisites). 2. [Configure SSO in your identity provider](#identity-provider-idp-configuration) and identify DataRobot as the service provider. 3. Configure SSO in DataRobot: * Choose a [configuration option](#datarobot-configuration) to set up the Entity ID and IdP metadata. * Use [mapping](#mapping) to define how attributes, groups, and roles are synchronized between DataRobot and the IdP. * Modify [security parameters](#security-parameters) to increase or decrease the SAML protocol security strength. * Set up [advanced options](#advanced-options). * Set [SSO requirements](#set-sso-requirements), including making SSO optional or required for all users. ## Prerequisites {: #prerequisites } Make sure the following prerequisites are met before starting the SAML SSO configuration process: === "SaaS" * SAML for SSO is enabled. * The organization has at least one Org/System admin; the admin will be responsible for SAML SSO configuration once it is enabled. Contact your DataRobot representative to enable SAML SSO, and if necessary, to set up the first Org/System admin user (that user can then assign additional users to the Org/System admin role). === "Self-Managed" * SAML for SSO is enabled. * The organization has at least one System admin; the System admin will be responsible for SAML SSO configuration once it is enabled. Contact your DataRobot representative to enable SAML SSO, and if necessary, to set up the first System admin user (that user can then assign additional users to the System admin role). The following describes configuration necessary to enable SAML SSO for use with DataRobot. 
Admins can access the information required for setup on DataRobot's SAML SSO configuration page, available from **Settings > Manage SSO**: ![](images/sso-0.png) ## Identity Provider (IdP) configuration {: #identity-provider-idp-configuration } !!! note * Because configurations differ among IdPs, refer to your provider's documentation for related instructions. * DataRobot does not provide a file containing the metadata required for IdP configuration; you must manually configure this information. When configuring the IdP, you must create a new SAML application with your IdP and identify DataRobot as the service provider (SP) by providing SP sign-in and sign-out URLs. To retrieve this information in DataRobot, go to **Settings > Manage SSO** and locate <span id="service-provider-details">**Service provider details**</span>, which lists URL details. ![](images/sso-5.png) The following table describes the URLs: === "SaaS" Use the root URL with the organization name appended. The organization name is the name assigned to your business by DataRobot, entered in lowercase with no spaces. | URL type | Root URL | Description | Okta example | |----------|----------|-------------|----------------------| | SP initiated login URL | app.datarobot.com/sso/sign-in/<*org\_name*> | The endpoint URL from which the IdP receives service provider requests (where the requests originate). | Recipient URL | | IdP initiated login URL | app.datarobot.com/sso/signed-in/<*org\_name*> | The endpoint URL that receives the SAML sign-in request from the IdP. | Single sign-on URL | | IdP initiated logout URL | app.datarobot.com/sso/sign-out/<*org\_name*> | *Optional.* The endpoint URL that receives the SAML sign-out request from the IdP. | N/A | === "Self-Managed" | URL type | Root URL | Description | Okta example | |----------|----------|-------------|----------------------| | SP initiated login URL | https://app.datarobot.com/sso/sign-in/ | The endpoint URL from which the IdP receives service provider requests (where the requests originate). | Recipient URL | | IdP initiated login URL | https://app.datarobot.com/sso/signed-in/ | The endpoint URL that receives the SAML sign-in request from the IdP. | Single sign-on URL | | IdP initiated logout URL | https://app.datarobot.com/sso/sign-out/ | *Optional.* The endpoint URL that receives the SAML sign-out request from the IdP. | N/A | The tabs below provide example instructions for finishing IdP configuration in Okta, PingOne, and Azure Active Directory. !!! warning "Third-party application screenshots" The following images were accurate at the time they were taken; however, they may not reflect the current user interface of the third-party application. === "Okta" Make sure that the following required configuration is complete on the IdP side&mdash;this example uses Okta. 1. If you don't already have an Okta developer account, [sign up for free](https://developer.okta.com/signup/) using your GitHub username or email. 2. In Okta, click **Applications > Applications** in the left-hand navigation. 3. Click **Create App Integration**, select **SAML 2.0**, and click **Next**. ![](images/sso-okta-1.png) 4. On the **General Settings** tab, enter a name for the application and click **Next**. ![](images/sso-okta-2.png) 5. On the **Configure SAML** tab, fill in the following fields: * **Single sign-on URL** * **Audience URI (SP Entity ID)** * **Attribute Statement** for `username` ![](images/sso-okta-3.png) !!! note The Single sign-on URL has `signed-in` at the end.
The attribute `username` must be set to `user.email` in order for SSO login to be successful with DataRobot. 6. On the **Feedback** tab, select **I’m a software vendor** and click **Finish**. ![](images/sso-okta-4.png) 7. With your new application selected, click **Applications > Assignments** and assign People or Groups to your app. ![](images/sso-okta-5.png) 8. On the **Sign On** tab, locate the **SAML Signing Certificates** section. Next to _SHA-2_, select **Actions > View IdP metadata** and copy the IdP metadata link address&mdash;you will need this to configure SSO in DataRobot. ![](images/sso-okta-6.png) === "PingOne" Make sure that the following required configuration is complete on the IdP side&mdash;this example uses PingOne. **Configure the PingOne SSO Environment** 1. In PingIdentity, navigate to the **Your Environments** page and click **Add Environment**. ![](images/ping-sso-1.png) 2. Select **Customer solution** and click **Next**. 3. Click **Next** again. ![](images/ping-sso-2.png) 4. Name the environment (`TestDataRobotSSOEnv` in this example) and click **Finish**. **Configure a PingOne SSO Application** 1. Select the PingOne Environment you want to use to house your SSO application (`TestDataRobotSSOEnv` in this example). ![](images/ping-sso-3.png) 2. Click the **Add a SAML app** tile and open the **Connections** tab. ![](images/ping-sso-4.png) 3. Click the **+** icon to the right of **Applications**. ![](images/ping-sso-5.png) 4. Name the application (`TestDataRobotSSOApp` in this example), select **SAML Application**, and click **Configure**. 5. Select **Manually Enter**; then copy and paste the following: - Copy the [**IdP initiated login URL**](#service-provider-details) from DataRobot and paste it in the **ACS URLs** field. - Copy the [**Entity ID**](#datarobot-configuration) from DataRobot and paste it in the **Entity ID** field. ![](images/ping-sso-6.png) 7. Click **Save**. 8. On the **Configuration** tab, click the pencil icon. ![](images/ping-sso-7.png) 9. Make sure **Sign Assertion & Response** is selected. ![](images/ping-sso-8.png) 10. Scroll down to the **Subject Named Format** dropdown. Click the dropdown and select `urn:oasis:names:tc:SAML:2.0:name-id:transient`. ![](images/ping-sso-9.png) 11. Click **Save**. 12. Use the toggle to turn on the `TestDataRobotSSOApp` PingOne application. ![](images/ping-sso-10.png) 13. Save the **IDP Metadata URL**. You will need this to configure SSO in DataRobot. ![](images/ping-sso-11.png) **Map Attributes** 1. Click the **Attribute Mappings** tab and click the pencil icon. ![](images/ping-sso-12.png) 2. Next to `saml_subject`, change the _PingOne Mapping_ to **Email Address**. Click **Add**, enter `username` under _Attributes_, and select **Email address** for the _PingOne Mapping_. ![](images/ping-sso-13.png) 3. Click **Save**. === "Azure AD" Make sure that the following required configuration is complete on the IdP side&mdash;this example uses Azure Active Directory. 1. Sign in to [Azure](https://login.microsoftonline.com) as a cloud application admin. 2. Navigate to **Azure Active Directory > Enterprise applications** and click **+ Create your own application**. ![](images/azure-sso-1.png) 3. Name the application, select **Integrate any other application you don't find in the gallery (Non-gallery)**, and click **Add**. ![](images/azure-sso-2.png) 4. On the **Overview** page, select **Set up single sign on** and select SAML as the single sign-on method. ![](images/azure-sso-3.png) 5.
Click the pencil icon to the right of **Basic SAML Configuration**. Populate the following fields: - For **Identifier (Entity ID)**, enter an arbitrary string. - For **Reply URL (Assertion Consumer Service URL)**, enter `<domain>/sso/saml/signed-in/`. ![](images/azure-sso-4.png) 6. Click **Save**. 7. Click the pencil icon to the right of **User Attributes & Claims**. Delete all default additional claims and add the following claims: - `username` as **Name**. - `Attribute` as **Source**. - `user.userprincipalname` as **Source attribute**. ![](images/azure-sso-5.png) If the form prevents you from saving without a **Namespace** value, provide any string, click **Save**, and then edit the claim again to remove the **Namespace** value. After saving, the new claim appears in the table. 8. Before proceeding, ensure the **Signing Option** is set to sign both the SAML response _and_ assertion in the Azure Active Directory settings. Otherwise, you can encounter a SAML SSO authentication error due to a missing signature. - In Azure's **Set up Single Sign-On with SAML** preview page, find the **SAML Signing Certificate** heading and click the **Edit** pencil icon to navigate to the **SAML Signing Certificate** page. - In the **Signing Option** dropdown list, select **Sign SAML response and assertion**. - Click **Save** to apply the new SAML signing certificate settings. ![](images/saml-sso-azure.png) 9. To make sure the test account has access to the application, open **Users and groups** in the left-hand navigation and click **Add user**. ![](images/azure-sso-6.png) 10. Copy the **Identifier (Entity ID)** and **App Federation Metadata URL**&mdash;you will need these values to configure SSO in DataRobot. ![](images/azure-sso-7.png) After configuring SSO in the IdP, you can now [configure SSO in DataRobot](#datarobot-configuration). ## DataRobot configuration {: #datarobot-configuration } Now, configure the IdP in DataRobot. !!! warning "Saving progress" At any point in your configuration, and at configuration completion, click <b>Save and Authorize</b>. The button is only active when the minimum required fields are complete. ![](images/sso-10.png) After configuring the IdP, you must configure SSO in DataRobot by setting up an Entity ID and IdP metadata for your organization. There are two Entity IDs&mdash;one from the service provider (DataRobot) and one from the IdP: * The **Entity ID** entered in the DataRobot SSO configuration is a unique string that serves as the service provider entity ID. This is what you enter when configuring service provider metadata for the DataRobot-specific SAML application on the IdP side. * If manually configuring IdP metadata for the DataRobot-side configuration, the **Issuer** field is the unique identifier of the Identity Provider (IdP), found in the IdP's DataRobot-specific SAML application configuration. Normally, it is a URL of the identity provider. When logged in as an admin, open **Settings > Manage SSO** and click the **Configure using** dropdown to see the three options available to configure the IdP parameters (described below). ![](images/sso-1.png) ### Metadata URL {: #metadata-url } Complete the following fields: === "SaaS" ![](images/sso-2.png) | Field | Description | |-------|---------------| | Name | Specify a meaningful name for the IdP configuration (for example, the organization name). This name will be used in the service provider details URL fields. Enter the name in lowercase, with no spaces.
The value entered in this field updates the values provided in the **Service provider details** section. | | Entity ID | An arbitrary, unique-per-organization string (for example, myorg\_saml) that serves as the service provider Entity ID. Enter this value to establish a common identifier between the DataRobot (SP) app and the IdP SAML application. | | Metadata URL | A URL provided by the IdP that points to an XML document with integration-specific information. The endpoint must be accessible to the DataRobot application. (For a local file, use the **Metadata file** option.) | | Verify IdP Metadata HTTPS Certificate | If toggled on, the host certificate is validated for a given metadata URL. | === "Self-Managed" ![](images/sso-enhanced-2.png) Field | Description -------|--------------- Entity ID | An arbitrary, unique identifier of the SAML application created on the IdP side (referred to above as the SP Entity ID, Issuer, or Audience; some IdPs use the term Client ID). Enter this value to establish a common identifier between the DataRobot (SP) app and the IdP SAML application. Metadata URL | A URL provided by the IdP that points to an XML document with integration-specific information. The endpoint must be accessible to the DataRobot application. (For a local file, use the **Metadata file** option. If the URL triggers a file download, use the **Metadata file** option with the downloaded metadata XML file.) Verify IdP Metadata HTTPS Certificate | If toggled on, the host certificate is validated for a given metadata URL. ### Metadata file {: #metadata-file } Select **Metadata file** to provide IdP metadata as XML content. === "SaaS" ![](images/sso-3.png) Complete the following fields: | Field | Description | |-------|---------------| | Name | Specify a meaningful name for the IdP configuration (for example, the organization name). This name will be used in the service provider details URL fields. Enter the name in lowercase, with no spaces. The value entered in this field updates the values provided in the **Service provider details** section. | | Entity ID | An arbitrary, unique-per-organization string (for example, myorg\_saml) that serves as the service provider Entity ID. Enter this value to establish a common identifier between the DataRobot (SP) app and the IdP SAML application. | | Metadata file | An XML document, provided by the IdP, with integration-specific information. Use this if the IdP did not provide a metadata URL. | === "Self-Managed" ![](images/sso-enhanced-3.png) Complete the following fields: | Field | Description | |-------|---------------| | Entity ID | An arbitrary, unique identifier provided by the identity provider. Enter this value to establish a common identifier between the DataRobot (SP) app and the IdP SAML application. | | Metadata file | An XML document, provided by the IdP, with integration-specific information. Use this if the IdP did not provide a metadata URL. | ### Manual settings {: #manual-settings } Select **Manual settings** if IdP metadata is not available. === "SaaS" ![](images/sso-4.png) Complete the following fields: | Field | Description | |-------|---------------| | Name | Specify a meaningful name for the IdP configuration (for example, the organization name). This name will be used in the service provider details URL fields. Enter the name in lowercase, with no spaces. The value entered in this field updates the values provided in the **Service provider details** section.
| | Entity ID | An arbitrary, unique-per-organization string (for example, myorg\_saml) that serves as the service provider Entity ID. Enter this value when manually configuring the IdP application for DataRobot. | | Identity Provider Single Sign-On URL | The URL that DataRobot contacts to initiate login authentication for the user. This is obtained from the SAML application you created for DataRobot in the IdP configuration. | | Identity Provider Single Sign-Out URL (optional) | The URL that DataRobot directs the user’s browser to after logout. This is obtained from the SAML application you created for DataRobot in the IdP configuration. If left blank, DataRobot redirects to the root DataRobot site. | | Issuer | The IdP-provided Entity ID obtained from the SAML application you created for DataRobot in the IdP configuration. **Note**: Although the DataRobot UI shows this as optional, it is *not* optional and must be set correctly. | | Certificate | The X.509 certificate, pasted or uploaded. The certificate is used to validate IdP signatures. This is obtained from the SAML application you created for DataRobot in the IdP configuration. | === "Self-Managed" ![](images/sso-enhanced-4.png) Complete the following fields: | Field | Description | |-------|---------------| | Entity ID | An arbitrary, unique identifier provided by the identity provider. Enter this value to establish a common identifier between the DataRobot (SP) app and the IdP SAML application. | | Identity Provider Single Sign-On URL | The URL that DataRobot contacts to initiate login authentication for the user. This is obtained from the SAML application you created for DataRobot in the IdP configuration. | | Identity Provider Single Sign-Out URL (optional) | The URL that DataRobot directs the user’s browser to after logout. This is obtained from the SAML application you created for DataRobot in the IdP configuration. If left blank, DataRobot redirects to the root DataRobot site. | | Issuer | The IdP-provided Entity ID obtained from the SAML application created for DataRobot in the IdP configuration. **Note**: Although the DataRobot UI shows this as optional, it is *not* optional and must be set correctly. | | Certificate | The X.509 certificate, pasted or uploaded. The certificate is used to validate IdP signatures. This is obtained from the SAML application you created for DataRobot in the IdP configuration. | ![](images/sso-enhanced-5.png) **Auto-generate Users** automatically adds new users to DataRobot upon initial sign-on. ### Mapping {: #mapping } All three configuration options allow you to define how attributes, groups, and roles are synchronized between DataRobot and the IdP. Mappings allow you to automatically provision users on DataRobot based on their settings in the IdP configuration. They also prevent individuals from teams _not_ configured for DataRobot from entering the system. Adding mappings both adds more restrictions on who can access DataRobot and controls which assets users can access. Without mappings, anyone in your organization who was manually added to the DataRobot system by an administrator can access the platform. ??? example "Mapping example" J_Doe joins Company A on Team A and J's manager sends a link to DataRobot. When J clicks the link, their profile is automatically created in the DataRobot system based on the mappings from the identity provider. Permissions are assigned based on the role as defined by J's company and how that role is defined in the IdP configuration.
On the other hand, let's say J joins Company A on Team B, but Team B isn’t configured to use DataRobot. If J's manager sends J a DataRobot link, when s/he clicks on the link, access to DataRobot is denied and no user record is created. You can set up the following mappings: === "Attributes" **Attribute** mapping allows you to map DataRobot attributes (data about the user) to the fields of the SAML response. In other words, because DataRobot and the IdP may use different names, this section allows you to configure the name of the field in the SAML response where DataRobot updates the user's display name, first name, last name, and email. ![](images/sso-6.png) === "Groups" **Groups** mapping allows you to create an unlimited number of mappings between IdP groups and existing [DataRobot groups](manage-groups). Mappings can only be one-to-one. ![](images/sso-7.png) To configure, set: Field | Description ---------- | ----------- Group attribute | The name, in the SAML response, that identifies the string as a group name. DataRobot group | The name of an existing DataRobot group to which the user will be assigned. Identity provider group | The name of the IdP group to which the user belongs. !!! note "Self-Managed AI Platform only" You can use [custom RBAC roles](custom-roles) (available for public preview) to map one default role to each IdP group in DataRobot by creating a new role and assigning it to the desired group. === "Roles" **Roles** mapping allows you to create an unlimited number of mappings between IdP and [DataRobot roles](rbac-ref). Mappings can be one-to-one, one-to-many, or many-to-many. ![](images/sso-8.png) To configure, set: Field | Description ---------- | ----------- Role attribute | The name, in the SAML response, that identifies the string as a named user role. DataRobot role | The name of the DataRobot role to assign to the user. Identity provider role | The name of the role in the IdP configuration that is assigned to the user. !!! note "Self-Managed AI Platform admins" See the [**Security Parameters**](#security-parameters) section to modify the relationship between DataRobot and the IdP to either increase or decrease the SAML protocol security strength. ### Set SSO requirements {: #set-sso-requirements } After all fields are validated and connection is successful, choose whether to make SSO optional or required using the toggles. ![](images/sso-9.png) === "SaaS" Toggle | Description ---------- | ----------- Enable single sign-on | **Makes SSO optional for users.** If enabled, users have the option to sign into DataRobot using SSO or another authentication method (i.e., username/password). Enforce single sign-on | **Makes SSO required for users.** If enabled, users in the organization must sign in using SSO. !!! note Do not enforce sign on until you have completed configuration and testing. === "Self-Managed" Toggle | Description ---------- | ----------- Enable single sign-on | **Makes SSO optional for users.** If enabled, users have the option to sign into DataRobot using SSO or another authentication method (i.e., username/password). Enforce single sign-on | **Makes SSO required for users.** If enabled, users in the organization must sign in using SSO. !!! note Do not enforce sign on until you have completed configuration and testing. 
If you have selected to enforce SSO, the username and password login is hidden and only the SSO login displays: ![](images/sso-11.png) If SSO is optional, users can choose their login method: ![](images/sso-12.png) Once SSO is configured, provide users with the **SP initiated login URL** to sign into DataRobot (found under [**Manage SSO > Service Provider Details**](#service-provider-details)). Managed AI Platform users cannot access SSO via the login screen at `app.datarobot.com`. After clicking the SSO button in DataRobot, users are redirected to the IdP's authentication page and then redirected back to DataRobot after successful sign on. ## Self-Managed AI Platform admins {: #self-managed-ai-platform-admins } The following is available only on the Self-Managed AI Platform. ### Security Parameters {: #security-parameters } **Security Parameters** modify the relationship between DataRobot and the IdP to either increase or decrease the SAML protocol security strength. ![](images/sso-enhanced-9.png) Use the following options to modify security strength: | Field | Description | |-------|---------------| | Allow unsolicited | When SSO is initiated in DataRobot (SP-initiated request), DataRobot sends an auth request with a unique ID to the IdP. The Idp then sends a response back using the same unique ID. Enabling this parameter means the ID in the request and response do not need to match (e.g., in case of IdP-initiated authentication). | | Auth requests signed | DataRobot signs Authentication Requests before being sent to the IdP to make it possible to validate there was no third-party involvement. In **Advanced Options > Client Config**, configure a [private key](#advanced-options) before enabling this parameter. | | Want assertions signed | DataRobot recommends keeping this parameter enabled as it makes the DataRobot application require the IdP to send signed assertions. Admins can disable signed assertions for testing and/or debugging. | | Want response signed | DataRobot recommends keeping this parameter enabled as it makes the DataRobot app require the IdP to send signed SAML responses. Admins can disable signed assertions for testing and/or debugging. | | Logout requests signed | DataRobot signs logout requests before sending them to the IdP to make it possible to validate there was no third-party involvement. Configure a **private key** before enabling this parameter. | See also the section on setting a [**private key**](#advanced-options) in **Advanced Options > Client Config**, which is required for the options `Auth requests signed` and `Logout requests signed`. ### Advanced Options {: #advanced-options } You can configure the following advanced options: === "Session & Binding" **Session & Binding** controls how DataRobot and the IdP communicate&mdash;SAML requirements vary by IdP. ![](images/sso-enhanced-10.png) To configure, set: Field | Description ---------- | ----------- User Session Length (sec) | Session cookie expiration time. The default length is one week. Reducing this number increases a rate of authentication requests to the IdP. SP Initiated Method | The HTTP method used to start SAML authentication negotiation. IdP Initiated Method | The HTTP method used to move user to DataRobot after successful authentication. === "Client Config" **Client Config** allows users to set private keys and certificates. 
![](images/sso-enhanced-11.png) To configure, set: Field | Description ---------- | ----------- Digest Algorithm | A message digest algorithm used for calculating hash values. Signature Algorithm | An algorithm used for producing signatures. SAML Config | A JSON file that fine-tunes the SAML client configuration (for example, setting a private key). A private key must be set before DataRobot can sign SAML authentication requests. <span id="secrets-setting">**To set a private key:**</span> ```json { "key_file": "/opt/datarobot/DataRobot-7.x.x/etc/certs/key.pem" } ``` Here, `key_file` is the path to the key PEM file. The same applies to a certificate file (use the `cert_file` field in that case). The following JSON can also be provided to upload secrets as content: ```json { "key_file_value": "-----BEGIN RSA PRIVATE KEY-----\n...\n-----END RSA PRIVATE KEY-----", "cert_file_value": "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----" } ``` Where there is an ellipsis in the example, insert the private key (in PEM format) as a single line. The same applies to the certificate file (use the `cert_file_value` field in that case). Describe a key pair to allow DataRobot to decrypt IdP SAML responses. This step is required if the IdP encrypts its responses. ??? example "Example: Describe key pair" ```json { "encryption_keypairs" : [{ "key_file" : "/opt/datarobot/DataRobot-7.x.x/etc/certs/key.pem", "cert_file" : "/opt/datarobot/DataRobot-7.x.x/etc/certs/cert.pem" }] } ``` Note that Okta requires an extra parameter (`id_attr_name_crypto`) when the key pair is described. ??? example "Example: Describe key pair in Okta" ```json { "encryption_keypairs" : [{ "key_file" : "/opt/datarobot/DataRobot-7.x.x/etc/certs/key.pem", "cert_file" : "/opt/datarobot/DataRobot-7.x.x/etc/certs/cert.pem" }], "id_attr_name_crypto" : "Id" } ``` The key pair can also be described by its content rather than its file paths. See the private key example above.
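If you prefer to embed the key and certificate contents rather than file paths, a small helper can produce the single-line `key_file_value`/`cert_file_value` JSON shown above. This is a convenience sketch with placeholder paths; `json.dumps` escapes the PEM newlines as `\n` automatically.

```python
import json

def pem_to_saml_config(key_path: str, cert_path: str) -> str:
    """Build SAML Config JSON with PEM contents embedded as single-line values."""
    with open(key_path) as f:
        key = f.read().strip()
    with open(cert_path) as f:
        cert = f.read().strip()
    # json.dumps escapes the newlines inside the PEM text as \n, producing the
    # single-line key_file_value / cert_file_value format shown above.
    return json.dumps({"key_file_value": key, "cert_file_value": cert}, indent=2)

# Placeholder paths; point these at your own key and certificate files.
print(pem_to_saml_config("key.pem", "cert.pem"))
```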
sso-ref
--- title: Authentication description: This section introduces authentication in DataRobot and includes links to information on SSO, 2FA, stored data credentials, and API key management. --- # Authentication {: #authentication } This section describes authentication in DataRobot. Topic | Describes... ----- | ------------ [SSO](sso-ref) | Configure DataRobot and an external Identity Provider (IdP) for user authentication via single sign-on (SSO). [Two-factor authentication](2fa) | Set up two-factor authentication (2FA). [API key management](api-key-mgmt) | Access tools for working with prediction requests for the DataRobot API. ## Authentication in DataRobot {: #authentication-in-datarobot} DataRobot ensures authentication and security using a variety of techniques. When using the database connectivity feature, for example, you are prompted for your database username and password credentials each time you perform an operation that accesses your organization's data sources. The password is encrypted before passing through DataRobot components and is only decrypted when DataRobot establishes a connection to the database. DataRobot does not store the username or password in any format. === "SaaS" To log into the application website, users can choose to authenticate by providing a username and password or they can delegate authentication to Google. The authentication process is handled over HTTPS using TLS 1.2 to the application server. When the user sets their password, it is securely stored in the database. Before the password is stored, it is hashed and uniquely salted using SHA-512 and further protected with Password-Based Key Derivation Function 2 (PBKDF2). The original password is discarded and never permanently stored. === "Self-Managed" To log into the application website, users can choose to authenticate by providing a username and password or delegate authentication to LDAP. SSO using SAML 2.0 is also supported. The authentication process is handled over HTTPS using TLS 1.2 to the application server. When the user sets their password, it is securely stored in the database. Before the password is stored, it is hashed and uniquely salted using SHA-512 and further protected with Password-Based Key Derivation Function 2 (PBKDF2). The original password is discarded and never permanently stored. DataRobot also provides enhancements to password-based authentication, including support for multifactor authentication (MFA) with software tokens generated using Time-based One-time Password (TOTP). All API communications use TLS 1.2 to protect the confidentiality of authentication materials. When interacting with the DataRobot API, authentication is performed using a bearer token contained in the HTTP Authorization header. Use the same authentication method when interacting with prediction servers via the API. While it is possible to authenticate using a username + API token (basic authentication) or just an API token, these authentication methods are deprecated and not recommended. An additional HTTP header named datarobot-key is also required to further limit access to the prediction servers.
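As a minimal sketch of the header scheme described above, the snippet below sends a prediction request with a bearer token and the datarobot-key header. The URL, deployment ID, file name, and key values are placeholders; consult your prediction server's documentation for the exact endpoint and payload format.

```python
import requests

# Placeholder values; replace with your own prediction server URL, deployment ID,
# API token, and DataRobot key.
PREDICTION_URL = "https://<prediction-server>/predApi/v1.0/deployments/<deployment_id>/predictions"
API_TOKEN = "<your_api_token>"
DATAROBOT_KEY = "<your_datarobot_key>"

with open("scoring_data.csv", "rb") as f:
    response = requests.post(
        PREDICTION_URL,
        headers={
            "Authorization": f"Bearer {API_TOKEN}",  # bearer token authentication
            "DataRobot-Key": DATAROBOT_KEY,          # additional header for prediction servers
            "Content-Type": "text/csv",
        },
        data=f,
    )

response.raise_for_status()
print(response.json())
```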
index