{"id": "f7c061c33796-0", "text": "Quickstart\n\nInstalling MLflow\n\nYou install MLflow by running:\n\n# Install MLflow\npip\n\ninstall\n\nmlflow\n\n# Install MLflow with extra ML libraries and 3rd-party tools\npip\n\ninstall\n\nmlflow\n\n[extras\n\n# Install a lightweight version of MLflow\npip\n\ninstall\n\nmlflow-skinny\n\ninstall.packages\n\n\"mlflow\"\n\nNote\n\nMLflow works on MacOS. If you run into issues with the default system Python on MacOS, try\ninstalling Python 3 through the Homebrew package manager using\nbrew install python. (In this case, installing MLflow is now pip3 install mlflow).\n\nNote\n\nTo use certain MLflow modules and functionality (ML model persistence/inference,\nartifact storage options, etc), you may need to install extra libraries. For example, the\nmlflow.tensorflow module requires TensorFlow to be installed. See\nhttps://github.com/mlflow/mlflow/blob/master/EXTRA_DEPENDENCIES.rst for more details.\n\nNote\n\nWhen using MLflow skinny, you may need to install additional dependencies if you wish to use\ncertain MLflow modules and functionalities. For example, usage of SQL-based storage for\nMLflow Tracking (e.g. mlflow.set_tracking_uri(\"sqlite:///my.db\")) requires\npip install mlflow-skinny sqlalchemy alembic sqlparse. If using MLflow skinny for serving,\na minimally functional installation would require pip install mlflow-skinny flask.\n\nAt this point we recommend you follow the tutorial for a walk-through on how you\ncan leverage MLflow in your daily workflow.\n\nDownloading the Quickstart\n\nDownload the quickstart code by cloning MLflow via git clone https://github.com/mlflow/mlflow,\nand cd into the examples subdirectory of the repository. We\u2019ll use this working directory for\nrunning the quickstart.", "metadata": {"url": "https://mlflow.org/docs/latest/quickstart.html"}} {"id": "f7c061c33796-1", "text": "We avoid running directly from our clone of MLflow as doing so would cause the tutorial to\nuse MLflow from source, rather than your PyPi installation of MLflow.\n\nUsing the Tracking API\n\nThe MLflow Tracking API lets you log metrics and artifacts (files) from your data\nscience code and see a history of your runs. 
Using the Tracking API

The MLflow Tracking API lets you log metrics and artifacts (files) from your data science code and see a history of your runs. You can try it out by writing a simple Python script as follows (this example is also included in quickstart/mlflow_tracking.py):

import os
from random import random, randint
from mlflow import log_metric, log_param, log_artifacts

if __name__ == "__main__":
    # Log a parameter (key-value pair)
    log_param("param1", randint(0, 100))

    # Log a metric; metrics can be updated throughout the run
    log_metric("foo", random())
    log_metric("foo", random())
    log_metric("foo", random())

    # Log an artifact (output file)
    if not os.path.exists("outputs"):
        os.makedirs("outputs")
    with open("outputs/test.txt", "w") as f:
        f.write("hello world!")
    log_artifacts("outputs")

The same example in R:

library(mlflow)

# Log a parameter (key-value pair)
mlflow_log_param("param1", 5)

# Log a metric; metrics can be updated throughout the run
mlflow_log_metric("foo", 1)
mlflow_log_metric("foo", 2)
mlflow_log_metric("foo", 3)

# Log an artifact (output file)
writeLines("Hello world!", "output.txt")
mlflow_log_artifact("output.txt")

Viewing the Tracking UI

By default, wherever you run your program, the tracking API writes data into files in a local ./mlruns directory. You can then run MLflow's Tracking UI:

mlflow ui

Or, from R:

mlflow_ui()

and view it at http://localhost:5000.

Note

If you see message [CRITICAL] WORKER TIMEOUT in the MLflow UI or error logs, try using http://localhost:5000 instead of http://127.0.0.1:5000.

Running MLflow Projects

MLflow allows you to package code and its dependencies as a project that can be run in a reproducible fashion on other data. Each project includes its code and an MLproject file that defines its dependencies (for example, Python environment) as well as what commands can be run in the project and what arguments they take.

You can easily run existing projects with the mlflow run command, which runs a project from either a local directory or a GitHub URI:

mlflow run sklearn_elasticnet_wine -P alpha=0.5

mlflow run https://github.com/mlflow/mlflow-example.git -P alpha=5.0

There's a sample project in tutorial, including an MLproject file that specifies its dependencies. If you haven't configured a tracking server, projects log their Tracking API data in the local mlruns directory so you can see these runs using mlflow ui.

Note

By default mlflow run installs all dependencies using virtualenv. To run a project without using virtualenv, you can provide the --env-manager=local option to mlflow run. In this case, you must ensure that the necessary dependencies are already installed in your Python environment.

For more information, see MLflow Projects.
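Projects can also be launched from Python rather than the CLI. The following is a sketch using mlflow.projects.run() with the same example project; the alpha value mirrors the command above, and everything else is just a minimal driver, not taken from the original page:

import mlflow

# Programmatic equivalent of:
#   mlflow run https://github.com/mlflow/mlflow-example.git -P alpha=5.0
submitted = mlflow.projects.run(
    uri="https://github.com/mlflow/mlflow-example.git",
    parameters={"alpha": 5.0},
)
print(submitted.run_id)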
Saving and Serving Models

MLflow includes a generic MLmodel format for saving models from a variety of tools in diverse flavors. For example, many models can be served as Python functions, so an MLmodel file can declare how each model should be interpreted as a Python function in order to let various tools serve it. MLflow also includes tools for running such models locally and exporting them to Docker containers or commercial serving platforms.

To illustrate this functionality, the mlflow.sklearn package can log scikit-learn models as MLflow artifacts and then load them again for serving. There is an example training application in sklearn_logistic_regression/train.py that you can run as follows:

python sklearn_logistic_regression/train.py

When you run the example, it outputs an MLflow run ID for that experiment. If you look at mlflow ui, you will also see that the run saved a model folder containing an MLmodel description file and a pickled scikit-learn model. You can pass the run ID and the path of the model within the artifacts directory (here "model") to various tools. For example, MLflow includes a simple REST server for python-based models:

mlflow models serve -m runs:/<RUN_ID>/model

Note

By default the server runs on port 5000. If that port is already in use, use the --port option to specify a different port. For example: mlflow models serve -m runs:/<RUN_ID>/model --port 1234

Once you have started the server, you can pass it some sample data and see the predictions.

The following example uses curl to send a JSON-serialized pandas DataFrame with the split orientation to the model server. For more information about the input data formats accepted by the pyfunc model server, see the MLflow deployment tools documentation.

curl -d '{"dataframe_split": {"columns": ["x"], "data": [[1], [-1]]}}' -H 'Content-Type: application/json' -X POST localhost:5000/invocations

which returns the model's predictions for the two input rows.

For more information, see MLflow Models.
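If you prefer to score the logged model in-process instead of over REST, the same artifact can be loaded back as a generic Python function. This is a sketch rather than part of the original quickstart; runs:/<RUN_ID>/model is a placeholder for the run ID printed by train.py:

import mlflow.pyfunc
import pandas as pd

# Load the logged model as a generic pyfunc model.
model = mlflow.pyfunc.load_model("runs:/<RUN_ID>/model")

# Score the same two rows that the curl example sends to the REST server.
print(model.predict(pd.DataFrame({"x": [1, -1]})))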
Logging to a Remote Tracking Server

In the examples above, MLflow logs data to the local filesystem of the machine it's running on. To manage results centrally or share them across a team, you can configure MLflow to log to a remote tracking server. To get access to a remote tracking server:

Launch a Tracking Server on a Remote Machine

Launch a tracking server on a remote machine.

You can then log to the remote tracking server by setting the MLFLOW_TRACKING_URI environment variable to your server's URI, or by adding the following to the start of your program:

import mlflow

mlflow.set_tracking_uri("http://YOUR-SERVER:4040")
mlflow.set_experiment("my-experiment")

Or, in R:

library(mlflow)
install_mlflow()
mlflow_set_tracking_uri("http://YOUR-SERVER:4040")
mlflow_set_experiment("/my-experiment")

Log to Databricks Community Edition

Alternatively, sign up for Databricks Community Edition, a free service that includes a hosted tracking server. Note that Community Edition is intended for quick experimentation rather than production use cases. After signing up, run databricks configure to create a credentials file for MLflow, specifying https://community.cloud.databricks.com as the host.

To log to the Community Edition server, set the MLFLOW_TRACKING_URI environment variable to "databricks", or add the following to the start of your program:

import mlflow

mlflow.set_tracking_uri("databricks")
# Note: on Databricks, the experiment name passed to set_experiment must be a valid path
# in the workspace, like '/Users/<your-username>/my-experiment'. See
# https://docs.databricks.com/user-guide/workspace.html for more info.
mlflow.set_experiment("/my-experiment")

Or, in R:

library(mlflow)
install_mlflow()
mlflow_set_tracking_uri("databricks")
# Note: on Databricks, the experiment name passed to mlflow_set_experiment must be a
# valid path in the workspace, like '/Users/<your-username>/my-experiment'. See
# https://docs.databricks.com/user-guide/workspace.html for more info.
mlflow_set_experiment("/my-experiment")

MLflow Tracking

The MLflow Tracking component is an API and UI for logging parameters, code versions, metrics, and output files when running your machine learning code and for later visualizing the results. MLflow Tracking lets you log and query experiments using the Python, REST, R, and Java APIs.

Table of Contents

Concepts

Where Runs Are Recorded

How Runs and Artifacts are Recorded

Scenario 1: MLflow on localhost
Scenario 2: MLflow on localhost with SQLite
Scenario 3: MLflow on localhost with Tracking Server
Scenario 4: MLflow with remote Tracking Server, backend and artifact stores
Scenario 5: MLflow Tracking Server enabled with proxied artifact storage access
Scenario 6: MLflow Tracking Server used exclusively as proxied access host for artifact storage access

Logging Data to Runs

Logging Functions
Launching Multiple Runs in One Program
Performance Tracking with Metrics
Visualizing Metrics

Automatic Logging

Scikit-learn
Keras
Gluon
XGBoost
LightGBM
Statsmodels
Spark
Fastai
Pytorch

Organizing Runs in Experiments

Managing Experiments and Runs with the Tracking Service API

Tracking UI

Querying Runs Programmatically

MLflow Tracking Servers

Storage
Networking
Using the Tracking Server for proxied artifact access
Logging to a Tracking Server

System Tags

Concepts

MLflow Tracking is organized around the concept of runs, which are executions of some piece of data science code. Each run records the following information:

Git commit hash used for the run, if it was run from an MLflow Project.

Start and end time of the run.

Name of the file to launch the run, or the project name and entry point for the run if run from an MLflow Project.

Key-value input parameters of your choice. Both keys and values are strings.
Key-value metrics, where the value is numeric. Each metric can be updated throughout the course of the run (for example, to track how your model's loss function is converging), and MLflow records and lets you visualize the metric's full history.

Output files in any format. For example, you can record images (for example, PNGs), models (for example, a pickled scikit-learn model), and data files (for example, a Parquet file) as artifacts.

You can record runs using MLflow Python, R, Java, and REST APIs from anywhere you run your code. For example, you can record them in a standalone program, on a remote cloud machine, or in an interactive notebook. If you record runs in an MLflow Project, MLflow remembers the project URI and source version.

You can optionally organize runs into experiments, which group together runs for a specific task. You can create an experiment using the mlflow experiments CLI, with mlflow.create_experiment(), or using the corresponding REST parameters. The MLflow API and UI let you create and search for experiments.

Once your runs have been recorded, you can query them using the Tracking UI or the MLflow API.

Where Runs Are Recorded

MLflow runs can be recorded to local files, to a SQLAlchemy-compatible database, or remotely to a tracking server. By default, the MLflow Python API logs runs locally to files in an mlruns directory wherever you ran your program. You can then run mlflow ui to see the logged runs.

To log runs remotely, set the MLFLOW_TRACKING_URI environment variable to a tracking server's URI or call mlflow.set_tracking_uri().

There are different kinds of remote tracking URIs:

Local file path (specified as file:/my/local/dir), where data is just directly stored locally.

Database encoded as <dialect>+<driver>://<username>:<password>@<host>:<port>/<database>. MLflow supports the dialects mysql, mssql, sqlite, and postgresql. For more details, see SQLAlchemy database uri.

HTTP server (specified as https://my-server:5000), which is a server hosting an MLflow tracking server.

Databricks workspace (specified as databricks or as databricks://<profileName>, a Databricks CLI profile). Refer to Access the MLflow tracking server from outside Databricks [AWS] [Azure], or the quickstart to easily get started with hosted MLflow on Databricks Community Edition.
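As a minimal sketch of switching between these URI kinds from Python (the SQLite filename and server address below are illustrative, not from the original page):

import mlflow

# Log to a local SQLite database instead of the default ./mlruns directory.
mlflow.set_tracking_uri("sqlite:///mlruns.db")

# Or point at a running tracking server instead (illustrative address):
# mlflow.set_tracking_uri("https://my-server:5000")

with mlflow.start_run():
    mlflow.log_param("backend", "sqlite")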
How Runs and Artifacts are Recorded

As mentioned above, MLflow runs can be recorded to local files, to a SQLAlchemy-compatible database, or remotely to a tracking server. MLflow artifacts can be persisted to local files and a variety of remote file storage solutions. For storing runs and artifacts, MLflow uses two components for storage: the backend store and the artifact store. While the backend store persists MLflow entities (runs, parameters, metrics, tags, notes, metadata, etc.), the artifact store persists artifacts (files, models, images, in-memory objects, model summaries, etc.).

The MLflow server can be configured with an artifacts HTTP proxy, passing artifact requests through the tracking server to store and retrieve artifacts without having to interact with underlying object store services. Usage of the proxied artifact access feature is described in Scenarios 5 and 6 below.

The MLflow client can interface with a variety of backend and artifact storage configurations. Here are six common configuration scenarios:

Scenario 1: MLflow on localhost

Many developers run MLflow on their local machine, where both the backend and artifact store share a directory on the local filesystem (./mlruns), as shown in the diagram. The MLflow client directly interfaces with an instance of a FileStore and LocalArtifactRepository.

In this simple scenario, the MLflow client uses the following interfaces to record MLflow entities and artifacts:

An instance of a LocalArtifactRepository (to store artifacts)

An instance of a FileStore (to save MLflow entities)

Scenario 2: MLflow on localhost with SQLite

Many users also run MLflow on their local machines with a SQLAlchemy-compatible database: SQLite. In this case, artifacts are stored under the local ./mlruns directory, and MLflow entities are inserted in a SQLite database file mlruns.db.

In this scenario, the MLflow client uses the following interfaces to record MLflow entities and artifacts:

An instance of a LocalArtifactRepository (to save artifacts)

An instance of an SQLAlchemyStore (to store MLflow entities to a SQLite file mlruns.db)

Scenario 3: MLflow on localhost with Tracking Server

Similar to scenario 1, but a tracking server is launched, listening for REST request calls at the default port 5000. The arguments supplied to the mlflow server command dictate what backend and artifact stores are used. The default is a local FileStore. For example, if a user launched a tracking server as mlflow server --backend-store-uri sqlite:///mydb.sqlite, then SQLite would be used for backend storage instead.

As in scenario 1, MLflow uses a local mlruns filesystem directory as a backend store and artifact store.
With a tracking server running, the MLflow client interacts with the tracking server via REST requests, as shown in the diagram.

Command to run the tracking server in this configuration:

mlflow server --backend-store-uri file:///path/to/mlruns --no-serve-artifacts

To store all runs' MLflow entities, the MLflow client interacts with the tracking server via a series of REST requests:

Part 1a and b:

The MLflow client creates an instance of a RestStore and sends REST API requests to log MLflow entities

The Tracking Server creates an instance of a FileStore to save MLflow entities and writes directly to the local mlruns directory

For the artifacts, the MLflow client interacts with the tracking server via a REST request:

Part 2a, b, and c:

The MLflow client uses RestStore to send a REST request to fetch the artifact store URI location

The Tracking Server responds with an artifact store URI location

The MLflow client creates an instance of a LocalArtifactRepository and saves artifacts to the local filesystem location specified by the artifact store URI (a subdirectory of mlruns)

Scenario 4: MLflow with remote Tracking Server, backend and artifact stores

MLflow also supports distributed architectures, where the tracking server, backend store, and artifact store reside on remote hosts. This example scenario depicts an architecture with a remote MLflow Tracking Server, a Postgres database for backend entity storage, and an S3 bucket for artifact storage.

Command to run the tracking server in this configuration:

mlflow server --backend-store-uri postgresql://user:password@postgres:5432/mlflowdb --default-artifact-root s3://bucket_name --host remote_host --no-serve-artifacts

To record all runs' MLflow entities, the MLflow client interacts with the tracking server via a series of REST requests:

Part 1a and b:

The MLflow client creates an instance of a RestStore and sends REST API requests to log MLflow entities

The Tracking Server creates an instance of an SQLAlchemyStore and connects to the remote host to insert MLflow entities in the database

For artifact logging, the MLflow client interacts with the remote Tracking Server and artifact storage host:

Part 2a, b, and c:

The MLflow client uses RestStore to send a REST request to fetch the artifact store URI location from the Tracking Server

The Tracking Server responds with an artifact store URI location (an S3 storage URI in this case)

The MLflow client creates an instance of an S3ArtifactRepository, connects to the remote AWS host using the boto client libraries, and uploads the artifacts to the S3 bucket URI location

FileStore, RestStore, and SQLAlchemyStore are concrete implementations of the abstract class AbstractStore, and LocalArtifactRepository and S3ArtifactRepository are concrete implementations of the abstract class ArtifactRepository.

Scenario 5: MLflow Tracking Server enabled with proxied artifact storage access

MLflow's Tracking Server supports utilizing the host as a proxy server for operations involving artifacts.
Once configured with the appropriate access requirements, an administrator can start the tracking server to enable assumed-role operations involving the saving, loading, or listing of model artifacts, images, documents, and files. This eliminates the need to allow end users to have direct path access to a remote object store (e.g., s3, adls, gcs, hdfs) for artifact handling and eliminates the need for an end-user to provide access credentials to interact with an underlying object store.

Command to run the tracking server in this configuration:

# Artifact access is enabled through the proxy URI 'mlflow-artifacts:/',
# giving users access to this location without having to manage credentials
# or permissions.
mlflow server --backend-store-uri postgresql://user:password@postgres:5432/mlflowdb --artifacts-destination s3://bucket_name --host remote_host

Enabling the Tracking Server to perform proxied artifact access in order to route client artifact requests to an object store location:

Part 1a and b:

The MLflow client creates an instance of a RestStore and sends REST API requests to log MLflow entities

The Tracking Server creates an instance of an SQLAlchemyStore and connects to the remote host for inserting tracking information in the database (i.e., metrics, parameters, tags, etc.)

Part 1c and d:

Retrieval requests by the client return information from the configured SQLAlchemyStore table

Part 2a and b:

Logging events for artifacts are made by the client using the HttpArtifactRepository to write files to the MLflow Tracking Server

The Tracking Server then writes these files to the configured object store location with assumed role authentication

Part 2c and d:

Retrieving artifacts from the configured backend store for a user request is done with the same authorized authentication that was configured at server start

Artifacts are passed to the end user through the Tracking Server through the interface of the HttpArtifactRepository

Note

When an experiment is created, the artifact storage location from the configuration of the tracking server is logged in the experiment's metadata. When enabling proxied artifact storage, any existing experiments that were created while operating a tracking server in non-proxied mode will continue to use a non-proxied artifact location.
In order to use proxied artifact logging, a new experiment must be created. If the intention of enabling a tracking server in --serve-artifacts mode is to eliminate the need for a client to have authentication to the underlying storage, new experiments should be created for use by clients so that the tracking server can handle authentication after this migration.

Warning

The MLflow artifact proxied access service enables users to have an assumed role of access to all artifacts that are accessible to the Tracking Server. Administrators who are enabling this feature should ensure that the access level granted to the Tracking Server for artifact operations meets all security requirements prior to enabling the Tracking Server to operate in a proxied file handling role.

Scenario 6: MLflow Tracking Server used exclusively as proxied access host for artifact storage access

MLflow's Tracking Server can be used exclusively in a proxied artifact handling role. Specifying the --artifacts-only flag restricts an MLflow server instance to only serve artifact-related API requests by proxying to an underlying object store.

Note

Starting a Tracking Server with the --artifacts-only parameter will disable all Tracking Server functionality apart from API calls related to saving, loading, or listing artifacts. Creating runs, logging metrics or parameters, and accessing other attributes about experiments are all not permitted in this mode.

Command to run the tracking server in this configuration:

mlflow server --artifacts-destination s3://bucket_name --artifacts-only --host remote_host

Running an MLflow server in --artifacts-only mode:

Part 1a and b:

The MLflow client will interact with the Tracking Server using the HttpArtifactRepository interface.

Listing artifacts associated with a run will be conducted from the Tracking Server using the access credentials set at server startup

Saving of artifacts will transmit the files to the Tracking Server, which will then write the files to the file store using credentials set at server start

Part 1c and d:

Listing of artifact responses will pass from the file store through the Tracking Server to the client

Loading of artifacts will utilize the access credentials of the MLflow Tracking Server to acquire the files, which are then passed on to the client

Note

If migrating from Scenario 5 to Scenario 6 due to request volumes, it is important to perform two validations:

Ensure that the new tracking server that is operating in --artifacts-only mode has access permissions to the location set by --artifacts-destination that the former multi-role tracking server had.

The former multi-role tracking server that was serving artifacts must have the --serve-artifacts argument disabled.

Warning

Operating the Tracking Server in proxied artifact access mode by setting the parameter --serve-artifacts during server start, even in --artifacts-only mode, will give access to artifacts residing on the object store to any user that has authentication to access the Tracking Server.
Ensure that any per-user security posture that you are required to maintain is applied accordingly to the proxied access that the Tracking Server will have in this mode of operation.

Logging Data to Runs

You can log data to runs using the MLflow Python, R, Java, or REST API. This section shows the Python API.

In this section:

Logging Functions

Launching Multiple Runs in One Program

Performance Tracking with Metrics

Visualizing Metrics

Logging Functions

mlflow.set_tracking_uri() connects to a tracking URI. You can also set the MLFLOW_TRACKING_URI environment variable to have MLflow find a URI from there. In both cases, the URI can either be an HTTP/HTTPS URI for a remote server, a database connection string, or a local path to log data to a directory. The URI defaults to mlruns.

mlflow.get_tracking_uri() returns the current tracking URI.

mlflow.create_experiment() creates a new experiment and returns its ID. Runs can be launched under the experiment by passing the experiment ID to mlflow.start_run.

mlflow.set_experiment() sets an experiment as active. If the experiment does not exist, it creates a new experiment. If you do not specify an experiment in mlflow.start_run(), new runs are launched under this experiment.

mlflow.start_run() returns the currently active run (if one exists), or starts a new run and returns a mlflow.ActiveRun object usable as a context manager for the current run. You do not need to call start_run explicitly: calling one of the logging functions with no active run automatically starts a new one.

Note

If the argument run_name is not set within mlflow.start_run(), a unique run name will be generated for each run.

mlflow.end_run() ends the currently active run, if any, taking an optional run status.

mlflow.active_run() returns a mlflow.entities.Run object corresponding to the currently active run, if any. Note: You cannot access currently-active run attributes (parameters, metrics, etc.) through the run returned by mlflow.active_run. In order to access such attributes, use the MlflowClient as follows:

client = mlflow.MlflowClient()
data = client.get_run(mlflow.active_run().info.run_id).data

mlflow.last_active_run() returns a mlflow.entities.Run object corresponding to the currently active run, if any. Otherwise, it returns a mlflow.entities.Run object corresponding to the last run started from the current Python process that reached a terminal status (i.e. FINISHED, FAILED, or KILLED).

mlflow.log_param() logs a single key-value param in the currently active run. The key and value are both strings. Use mlflow.log_params() to log multiple params at once.

mlflow.log_metric() logs a single key-value metric. The value must always be a number. MLflow remembers the history of values for each metric. Use mlflow.log_metrics() to log multiple metrics at once.

mlflow.set_tag() sets a single key-value tag in the currently active run. The key and value are both strings. Use mlflow.set_tags() to set multiple tags at once.

mlflow.log_artifact() logs a local file or directory as an artifact, optionally taking an artifact_path to place it within the run's artifact URI. Run artifacts can be organized into directories, so you can place the artifact in a directory this way.

mlflow.log_artifacts() logs all the files in a given directory as artifacts, again taking an optional artifact_path.

mlflow.get_artifact_uri() returns the URI that artifacts from the current run should be logged to.
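As a small illustration of the artifact functions above (the file name, contents, and artifact_path are made up for this sketch, not taken from the page):

import os
import mlflow

with mlflow.start_run():
    os.makedirs("outputs", exist_ok=True)
    with open("outputs/summary.txt", "w") as f:
        f.write("placeholder run summary\n")

    # Place a single file under the "reports" directory of the run's artifact URI.
    mlflow.log_artifact("outputs/summary.txt", artifact_path="reports")

    # Log every file in the directory, and show where artifacts end up.
    mlflow.log_artifacts("outputs")
    print(mlflow.get_artifact_uri())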
Launching Multiple Runs in One Program

Sometimes you want to launch multiple MLflow runs in the same program: for example, maybe you are performing a hyperparameter search locally or your experiments are just very fast to run. This is easy to do because the ActiveRun object returned by mlflow.start_run() is a Python context manager. You can "scope" each run to just one block of code as follows:

with mlflow.start_run():
    mlflow.log_param("x", 1)
    mlflow.log_metric("y", 2)
    ...

The run remains open throughout the with statement, and is automatically closed when the statement exits, even if it exits due to an exception.

Performance Tracking with Metrics

You log MLflow metrics with log methods in the Tracking API. The log methods support two alternative methods for distinguishing metric values on the x-axis: timestamp and step.

timestamp is an optional long value that represents the time that the metric was logged. timestamp defaults to the current time. step is an optional integer that represents any measurement of training progress (number of training iterations, number of epochs, and so on). step defaults to 0 and has the following requirements and properties:

Must be a valid 64-bit integer value.

Can be negative.

Can be out of order in successive write calls. For example, (1, 3, 2) is a valid sequence.

Can have "gaps" in the sequence of values specified in successive write calls. For example, (1, 5, 75, -20) is a valid sequence.

If you specify both a timestamp and a step, metrics are recorded against both axes independently.

Examples

Python:

with mlflow.start_run():
    for epoch in range(0, 3):
        mlflow.log_metric(key="quality", value=2 * epoch, step=epoch)

Java:

MlflowClient client = new MlflowClient();
RunInfo run = client.createRun();
for (int epoch = 0; epoch < 3; epoch++) {
    client.logMetric(run.getRunId(), "quality", 2 * epoch, System.currentTimeMillis(), epoch);
}

Visualizing Metrics

Here is an example plot of the quick start tutorial with the step x-axis and two timestamp axes:

X-axis step

X-axis wall time - graphs the absolute time each metric was logged

X-axis relative time - graphs the time relative to the first metric logged, for each run

Automatic Logging

Automatic logging allows you to log metrics, parameters, and models without the need for explicit log statements.

There are two ways to use autologging:

Call mlflow.autolog() before your training code. This will enable autologging for each supported library you have installed as soon as you import it.

Use library-specific autolog calls for each library you use in your code. See below for examples.
The following libraries support autologging:

Scikit-learn

Keras

Gluon

XGBoost

LightGBM

Statsmodels

Spark

Fastai

Pytorch

For flavors that automatically save models as an artifact, additional files for dependency management are logged.

You can access the most recent autolog run through the mlflow.last_active_run() function. Here's a short sklearn autolog example that makes use of this function:

import mlflow

from sklearn.model_selection import train_test_split
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

mlflow.autolog()

db = load_diabetes()
X_train, X_test, y_train, y_test = train_test_split(db.data, db.target)

# Create and train models.
rf = RandomForestRegressor(n_estimators=100, max_depth=6, max_features=3)
rf.fit(X_train, y_train)

# Use the model to make predictions on the test dataset.
predictions = rf.predict(X_test)

autolog_run = mlflow.last_active_run()

Scikit-learn

Call mlflow.sklearn.autolog() before your training code to enable automatic logging of sklearn metrics, params, and models. See example usage here.

Autologging for estimators (e.g. LinearRegression) and meta estimators (e.g. Pipeline) creates a single run and logs:

Metrics: Training score obtained by estimator.score
Parameters: Parameters obtained by estimator.get_params
Tags: Class name; fully qualified class name
Artifacts: Fitted estimator

Autologging for parameter search estimators (e.g. GridSearchCV) creates a single parent run and nested child runs (one child run per parameter combination) containing the following data:

Parent run:
  Metrics: Training score
  Parameters: Parameter search estimator's parameters; best parameter combination
  Tags: Class name; fully qualified class name
  Artifacts: Fitted parameter search estimator; fitted best estimator; search results CSV file

Child runs:
  Metrics: CV test score for each parameter combination
  Parameters: Each parameter combination
  Tags: Class name; fully qualified class name
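A small sketch of the parameter search case described above; the dataset, estimator, and parameter grid are arbitrary choices for illustration and are not taken from the page:

import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

mlflow.sklearn.autolog()

X, y = load_iris(return_X_y=True)
param_grid = {"C": [0.1, 1.0, 10.0], "kernel": ["linear", "rbf"]}

# Fitting the search creates one parent run plus nested child runs
# for parameter combinations, as described above.
search = GridSearchCV(SVC(), param_grid, cv=3)
search.fit(X, y)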
Keras

Call mlflow.tensorflow.autolog() before your training code to enable automatic logging of metrics and parameters. See example usages with Keras and TensorFlow.

Note that only tensorflow>=2.3 is supported. The respective metrics associated with tf.estimator and EarlyStopping are automatically logged. As an example, try running the MLflow TensorFlow examples.

Autologging captures the following information:

tf.keras:
  Metrics: Training loss; validation loss; user-specified metrics
  Parameters: fit() parameters; optimizer name; learning rate; epsilon
  Artifacts: Model summary on training start; MLflow Model (Keras model); TensorBoard logs on training end

tf.keras.callbacks.EarlyStopping:
  Metrics: Metrics from the EarlyStopping callbacks, for example stopped_epoch, restored_epoch, restore_best_weight, etc.
  Parameters: fit() parameters from EarlyStopping, for example min_delta, patience, baseline, restore_best_weights, etc.

If no active run exists when autolog() captures data, MLflow will automatically create a run to log information to. Also, MLflow will then automatically end the run once training ends via calls to tf.keras.fit().

If a run already exists when autolog() captures data, MLflow will log to that run but not automatically end that run after training.

Gluon

Call mlflow.gluon.autolog() before your training code to enable automatic logging of metrics and parameters. See example usages with Gluon.

Autologging captures the following information:

Gluon:
  Metrics: Training loss; validation loss; user-specified metrics
  Parameters: Number of layers; optimizer name; learning rate; epsilon
  Artifacts: MLflow Model (Gluon model) on training end

XGBoost

Call mlflow.xgboost.autolog() before your training code to enable automatic logging of metrics and parameters.

Autologging captures the following information:

XGBoost:
  Metrics: user-specified metrics
  Parameters: xgboost.train parameters
  Artifacts: MLflow Model (XGBoost model) with model signature on training end; feature importance; input example

If early stopping is activated, metrics at the best iteration will be logged as an extra step/iteration.

LightGBM

Call mlflow.lightgbm.autolog() before your training code to enable automatic logging of metrics and parameters.

Autologging captures the following information:

LightGBM:
  Metrics: user-specified metrics
  Parameters: lightgbm.train parameters
  Artifacts: MLflow Model (LightGBM model) with model signature on training end; feature importance; input example

If early stopping is activated, metrics at the best iteration will be logged as an extra step/iteration.

Statsmodels

Call mlflow.statsmodels.autolog() before your training code to enable automatic logging of metrics and parameters.

Autologging captures the following information:

Statsmodels:
  Metrics: user-specified metrics
  Parameters: statsmodels.base.model.Model.fit parameters
  Artifacts: MLflow Model (statsmodels.base.wrapper.ResultsWrapper) on training end

Note

Each model subclass that overrides fit expects and logs its own parameters.
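A minimal sketch of the statsmodels flavor; the synthetic data and OLS model are illustrative only and not part of the original page:

import numpy as np
import statsmodels.api as sm
import mlflow.statsmodels

mlflow.statsmodels.autolog()

# Fit an ordinary least squares model on synthetic data; autologging
# records the fit() parameters and the resulting model.
X = sm.add_constant(np.random.rand(100, 2))
y = X @ np.array([1.0, 2.0, -1.0]) + 0.1 * np.random.rand(100)

with mlflow.start_run():
    results = sm.OLS(y, X).fit()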
Spark

Initialize a SparkSession with the mlflow-spark JAR attached (e.g. SparkSession.builder.config("spark.jars.packages", "org.mlflow.mlflow-spark")) and then call mlflow.spark.autolog() to enable automatic logging of Spark datasource information at read-time, without the need for explicit log statements. Note that autologging of Spark ML (MLlib) models is not yet supported.

Autologging captures the following information:

Spark:
  Tags: Single tag containing source path, version, and format. The tag contains one line per datasource.

Note

Moreover, Spark datasource autologging occurs asynchronously - as such, it's possible (though unlikely) to see race conditions when launching short-lived MLflow runs that result in datasource information not being logged.

Fastai

Call mlflow.fastai.autolog() before your training code to enable automatic logging of metrics and parameters. See an example usage with Fastai.

Autologging captures the following information:

fastai:
  Metrics: user-specified metrics
  Parameters: Logs optimizer data as parameters, for example epochs, lr, opt_func, etc.; logs the parameters of the EarlyStoppingCallback and OneCycleScheduler callbacks
  Artifacts: Model checkpoints are logged to a 'models' directory; MLflow Model (fastai Learner model) on training end; model summary text is logged

Pytorch

Call mlflow.pytorch.autolog() before your Pytorch Lightning training code to enable automatic logging of metrics, parameters, and models. See example usages here. Note that currently, Pytorch autologging supports only models trained using Pytorch Lightning.

Autologging is triggered on calls to pytorch_lightning.trainer.Trainer.fit and captures the following information:

pytorch_lightning.trainer.Trainer:
  Metrics: Training loss; validation loss; average_test_accuracy; user-defined metrics
  Parameters: fit() parameters; optimizer name; learning rate; epsilon
  Artifacts: Model summary on training start; MLflow Model (Pytorch model) on training end

pytorch_lightning.callbacks.earlystopping:
  Metrics: Training loss; validation loss; average_test_accuracy; user-defined metrics; metrics from the EarlyStopping callbacks, for example stopped_epoch, restored_epoch, restore_best_weight, etc.
  Parameters: fit() parameters; optimizer name; learning rate; epsilon; parameters from the EarlyStopping callbacks, for example min_delta, patience, baseline, restore_best_weights, etc.
  Artifacts: Model summary on training start; MLflow Model (Pytorch model) on training end; best Pytorch model checkpoint, if training stops due to the early stopping callback

If no active run exists when autolog() captures data, MLflow will automatically create a run to log information, ending the run once the call to pytorch_lightning.trainer.Trainer.fit() completes.

If a run already exists when autolog() captures data, MLflow will log to that run but not automatically end that run after training.

Note

Parameters not explicitly passed by users (parameters that use default values) while using pytorch_lightning.trainer.Trainer.fit() are not currently automatically logged.

In case of a multi-optimizer scenario (such as usage of autoencoder), only the parameters for the first optimizer are logged.

Organizing Runs in Experiments

You can create experiments using the Command-Line Interface (mlflow experiments) or the mlflow.create_experiment() Python API.
You can pass the experiment name for an individual run using the CLI (for example, mlflow run ... --experiment-name [name]) or the MLFLOW_EXPERIMENT_NAME environment variable. Alternatively, you can use the --experiment-id CLI flag or the MLFLOW_EXPERIMENT_ID environment variable to pass an experiment ID instead.

# Set the experiment via environment variables
export MLFLOW_EXPERIMENT_NAME=fraud-detection

mlflow experiments create --experiment-name fraud-detection

# Launch a run. The experiment is inferred from the MLFLOW_EXPERIMENT_NAME environment
# variable, or from the --experiment-name parameter passed to the MLflow CLI (the latter
# taking precedence)
with mlflow.start_run():
    mlflow.log_param("a", 1)
    mlflow.log_metric("b", 2)

Managing Experiments and Runs with the Tracking Service API

MLflow provides a more detailed Tracking Service API for managing experiments and runs directly, which is available through the client SDK in the mlflow.client module. This makes it possible to query data about past runs, log additional information about them, create experiments, add tags to a run, and more.

Example:

from mlflow.tracking import MlflowClient

client = MlflowClient()
experiments = client.search_experiments()  # returns a list of mlflow.entities.Experiment
run = client.create_run(experiments[0].experiment_id)  # returns mlflow.entities.Run
client.log_param(run.info.run_id, "hello", "world")
client.set_terminated(run.info.run_id)

Adding Tags to Runs

The MlflowClient.set_tag() function lets you add custom tags to runs. A tag can only have a single unique value mapped to it at a time. For example:

client.set_tag(run.info.run_id, "tag_key", "tag_value")

Important

Do not use the prefix mlflow. (e.g. mlflow.note) for a tag. This prefix is reserved for use by MLflow. See System Tags for a list of reserved tag keys.

Tracking UI

The Tracking UI lets you visualize, search and compare runs, as well as download run artifacts or metadata for analysis in other tools. If you log runs to a local mlruns directory, run mlflow ui in the directory above it, and it loads the corresponding runs. Alternatively, the MLflow tracking server serves the same UI and enables remote storage of run artifacts. In that case, you can view the UI using the URL http://<ip-address-of-tracking-server>:5000 in your browser from any machine, including any remote machine that can connect to your tracking server.

The UI contains the following key features:

Experiment-based run listing and comparison (including run comparison across multiple experiments)

Searching for runs by parameter or metric value

Visualizing run metrics

Downloading run results

Querying Runs Programmatically

You can access all of the functions in the Tracking UI programmatically. This makes it easy to do several common tasks:

Query and compare runs using any data analysis tool of your choice, for example, pandas (see the sketch after this list).

Determine the artifact URI for a run to feed some of its artifacts into a new run when executing a workflow.
For an example of querying runs and constructing a multistep workflow, see the MLflow Multistep Workflow Example project.

Load artifacts from past runs as MLflow Models. For an example of training, exporting, and loading a model, and predicting using the model, see the MLflow TensorFlow example.

Run automated parameter search algorithms, where you query the metrics from various runs to submit new ones. For an example of running automated parameter search algorithms, see the MLflow Hyperparameter Tuning Example project.
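As a sketch of the pandas-based querying mentioned above, mlflow.search_runs() returns a DataFrame with one row per run; the experiment name and filter are placeholders, and the metric/param columns echo the quickstart example ("foo" and "param1"):

import mlflow

runs = mlflow.search_runs(
    experiment_names=["my-experiment"],
    filter_string="metrics.foo > 0.5",
    order_by=["metrics.foo DESC"],
)
print(runs[["run_id", "metrics.foo", "params.param1"]].head())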
MLflow Tracking Servers

In this section:

Storage

Backend Stores
Artifact Stores
File store performance
Deletion Behavior
SQLAlchemy Options

Networking

Using the Tracking Server for proxied artifact access

Optionally using a Tracking Server instance exclusively for artifact handling

Logging to a Tracking Server

Tracking Server versioning

You run an MLflow tracking server using mlflow server. An example configuration for a server is:

mlflow server --backend-store-uri /mnt/persistent-disk --default-artifact-root s3://my-mlflow-bucket/ --host 0.0.0.0

Note

When started in --artifacts-only mode, the tracking server will not permit any operation other than saving, loading, and listing artifacts.

Storage

An MLflow tracking server has two components for storage: a backend store and an artifact store.

Backend Stores

The backend store is where the MLflow Tracking Server stores experiment and run metadata as well as params, metrics, and tags for runs. MLflow supports two types of backend stores: file store and database-backed store.

Note

In order to use model registry functionality, you must run your server using a database-backed store.

Use --backend-store-uri to configure the type of backend store. You specify:

A file store backend as ./path_to_store or file:/path_to_store

A database-backed store as a SQLAlchemy database URI. The database URI typically takes the format <dialect>+<driver>://<username>:<password>@<host>:<port>/<database>. MLflow supports the database dialects mysql, mssql, sqlite, and postgresql. Drivers are optional. If you do not specify a driver, SQLAlchemy uses a dialect's default driver. For example, --backend-store-uri sqlite:///mlflow.db would use a local SQLite database.

Important

mlflow server will fail against a database-backed store with an out-of-date database schema. To prevent this, upgrade your database schema to the latest supported version using mlflow db upgrade [db_uri]. Schema migrations can result in database downtime, may take longer on larger databases, and are not guaranteed to be transactional. You should always take a backup of your database prior to running mlflow db upgrade - consult your database's documentation for instructions on taking a backup.

By default --backend-store-uri is set to the local ./mlruns directory (the same as when running mlflow run locally), but when running a server, make sure that this points to a persistent (that is, non-ephemeral) file system location.

Artifact Stores

In this section:

Amazon S3 and S3-compatible storage

Azure Blob Storage

Google Cloud Storage

FTP server

SFTP Server

NFS

HDFS

The artifact store is a location suitable for large data (such as an S3 bucket or shared NFS file system) and is where clients log their artifact output (for example, models). artifact_location is a property recorded on mlflow.entities.Experiment for the default location to store artifacts for all runs in this experiment. Additionally, artifact_uri is a property on mlflow.entities.RunInfo to indicate the location where all artifacts for this run are stored.

The MLflow client caches artifact location information on a per-run basis. It is therefore not recommended to alter a run's artifact location before it has terminated.

In addition to local file paths, MLflow supports the following storage systems as artifact stores: Amazon S3, Azure Blob Storage, Google Cloud Storage, SFTP server, and NFS.

Use --default-artifact-root (defaults to the local ./mlruns directory) to configure the default location of the server's artifact store. This will be used as the artifact location for newly-created experiments that do not specify one. Once you create an experiment, --default-artifact-root is no longer relevant to that experiment.

By default, a server is launched with the --serve-artifacts flag to enable proxied access for artifacts. The uri mlflow-artifacts:/ replaces an otherwise explicit object store destination (e.g., "s3://my_bucket/mlartifacts") for interfacing with artifacts.
The client can access artifacts via HTTP requests to the MLflow Tracking Server. This simplifies access requirements for users of the MLflow client, eliminating the need to configure access tokens or username and password environment variables for the underlying object store when writing or retrieving artifacts. To disable proxied access for artifacts, specify --no-serve-artifacts.

Provided an MLflow server configuration where the --default-artifact-root is s3://my-root-bucket, the following patterns will all resolve to the configured proxied object store location of s3://my-root-bucket/mlartifacts:

https://<host>:<port>/mlartifacts

http://<host>/mlartifacts

mlflow-artifacts://<host>/mlartifacts

mlflow-artifacts://<host>:<port>/mlartifacts

mlflow-artifacts:/mlartifacts

If the host or host:port declaration is absent in client artifact requests to the MLflow server, the client API will assume that the host is the same as the MLflow Tracking uri.

Note

If an MLflow server is running with the --artifacts-only flag, the client should interact with this server explicitly by including either a host or host:port definition for uri location references for artifacts. Otherwise, all artifact requests will route to the MLflow Tracking server, defeating the purpose of running a distinct artifact server.

Important

Access credentials and configuration for the artifact storage location are configured once during server initialization in the place of having users handle access credentials for artifact-based operations. Note that all users who have access to the Tracking Server in this mode will have access to artifacts served through this assumed role.

To allow the server and clients to access the artifact location, you should configure your cloud provider credentials as normal. For example, for S3, you can set the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables, use an IAM role, or configure a default profile in ~/.aws/credentials. See Set up AWS Credentials and Region for Development for more info.

Important

If you do not specify a --default-artifact-root or an artifact URI when creating the experiment (for example, mlflow experiments create --artifact-location s3://<bucket>), the artifact root is a path inside the file store.
Typically this is not an appropriate location, as the client and server probably refer to different physical locations (that is, the same path on different disks).

You may set an MLflow environment variable to configure the timeout for artifact uploads and downloads:

MLFLOW_ARTIFACT_UPLOAD_DOWNLOAD_TIMEOUT - (Experimental, may be changed or removed) Sets the timeout for artifact upload/download in seconds (default set by individual artifact stores).

Amazon S3 and S3-compatible storage

To store artifacts in S3, whether on Amazon S3 or an S3-compatible alternative (such as MinIO or Digital Ocean Spaces), specify a URI of the form s3://<bucket>/<path>. MLflow obtains credentials to access S3 from your machine's IAM role, a profile in ~/.aws/credentials, or the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, depending on which of these are available. For more information on how to set credentials, see Set up AWS Credentials and Region for Development.

To add S3 file upload extra arguments, set MLFLOW_S3_UPLOAD_EXTRA_ARGS to a JSON object of key/value pairs. For example, if you want to upload to a KMS Encrypted bucket using the KMS Key 1234:

export MLFLOW_S3_UPLOAD_EXTRA_ARGS='{"ServerSideEncryption": "aws:kms", "SSEKMSKeyId": "1234"}'

For a list of available extra args see Boto3 ExtraArgs Documentation.

To store artifacts in a custom endpoint, set the MLFLOW_S3_ENDPOINT_URL to your endpoint's URL. For example, if you are using Digital Ocean Spaces:

export MLFLOW_S3_ENDPOINT_URL=https://<region>.digitaloceanspaces.com

If you have a MinIO server at 1.2.3.4 on port 9000:

export MLFLOW_S3_ENDPOINT_URL=http://1.2.3.4:9000

If the MinIO server is configured with SSL that is self-signed or signed using some internal-only CA certificate, you could set the MLFLOW_S3_IGNORE_TLS or AWS_CA_BUNDLE variables (not both at the same time!) to disable the certificate signature check, or add a custom CA bundle to perform this check, respectively:

export MLFLOW_S3_IGNORE_TLS=true
# or
export AWS_CA_BUNDLE=/some/ca/bundle.pem

Additionally, if the MinIO server is configured with a non-default region, you should set the AWS_DEFAULT_REGION variable:

export AWS_DEFAULT_REGION=my_region

Warning

If --default-artifact-root refers to a bucket on AWS S3 itself (an address of the form https://<bucket>.s3.<region>.amazonaws.com/<path> or s3://<bucket>/<path>), do not leave MLFLOW_S3_ENDPOINT_URL pointing at a different, S3-compatible endpoint; unset MLFLOW_S3_ENDPOINT_URL in that case so that artifact requests are sent to AWS rather than the custom endpoint.

A complete list of configurable values for an S3 client is available in the boto3 documentation.

Azure Blob Storage

To store artifacts in Azure Blob Storage, specify a URI of the form wasbs://<container>@<storage-account>.blob.core.windows.net/<path>. MLflow expects Azure Storage access credentials in the AZURE_STORAGE_CONNECTION_STRING or AZURE_STORAGE_ACCESS_KEY environment variables, or expects your credentials to be configured such that the DefaultAzureCredential() class can pick them up.
class can pick them up.\nThe order of precedence is:\n\nAZURE_STORAGE_CONNECTION_STRING\n\nAZURE_STORAGE_ACCESS_KEY\n\nDefaultAzureCredential()\n\nYou must set one of these options on both your client application and your MLflow tracking server.\nAlso, you must run pip install azure-storage-blob separately (on both your client and the server) to access Azure Blob Storage.\nFinally, if you want to use DefaultAzureCredential, you must pip install azure-identity;\nMLflow does not declare a dependency on these packages by default.\n\nYou may set an MLflow environment variable to configure the timeout for artifact uploads and downloads:\n\nMLFLOW_ARTIFACT_UPLOAD_DOWNLOAD_TIMEOUT - (Experimental, may be changed or removed) Sets the timeout for artifact upload/download in seconds (Default: 600 for Azure blob).\n\nGoogle Cloud Storage", "metadata": {"url": "https://mlflow.org/docs/latest/tracking.html#automatic-logging"}} {"id": "2eb3b7e954eb-25", "text": "Google Cloud Storage\n\nTo store artifacts in Google Cloud Storage, specify a URI of the form gs:///.\nYou should configure credentials for accessing the GCS container on the client and server as described\nin the GCS documentation.\nFinally, you must run pip install google-cloud-storage (on both your client and the server)\nto access Google Cloud Storage; MLflow does not declare a dependency on this package by default.\n\nYou may set some MLflow environment variables to troubleshoot GCS read-timeouts (eg. due to slow transfer speeds) using the following variables:\n\nMLFLOW_ARTIFACT_UPLOAD_DOWNLOAD_TIMEOUT - (Experimental, may be changed or removed) Sets the standard timeout for transfer operations in seconds (Default: 60 for GCS). Use -1 for indefinite timeout.\n\nMLFLOW_GCS_DEFAULT_TIMEOUT - (Deprecated, please use MLFLOW_ARTIFACT_UPLOAD_DOWNLOAD_TIMEOUT) Sets the standard timeout for transfer operations in seconds (Default: 60). Use -1 for indefinite timeout.\n\nMLFLOW_GCS_UPLOAD_CHUNK_SIZE - Sets the standard upload chunk size for bigger files in bytes (Default: 104857600 \u2259 100MiB), must be multiple of 256 KB.\n\nMLFLOW_GCS_DOWNLOAD_CHUNK_SIZE - Sets the standard download chunk size for bigger files in bytes (Default: 104857600 \u2259 100MiB), must be multiple of 256 KB\n\nFTP server\n\nTo store artifacts in a FTP server, specify a URI of the form ftp://user@host/path/to/directory .\nThe URI may optionally include a password for logging into the server, e.g. ftp://user:pass@host/path/to/directory\n\nSFTP Server", "metadata": {"url": "https://mlflow.org/docs/latest/tracking.html#automatic-logging"}} {"id": "2eb3b7e954eb-26", "text": "SFTP Server\n\nTo store artifacts in an SFTP server, specify a URI of the form sftp://user@host/path/to/directory.\nYou should configure the client to be able to log in to the SFTP server without a password over SSH (e.g. public key, identity file in ssh_config, etc.).\n\nThe format sftp://user:pass@host/ is supported for logging in. However, for safety reasons this is not recommended.\n\nWhen using this store, pysftp must be installed on both the server and the client. Run pip install pysftp to install the required package.\n\nNFS\n\nTo store artifacts in an NFS mount, specify a URI as a normal file system path, e.g., /mnt/nfs.\nThis path must be the same on both the server and the client \u2013 you may need to use symlinks or remount\nthe client in order to enforce this property.\n\nHDFS\n\nTo store artifacts in HDFS, specify a hdfs: URI. 
It can contain a host and port (hdfs://<host>:<port>/<path>) or just a path (hdfs://<path>).

There are also two ways to authenticate to HDFS:

Use the current UNIX account authorization

Kerberos credentials, using the following environment variables:

export MLFLOW_KERBEROS_TICKET_CACHE=/tmp/krb5cc_22222222
export MLFLOW_KERBEROS_USER=user_name_to_use

Most of the cluster context settings are read from hdfs-site.xml, which the HDFS native driver locates via the CLASSPATH environment variable. The HDFS driver used is libhdfs.

File store performance

MLflow will automatically try to use LibYAML bindings if they are already installed. However, if you notice performance issues when using the file store backend, it could mean LibYAML is not installed on your system. On Linux or macOS you can easily install it using your system package manager:

# On Ubuntu/Debian
apt-get install libyaml-cpp-dev libyaml-dev

# On macOS using Homebrew
brew install yaml-cpp libyaml

After installing LibYAML, you need to reinstall PyYAML:

# Reinstall PyYAML
pip --no-cache-dir install --force-reinstall -I pyyaml

Deletion Behavior

In order to allow MLflow Runs to be restored, run metadata and artifacts are not automatically removed from the backend store or artifact store when a run is deleted. The mlflow gc CLI is provided for permanently removing run metadata and artifacts for deleted runs.

SQLAlchemy Options

You can inject some SQLAlchemy connection pooling options using environment variables:

MLFLOW_SQLALCHEMYSTORE_POOL_SIZE - sets the SQLAlchemy QueuePool option pool_size

MLFLOW_SQLALCHEMYSTORE_POOL_RECYCLE - sets the SQLAlchemy QueuePool option pool_recycle

MLFLOW_SQLALCHEMYSTORE_MAX_OVERFLOW - sets the SQLAlchemy QueuePool option max_overflow

Networking

The --host option exposes the service on all interfaces. If running a server in production, we would recommend not exposing the built-in server broadly (as it is unauthenticated and unencrypted), and instead putting it behind a reverse proxy like NGINX or Apache httpd, or connecting over VPN. You can then pass authentication headers to MLflow using these environment variables.

Additionally, you should ensure that the --backend-store-uri (which defaults to the ./mlruns directory) points to a persistent (non-ephemeral) disk or database connection.
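For instance, here is a minimal client-side sketch of supplying those authentication values; the server address and credentials are placeholders, and whether Basic authentication or a Bearer token applies depends on what your reverse proxy expects:

import os
import mlflow

# Placeholder address and credentials for a tracking server behind an authenticating proxy.
os.environ["MLFLOW_TRACKING_USERNAME"] = "mlflow-user"
os.environ["MLFLOW_TRACKING_PASSWORD"] = "mlflow-password"
# Alternatively, for Bearer-token authentication:
# os.environ["MLFLOW_TRACKING_TOKEN"] = "my-token"

mlflow.set_tracking_uri("https://mlflow.example.com")

with mlflow.start_run():
    mlflow.log_param("param1", 5)
    mlflow.log_metric("foo", 1.0)

These variables are described in more detail under "Logging to a Tracking Server" below.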
Using the Tracking Server for proxied artifact access

To use an instance of the MLflow Tracking Server for artifact operations (Scenario 5: MLflow Tracking Server enabled with proxied artifact storage access), start a server with the optional parameter --serve-artifacts to enable proxied artifact access, and set a path to record artifacts to by providing a value for the argument --artifacts-destination. The tracking server will, in this mode, stream any artifacts that a client is logging directly through an assumed (server-side) identity, eliminating the need for access credentials to be handled by end users.

Note

Authentication access to the value set by --artifacts-destination must be configured when starting the tracking server, if required.

To start the MLflow server with proxied artifact access enabled to an HDFS location (as an example):

export HADOOP_USER_NAME=mlflowserverauth

mlflow server --host 0.0.0.0 --port 8885 --artifacts-destination hdfs://myhost:8887/mlprojects/models

Optionally using a Tracking Server instance exclusively for artifact handling

If the volume of tracking server requests is sufficiently large and performance issues are noticed, a tracking server can be configured to serve in --artifacts-only mode (Scenario 6: MLflow Tracking Server used exclusively as proxied access host for artifact storage access), operating in tandem with an instance that runs with --no-serve-artifacts specified. This configuration ensures that the processing of artifacts is isolated from all other tracking server event handling.

When a tracking server is configured in --artifacts-only mode, any tasks apart from those concerned with artifact handling (i.e., model logging, loading models, logging artifacts, listing artifacts, etc.) will return an HTTPError. See the following example of a client REST call in Python attempting to list experiments from a server that is configured in --artifacts-only mode:

import requests

response = requests.get("http://0.0.0.0:8885/api/2.0/mlflow/experiments/list")

Output

>> HTTPError: Endpoint: /api/2.0/mlflow/experiments/list disabled due to the mlflow server running in `--artifacts-only` mode.

Using an additional MLflow server to handle artifacts exclusively can be useful for large-scale MLOps infrastructure. Decoupling the longer-running and more compute-intensive tasks of artifact handling from the faster and higher-volume metadata functionality of the other Tracking API requests can help minimize the burden of an otherwise single MLflow server handling both types of payloads.

Logging to a Tracking Server

To log to a tracking server, set the MLFLOW_TRACKING_URI environment variable to the server's URI, along with its scheme and port (for example, http://10.0.0.1:5000), or call mlflow.set_tracking_uri().

The mlflow.start_run(), mlflow.log_param(), and mlflow.log_metric() calls then make API requests to your remote tracking server:

import mlflow

remote_server_uri = "..."  # set to your server URI
mlflow.set_tracking_uri(remote_server_uri)
# Note: on Databricks, the experiment name passed to mlflow.set_experiment must be a
# valid path in the workspace
mlflow.set_experiment("/my-experiment")
with mlflow.start_run():
    mlflow.log_param("a", 1)
    mlflow.log_metric("b", 2)
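As a hedged extension of the example above (the output directory and file contents are placeholders, not part of the original snippet), artifacts can be logged to the same remote server from inside the run; with --serve-artifacts enabled they travel through the server, otherwise the client writes directly to the artifact store, as the Note further below explains:

import os
import mlflow

mlflow.set_tracking_uri("http://10.0.0.1:5000")  # same remote server as above

with mlflow.start_run():
    os.makedirs("outputs", exist_ok=True)
    with open("outputs/summary.txt", "w") as f:
        f.write("hello from a remote run")  # placeholder artifact content
    # Uploads every file under ./outputs to the run's artifact location.
    mlflow.log_artifacts("outputs")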
The same example in R:

library(mlflow)
install_mlflow()

remote_server_uri = "..." # set to your server URI
mlflow_set_tracking_uri(remote_server_uri)
# Note: on Databricks, the experiment name passed to mlflow_set_experiment must be a
# valid path in the workspace
mlflow_set_experiment("/my-experiment")
mlflow_log_param("a", "1")

In addition to the MLFLOW_TRACKING_URI environment variable, the following environment variables allow passing HTTP authentication to the tracking server:

MLFLOW_TRACKING_USERNAME and MLFLOW_TRACKING_PASSWORD - username and password to use with HTTP Basic authentication. To use Basic authentication, you must set both environment variables.

MLFLOW_TRACKING_TOKEN - token to use with HTTP Bearer authentication. Basic authentication takes precedence if set.

MLFLOW_TRACKING_INSECURE_TLS - If set to the literal true, MLflow does not verify the TLS connection, meaning it does not validate certificates or hostnames for https:// tracking URIs. This flag is not recommended for production environments. If this is set to true, then MLFLOW_TRACKING_SERVER_CERT_PATH must not be set.

MLFLOW_TRACKING_SERVER_CERT_PATH - Path to a CA bundle to use. Sets the verify parameter of the requests.request function (see the requests main interface). When you use a self-signed server certificate, you can use this to verify it on the client side. If this is set, MLFLOW_TRACKING_INSECURE_TLS must not be set (false).

MLFLOW_TRACKING_CLIENT_CERT_PATH - Path to an SSL client certificate file (.pem). Sets the cert parameter of the requests.request function (see the requests main interface). This can be used to use a (self-signed) client certificate.

Note

If the MLflow server is not configured with the --serve-artifacts option, the client directly pushes artifacts to the artifact store. It does not proxy these through the tracking server by default.

For this reason, the client needs direct access to the artifact store. For instructions on setting up these credentials, see Artifact Stores.

Tracking Server versioning

The version of MLflow running on the server can be found by querying the /version endpoint. This can be used to check that the client-side version of MLflow is up-to-date with a remote tracking server prior to running experiments. For example:

import requests
import mlflow

response = requests.get("http://<mlflow-host>:<mlflow-port>/version")
assert response.text == mlflow.__version__  # Checking for a strict version match

System Tags

You can annotate runs with arbitrary tags. Tag keys that start with mlflow. are reserved for internal use. The following tags are set automatically by MLflow, when appropriate:

mlflow.note.content
A descriptive note about this run. This reserved tag is not set automatically and can be overridden by the user to include additional information about the run. The content is displayed on the run's page under the Notes section.

mlflow.parentRunId
The ID of the parent run, if this is a nested run.

mlflow.user
Identifier of the user who created the run.

mlflow.source.type
Source type.
Possible values: \"NOTEBOOK\", \"JOB\", \"PROJECT\",\n\"LOCAL\", and \"UNKNOWN\"\n\nmlflow.source.name\n\nSource identifier (e.g., GitHub URL, local Python filename, name of notebook)\n\nmlflow.source.git.commit\n\nCommit hash of the executed code, if in a git repository.\n\nmlflow.source.git.branch", "metadata": {"url": "https://mlflow.org/docs/latest/tracking.html#automatic-logging"}} {"id": "2eb3b7e954eb-32", "text": "Commit hash of the executed code, if in a git repository.\n\nmlflow.source.git.branch\n\nName of the branch of the executed code, if in a git repository.\n\nmlflow.source.git.repoURL\n\nURL that the executed code was cloned from.\n\nmlflow.project.env\n\nThe runtime context used by the MLflow project.\nPossible values: \"docker\" and \"conda\".\n\nmlflow.project.entryPoint\n\nName of the project entry point associated with the current run, if any.\n\nmlflow.docker.image.name\n\nName of the Docker image used to execute this run.\n\nmlflow.docker.image.id\n\nID of the Docker image used to execute this run.\n\nmlflow.log-model.history\n\nModel metadata collected by log-model calls. Includes the serialized\nform of the MLModel model files logged to a run, although the exact format and\ninformation captured is subject to change.", "metadata": {"url": "https://mlflow.org/docs/latest/tracking.html#automatic-logging"}} {"id": "72da79a93de6-0", "text": "MLflow Tracking\n\nThe MLflow Tracking component is an API and UI for logging parameters, code versions, metrics, and output files\nwhen running your machine learning code and for later visualizing the results.\nMLflow Tracking lets you log and query experiments using Python, REST, R API, and Java API APIs.\n\nTable of Contents\n\nConcepts\n\nWhere Runs Are Recorded\n\nHow Runs and Artifacts are Recorded\n\nScenario 1: MLflow on localhost\nScenario 2: MLflow on localhost with SQLite\nScenario 3: MLflow on localhost with Tracking Server\nScenario 4: MLflow with remote Tracking Server, backend and artifact stores\nScenario 5: MLflow Tracking Server enabled with proxied artifact storage access\nScenario 6: MLflow Tracking Server used exclusively as proxied access host for artifact storage access\n\nLogging Data to Runs\n\nLogging Functions\nLaunching Multiple Runs in One Program\nPerformance Tracking with Metrics\nVisualizing Metrics\n\nAutomatic Logging\n\nScikit-learn\nKeras\nGluon\nXGBoost\nLightGBM\nStatsmodels\nSpark\nFastai\nPytorch\n\nOrganizing Runs in Experiments\n\nManaging Experiments and Runs with the Tracking Service API\n\nTracking UI\n\nQuerying Runs Programmatically\n\nMLflow Tracking Servers\n\nStorage\nNetworking\nUsing the Tracking Server for proxied artifact access\nLogging to a Tracking Server\n\nSystem Tags\n\nConcepts\n\nMLflow Tracking is organized around the concept of runs, which are executions of some piece of\ndata science code. Each run records the following information:\n\nGit commit hash used for the run, if it was run from an MLflow Project.\n\nStart and end time of the run\n\nName of the file to launch the run, or the project name and entry point for the run\nif run from an MLflow Project.\n\nKey-value input parameters of your choice. Both keys and values are strings.", "metadata": {"url": "https://mlflow.org/docs/latest/tracking.html"}} {"id": "72da79a93de6-1", "text": "Key-value input parameters of your choice. Both keys and values are strings.\n\nKey-value metrics, where the value is numeric. 
Each metric can be updated throughout the\ncourse of the run (for example, to track how your model\u2019s loss function is converging), and\nMLflow records and lets you visualize the metric\u2019s full history.\n\nOutput files in any format. For example, you can record images (for example, PNGs), models\n(for example, a pickled scikit-learn model), and data files (for example, a\nParquet file) as artifacts.\n\nYou can record runs using MLflow Python, R, Java, and REST APIs from anywhere you run your code. For\nexample, you can record them in a standalone program, on a remote cloud machine, or in an\ninteractive notebook. If you record runs in an MLflow Project, MLflow\nremembers the project URI and source version.\n\nYou can optionally organize runs into experiments, which group together runs for a\nspecific task. You can create an experiment using the mlflow experiments CLI, with\nmlflow.create_experiment(), or using the corresponding REST parameters. The MLflow API and\nUI let you create and search for experiments.\n\nOnce your runs have been recorded, you can query them using the Tracking UI or the MLflow\nAPI.\n\nWhere Runs Are Recorded\n\nMLflow runs can be recorded to local files, to a SQLAlchemy compatible database, or remotely\nto a tracking server. By default, the MLflow Python API logs runs locally to files in an mlruns directory wherever you\nran your program. You can then run mlflow ui to see the logged runs.\n\nTo log runs remotely, set the MLFLOW_TRACKING_URI environment variable to a tracking server\u2019s URI or\ncall mlflow.set_tracking_uri().\n\nThere are different kinds of remote tracking URIs:", "metadata": {"url": "https://mlflow.org/docs/latest/tracking.html"}} {"id": "72da79a93de6-2", "text": "There are different kinds of remote tracking URIs:\n\nLocal file path (specified as file:/my/local/dir), where data is just directly stored locally.\n\nDatabase encoded as +://:@:/. MLflow supports the dialects mysql, mssql, sqlite, and postgresql. For more details, see SQLAlchemy database uri.\n\nHTTP server (specified as https://my-server:5000), which is a server hosting an MLflow tracking server.\n\nDatabricks workspace (specified as databricks or as databricks://, a Databricks CLI profile.\nRefer to Access the MLflow tracking server from outside Databricks [AWS]\n[Azure], or the quickstart to\neasily get started with hosted MLflow on Databricks Community Edition.\n\nHow Runs and Artifacts are Recorded\n\nAs mentioned above, MLflow runs can be recorded to local files, to a SQLAlchemy compatible database, or remotely to a tracking server. MLflow artifacts can be persisted to local files\nand a variety of remote file storage solutions. For storing runs and artifacts, MLflow uses two components for storage: backend store and artifact store. 
While the backend store persists\nMLflow entities (runs, parameters, metrics, tags, notes, metadata, etc), the artifact store persists artifacts\n(files, models, images, in-memory objects, or model summary, etc).\n\nThe MLflow server can be configured with an artifacts HTTP proxy, passing artifact requests through the tracking server\nto store and retrieve artifacts without having to interact with underlying object store services.\nUsage of the proxied artifact access feature is described in Scenarios 5 and 6 below.\n\nThe MLflow client can interface with a variety of backend and artifact storage configurations.\nHere are four common configuration scenarios:\n\nScenario 1: MLflow on localhost", "metadata": {"url": "https://mlflow.org/docs/latest/tracking.html"}} {"id": "72da79a93de6-3", "text": "Scenario 1: MLflow on localhost\n\nMany developers run MLflow on their local machine, where both the backend and artifact store share a directory\non the local filesystem\u2014./mlruns\u2014as shown in the diagram. The MLflow client directly interfaces with an\ninstance of a FileStore and LocalArtifactRepository.\n\nIn this simple scenario, the MLflow client uses the following interfaces to record MLflow entities and artifacts:\n\nAn instance of a LocalArtifactRepository (to store artifacts)\n\nAn instance of a FileStore (to save MLflow entities)\n\nScenario 2: MLflow on localhost with SQLite\n\nMany users also run MLflow on their local machines with a SQLAlchemy-compatible database: SQLite. In this case, artifacts\nare stored under the local ./mlruns directory, and MLflow entities are inserted in a SQLite database file mlruns.db.\n\nIn this scenario, the MLflow client uses the following interfaces to record MLflow entities and artifacts:\n\nAn instance of a LocalArtifactRepository (to save artifacts)\n\nAn instance of an SQLAlchemyStore (to store MLflow entities to a SQLite file mlruns.db)\n\nScenario 3: MLflow on localhost with Tracking Server\n\nSimilar to scenario 1 but a tracking server is launched, listening for REST request calls at the default port 5000.\nThe arguments supplied to the mlflow server dictate what backend and artifact stores are used.\nThe default is local FileStore. For example, if a user launched a tracking server as\nmlflow server --backend-store-uri sqlite:///mydb.sqlite, then SQLite would be used for backend storage instead.\n\nAs in scenario 1, MLflow uses a local mlruns filesystem directory as a backend store and artifact store. 
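To make the client's role in these scenarios concrete, here is a minimal sketch (the database file name and port are placeholders) of how the tracking URI selects between them: a SQLAlchemy-compatible URI has the client write entities directly to the database, while an http:// URI routes all calls through the tracking server:

import mlflow

# Scenario 2 style: the client records entities straight into a local SQLite file.
mlflow.set_tracking_uri("sqlite:///mlruns.db")

# Scenario 3 style: the client instead sends REST requests to a local tracking server.
# mlflow.set_tracking_uri("http://localhost:5000")

with mlflow.start_run():
    mlflow.log_param("param1", 5)
    mlflow.log_metric("foo", 1.0)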
With a tracking\nserver running, the MLflow client interacts with the tracking server via REST requests, as shown in the diagram.\n\nCommand to run the tracking server in this configuration\n\nmlflow\n\nserver\n\n-backend-store-uri", "metadata": {"url": "https://mlflow.org/docs/latest/tracking.html"}} {"id": "72da79a93de6-4", "text": "Command to run the tracking server in this configuration\n\nmlflow\n\nserver\n\n-backend-store-uri\n\nfile:///path/to/mlruns\n\n-no-serve-artifacts\n\nTo store all runs\u2019 MLflow entities, the MLflow client interacts with the tracking server via a series of REST requests:\n\nPart 1a and b:\n\nThe MLflow client creates an instance of a RestStore and sends REST API requests to log MLflow entities\n\nThe Tracking Server creates an instance of a FileStore to save MLflow entities and writes directly to the local mlruns directory\n\nFor the artifacts, the MLflow client interacts with the tracking server via a REST request:\n\nPart 2a, b, and c:\n\nThe MLflow client uses RestStore to send a REST request to fetch the artifact store URI location\nThe Tracking Server responds with an artifact store URI location\nThe MLflow client creates an instance of a LocalArtifactRepository and saves artifacts to the local filesystem location specified by the artifact store URI (a subdirectory of mlruns)\n\nScenario 4: MLflow with remote Tracking Server, backend and artifact stores\n\nMLflow also supports distributed architectures, where the tracking server, backend store, and artifact store\nreside on remote hosts. This example scenario depicts an architecture with a remote MLflow Tracking Server,\na Postgres database for backend entity storage, and an S3 bucket for artifact storage.\n\nCommand to run the tracking server in this configuration\n\nmlflow\n\nserver\n\n-backend-store-uri\n\npostgresql://user:password@postgres:5432/mlflowdb\n\n-default-artifact-root\n\ns3://bucket_name\n\n-host\n\nremote_host\n\n-no-serve-artifacts\n\nTo record all runs\u2019 MLflow entities, the MLflow client interacts with the tracking server via a series of REST requests:\n\nPart 1a and b:\n\nThe MLflow client creates an instance of a RestStore and sends REST API requests to log MLflow entities", "metadata": {"url": "https://mlflow.org/docs/latest/tracking.html"}} {"id": "72da79a93de6-5", "text": "The Tracking Server creates an instance of an SQLAlchemyStore and connects to the remote host to\ninsert MLflow entities in the database\n\nFor artifact logging, the MLflow client interacts with the remote Tracking Server and artifact storage host:\n\nPart 2a, b, and c:\n\nThe MLflow client uses RestStore to send a REST request to fetch the artifact store URI location from the Tracking Server\n\nThe Tracking Server responds with an artifact store URI location (an S3 storage URI in this case)\n\nThe MLflow client creates an instance of an S3ArtifactRepository, connects to the remote AWS host using the\nboto client libraries, and uploads the artifacts to the S3 bucket URI location\n\nFileStore,\n\nRestStore,\nand\n\nSQLAlchemyStore are\nconcrete implementations of the abstract class\n\nAbstractStore,\nand the\n\nLocalArtifactRepository and\n\nS3ArtifactRepository are\nconcrete implementations of the abstract class\n\nArtifactRepository.\n\nScenario 5: MLflow Tracking Server enabled with proxied artifact storage access\n\nMLflow\u2019s Tracking Server supports utilizing the host as a proxy server for operations involving artifacts.\nOnce configured with the appropriate access requirements, an administrator 
can start the tracking server to enable\nassumed-role operations involving the saving, loading, or listing of model artifacts, images, documents, and files.\nThis eliminates the need to allow end users to have direct path access to a remote object store (e.g., s3, adls, gcs, hdfs) for artifact handling and eliminates the\nneed for an end-user to provide access credentials to interact with an underlying object store.\n\nCommand to run the tracking server in this configuration\n\nmlflow\n\nserver\n\n-backend-store-uri\n\npostgresql://user:password@postgres:5432/mlflowdb\n\n# Artifact access is enabled through the proxy URI 'mlflow-artifacts:/',\n\n# giving users access to this location without having to manage credentials\n\n# or permissions.\n\n-artifacts-destination", "metadata": {"url": "https://mlflow.org/docs/latest/tracking.html"}} {"id": "72da79a93de6-6", "text": "# or permissions.\n\n-artifacts-destination\n\ns3://bucket_name\n\n-host\n\nremote_host\n\nEnabling the Tracking Server to perform proxied artifact access in order to route client artifact requests to an object store location:\n\nPart 1a and b:\n\nThe MLflow client creates an instance of a RestStore and sends REST API requests to log MLflow entities\n\nThe Tracking Server creates an instance of an SQLAlchemyStore and connects to the remote host for inserting\ntracking information in the database (i.e., metrics, parameters, tags, etc.)\n\nPart 1c and d:\n\nRetrieval requests by the client return information from the configured SQLAlchemyStore table\n\nPart 2a and b:\n\nLogging events for artifacts are made by the client using the HttpArtifactRepository to write files to MLflow Tracking Server\n\nThe Tracking Server then writes these files to the configured object store location with assumed role authentication\n\nPart 2c and d:\n\nRetrieving artifacts from the configured backend store for a user request is done with the same authorized authentication that was configured at server start\n\nArtifacts are passed to the end user through the Tracking Server through the interface of the HttpArtifactRepository\n\nNote\n\nWhen an experiment is created, the artifact storage location from the configuration of the tracking server is logged in the experiment\u2019s metadata.\nWhen enabling proxied artifact storage, any existing experiments that were created while operating a tracking server in\nnon-proxied mode will continue to use a non-proxied artifact location. 
In order to use proxied artifact logging, a new experiment must be created.\nIf the intention of enabling a tracking server in -serve-artifacts mode is to eliminate the need for a client to have authentication to\nthe underlying storage, new experiments should be created for use by clients so that the tracking server can handle authentication after this migration.\n\nWarning", "metadata": {"url": "https://mlflow.org/docs/latest/tracking.html"}} {"id": "72da79a93de6-7", "text": "Warning\n\nThe MLflow artifact proxied access service enables users to have an assumed role of access to all artifacts that are accessible to the Tracking Server.\nAdministrators who are enabling this feature should ensure that the access level granted to the Tracking Server for artifact\noperations meets all security requirements prior to enabling the Tracking Server to operate in a proxied file handling role.\n\nScenario 6: MLflow Tracking Server used exclusively as proxied access host for artifact storage access\n\nMLflow\u2019s Tracking Server can be used in an exclusive artifact proxied artifact handling role. Specifying the\n--artifacts-only flag restricts an MLflow server instance to only serve artifact-related API requests by proxying to an underlying object store.\n\nNote\n\nStarting a Tracking Server with the --artifacts-only parameter will disable all Tracking Server functionality apart from API calls related to saving, loading, or listing artifacts.\nCreating runs, logging metrics or parameters, and accessing other attributes about experiments are all not permitted in this mode.\n\nCommand to run the tracking server in this configuration\n\nmlflow\n\nserver\n\n-artifacts-destination\n\ns3://bucket_name\n\n-artifacts-only\n\n-host\n\nremote_host\n\nRunning an MLFlow server in --artifacts-only mode:\n\nPart 1a and b:\n\nThe MLflow client will interact with the Tracking Server using the HttpArtifactRepository interface.\n\nListing artifacts associated with a run will be conducted from the Tracking Server using the access credentials set at server startup\n\nSaving of artifacts will transmit the files to the Tracking Server which will then write the files to the file store using credentials set at server start.\n\nPart 1c and d:\n\nListing of artifact responses will pass from the file store through the Tracking Server to the client\n\nLoading of artifacts will utilize the access credentials of the MLflow Tracking Server to acquire the files which are then passed on to the client\n\nNote", "metadata": {"url": "https://mlflow.org/docs/latest/tracking.html"}} {"id": "72da79a93de6-8", "text": "Note\n\nIf migrating from Scenario 5 to Scenario 6 due to request volumes, it is important to perform two validations:\n\nEnsure that the new tracking server that is operating in --artifacts-only mode has access permissions to the\nlocation set by --artifacts-destination that the former multi-role tracking server had.\nThe former multi-role tracking server that was serving artifacts must have the -serve-artifacts argument disabled.\n\nWarning\n\nOperating the Tracking Server in proxied artifact access mode by setting the parameter --serve-artifacts during server start, even in --artifacts-only mode,\nwill give access to artifacts residing on the object store to any user that has authentication to access the Tracking Server. 
Ensure that any per-user security posture that you are required to maintain is applied accordingly to the proxied access that the Tracking Server will have in this mode of operation.

Logging Data to Runs

You can log data to runs using the MLflow Python, R, Java, or REST API. This section shows the Python API.

In this section:

Logging Functions

Launching Multiple Runs in One Program

Performance Tracking with Metrics

Visualizing Metrics

Logging Functions

mlflow.set_tracking_uri() connects to a tracking URI. You can also set the MLFLOW_TRACKING_URI environment variable to have MLflow find a URI from there. In both cases, the URI can either be an HTTP/HTTPS URI for a remote server, a database connection string, or a local path to log data to a directory. The URI defaults to mlruns.

mlflow.get_tracking_uri() returns the current tracking URI.

mlflow.create_experiment() creates a new experiment and returns its ID. Runs can be launched under the experiment by passing the experiment ID to mlflow.start_run.

mlflow.set_experiment() sets an experiment as active. If the experiment does not exist, a new experiment is created. If you do not specify an experiment in mlflow.start_run(), new runs are launched under this experiment.

mlflow.start_run() returns the currently active run (if one exists), or starts a new run and returns an mlflow.ActiveRun object usable as a context manager for the current run. You do not need to call start_run explicitly: calling one of the logging functions with no active run automatically starts a new one.

Note

If the argument run_name is not set within mlflow.start_run(), a unique run name will be generated for each run.

mlflow.end_run() ends the currently active run, if any, taking an optional run status.

mlflow.active_run() returns a mlflow.entities.Run object corresponding to the currently active run, if any. Note: You cannot access currently-active run attributes (parameters, metrics, etc.) through the run returned by mlflow.active_run. In order to access such attributes, use the MlflowClient as follows:

client = mlflow.MlflowClient()
data = client.get_run(mlflow.active_run().info.run_id).data

mlflow.last_active_run() returns a mlflow.entities.Run object corresponding to the currently active run, if any. Otherwise, it returns a mlflow.entities.Run object corresponding to the last run started from the current Python process that reached a terminal status (i.e. FINISHED, FAILED, or KILLED).

mlflow.log_param() logs a single key-value param in the currently active run. The key and value are both strings. Use mlflow.log_params() to log multiple params at once.

mlflow.log_metric() logs a single key-value metric. The value must always be a number. MLflow remembers the history of values for each metric. Use mlflow.log_metrics() to log multiple metrics at once.

mlflow.set_tag() sets a single key-value tag in the currently active run. The key and value are both strings. Use mlflow.set_tags() to set multiple tags at once.

mlflow.log_artifact() logs a local file or directory as an artifact, optionally taking an artifact_path to place it within the run's artifact URI.
Run artifacts can be organized into\ndirectories, so you can place the artifact in a directory this way.\n\nmlflow.log_artifacts() logs all the files in a given directory as artifacts, again taking\nan optional artifact_path.\n\nmlflow.get_artifact_uri() returns the URI that artifacts from the current run should be\nlogged to.\n\nLaunching Multiple Runs in One Program\n\nSometimes you want to launch multiple MLflow runs in the same program: for example, maybe you are\nperforming a hyperparameter search locally or your experiments are just very fast to run. This is\neasy to do because the ActiveRun object returned by mlflow.start_run() is a Python\ncontext manager. You can \u201cscope\u201d each run to\njust one block of code as follows:\n\nwith\n\nmlflow\n\nstart_run\n\n():\n\nmlflow\n\nlog_param\n\n\"x\"\n\nmlflow\n\nlog_metric\n\n\"y\"\n\n...\n\nThe run remains open throughout the with statement, and is automatically closed when the\nstatement exits, even if it exits due to an exception.\n\nPerformance Tracking with Metrics\n\nYou log MLflow metrics with log methods in the Tracking API. The log methods support two alternative methods for distinguishing metric values on the x-axis: timestamp and step.", "metadata": {"url": "https://mlflow.org/docs/latest/tracking.html"}} {"id": "72da79a93de6-11", "text": "timestamp is an optional long value that represents the time that the metric was logged. timestamp defaults to the current time. step is an optional integer that represents any measurement of training progress (number of training iterations, number of epochs, and so on). step defaults to 0 and has the following requirements and properties:\n\nMust be a valid 64-bit integer value.\n\nCan be negative.\n\nCan be out of order in successive write calls. For example, (1, 3, 2) is a valid sequence.\n\nCan have \u201cgaps\u201d in the sequence of values specified in successive write calls. For example, (1, 5, 75, -20) is a valid sequence.\n\nIf you specify both a timestamp and a step, metrics are recorded against both axes independently.\n\nExamples\n\nwith mlflow.start_run():\n for epoch in range(0, 3):\n mlflow.log_metric(key=\"quality\", value=2 * epoch, step=epoch)\n\nMlflowClient client = new MlflowClient();\nRunInfo run = client.createRun();\nfor (int epoch = 0; epoch < 3; epoch ++) {\n client.logMetric(run.getRunId(), \"quality\", 2 * epoch, System.currentTimeMillis(), epoch);\n}\n\nVisualizing Metrics\n\nHere is an example plot of the quick start tutorial with the step x-axis and two timestamp axes:\n\nX-axis step\n\nX-axis wall time - graphs the absolute time each metric was logged\n\nX-axis relative time - graphs the time relative to the first metric logged, for each run\n\nAutomatic Logging\n\nAutomatic logging allows you to log metrics, parameters, and models without the need for explicit log statements.\n\nThere are two ways to use autologging:\n\nCall mlflow.autolog() before your training code. This will enable autologging for each supported library you have installed as soon as you import it.", "metadata": {"url": "https://mlflow.org/docs/latest/tracking.html"}} {"id": "72da79a93de6-12", "text": "Use library-specific autolog calls for each library you use in your code. 
See below for examples.

The following libraries support autologging:

Scikit-learn

Keras

Gluon

XGBoost

LightGBM

Statsmodels

Spark

Fastai

Pytorch

For flavors that automatically save models as an artifact, additional files for dependency management are logged.

You can access the most recent autolog run through the mlflow.last_active_run() function. Here's a short sklearn autolog example that makes use of this function:

import mlflow
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

mlflow.autolog()

db = load_diabetes()
X_train, X_test, y_train, y_test = train_test_split(db.data, db.target)

# Create and train models.
rf = RandomForestRegressor(n_estimators=100, max_depth=6, max_features=3)
rf.fit(X_train, y_train)

# Use the model to make predictions on the test dataset.
predictions = rf.predict(X_test)

autolog_run = mlflow.last_active_run()

Scikit-learn

Call mlflow.sklearn.autolog() before your training code to enable automatic logging of sklearn metrics, params, and models. See example usage here.

Autologging for estimators (e.g. LinearRegression) and meta estimators (e.g. Pipeline) creates a single run and logs:

Metrics: training score obtained by estimator.score

Parameters: parameters obtained by estimator.get_params

Tags: class name; fully qualified class name

Artifacts: fitted estimator

Autologging for parameter search estimators (e.g. GridSearchCV) creates a single parent run and nested child runs
For example,\nstopped_epoch, restored_epoch,\nrestore_best_weight, etc\n\nfit() parameters from EarlyStopping.\nFor example, min_delta, patience, baseline,\nrestore_best_weights, etc\n\nIf no active run exists when autolog() captures data, MLflow will automatically create a run to log information to.\nAlso, MLflow will then automatically end the run once training ends via calls to tf.keras.fit().\n\nIf a run already exists when autolog() captures data, MLflow will log to that run but not automatically end that run after training.\n\nGluon", "metadata": {"url": "https://mlflow.org/docs/latest/tracking.html"}} {"id": "72da79a93de6-14", "text": "Gluon\n\nCall mlflow.gluon.autolog() before your training code to enable automatic logging of metrics and parameters.\nSee example usages with Gluon .\n\nAutologging captures the following information:\n\nFramework\n\nMetrics\n\nParameters\n\nTags\n\nArtifacts\n\nGluon\n\nTraining loss; validation loss; user-specified metrics\n\nNumber of layers; optimizer name; learning rate; epsilon\n\nMLflow Model (Gluon model); on training end\n\nXGBoost\n\nCall mlflow.xgboost.autolog() before your training code to enable automatic logging of metrics and parameters.\n\nAutologging captures the following information:\n\nFramework\n\nMetrics\n\nParameters\n\nTags\n\nArtifacts\n\nXGBoost\n\nuser-specified metrics\n\nxgboost.train parameters\n\nMLflow Model (XGBoost model) with model signature on training end; feature importance; input example\n\nIf early stopping is activated, metrics at the best iteration will be logged as an extra step/iteration.\n\nLightGBM\n\nCall mlflow.lightgbm.autolog() before your training code to enable automatic logging of metrics and parameters.\n\nAutologging captures the following information:\n\nFramework\n\nMetrics\n\nParameters\n\nTags\n\nArtifacts\n\nLightGBM\n\nuser-specified metrics\n\nlightgbm.train parameters\n\nMLflow Model (LightGBM model) with model signature on training end; feature importance; input example\n\nIf early stopping is activated, metrics at the best iteration will be logged as an extra step/iteration.\n\nStatsmodels\n\nCall mlflow.statsmodels.autolog() before your training code to enable automatic logging of metrics and parameters.\n\nAutologging captures the following information:\n\nFramework\n\nMetrics\n\nParameters\n\nTags\n\nArtifacts\n\nStatsmodels\n\nuser-specified metrics\n\nstatsmodels.base.model.Model.fit parameters\n\nMLflow Model (statsmodels.base.wrapper.ResultsWrapper) on training end\n\nNote\n\nEach model subclass that overrides fit expects and logs its own parameters.", "metadata": {"url": "https://mlflow.org/docs/latest/tracking.html"}} {"id": "72da79a93de6-15", "text": "Note\n\nEach model subclass that overrides fit expects and logs its own parameters.\n\nSpark\n\nInitialize a SparkSession with the mlflow-spark JAR attached (e.g.\nSparkSession.builder.config(\"spark.jars.packages\", \"org.mlflow.mlflow-spark\")) and then\ncall mlflow.spark.autolog() to enable automatic logging of Spark datasource\ninformation at read-time, without the need for explicit\nlog statements. Note that autologging of Spark ML (MLlib) models is not yet supported.\n\nAutologging captures the following information:\n\nFramework\n\nMetrics\n\nParameters\n\nTags\n\nArtifacts\n\nSpark\n\nSingle tag containing source path, version, format. 
The tag contains one line per datasource\n\nNote\n\nMoreover, Spark datasource autologging occurs asynchronously - as such, it\u2019s possible (though unlikely) to see race conditions when launching short-lived MLflow runs that result in datasource information not being logged.\n\nFastai\n\nCall mlflow.fastai.autolog() before your training code to enable automatic logging of metrics and parameters.\nSee an example usage with Fastai.\n\nAutologging captures the following information:\n\nFramework\n\nMetrics\n\nParameters\n\nTags\n\nArtifacts\n\nfastai\n\nuser-specified metrics\n\nLogs optimizer data as parameters. For example,\nepochs, lr, opt_func, etc;\nLogs the parameters of the EarlyStoppingCallback and\nOneCycleScheduler callbacks\n\nModel checkpoints are logged to a \u2018models\u2019 directory; MLflow Model (fastai Learner model) on training end; Model summary text is logged\n\nPytorch\n\nCall mlflow.pytorch.autolog() before your Pytorch Lightning training code to enable automatic logging of metrics, parameters, and models. See example usages here. Note\nthat currently, Pytorch autologging supports only models trained using Pytorch Lightning.\n\nAutologging is triggered on calls to pytorch_lightning.trainer.Trainer.fit and captures the following information:\n\nFramework/module\n\nMetrics\n\nParameters", "metadata": {"url": "https://mlflow.org/docs/latest/tracking.html"}} {"id": "72da79a93de6-16", "text": "Framework/module\n\nMetrics\n\nParameters\n\nTags\n\nArtifacts\n\npytorch_lightning.trainer.Trainer\n\nTraining loss; validation loss; average_test_accuracy;\nuser-defined-metrics.\n\nfit() parameters; optimizer name; learning rate; epsilon.\n\nModel summary on training start, MLflow Model (Pytorch model) on training end;\n\npytorch_lightning.callbacks.earlystopping\n\nTraining loss; validation loss; average_test_accuracy;\nuser-defined-metrics.\nMetrics from the EarlyStopping callbacks.\nFor example, stopped_epoch, restored_epoch,\nrestore_best_weight, etc.\n\nfit() parameters; optimizer name; learning rate; epsilon\nParameters from the EarlyStopping callbacks.\nFor example, min_delta, patience, baseline,``restore_best_weights``, etc\n\nModel summary on training start; MLflow Model (Pytorch model) on training end;\nBest Pytorch model checkpoint, if training stops due to early stopping callback.\n\nIf no active run exists when autolog() captures data, MLflow will automatically create a run to log information, ending the run once\nthe call to pytorch_lightning.trainer.Trainer.fit() completes.\n\nIf a run already exists when autolog() captures data, MLflow will log to that run but not automatically end that run after training.\n\nNote\n\nParameters not explicitly passed by users (parameters that use default values) while using pytorch_lightning.trainer.Trainer.fit() are not currently automatically logged\n\nIn case of a multi-optimizer scenario (such as usage of autoencoder), only the parameters for the first optimizer are logged\n\nOrganizing Runs in Experiments\n\nCommand-Line Interface (\n\nmlflow\n\nexperiments\n\nmlflow.create_experiment() Python API. 
You can pass the experiment name for an individual run using the CLI (for example, mlflow run ... --experiment-name [name]) or the MLFLOW_EXPERIMENT_NAME environment variable. Alternatively, you can use the experiment ID via the --experiment-id CLI flag or the MLFLOW_EXPERIMENT_ID environment variable.

# Set the experiment via environment variables
export MLFLOW_EXPERIMENT_NAME=fraud-detection

mlflow experiments create --experiment-name fraud-detection

import mlflow

# Launch a run. The experiment is inferred from the MLFLOW_EXPERIMENT_NAME environment
# variable, or from the --experiment-name parameter passed to the MLflow CLI (the latter
# taking precedence)
with mlflow.start_run():
    mlflow.log_param("a", 1)
    mlflow.log_metric("b", 2)

Managing Experiments and Runs with the Tracking Service API

MLflow provides a more detailed Tracking Service API for managing experiments and runs directly, which is available through the client SDK in the mlflow.client module. This makes it possible to query data about past runs, log additional information about them, create experiments, add tags to a run, and more.

Example

from mlflow.tracking import MlflowClient

client = MlflowClient()
experiments = client.search_experiments()  # returns a list of mlflow.entities.Experiment
run = client.create_run(experiments[0].experiment_id)  # returns mlflow.entities.Run
client.log_param(run.info.run_id, "hello", "world")
client.set_terminated(run.info.run_id)

Adding Tags to Runs

The MlflowClient.set_tag() function lets you add custom tags to runs. A tag can only have a single unique value mapped to it at a time. For example:

client.set_tag(run.info.run_id, "tag_key", "tag_value")

Important

Do not use the prefix mlflow. (e.g. mlflow.note) for a tag. This prefix is reserved for use by MLflow. See System Tags for a list of reserved tag keys.

Tracking UI

The Tracking UI lets you visualize, search and compare runs, as well as download run artifacts or metadata for analysis in other tools. If you log runs to a local mlruns directory, run mlflow ui in the directory above it, and it loads the corresponding runs. Alternatively, the MLflow tracking server serves the same UI and enables remote storage of run artifacts. In that case, you can view the UI using the URL http://<ip address of your MLflow tracking server>:5000 in your browser from any machine, including any remote machine that can connect to your tracking server.

The UI contains the following key features:

Experiment-based run listing and comparison (including run comparison across multiple experiments)

Searching for runs by parameter or metric value

Visualizing run metrics

Downloading run results

Querying Runs Programmatically

You can access all of the functions in the Tracking UI programmatically. This makes it easy to do several common tasks:

Query and compare runs using any data analysis tool of your choice, for example, pandas.

Determine the artifact URI for a run to feed some of its artifacts into a new run when executing a workflow.
For an example of querying runs and constructing a multistep workflow, see the MLflow Multistep Workflow Example project.

Load artifacts from past runs as MLflow Models. For an example of training, exporting, and loading a model, and predicting using the model, see the MLflow TensorFlow example.

Run automated parameter search algorithms, where you query the metrics from various runs to submit new ones. For an example of running automated parameter search algorithms, see the MLflow Hyperparameter Tuning Example project.

MLflow Tracking Servers

In this section:

Storage

Backend Stores
Artifact Stores
File store performance
Deletion Behavior
SQLAlchemy Options

Networking

Using the Tracking Server for proxied artifact access

Optionally using a Tracking Server instance exclusively for artifact handling

Logging to a Tracking Server

Tracking Server versioning

You run an MLflow tracking server using mlflow server. An example configuration for a server is:

Command to run the tracking server in this configuration

mlflow server --backend-store-uri /mnt/persistent-disk --default-artifact-root s3://my-mlflow-bucket/ --host 0.0.0.0

Note

When started in --artifacts-only mode, the tracking server will not permit any operation other than saving, loading, and listing artifacts.

Storage

An MLflow tracking server has two components for storage: a backend store and an artifact store.

Backend Stores

The backend store is where the MLflow Tracking Server stores experiment and run metadata as well as params, metrics, and tags for runs. MLflow supports two types of backend stores: file store and database-backed store.

Note

In order to use model registry functionality, you must run your server using a database-backed store.

Use --backend-store-uri to configure the type of backend store. You specify:

A file store backend as ./path_to_store or file:/path_to_store

A database-backed store as a SQLAlchemy database URI. The database URI typically takes the format <dialect>+<driver>://<username>:<password>@<host>:<port>/<database>. MLflow supports the database dialects mysql, mssql, sqlite, and postgresql. Drivers are optional. If you do not specify a driver, SQLAlchemy uses a dialect's default driver. For example, --backend-store-uri sqlite:///mlflow.db would use a local SQLite database.

Important

mlflow server will fail against a database-backed store with an out-of-date database schema. To prevent this, upgrade your database schema to the latest supported version using mlflow db upgrade [db_uri]. Schema migrations can result in database downtime, may take longer on larger databases, and are not guaranteed to be transactional.
You should always\ntake a backup of your database prior to running mlflow db upgrade - consult your database\u2019s\ndocumentation for instructions on taking a backup.\n\nBy default --backend-store-uri is set to the local ./mlruns directory (the same as when\nrunning mlflow run locally), but when running a server, make sure that this points to a\npersistent (that is, non-ephemeral) file system location.\n\nArtifact Stores\n\nIn this section:\n\nAmazon S3 and S3-compatible storage\n\nAzure Blob Storage\n\nGoogle Cloud Storage\n\nFTP server\n\nSFTP Server\n\nNFS\n\nHDFS\n\nThe artifact store is a location suitable for large data (such as an S3 bucket or shared NFS\nfile system) and is where clients log their artifact output (for example, models).\nartifact_location is a property recorded on mlflow.entities.Experiment for\ndefault location to store artifacts for all runs in this experiment. Additionally, artifact_uri\nis a property on mlflow.entities.RunInfo to indicate location where all artifacts for\nthis run are stored.", "metadata": {"url": "https://mlflow.org/docs/latest/tracking.html"}} {"id": "72da79a93de6-21", "text": "The MLflow client caches artifact location information on a per-run basis.\nIt is therefore not recommended to alter a run\u2019s artifact location before it has terminated.\n\nIn addition to local file paths, MLflow supports the following storage systems as artifact\nstores: Amazon S3, Azure Blob Storage, Google Cloud Storage, SFTP server, and NFS.\n\nUse --default-artifact-root (defaults to local ./mlruns directory) to configure default\nlocation to server\u2019s artifact store. This will be used as artifact location for newly-created\nexperiments that do not specify one. Once you create an experiment, --default-artifact-root\nis no longer relevant to that experiment.\n\nBy default, a server is launched with the --serve-artifacts flag to enable proxied access for artifacts.\nThe uri mlflow-artifacts:/ replaces an otherwise explicit object store destination (e.g., \u201cs3:/my_bucket/mlartifacts\u201d)\nfor interfacing with artifacts. 
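As a hedged illustration of that proxied flow (the server address and file name are placeholders, and the server is assumed to have been started with --serve-artifacts), a client can log and retrieve artifacts without holding any object store credentials:

import mlflow
from mlflow.tracking import MlflowClient

# Artifact traffic is proxied by the tracking server to its configured object store.
mlflow.set_tracking_uri("http://localhost:5000")

with mlflow.start_run() as run:
    with open("report.txt", "w") as f:
        f.write("results")  # placeholder artifact content
    mlflow.log_artifact("report.txt")  # uploaded over HTTP to the tracking server

# Download it back through the same proxied route; no S3/Azure/GCS credentials are needed here.
local_path = MlflowClient().download_artifacts(run.info.run_id, "report.txt")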
The client can access artifacts via HTTP requests to the MLflow Tracking Server.\nThis simplifies access requirements for users of the MLflow client, eliminating the need to\nconfigure access tokens or username and password environment variables for the underlying object store when writing or retrieving artifacts.\nTo disable proxied access for artifacts, specify --no-serve-artifacts.\n\nProvided an Mlflow server configuration where the --default-artifact-root is s3://my-root-bucket,\nthe following patterns will all resolve to the configured proxied object store location of s3://my-root-bucket/mlartifacts:\n\nhttps://:/mlartifacts\n\nhttp:///mlartifacts\n\nmlflow-artifacts:///mlartifacts\n\nmlflow-artifacts://:/mlartifacts\n\nmlflow-artifacts:/mlartifacts", "metadata": {"url": "https://mlflow.org/docs/latest/tracking.html"}} {"id": "72da79a93de6-22", "text": "mlflow-artifacts:/mlartifacts\n\nIf the host or host:port declaration is absent in client artifact requests to the MLflow server, the client API\nwill assume that the host is the same as the MLflow Tracking uri.\n\nNote\n\nIf an MLflow server is running with the --artifact-only flag, the client should interact with this server explicitly by\nincluding either a host or host:port definition for uri location references for artifacts.\nOtherwise, all artifact requests will route to the MLflow Tracking server, defeating the purpose of running a distinct artifact server.\n\nImportant\n\nAccess credentials and configuration for the artifact storage location are configured once during server initialization in the place\nof having users handle access credentials for artifact-based operations. Note that all users who have access to the\nTracking Server in this mode will have access to artifacts served through this assumed role.\n\nTo allow the server and clients to access the artifact location, you should configure your cloud\nprovider credentials as normal. For example, for S3, you can set the AWS_ACCESS_KEY_ID\nand AWS_SECRET_ACCESS_KEY environment variables, use an IAM role, or configure a default\nprofile in ~/.aws/credentials.\nSee Set up AWS Credentials and Region for Development for more info.\n\nImportant\n\nIf you do not specify a --default-artifact-root or an artifact URI when creating the experiment\n(for example, mlflow experiments create --artifact-location s3://), the artifact root\nis a path inside the file store. 
Typically this is not an appropriate location, as the client and
server probably refer to different physical locations (that is, the same path on different disks).

You may set an MLflow environment variable to configure the timeout for artifact uploads and downloads:

MLFLOW_ARTIFACT_UPLOAD_DOWNLOAD_TIMEOUT - (Experimental, may be changed or removed) Sets the timeout for artifact upload/download in seconds (Default set by individual artifact stores).

Amazon S3 and S3-compatible storage

To store artifacts in an S3 bucket or an S3-compatible store (for example, MinIO or
Digital Ocean Spaces), specify a URI of the form s3://<bucket>/<path>. MLflow obtains S3
credentials from your machine's IAM role, a profile in ~/.aws/credentials, or the
AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables, depending on which of these
are available. For more information on setting credentials, see
Set up AWS Credentials and Region for Development.

To add S3 file upload extra arguments, set MLFLOW_S3_UPLOAD_EXTRA_ARGS to a JSON object of key/value pairs.
For example, if you want to upload to a KMS Encrypted bucket using the KMS Key 1234:

export MLFLOW_S3_UPLOAD_EXTRA_ARGS='{"ServerSideEncryption": "aws:kms", "SSEKMSKeyId": "1234"}'

For a list of available extra args see the Boto3 ExtraArgs Documentation.

To store artifacts in a custom endpoint, set MLFLOW_S3_ENDPOINT_URL to your endpoint's URL. For example, if you are using Digital Ocean Spaces:

export MLFLOW_S3_ENDPOINT_URL=https://<region>.digitaloceanspaces.com

If you have a MinIO server at 1.2.3.4 on port 9000:

export MLFLOW_S3_ENDPOINT_URL=http://1.2.3.4:9000

If the MinIO server uses a self-signed SSL certificate, or one signed by an internal-only CA, you can set the MLFLOW_S3_IGNORE_TLS or AWS_CA_BUNDLE variable (not both at the same time!) to disable the certificate signature check, or to supply a custom CA bundle for that check, respectively:

export MLFLOW_S3_IGNORE_TLS=true
# or
export AWS_CA_BUNDLE=/some/ca/bundle.pem

Additionally, if the MinIO server is configured with a non-default region, you should set the AWS_DEFAULT_REGION variable:

export AWS_DEFAULT_REGION=my_region

Warning

Do not pass $MLFLOW_S3_ENDPOINT_URL as the value of --default-artifact-root. MLFLOW_S3_ENDPOINT_URL is only intended for S3-compatible endpoints; if it points at an AWS S3 location of the form https://<bucket>.s3.<region>.amazonaws.com/<path>/, unset MLFLOW_S3_ENDPOINT_URL and use the corresponding s3://<bucket>/<path>/ URI as the artifact root instead.

A complete list of configurable values for an S3 client is available in the boto3 documentation.

Azure Blob Storage

To store artifacts in Azure Blob Storage, specify a URI of the form
wasbs://<container>@<storage-account>.blob.core.windows.net/<path>.
MLflow expects Azure Storage access credentials in the
AZURE_STORAGE_CONNECTION_STRING or AZURE_STORAGE_ACCESS_KEY environment variables,
or credentials configured such that the DefaultAzureCredential()
class can pick them up.\nThe order of precedence is:\n\nAZURE_STORAGE_CONNECTION_STRING\n\nAZURE_STORAGE_ACCESS_KEY\n\nDefaultAzureCredential()\n\nYou must set one of these options on both your client application and your MLflow tracking server.\nAlso, you must run pip install azure-storage-blob separately (on both your client and the server) to access Azure Blob Storage.\nFinally, if you want to use DefaultAzureCredential, you must pip install azure-identity;\nMLflow does not declare a dependency on these packages by default.\n\nYou may set an MLflow environment variable to configure the timeout for artifact uploads and downloads:\n\nMLFLOW_ARTIFACT_UPLOAD_DOWNLOAD_TIMEOUT - (Experimental, may be changed or removed) Sets the timeout for artifact upload/download in seconds (Default: 600 for Azure blob).\n\nGoogle Cloud Storage", "metadata": {"url": "https://mlflow.org/docs/latest/tracking.html"}} {"id": "72da79a93de6-25", "text": "Google Cloud Storage\n\nTo store artifacts in Google Cloud Storage, specify a URI of the form gs:///.\nYou should configure credentials for accessing the GCS container on the client and server as described\nin the GCS documentation.\nFinally, you must run pip install google-cloud-storage (on both your client and the server)\nto access Google Cloud Storage; MLflow does not declare a dependency on this package by default.\n\nYou may set some MLflow environment variables to troubleshoot GCS read-timeouts (eg. due to slow transfer speeds) using the following variables:\n\nMLFLOW_ARTIFACT_UPLOAD_DOWNLOAD_TIMEOUT - (Experimental, may be changed or removed) Sets the standard timeout for transfer operations in seconds (Default: 60 for GCS). Use -1 for indefinite timeout.\n\nMLFLOW_GCS_DEFAULT_TIMEOUT - (Deprecated, please use MLFLOW_ARTIFACT_UPLOAD_DOWNLOAD_TIMEOUT) Sets the standard timeout for transfer operations in seconds (Default: 60). Use -1 for indefinite timeout.\n\nMLFLOW_GCS_UPLOAD_CHUNK_SIZE - Sets the standard upload chunk size for bigger files in bytes (Default: 104857600 \u2259 100MiB), must be multiple of 256 KB.\n\nMLFLOW_GCS_DOWNLOAD_CHUNK_SIZE - Sets the standard download chunk size for bigger files in bytes (Default: 104857600 \u2259 100MiB), must be multiple of 256 KB\n\nFTP server\n\nTo store artifacts in a FTP server, specify a URI of the form ftp://user@host/path/to/directory .\nThe URI may optionally include a password for logging into the server, e.g. ftp://user:pass@host/path/to/directory\n\nSFTP Server", "metadata": {"url": "https://mlflow.org/docs/latest/tracking.html"}} {"id": "72da79a93de6-26", "text": "SFTP Server\n\nTo store artifacts in an SFTP server, specify a URI of the form sftp://user@host/path/to/directory.\nYou should configure the client to be able to log in to the SFTP server without a password over SSH (e.g. public key, identity file in ssh_config, etc.).\n\nThe format sftp://user:pass@host/ is supported for logging in. However, for safety reasons this is not recommended.\n\nWhen using this store, pysftp must be installed on both the server and the client. Run pip install pysftp to install the required package.\n\nNFS\n\nTo store artifacts in an NFS mount, specify a URI as a normal file system path, e.g., /mnt/nfs.\nThis path must be the same on both the server and the client \u2013 you may need to use symlinks or remount\nthe client in order to enforce this property.\n\nHDFS\n\nTo store artifacts in HDFS, specify a hdfs: URI. 
It can contain a host and port (hdfs://<host>:<port>/<path>) or just a path (hdfs://<path>).

There are also two ways to authenticate to HDFS:

Use the current UNIX account authorization

Kerberos credentials, using the following environment variables:

export MLFLOW_KERBEROS_TICKET_CACHE=/tmp/krb5cc_22222222
export MLFLOW_KERBEROS_USER=user_name_to_use

Most of the cluster configuration settings are read from hdfs-site.xml, which the HDFS native
driver locates using the CLASSPATH environment variable.

The HDFS driver used is libhdfs.

File store performance

MLflow will automatically try to use LibYAML bindings if they are already installed.
However, if you notice any performance issues when using the file store backend, it could mean LibYAML is not installed on your system.
On Linux or Mac you can easily install it using your system package manager:

# On Ubuntu/Debian
apt-get install libyaml-cpp-dev libyaml-dev

# On macOS using Homebrew
brew install yaml-cpp libyaml

After installing LibYAML, you need to reinstall PyYAML:

# Reinstall PyYAML
pip --no-cache-dir install --force-reinstall -I pyyaml

Deletion Behavior

In order to allow MLflow Runs to be restored, Run metadata and artifacts are not automatically removed
from the backend store or artifact store when a Run is deleted. The mlflow gc CLI is provided
for permanently removing Run metadata and artifacts for deleted runs.

SQLAlchemy Options

You can inject some SQLAlchemy connection pooling options using environment variables:

MLFLOW_SQLALCHEMYSTORE_POOL_SIZE sets the SQLAlchemy QueuePool option pool_size

MLFLOW_SQLALCHEMYSTORE_POOL_RECYCLE sets the SQLAlchemy QueuePool option pool_recycle

MLFLOW_SQLALCHEMYSTORE_MAX_OVERFLOW sets the SQLAlchemy QueuePool option max_overflow

Networking

The --host option exposes the service on all interfaces. If running a server in production, we
recommend not exposing the built-in server broadly (as it is unauthenticated and unencrypted),
and instead putting it behind a reverse proxy like NGINX or Apache httpd, or connecting over VPN.
You can then pass authentication headers to MLflow using these environment variables.

Additionally, you should ensure that the --backend-store-uri (which defaults to the
./mlruns directory) points to a persistent (non-ephemeral) disk or database connection.

Using the Tracking Server for proxied artifact access

To use an instance of the MLflow Tracking server for artifact operations (Scenario 5: MLflow Tracking Server enabled with proxied artifact storage access),
start a server with the optional parameter --serve-artifacts to enable proxied artifact access, and set a
path to record artifacts to by providing a value for the argument --artifacts-destination.
The tracking server will, in this mode, stream any artifacts that a client is logging directly through an assumed (server-side) identity,
eliminating the need for access credentials to be handled by end-users.

Note

Authentication access to the value set by --artifacts-destination must be configured when starting the tracking
server, if required.

To start the MLflow server with proxied artifact access enabled to an HDFS location (as an example):

export HADOOP_USER_NAME=mlflowserverauth

mlflow server \
    --host 0.0.0.0 \
    --port 8885 \
    --artifacts-destination hdfs://myhost:8887/mlprojects/models

Optionally using a Tracking Server instance exclusively for artifact handling

If the volume of tracking server requests is sufficiently large and performance issues are noticed, a tracking server
can be configured to serve in --artifacts-only mode (Scenario 6: MLflow Tracking Server used exclusively as proxied access host for artifact storage access), operating in tandem with an instance that
runs with --no-serve-artifacts specified. This configuration ensures that the processing of artifacts is isolated
from all other tracking server event handling.

When a tracking server is configured in --artifacts-only mode, any tasks apart from those concerned with artifact
handling (that is, model logging, loading models, logging artifacts, listing artifacts, etc.) will return an HTTPError.
The following example shows a client REST call in Python attempting to list experiments from a server that is configured in
--artifacts-only mode:

import requests

response = requests.get("http://0.0.0.0:8885/api/2.0/mlflow/experiments/list")

Output

>> HTTPError: Endpoint: /api/2.0/mlflow/experiments/list disabled due to the mlflow server running in `--artifacts-only` mode.

Using an additional MLflow server to handle artifacts exclusively can be useful for large-scale MLOps infrastructure.
Decoupling the longer-running and more compute-intensive tasks of artifact handling from the faster and higher-volume
metadata functionality of the other Tracking API requests can help minimize the burden of an otherwise single MLflow
server handling both types of payloads.

Logging to a Tracking Server

To log to a tracking server, set the MLFLOW_TRACKING_URI environment variable to the server's URI,
along with its scheme and port (for example, http://10.0.0.1:5000), or call mlflow.set_tracking_uri().

The mlflow.start_run(), mlflow.log_param(), and mlflow.log_metric() calls
then make API requests to your remote tracking server.

import mlflow

remote_server_uri = "..."  # set to your server URI
mlflow.set_tracking_uri(remote_server_uri)
# Note: on Databricks, the experiment name passed to mlflow.set_experiment must be a
# valid path in the workspace
mlflow.set_experiment("/my-experiment")
with mlflow.start_run():
    mlflow.log_param("a", 1)
    mlflow.log_metric("b", 2)

The equivalent in R:

library(mlflow)
install_mlflow()
remote_server_uri = "..." # set to your server URI
mlflow_set_tracking_uri(remote_server_uri)
# Note: on Databricks, the experiment name passed to mlflow_set_experiment must be a
# valid path in the workspace
mlflow_set_experiment("/my-experiment")
mlflow_log_param("a", "1")
In addition to the MLFLOW_TRACKING_URI environment variable, the following environment variables
allow passing HTTP authentication to the tracking server:

MLFLOW_TRACKING_USERNAME and MLFLOW_TRACKING_PASSWORD - username and password to use with HTTP
Basic authentication. To use Basic authentication, you must set both environment variables.

MLFLOW_TRACKING_TOKEN - token to use with HTTP Bearer authentication. Basic authentication takes precedence if set.

MLFLOW_TRACKING_INSECURE_TLS - If set to the literal true, MLflow does not verify the TLS connection,
meaning it does not validate certificates or hostnames for https:// tracking URIs. This flag is not recommended for
production environments. If this is set to true then MLFLOW_TRACKING_SERVER_CERT_PATH must not be set.

MLFLOW_TRACKING_SERVER_CERT_PATH - Path to a CA bundle to use. Sets the verify param of the
requests.request function (see the requests main interface).
When you use a self-signed server certificate, you can use this to verify it on the client side.
If this is set, MLFLOW_TRACKING_INSECURE_TLS must not be set (false).

MLFLOW_TRACKING_CLIENT_CERT_PATH - Path to an SSL client cert file (.pem). Sets the cert param
of the requests.request function (see the requests main interface).
This can be used to use a (self-signed) client certificate.

Note

If the MLflow server is not configured with the --serve-artifacts option, the client directly pushes artifacts
to the artifact store. It does not proxy these through the tracking server by default.

For this reason, the client needs direct access to the artifact store. For instructions on setting up these credentials,
see Artifact Stores.

Tracking Server versioning

The version of MLflow running on the server can be found by querying the /version endpoint.
This can be used to check that the client-side version of MLflow is up-to-date with a remote tracking server prior to running experiments.
For example:

import requests

import mlflow

response = requests.get("http://<mlflow-host>:<mlflow-port>/version")
assert response.text == mlflow.__version__  # Checking for a strict version match

System Tags

You can annotate runs with arbitrary tags. Tag keys that start with mlflow. are reserved for
internal use. The following tags are set automatically by MLflow, when appropriate:

Key

Description

mlflow.note.content

A descriptive note about this run. This reserved tag is not set automatically and can
be overridden by the user to include additional information about the run. The content
is displayed on the run's page under the Notes section.

mlflow.parentRunId

The ID of the parent run, if this is a nested run.

mlflow.user

Identifier of the user who created the run.

mlflow.source.type

Source type.
Possible values: \"NOTEBOOK\", \"JOB\", \"PROJECT\",\n\"LOCAL\", and \"UNKNOWN\"\n\nmlflow.source.name\n\nSource identifier (e.g., GitHub URL, local Python filename, name of notebook)\n\nmlflow.source.git.commit\n\nCommit hash of the executed code, if in a git repository.\n\nmlflow.source.git.branch", "metadata": {"url": "https://mlflow.org/docs/latest/tracking.html"}} {"id": "72da79a93de6-32", "text": "Commit hash of the executed code, if in a git repository.\n\nmlflow.source.git.branch\n\nName of the branch of the executed code, if in a git repository.\n\nmlflow.source.git.repoURL\n\nURL that the executed code was cloned from.\n\nmlflow.project.env\n\nThe runtime context used by the MLflow project.\nPossible values: \"docker\" and \"conda\".\n\nmlflow.project.entryPoint\n\nName of the project entry point associated with the current run, if any.\n\nmlflow.docker.image.name\n\nName of the Docker image used to execute this run.\n\nmlflow.docker.image.id\n\nID of the Docker image used to execute this run.\n\nmlflow.log-model.history\n\nModel metadata collected by log-model calls. Includes the serialized\nform of the MLModel model files logged to a run, although the exact format and\ninformation captured is subject to change.", "metadata": {"url": "https://mlflow.org/docs/latest/tracking.html"}} {"id": "ced13a6bcc4c-0", "text": "MLflow Projects\n\nAn MLflow Project is a format for packaging data science code in a reusable and reproducible way,\nbased primarily on conventions. In addition, the Projects component includes an API and command-line\ntools for running projects, making it possible to chain together projects into workflows.\n\nTable of Contents\n\nOverview\n\nSpecifying Projects\n\nRunning Projects\n\nIterating Quickly\n\nBuilding Multistep Workflows\n\nOverview\n\nAt the core, MLflow Projects are just a convention for organizing and describing your code to let\nother data scientists (or automated tools) run it. Each project is simply a directory of files, or\na Git repository, containing your code. MLflow can run some projects based on a convention for\nplacing files in this directory (for example, a conda.yaml file is treated as a\nConda environment), but you can describe your project in more detail by\nadding a MLproject file, which is a YAML formatted\ntext file. Each project can specify several properties:\n\nA human-readable name for the project.\n\nCommands that can be run within the project, and information about their\nparameters. Most projects contain at least one entry point that you want other users to\ncall. Some projects can also contain more than one entry point: for example, you might have a\nsingle Git repository containing multiple featurization algorithms. You can also call\nany .py or .sh file in the project as an entry point. If you list your entry points in\na MLproject file, however, you can also specify parameters for them, including data\ntypes and default values.\n\nThe software environment that should be used to execute project entry points. This includes all\nlibrary dependencies required by the project code. See Project Environments for more\ninformation about the software environments supported by MLflow Projects, including\nConda environments,\nVirtualenv environments, and\nDocker containers.", "metadata": {"url": "https://mlflow.org/docs/latest/projects.html"}} {"id": "ced13a6bcc4c-1", "text": "You can run any project from a Git URI or from a local directory using the mlflow run\ncommand-line tool, or the mlflow.projects.run() Python API. 
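A minimal sketch of the Python API form (the project URI and the alpha parameter mirror the CLI examples used in this documentation; treat the exact values as illustrative):

import mlflow

# Run the example project from GitHub; roughly equivalent to `mlflow run <uri> -P alpha=0.5`.
submitted = mlflow.projects.run(
    uri="https://github.com/mlflow/mlflow-example.git",
    parameters={"alpha": 0.5},
)
# The returned SubmittedRun can be inspected or waited on.
print(submitted.run_id, submitted.get_status())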
These APIs also allow submitting the\nproject for remote execution on Databricks and\nKubernetes.\n\nImportant\n\nBy default, MLflow uses a new, temporary working directory for Git projects.\nThis means that you should generally pass any file arguments to MLflow\nproject using absolute, not relative, paths. If your project declares its parameters, MLflow\nautomatically makes paths absolute for parameters of type path.\n\nSpecifying Projects\n\nBy default, any Git repository or local directory can be treated as an MLflow project; you can\ninvoke any bash or Python script contained in the directory as a project entry point. The\nProject Directories section describes how MLflow interprets directories as projects.\n\nTo provide additional control over a project\u2019s attributes, you can also include an MLproject\nfile in your project\u2019s repository or directory.\n\nFinally, MLflow projects allow you to specify the software environment\nthat is used to execute project entry points.\n\nProject Environments\n\nMLflow currently supports the following project environments: Virtualenv environment, conda environment, Docker container environment, and system environment.\n\nVirtualenv environments support Python packages available on PyPI. When an MLflow Project\nspecifies a Virtualenv environment, MLflow will download the specified version of Python by using\npyenv and create an isolated environment that contains the project dependencies using virtualenv,\nactivating it as the execution environment prior to running the project code.\nYou can specify a Virtualenv environment for your MLflow Project by including a python_env entry in your\nMLproject file. For details, see the Project Directories and Specifying an Environment sections.", "metadata": {"url": "https://mlflow.org/docs/latest/projects.html"}} {"id": "ced13a6bcc4c-2", "text": "Docker containers allow you to capture\nnon-Python dependencies such as Java libraries.\nWhen you run an MLflow project that specifies a Docker image, MLflow adds a new Docker layer\nthat copies the project\u2019s contents into the /mlflow/projects/code directory. This step produces\na new image. MLflow then runs the new image and invokes the project entrypoint in the resulting\ncontainer.\nEnvironment variables, such as MLFLOW_TRACKING_URI, are propagated inside the Docker container\nduring project execution. Additionally, runs and\nexperiments created by the project are saved to the\ntracking server specified by your tracking URI. When running\nagainst a local tracking URI, MLflow mounts the host system\u2019s tracking directory\n(e.g., a local mlruns directory) inside the container so that metrics, parameters, and\nartifacts logged during project execution are accessible afterwards.\nSee Dockerized Model Training with MLflow for an example of an MLflow\nproject with a Docker environment.\nTo specify a Docker container environment, you must add an\nMLproject file to your project. For information about specifying\na Docker container environment in an MLproject file, see\nSpecifying an Environment.\n\nConda environments support\nboth Python packages and native libraries (e.g, CuDNN or Intel MKL). 
When an MLflow Project\nspecifies a Conda environment, it is activated before project code is run.\n\nWarning\nBy using conda, you\u2019re responsible for adhering to Anaconda\u2019s terms of service.", "metadata": {"url": "https://mlflow.org/docs/latest/projects.html"}} {"id": "ced13a6bcc4c-3", "text": "By default, MLflow uses the system path to find and run the conda binary. You can use a\ndifferent Conda installation by setting the MLFLOW_CONDA_HOME environment variable; in this\ncase, MLflow attempts to run the binary at $MLFLOW_CONDA_HOME/bin/conda.\nYou can specify a Conda environment for your MLflow project by including a conda.yaml\nfile in the root of the project directory or by including a conda_env entry in your\nMLproject file. For details, see the Project Directories and Specifying an Environment sections.\nThe mlflow run command supports running a conda environment project as a virtualenv environment project.\nTo do this, run mlflow run with --env-manager virtualenv:\nmlflow run /path/to/conda/project --env-manager virtualenv\n\n\n\nWarning\nWhen a conda environment project is executed as a virtualenv environment project,\nconda dependencies will be ignored and only pip dependencies will be installed.\n\nYou can also run MLflow Projects directly in your current system environment. All of the\nproject\u2019s dependencies must be installed on your system prior to project execution. The system\nenvironment is supplied at runtime. It is not part of the MLflow Project\u2019s directory contents\nor MLproject file. For information about using the system environment when running\na project, see the Environment parameter description in the Running Projects section.\n\nProject Directories\n\nWhen running an MLflow Project directory or repository that does not contain an MLproject\nfile, MLflow uses the following conventions to determine the project\u2019s attributes:\n\nThe project\u2019s name is the name of the directory.\n\nThe Conda environment\nis specified in conda.yaml, if present. If no conda.yaml file is present, MLflow\nuses a Conda environment containing only Python (specifically, the latest Python available to\nConda) when running the project.", "metadata": {"url": "https://mlflow.org/docs/latest/projects.html"}} {"id": "ced13a6bcc4c-4", "text": "Any .py and .sh file in the project can be an entry point. MLflow uses Python\nto execute entry points with the .py extension, and it uses bash to execute entry points with\nthe .sh extension. For more information about specifying project entrypoints at runtime,\nsee Running Projects.\n\nBy default, entry points do not have any parameters when an MLproject file is not included.\nParameters can be supplied at runtime via the mlflow run CLI or the\nmlflow.projects.run() Python API. Runtime parameters are passed to the entry point on the\ncommand line using --key value syntax. For more information about running projects and\nwith runtime parameters, see Running Projects.\n\nMLproject File\n\nYou can get more control over an MLflow Project by adding an MLproject file, which is a text\nfile in YAML syntax, to the project\u2019s root directory. 
The following is an example of an
MLproject file:

name: My Project

python_env: python_env.yaml
# or
# conda_env: my_env.yaml
# or
# docker_env:
#    image: mlflow-docker-example

entry_points:
  main:
    parameters:
      data_file: path
      regularization: {type: float, default: 0.1}
    command: "python train.py -r {regularization} {data_file}"
  validate:
    parameters:
      data_file: path
    command: "python validate.py {data_file}"

The file can specify a name and a Conda or Docker environment, as well as more detailed information about each entry point.
Specifically, each entry point defines a command to run and
parameters to pass to the command (including data types).

Specifying an Environment

This section describes how to specify Conda and Docker container environments in an MLproject file.
MLproject files cannot specify both a Conda environment and a Docker environment.

Include a top-level python_env entry in the MLproject file.
The value of this entry must be a relative path to a python_env YAML file
within the MLflow project's directory. The following is an example MLproject
file with a python_env definition:

python_env: files/config/python_env.yaml

python_env refers to an environment file located at
<MLFLOW_PROJECT_DIRECTORY>/files/config/python_env.yaml, where
<MLFLOW_PROJECT_DIRECTORY> is the path to the MLflow project's root directory.
The following is an example of a python_env.yaml file:

# Python version required to run the project.
python: "3.8.15"
# Dependencies required to build packages. This field is optional.
build_dependencies:
  - pip
  - setuptools
  - wheel==0.37.1
# Dependencies required to run the project.
dependencies:
  - mlflow
  - scikit-learn==1.0.2

Include a top-level conda_env entry in the MLproject file.
The value of this entry must be a relative path to a Conda environment YAML file
within the MLflow project's directory. In the following example:

conda_env: files/config/conda_environment.yaml

conda_env refers to an environment file located at
<MLFLOW_PROJECT_DIRECTORY>/files/config/conda_environment.yaml, where
<MLFLOW_PROJECT_DIRECTORY> is the path to the MLflow project's root directory.

Include a top-level docker_env entry in the MLproject file. The value of this entry must be the name
of a Docker image that is accessible on the system executing the project; this image name
may include a registry path and tags. Here are a couple of examples.

Example 1: Image without a registry path

docker_env:
  image: mlflow-docker-example-environment

In this example, docker_env refers to the Docker image with name
mlflow-docker-example-environment and default tag latest. Because no registry path is
specified, Docker searches for this image on the system that runs the MLflow project. If the
image is not found, Docker attempts to pull it from DockerHub.

Example 2: Mounting volumes and specifying environment variables

You can also specify local volumes to mount in the docker image (as you normally would with Docker's -v option), and additional environment variables (as per Docker's -e option). Environment variables can either be copied from the host system's environment variables, or specified as new variables for the Docker environment. The environment field should be a list.
Elements in this list can either be lists of two strings (for defining a new variable) or single strings (for copying variables from the host system). For example:\ndocker_env:\n image: mlflow-docker-example-environment\n volumes: [\"/local/path:/container/mount/path\"]\n environment: [[\"NEW_ENV_VAR\", \"new_var_value\"], \"VAR_TO_COPY_FROM_HOST_ENVIRONMENT\"]\n\n\nIn this example our docker container will have one additional local volume mounted, and two additional environment variables: one newly-defined, and one copied from the host system.\nExample 3: Image in a remote registry\ndocker_env:\n image: 012345678910.dkr.ecr.us-west-2.amazonaws.com/mlflow-docker-example-environment:7.0", "metadata": {"url": "https://mlflow.org/docs/latest/projects.html"}} {"id": "ced13a6bcc4c-7", "text": "In this example, docker_env refers to the Docker image with name\nmlflow-docker-example-environment and tag 7.0 in the Docker registry with path\n012345678910.dkr.ecr.us-west-2.amazonaws.com, which corresponds to an\nAmazon ECR registry.\nWhen the MLflow project is run, Docker attempts to pull the image from the specified registry.\nThe system executing the MLflow project must have credentials to pull this image from the specified registry.\nExample 4: Build a new image\ndocker_env:\n image: python:3.8\n\n\nmlflow run ... --build-image\n\n\nTo build a new image that\u2019s based on the specified image and files contained in\nthe project directory, use the --build-image argument. In the above example, the image\npython:3.8 is pulled from Docker Hub if it\u2019s not present locally, and a new image is built\nbased on it. The project is executed in a container created from this image.\n\nCommand Syntax\n\nMLproject\n\nformat string syntax.\nAll of the parameters declared in the entry point\u2019s\n\nparameters\n\nparameters\n\n-key\n\nvalue\n\nMLproject\n\nBefore substituting parameters in the command, MLflow escapes them using the Python\nshlex.quote function, so you don\u2019t\nneed to worry about adding quotes inside your command field.\n\nSpecifying Parameters\n\nMLflow allows specifying a data type and default value for each parameter. You can specify just the\ndata type by writing:\n\nparameter_name\n\ndata_type\n\nin your YAML file, or add a default value as well using one of the following syntaxes (which are\nequivalent in YAML):\n\nparameter_name\n\ntype\n\ndata_type\n\ndefault\n\nvalue\n\n# Short syntax\n\nparameter_name\n\n# Long syntax\n\ntype\n\ndata_type\n\ndefault\n\nvalue", "metadata": {"url": "https://mlflow.org/docs/latest/projects.html"}} {"id": "ced13a6bcc4c-8", "text": "# Short syntax\n\nparameter_name\n\n# Long syntax\n\ntype\n\ndata_type\n\ndefault\n\nvalue\n\nMLflow supports four parameter types, some of which it treats specially (for example, downloading\ndata to local files). Any undeclared parameters are treated as string. The parameter types are:\n\nA text string.\n\nA real number. MLflow validates that the parameter is a number.\n\nA path on the local file system. MLflow converts any relative path parameters to absolute\npaths. MLflow also downloads any paths passed as distributed storage URIs\n(s3://, dbfs://, gs://, etc.) to local files. Use this type for programs that can only read local\nfiles.\n\nA URI for data either in a local or distributed storage system. MLflow converts\nrelative paths to absolute paths, as in the path type. 
Use this type for programs\nthat know how to read from distributed storage (e.g., programs that use Spark).\n\nRunning Projects\n\nMLflow provides two ways to run projects: the mlflow run command-line tool, or\nthe mlflow.projects.run() Python API. Both tools take the following parameters:\n\nA directory on the local file system or a Git repository path,\nspecified as a URI of the form https:// (to use HTTPS) or user@host:path\n(to use Git over SSH). To run against an MLproject file located in a subdirectory of the project,\nadd a \u2018#\u2019 to the end of the URI argument, followed by the relative path from the project\u2019s root directory\nto the subdirectory containing the desired project.\n\nFor Git-based projects, the commit hash or branch name in the Git repository.\n\nThe name of the entry point, which defaults to main. You can use any\nentry point named in the MLproject file, or any .py or .sh file in the project,\ngiven as a path from the project root (for example, src/test.py).", "metadata": {"url": "https://mlflow.org/docs/latest/projects.html"}} {"id": "ced13a6bcc4c-9", "text": "Key-value parameters. Any parameters with\ndeclared types are validated and transformed if needed.\n\nBoth the command-line and API let you launch projects remotely\nin a Databricks environment. This includes setting cluster\nparameters such as a VM type. Of course, you can also run projects on any other computing\ninfrastructure of your choice using the local version of the mlflow run command (for\nexample, submit a script that does mlflow run to a standard job queueing system).\nYou can also launch projects remotely on Kubernetes clusters\nusing the mlflow run CLI (see Run an MLflow Project on Kubernetes).\n\nBy default, MLflow Projects are run in the environment specified by the project directory\nor the MLproject file (see Specifying Project Environments).\nYou can ignore a project\u2019s specified environment and run the project in the current\nsystem environment by supplying the --env-manager=local flag, but this can lead to\nunexpected results if there are dependency mismatches between the project environment and\nthe current system environment.\n\nFor example, the tutorial creates and publishes an MLflow Project that trains a linear model. The\nproject is also published on GitHub at https://github.com/mlflow/mlflow-example. To run\nthis project:\n\nmlflow\n\nrun\n\ngit@github.com:mlflow/mlflow-example.git\n\nP\n\nalpha\n\n0.5\n\nThere are also additional options for disabling the creation of a Conda environment, which can be\nuseful if you quickly want to test a project in your existing shell environment.\n\nRun an MLflow Project on Databricks\n\nYou can run MLflow Projects remotely on Databricks. To use this feature, you must have an enterprise\nDatabricks account (Community Edition is not supported) and you must have set up the\nDatabricks CLI. Find detailed instructions\nin the Databricks docs (Azure Databricks,\nDatabricks on AWS).", "metadata": {"url": "https://mlflow.org/docs/latest/projects.html"}} {"id": "ced13a6bcc4c-10", "text": "Run an MLflow Project on Kubernetes\n\nYou can run MLflow Projects with Docker environments\non Kubernetes. 
The following sections provide an overview of the feature, including a simple\nProject execution guide with examples.\n\nTo see this feature in action, you can also refer to the\nDocker example, which includes\nthe required Kubernetes backend configuration (kubernetes_backend.json) and Kubernetes Job Spec\n(kubernetes_job_template.yaml) files.\n\nHow it works\n\nWhen you run an MLflow Project on Kubernetes, MLflow constructs a new Docker image\ncontaining the Project\u2019s contents; this image inherits from the Project\u2019s\nDocker environment. MLflow then pushes the new\nProject image to your specified Docker registry and starts a\nKubernetes Job\non your specified Kubernetes cluster. This Kubernetes Job downloads the Project image and starts\na corresponding Docker container. Finally, the container invokes your Project\u2019s\nentry point, logging parameters, tags, metrics, and artifacts to your\nMLflow tracking server.\n\nExecution guide\n\nYou can run your MLflow Project on Kubernetes by following these steps:\n\nAdd a Docker environment to your MLflow Project, if one does not already exist. For\nreference, see Specifying an Environment.\n\nCreate a backend configuration JSON file with the following entries:", "metadata": {"url": "https://mlflow.org/docs/latest/projects.html"}} {"id": "ced13a6bcc4c-11", "text": "Create a backend configuration JSON file with the following entries:\n\nkube-context\nThe Kubernetes context\nwhere MLflow will run the job. If not provided, MLflow will use the current context.\nIf no context is available, MLflow will assume it is running in a Kubernetes cluster\nand it will use the Kubernetes service account running the current pod (\u2018in-cluster\u2019 configuration).\nrepository-uri\nThe URI of the docker repository where the Project execution Docker image will be uploaded\n(pushed). Your Kubernetes cluster must have access to this repository in order to run your\nMLflow Project.\nkube-job-template-path\nThe path to a YAML configuration file for your Kubernetes Job - a Kubernetes Job Spec.\nMLflow reads the Job Spec and replaces certain fields to facilitate job execution and\nmonitoring; MLflow does not modify the original template file. For more information about\nwriting Kubernetes Job Spec templates for use with MLflow, see the\nJob Templates section.\n\nExample Kubernetes backend configuration\n\n\"kube-context\"\n\n\"docker-for-desktop\"\n\n\"repository-uri\"\n\n\"username/mlflow-kubernetes-example\"\n\n\"kube-job-template-path\"\n\n\"/Users/username/path/to/kubernetes_job_template.yaml\"\n\nIf necessary, obtain credentials to access your Project\u2019s Docker and Kubernetes resources, including:\n\nThe Docker environment image specified in the MLproject\nfile.\nThe Docker repository referenced by repository-uri in your backend configuration file.\nThe Kubernetes context\nreferenced by kube-context in your backend configuration file.\n\nMLflow expects these resources to be accessible via the\ndocker and\nkubectl CLIs before running the\nProject.\n\nRun the Project using the MLflow Projects CLI or Python API,\nspecifying your Project URI and the path to your backend configuration file. 
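With the Python API, such a run can be submitted roughly as follows (a sketch; "<project_uri>" is a placeholder for a Git URI or local folder, and the JSON path is the example backend configuration file referenced below):

import mlflow

# Submit the project to the Kubernetes backend described by the config file.
mlflow.projects.run(
    uri="<project_uri>",
    backend="kubernetes",
    backend_config="examples/docker/kubernetes_config.json",
)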
For example:\nmlflow run --backend kubernetes --backend-config examples/docker/kubernetes_config.json\n\n\nwhere is a Git repository URI or a folder.\n\nJob Templates", "metadata": {"url": "https://mlflow.org/docs/latest/projects.html"}} {"id": "ced13a6bcc4c-12", "text": "where is a Git repository URI or a folder.\n\nJob Templates\n\nMLflow executes Projects on Kubernetes by creating Kubernetes Job resources.\nMLflow creates a Kubernetes Job for an MLflow Project by reading a user-specified\nJob Spec.\nWhen MLflow reads a Job Spec, it formats the following fields:\n\nmetadata.name Replaced with a string containing the name of the MLflow Project and the time\nof Project execution\n\nspec.template.spec.container[0].name Replaced with the name of the MLflow Project\n\nspec.template.spec.container[0].image Replaced with the URI of the Docker image created during\nProject execution. This URI includes the Docker image\u2019s digest hash.\n\nspec.template.spec.container[0].command Replaced with the Project entry point command\nspecified when executing the MLflow Project.\n\nThe following example shows a simple Kubernetes Job Spec that is compatible with MLflow Project\nexecution. Replaced fields are indicated using bracketed text.\n\nExample Kubernetes Job Spec\n\napiVersion\n\nbatch/v1\n\nkind\n\nJob\n\nmetadata\n\nname\n\n\"{replaced\n\nwith\n\nMLflow\n\nProject\n\nname}\"\n\nnamespace\n\nmlflow\n\nspec\n\nttlSecondsAfterFinished\n\n100\n\nbackoffLimit\n\ntemplate\n\nspec\n\ncontainers\n\nname\n\n\"{replaced\n\nwith\n\nMLflow\n\nProject\n\nname}\"\n\nimage\n\n\"{replaced\n\nwith\n\nURI\n\nof\n\nDocker\n\nimage\n\ncreated\n\nduring\n\nProject\n\nexecution}\"\n\ncommand\n\n\"{replaced\n\nwith\n\nMLflow\n\nProject\n\nentry\n\npoint\n\ncommand}\"\n\nenv\n\n\"{appended\n\nwith\n\nMLFLOW_TRACKING_URI,\n\nMLFLOW_RUN_ID\n\nand\n\nMLFLOW_EXPERIMENT_ID}\"\n\nresources\n\nlimits\n\nmemory\n\n512Mi\n\nrequests\n\nmemory\n\n256Mi\n\nrestartPolicy\n\nNever\n\ncontainer.name\n\ncontainer.image\n\ncontainer.command\n\nMLFLOW_TRACKING_URI\n\nMLFLOW_RUN_ID\n\nMLFLOW_EXPERIMENT_ID\n\ncontainer.env", "metadata": {"url": "https://mlflow.org/docs/latest/projects.html"}} {"id": "ced13a6bcc4c-13", "text": "MLFLOW_TRACKING_URI\n\nMLFLOW_RUN_ID\n\nMLFLOW_EXPERIMENT_ID\n\ncontainer.env\n\nKUBE_MLFLOW_TRACKING_URI\n\nMLFLOW_TRACKING_URI\n\nIterating Quickly\n\nIf you want to rapidly develop a project, we recommend creating an MLproject file with your\nmain program specified as the main entry point, and running it with mlflow run ..\nTo avoid having to write parameters repeatedly, you can add default parameters in your MLproject file.\n\nBuilding Multistep Workflows\n\nmlflow.projects.run() API, combined with\n\nmlflow.client, makes it possible to build\nmulti-step workflows with separate projects (or entry points in the same project) as the individual\nsteps. Each call to\n\nmlflow.projects.run() returns a run object, that you can use with\n\nmlflow.client to determine when the run has ended and get its output artifacts. These artifacts\ncan then be passed into another step that takes\n\npath\n\nuri\n\nDifferent users can publish reusable steps for data featurization, training, validation, and so on, that other users or team can run in their workflows. 
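A minimal sketch of chaining two such steps (the entry-point names and the features parameter are hypothetical and would be defined in your own MLproject file):

import mlflow
from mlflow.tracking import MlflowClient

# Run the first step and block until it finishes.
featurize = mlflow.projects.run(uri=".", entry_point="featurize")
featurize.wait()

# Look up the finished run to find where its artifacts were stored...
features_uri = MlflowClient().get_run(featurize.run_id).info.artifact_uri

# ...and pass that location to the next step as a path/uri-typed parameter.
mlflow.projects.run(uri=".", entry_point="train", parameters={"features": features_uri})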
Because MLflow supports Git versioning, another team can lock their workflow to a specific version of a project, or upgrade to a new one on their own schedule.\n\nUsing mlflow.projects.run() you can launch multiple runs in parallel either on the local machine or on a cloud platform like Databricks. Your driver program can then inspect the metrics from each run in real time to cancel runs, launch new ones, or select the best performing run on a target metric.\n\nSometimes you want to run the same training code on different random splits of training and validation data. With MLflow Projects, you can package the project in a way that allows this, for example, by taking a random seed for the train/validation split as a parameter, or by calling another project first that can split the input data.", "metadata": {"url": "https://mlflow.org/docs/latest/projects.html"}} {"id": "ced13a6bcc4c-14", "text": "For an example of how to construct such a multistep workflow, see the MLflow Multistep Workflow Example project.", "metadata": {"url": "https://mlflow.org/docs/latest/projects.html"}} {"id": "fe0d5088bc70-0", "text": "MLflow Models\n\nAn MLflow Model is a standard format for packaging machine learning models that can be used in a\nvariety of downstream tools\u2014for example, real-time serving through a REST API or batch inference\non Apache Spark. The format defines a convention that lets you save a model in different \u201cflavors\u201d\nthat can be understood by different downstream tools.\n\nTable of Contents\n\nStorage Format\n\nModel Signature And Input Example\n\nModel API\n\nBuilt-In Model Flavors\n\nModel Evaluation\n\nModel Customization\n\nBuilt-In Deployment Tools\n\nDeployment to Custom Targets\n\nCommunity Model Flavors\n\nStorage Format\n\nEach MLflow Model is a directory containing arbitrary files, together with an MLmodel\nfile in the root of the directory that can define multiple flavors that the model can be viewed\nin.\n\nFlavors are the key concept that makes MLflow Models powerful: they are a convention that deployment\ntools can use to understand the model, which makes it possible to write tools that work with models\nfrom any ML library without having to integrate each tool with each library. MLflow defines\nseveral \u201cstandard\u201d flavors that all of its built-in deployment tools support, such as a \u201cPython\nfunction\u201d flavor that describes how to run the model as a Python function. However, libraries can\nalso define and use other flavors. For example, MLflow\u2019s mlflow.sklearn library allows\nloading models back as a scikit-learn Pipeline object for use in code that is aware of\nscikit-learn, or as a generic Python function for use in tools that just need to apply the model\n(for example, the mlflow deployments tool with the option -t sagemaker for deploying models\nto Amazon SageMaker).\n\nAll of the flavors that a particular model supports are defined in its MLmodel file in YAML\nformat. 
For example, mlflow.sklearn outputs models as follows:

# Directory written by mlflow.sklearn.save_model(model, "my_model")
my_model/
├── MLmodel
├── model.pkl
├── conda.yaml
├── python_env.yaml
└── requirements.txt

And its MLmodel file describes two flavors:

time_created: 2018-05-25T17:28:53.35

flavors:
  sklearn:
    sklearn_version: 0.19.1
    pickled_model: model.pkl
  python_function:
    loader_module: mlflow.sklearn

This model can then be used with any tool that supports either the sklearn or
python_function model flavor. For example, the mlflow models serve command
can serve a model with the python_function or the crate (R Function) flavor:

mlflow models serve -m my_model

In addition, the mlflow deployments command-line tool can package and deploy models to AWS
SageMaker as long as they support the python_function flavor:

mlflow deployments create -t sagemaker -m my_model [other options]

Fields in the MLmodel Format

Apart from a flavors field listing the model flavors, the MLmodel YAML format can contain
the following fields:

utc_time_created: Date and time when the model was created, in UTC ISO 8601 format.

run_id: ID of the run that created the model, if the model was saved using MLflow Tracking.

signature: model signature in JSON format.

saved_input_example_info: reference to an artifact with input example.

databricks_runtime: Databricks runtime version and type, if the model was trained in a Databricks notebook or job.

mlflow_version: The version of MLflow that was used to log the model.

Additional Logged Files

For environment recreation, MLflow automatically logs conda.yaml, python_env.yaml, and requirements.txt files whenever a model is logged. These files can then be used to reinstall dependencies using either conda or virtualenv with pip.

Note

Anaconda Inc. updated their terms of service for anaconda.org channels. Based on the new terms of service you may require a commercial license if you rely on Anaconda's packaging and distribution. See Anaconda Commercial Edition FAQ for more information. Your use of any Anaconda channels is governed by their terms of service.

Models logged with MLflow before v1.18 were by default logged with the conda defaults channel
(https://repo.anaconda.com/pkgs/) as a dependency. Because of this license change, MLflow has stopped the use of the
defaults channel for models logged using MLflow v1.18 and above. The default channel logged is now conda-forge,
which points at the community managed https://conda-forge.org/.

If you logged a model before MLflow v1.18 without excluding the defaults channel from its conda environment, that model may have a dependency on the defaults channel that you did not intend. To confirm whether a model has this dependency, you can examine the channel value in the conda.yaml file that is packaged with the logged model. For example, a model's conda.yaml with a defaults channel dependency may look like this:

name: mlflow-env
channels:
  - defaults
dependencies:
  - python=3.8.8
  - pip
  - pip:
    - mlflow
    - scikit-learn==0.23.2
    - cloudpickle==1.6.0

If you would like to change the channel used in a model's environment, you can re-register the model to the model registry with a new conda.yaml. You can do this by specifying the channel in the conda_env parameter of log_model().

For more information on the log_model() API, see the MLflow documentation for the model flavor you are working with, for example, mlflow.sklearn.log_model().

When saving a model, MLflow provides the option to pass in a conda environment parameter that can contain dependencies used by the model. If no conda environment is provided, a default environment is created based on the flavor of the model.
This conda environment is then saved in conda.yaml.

The python_env.yaml file contains the following information that's required to restore a model environment using virtualenv:

Python version

Version specifiers for pip, setuptools, and wheel

Pip requirements of the model (reference to requirements.txt)

The requirements file is created from the pip portion of the conda.yaml environment specification. Additional pip dependencies can be added to requirements.txt by including them as a pip dependency in a conda environment and logging the model with the environment, or by using the pip_requirements argument of the mlflow.<flavor>.log_model API.

The following shows an example of saving a model with a manually specified conda environment and the corresponding content of the generated conda.yaml and requirements.txt files:

conda_env = {
    "channels": ["conda-forge"],
    "dependencies": [
        "python=3.8.8",
        "pip",
        {
            "pip": [
                "mlflow",
                "scikit-learn==0.23.2",
                "cloudpickle==1.6.0",
            ],
        },
    ],
    "name": "mlflow-env",
}
mlflow.sklearn.log_model(model, "my_model", conda_env=conda_env)

The written conda.yaml file:

name: mlflow-env
channels:
  - conda-forge
dependencies:
  - python=3.8.8
  - pip
  - pip:
    - mlflow
    - scikit-learn==0.23.2
    - cloudpickle==1.6.0

The written python_env.yaml file:

python: 3.8.8
build_dependencies:
  - pip==21.1.3
  - setuptools==57.4.0
  - wheel==0.37.0
dependencies:
  - -r requirements.txt

The written requirements.txt file:

mlflow
scikit-learn==0.23.2
cloudpickle==1.6.0

Model Signature And Input Example

When working with ML models you often need to know some basic functional properties of the model
at hand, such as "What inputs does it expect?" and "What output does it produce?". MLflow models can
include the following additional metadata about model inputs and outputs that can be used by
downstream tooling:

Model Signature - description of a model's inputs and outputs.

Model Input Example - example of a valid model input.

Model Signature

The Model signature defines the schema of a model's inputs and outputs. Model inputs and outputs can
be either column-based or tensor-based. Column-based inputs and outputs can be described as a
sequence of (optionally) named columns with type specified as one of the
MLflow data types. Tensor-based inputs and outputs can be
described as a sequence of (optionally) named tensors with type specified as one of the
numpy data types.

To include a signature with your model, pass a signature object as an argument to the appropriate log_model call, e.g.
sklearn.log_model(). More details are in the How to log models with signatures section. The signature is stored in
JSON format in the MLmodel file, together with other model metadata.

Model signatures are recognized and enforced by standard MLflow model deployment tools. For example, the mlflow models serve tool,
which deploys a model as a REST API, validates inputs based on the model's signature.

Column-based Signature Example

All flavors support column-based signatures.

Each column-based input and output is represented by a type corresponding to one of
MLflow data types and an optional name.
The following example\ndisplays an MLmodel file excerpt containing the model signature for a classification model trained on\nthe Iris dataset. The input has 4 named, numeric columns.\nThe output is an unnamed integer specifying the predicted class.\n\nsignature\n\ninputs\n\n'[{\"name\":\n\n\"sepal", "metadata": {"url": "https://mlflow.org/docs/latest/models.html"}} {"id": "fe0d5088bc70-5", "text": "signature\n\ninputs\n\n'[{\"name\":\n\n\"sepal\n\nlength\n\n(cm)\",\n\n\"type\":\n\n\"double\"},\n\n{\"name\":\n\n\"sepal\n\nwidth\n\n(cm)\",\n\n\"type\":\n\n\"double\"},\n\n{\"name\":\n\n\"petal\n\nlength\n\n(cm)\",\n\n\"type\":\n\n\"double\"},\n\n{\"name\":\n\n\"petal\n\nwidth\n\n(cm)\",\n\n\"type\":\n\n\"double\"}]'\n\noutputs\n\n'[{\"type\":\n\n\"integer\"}]'\n\nTensor-based Signature Example\n\nOnly DL flavors support tensor-based signatures (i.e TensorFlow, Keras, PyTorch, Onnx, and Gluon).\n\nEach tensor-based input and output is represented by a dtype corresponding to one of\nnumpy data types, shape and an optional name.\nWhen specifying the shape, -1 is used for axes that may be variable in size.\nThe following example displays an MLmodel file excerpt containing the model signature for a\nclassification model trained on the MNIST dataset.\nThe input has one named tensor where input sample is an image represented by a 28 \u00d7 28 \u00d7 1 array\nof float32 numbers. The output is an unnamed tensor that has 10 units specifying the\nlikelihood corresponding to each of the 10 classes. Note that the first dimension of the input\nand the output is the batch size and is thus set to -1 to allow for variable batch sizes.\n\nsignature\n\ninputs\n\n'[{\"name\":\n\n\"images\",\n\n\"dtype\":\n\n\"uint8\",\n\n\"shape\":\n\n[-1,\n\n28,\n\n28,\n\n1]}]'\n\noutputs\n\n'[{\"shape\":\n\n[-1,\n\n10],\n\n\"dtype\":\n\n\"float32\"}]'\n\nSignature Enforcement", "metadata": {"url": "https://mlflow.org/docs/latest/models.html"}} {"id": "fe0d5088bc70-6", "text": "[-1,\n\n10],\n\n\"dtype\":\n\n\"float32\"}]'\n\nSignature Enforcement\n\nSchema enforcement checks the provided input against the model\u2019s signature\nand raises an exception if the input is not compatible. This enforcement is applied in MLflow before\ncalling the underlying model implementation. Note that this enforcement only applies when using MLflow\nmodel deployment tools or when loading models as python_function. In\nparticular, it is not applied to models that are loaded in their native format (e.g. by calling\nmlflow.sklearn.load_model()).\n\nName Ordering Enforcement\n\nThe input names are checked against the model signature. If there are any missing inputs,\nMLflow will raise an exception. Extra inputs that were not declared in the signature will be\nignored. If the input schema in the signature defines input names, input matching is done by name\nand the inputs are reordered to match the signature. If the input schema does not have input\nnames, matching is done by position (i.e. MLflow will only check the number of inputs).\n\nInput Type Enforcement\n\nThe input types are checked against the signature.\n\nFor models with column-based signatures (i.e DataFrame inputs), MLflow will perform safe type conversions\nif necessary. Generally, only conversions that are guaranteed to be lossless are allowed. For\nexample, int -> long or int -> double conversions are ok, long -> double is not. 
If the types cannot\nbe made compatible, MLflow will raise an error.\n\nFor models with tensor-based signatures, type checking is strict (i.e an exception will be thrown if\nthe input type does not match the type specified by the schema).\n\nHandling Integers With Missing Values", "metadata": {"url": "https://mlflow.org/docs/latest/models.html"}} {"id": "fe0d5088bc70-7", "text": "Handling Integers With Missing Values\n\nInteger data with missing values is typically represented as floats in Python. Therefore, data\ntypes of integer columns in Python can vary depending on the data sample. This type variance can\ncause schema enforcement errors at runtime since integer and float are not compatible types. For\nexample, if your training data did not have any missing values for integer column c, its type will\nbe integer. However, when you attempt to score a sample of the data that does include a missing\nvalue in column c, its type will be float. If your model signature specified c to have integer type,\nMLflow will raise an error since it can not convert float to int. Note that MLflow uses python to\nserve models and to deploy models to Spark, so this can affect most model deployments. The best way\nto avoid this problem is to declare integer columns as doubles (float64) whenever there can be\nmissing values.\n\nHandling Date and Timestamp\n\nFor datetime values, Python has precision built into the type. For example, datetime values with\nday precision have numpy type datetime64[D], while values with nanosecond precision have\ntype datetime64[ns]. Datetime precision is ignored for column-based model signature but is\nenforced for tensor-based signatures.\n\nHandling Ragged Arrays\n\nRagged arrays can be created in numpy and are produced with a shape of (-1,) and a dytpe of\nobject. This will be handled by default when using infer_signature, resulting in a\nsignature containing Tensor('object', (-1,)). A similar signature can be manually created\ncontaining a more detailed representation of a ragged array, for a more expressive signature,\nsuch as Tensor('float64', (-1, -1, -1, 3)). Enforcement will then be done on as much detail\nas possible given the signature provided, and will support ragged input arrays as well.\n\nHow To Log Models With Signatures", "metadata": {"url": "https://mlflow.org/docs/latest/models.html"}} {"id": "fe0d5088bc70-8", "text": "How To Log Models With Signatures\n\nTo include a signature with your model, pass signature object as an argument to the appropriate log_model call, e.g.\nsklearn.log_model(). The model signature object can be created\nby hand or inferred from datasets with valid model inputs\n(e.g. the training dataset with target column omitted) and valid model outputs (e.g. 
Handling Date and Timestamp

For datetime values, Python has precision built into the type. For example, datetime values with day precision have numpy type datetime64[D], while values with nanosecond precision have type datetime64[ns]. Datetime precision is ignored for column-based model signatures but is enforced for tensor-based signatures.

Handling Ragged Arrays

Ragged arrays can be created in numpy and are produced with a shape of (-1,) and a dtype of object. This will be handled by default when using infer_signature, resulting in a signature containing Tensor('object', (-1,)). A similar signature can be manually created containing a more detailed representation of a ragged array, for a more expressive signature, such as Tensor('float64', (-1, -1, -1, 3)). Enforcement will then be done on as much detail as possible given the signature provided, and will support ragged input arrays as well.

How To Log Models With Signatures

To include a signature with your model, pass a signature object as an argument to the appropriate log_model call, e.g. sklearn.log_model(). The model signature object can be created by hand or inferred from datasets with valid model inputs (e.g. the training dataset with the target column omitted) and valid model outputs (e.g. model predictions generated on the training dataset).

Column-based Signature Example

The following example demonstrates how to store a model signature for a simple classifier trained on the Iris dataset:

import pandas as pd
from sklearn import datasets
from sklearn.ensemble import RandomForestClassifier
import mlflow
import mlflow.sklearn
from mlflow.models.signature import infer_signature

iris = datasets.load_iris()
iris_train = pd.DataFrame(iris.data, columns=iris.feature_names)
clf = RandomForestClassifier(max_depth=7, random_state=0)
clf.fit(iris_train, iris.target)
signature = infer_signature(iris_train, clf.predict(iris_train))
mlflow.sklearn.log_model(clf, "iris_rf", signature=signature)

The same signature can be created explicitly as follows:

from mlflow.models.signature import ModelSignature
from mlflow.types.schema import Schema, ColSpec

input_schema = Schema(
    [
        ColSpec("double", "sepal length (cm)"),
        ColSpec("double", "sepal width (cm)"),
        ColSpec("double", "petal length (cm)"),
        ColSpec("double", "petal width (cm)"),
    ]
)
output_schema = Schema([ColSpec("long")])
signature = ModelSignature(inputs=input_schema, outputs=output_schema)

Tensor-based Signature Example

The following example demonstrates how to store a model signature for a simple classifier trained on the MNIST dataset:

from keras.datasets import mnist
from keras.utils import to_categorical
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dense, Flatten
from keras.optimizers import SGD
import mlflow
from mlflow.models.signature import infer_signature

(train_X, train_Y), (test_X, test_Y) = mnist.load_data()
trainX = train_X.reshape((train_X.shape[0], 28, 28, 1))
testX = test_X.reshape((test_X.shape[0], 28, 28, 1))
trainY = to_categorical(train_Y)
testY = to_categorical(test_Y)

model = Sequential()
model.add(
    Conv2D(32, (3, 3), activation="relu", kernel_initializer="he_uniform", input_shape=(28, 28, 1))
)
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(100, activation="relu", kernel_initializer="he_uniform"))
model.add(Dense(10, activation="softmax"))
opt = SGD(lr=0.01, momentum=0.9)
model.compile(optimizer=opt, loss="categorical_crossentropy", metrics=["accuracy"])

model.fit(trainX, trainY, epochs=10, batch_size=32, validation_data=(testX, testY))
signature = infer_signature(testX, model.predict(testX))
mlflow.tensorflow.log_model(model, "mnist_cnn", signature=signature)
The same signature can be created explicitly as follows:

import numpy as np
from mlflow.models.signature import ModelSignature
from mlflow.types.schema import Schema, TensorSpec

input_schema = Schema([TensorSpec(np.dtype(np.uint8), (-1, 28, 28, 1))])
output_schema = Schema([TensorSpec(np.dtype(np.float32), (-1, 10))])
signature = ModelSignature(inputs=input_schema, outputs=output_schema)

Model Input Example

Similar to model signatures, model inputs can be column-based (i.e. DataFrames) or tensor-based (i.e. numpy.ndarrays). A model input example provides an instance of a valid model input. Input examples are stored with the model as separate artifacts and are referenced in the MLmodel file.

To include an input example with your model, add it to the appropriate log_model call, e.g. sklearn.log_model().

How To Log Model With Column-based Example

For models accepting column-based inputs, an example can be a single record or a batch of records. The sample input can be passed in as a Pandas DataFrame, list or dictionary. The given example will be converted to a Pandas DataFrame and then serialized to json using the Pandas split-oriented format. Bytes are base64-encoded. The following example demonstrates how you can log a column-based input example with your model:

input_example = {
    "sepal length (cm)": 5.1,
    "sepal width (cm)": 3.5,
    "petal length (cm)": 1.4,
    "petal width (cm)": 0.2,
}
mlflow.sklearn.log_model(..., input_example=input_example)

How To Log Model With Tensor-based Example

For models accepting tensor-based inputs, an example must be a batch of inputs. By default, axis 0 is the batch axis unless specified otherwise in the model signature. The sample input can be passed in as a numpy ndarray or a dictionary mapping a string to a numpy array. The following example demonstrates how you can log a tensor-based input example with your model:

# each input has shape (4, 4)
input_example = np.array(
    [
        [[0, 0, 0, 0], [0, 134, 25, 56], [253, 242, 195, 6], [0, 93, 82, 82]],
        [[0, 23, 46, 0], [33, 13, 36, 166], [76, 75, 0, 255], [33, 44, 11, 82]],
    ],
    dtype=np.uint8,
)
mlflow.tensorflow.log_model(..., input_example=input_example)
Model API

You can save and load MLflow Models in multiple ways. First, MLflow includes integrations with several common libraries. For example, mlflow.sklearn contains save_model, log_model, and load_model functions for scikit-learn models. Second, you can use the mlflow.models.Model class to create and write models. This class has four key functions:

add_flavor to add a flavor to the model. Each flavor has a string name and a dictionary of key-value attributes, where the values can be any object that can be serialized to YAML.

save to save the model to a local directory.

log to log the model as an artifact in the current run using MLflow Tracking.

load to load a model from a local directory or from an artifact in a previous run.

Built-In Model Flavors

MLflow provides several standard flavors that might be useful in your applications. Specifically, many of its deployment tools support these flavors, so you can export your own model in one of these flavors to benefit from all these tools:

Python Function (python_function)
R Function (crate)
H2O (h2o)
Keras (keras)
MLeap (mleap)
PyTorch (pytorch)
Scikit-learn (sklearn)
Spark MLlib (spark)
TensorFlow (tensorflow)
ONNX (onnx)
MXNet Gluon (gluon)
XGBoost (xgboost)
LightGBM (lightgbm)
CatBoost (catboost)
Spacy (spaCy)
Fastai (fastai)
Statsmodels (statsmodels)
Prophet (prophet)
Pmdarima (pmdarima)
Diviner (diviner)

Python Function (python_function)

The python_function model flavor serves as a default model interface for MLflow Python models. Any MLflow Python model is expected to be loadable as a python_function model. This enables other MLflow tools to work with any Python model regardless of which persistence module or framework was used to produce the model. This interoperability is very powerful because it allows any Python model to be productionized in a variety of environments.

In addition, the python_function model flavor defines a generic filesystem model format for Python models and provides utilities for saving and loading models to and from this format. The format is self-contained in the sense that it includes all the information necessary to load and use a model. Dependencies are stored either directly with the model or referenced via a conda environment. This model format allows other tools to integrate their models with MLflow.

How To Save Model As Python Function

Most python_function models are saved as part of other model flavors - for example, all mlflow built-in flavors include the python_function flavor in the exported models. In addition, the mlflow.pyfunc module defines functions for creating python_function models explicitly. This module also includes utilities for creating custom Python models, which is a convenient way of adding custom Python code to ML models. For more information, see the custom Python models documentation.
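As a sketch of what such a custom Python model can look like (the AddN class and the chosen constant are illustrative, adapted from the custom Python model pattern rather than an example taken from this page), you can subclass mlflow.pyfunc.PythonModel, log it, and load it back as a generic python_function model:

import mlflow
import pandas as pd


class AddN(mlflow.pyfunc.PythonModel):
    # A trivial custom pyfunc model that adds `n` to every value of its input DataFrame.
    def __init__(self, n):
        self.n = n

    def predict(self, context, model_input):
        return model_input.apply(lambda column: column + self.n)


with mlflow.start_run():
    model_info = mlflow.pyfunc.log_model(artifact_path="add_n_model", python_model=AddN(n=5))

loaded = mlflow.pyfunc.load_model(model_info.model_uri)
print(loaded.predict(pd.DataFrame([range(10)])))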
How To Load And Score Python Function Models

You can load python_function models in Python by calling the mlflow.pyfunc.load_model() function. Note that the load_model function assumes that all dependencies are already available and will not check nor install any dependencies (see the model deployment section for tools to deploy models with automatic dependency management).

Once loaded, you can score the model by calling the predict method, which has the following signature:

predict(model_input: [pandas.DataFrame, numpy.ndarray, Dict[str, np.ndarray]]) -> [numpy.ndarray | pandas.(Series | DataFrame)]

All PyFunc models will support pandas.DataFrame as an input. In addition to pandas.DataFrame, DL PyFunc models will also support tensor inputs in the form of numpy.ndarrays. To verify whether a model flavor supports tensor inputs, please check the flavor's documentation.

For models with a column-based schema, inputs are typically provided in the form of a pandas.DataFrame. If a dictionary mapping column name to values is provided as input for schemas with named columns, or if a Python List or a numpy.ndarray is provided as input for schemas with unnamed columns, MLflow will cast the input to a DataFrame. Schema enforcement and casting with respect to the expected data types is performed against the DataFrame.

For models with a tensor-based schema, inputs are typically provided in the form of a numpy.ndarray or a dictionary mapping the tensor name to its np.ndarray value. Schema enforcement will check the provided input's shape and type against the shape and type specified in the model's schema and throw an error if they do not match.

For models where no schema is defined, no changes to the model inputs and outputs are made. MLflow will propagate any errors raised by the model if the model does not accept the provided input type.

The Python environment that a PyFunc model is loaded into for prediction or inference may differ from the environment in which it was trained. In the case of an environment mismatch, a warning message will be printed when calling mlflow.pyfunc.load_model(). This warning statement will identify the packages that have a version mismatch between those used during training and the current environment. In order to get the full dependencies of the environment in which the model was trained, you can call mlflow.pyfunc.get_model_dependencies(). Furthermore, if you want to run model inference in the same environment used in model training, you can call mlflow.pyfunc.spark_udf() with the env_manager argument set as "conda". This will generate the environment from the conda.yaml file, ensuring that the python UDF will execute with the exact package versions that were used during training.
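As a small sketch of the dependency lookup mentioned above (the tiny LinearRegression model is only logged so that there is something to inspect; it is an illustrative assumption rather than part of the original example):

import mlflow
from sklearn.linear_model import LinearRegression

with mlflow.start_run():
    model = LinearRegression().fit([[0.0], [1.0]], [0.0, 1.0])
    model_info = mlflow.sklearn.log_model(model, "model")

# Returns a local path to a requirements file describing the environment the model was logged with
deps_path = mlflow.pyfunc.get_model_dependencies(model_info.model_uri)
with open(deps_path) as f:
    print(f.read())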
R Function (crate)

The crate model flavor defines a generic model format for representing an arbitrary R prediction function as an MLflow model using the crate function from the carrier package. The prediction function is expected to take a dataframe as input and produce a dataframe, a vector or a list with the predictions as output.

This flavor requires R to be installed in order to be used.

crate usage

For a minimal crate model, an example configuration for the predict function is:

library(mlflow)
library(carrier)

# Load iris dataset
data("iris")

# Learn simple linear regression model
model <- lm(Sepal.Width ~ Sepal.Length, data = iris)

# Define a crate model
# call package functions with an explicit :: namespace.
crate_model <- crate(
  function(new_obs) stats::predict(model, data.frame("Sepal.Length" = new_obs)),
  model = model
)

# log the model
model_path <- mlflow_log_model(model = crate_model, artifact_path = "iris_prediction")

# load the logged model and make a prediction
model_uri <- paste0(mlflow_get_run()$artifact_uri, "/iris_prediction")
mlflow_model <- mlflow_load_model(model_uri = model_uri, flavor = NULL, client = mlflow_client())
prediction <- mlflow_predict(model = mlflow_model, data = 5.1)
print(prediction)

H2O (h2o)

The h2o model flavor enables logging and loading H2O models.

The mlflow.h2o module defines save_model() and log_model() methods in Python, and mlflow_save_model and mlflow_log_model in R, for saving H2O models in MLflow Model format. These methods produce MLflow Models with the python_function flavor, allowing them to be loaded as generic Python functions for inference via mlflow.pyfunc.load_model(). This loaded PyFunc model can be scored with only DataFrame input. When you load an MLflow Model with the h2o flavor using mlflow.pyfunc.load_model(), the h2o.init() method is called. Therefore, the correct version of h2o(-py) must be installed in the loading environment. You can customize the arguments given to h2o.init() by modifying the init entry of the persisted H2O model's YAML configuration file: model.h2o/h2o.yaml.

Finally, you can use the mlflow.h2o.load_model() method to load MLflow Models with the h2o flavor as H2O model objects.
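A minimal sketch of this workflow (the iris data, the gradient boosting estimator, and all parameter values are illustrative assumptions, not an example from this documentation) might look like:

import h2o
import mlflow
from h2o.estimators import H2OGradientBoostingEstimator
from sklearn.datasets import load_iris

h2o.init()

# Build an H2OFrame from a small pandas DataFrame (illustrative data)
iris = load_iris(as_frame=True).frame
train = h2o.H2OFrame(iris)
train["target"] = train["target"].asfactor()  # treat the label as categorical

predictors = [c for c in train.columns if c != "target"]
gbm = H2OGradientBoostingEstimator(ntrees=10)
gbm.train(x=predictors, y="target", training_frame=train)

with mlflow.start_run():
    model_info = mlflow.h2o.log_model(gbm, "model")

# Score the logged model as a generic pyfunc with a pandas DataFrame input
loaded = mlflow.pyfunc.load_model(model_info.model_uri)
print(loaded.predict(iris.drop(columns=["target"]).head()))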
For more information, see mlflow.h2o.

Keras (keras)

The keras model flavor enables logging and loading Keras models. In R, you can save or log the model using mlflow_save_model and mlflow_log_model. These functions serialize Keras models as HDF5 files using the Keras library's built-in model persistence functions. You can use the mlflow_load_model function in R to load MLflow Models with the keras flavor as Keras Model objects.

Keras pyfunc usage

For a minimal Sequential model, an example configuration for the pyfunc predict() method is:

import mlflow
import numpy as np
import pathlib
import shutil
from tensorflow import keras

mlflow.tensorflow.autolog()

with mlflow.start_run():
    X = np.array([-2, -1, 0, 1, 2, 1]).reshape(-1, 1)
    y = np.array([0, 0, 1, 1, 1, 0])
    model = keras.Sequential(
        [
            keras.Input(shape=(1,)),
            keras.layers.Dense(1, activation="sigmoid"),
        ]
    )
    model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
    model.fit(X, y, batch_size=3, epochs=5, validation_split=0.2)
    model_info = mlflow.tensorflow.log_model(model=model, artifact_path="model")

local_artifact_dir = "/tmp/mlflow/keras_model"
pathlib.Path(local_artifact_dir).mkdir(parents=True, exist_ok=True)

keras_pyfunc = mlflow.pyfunc.load_model(
    model_uri=model_info.model_uri, dst_path=local_artifact_dir
)

data = np.array([-4, 1, 0, 10, -2, 1]).reshape(-1, 1)
predictions = keras_pyfunc.predict(data)

shutil.rmtree(local_artifact_dir)

MLeap (mleap)

The mleap model flavor supports saving Spark models in MLflow format using the MLeap persistence mechanism. MLeap is an inference-optimized format and execution engine for Spark models that does not depend on SparkContext to evaluate inputs.

Note

You can save Spark models in MLflow format with the mleap flavor by specifying the sample_input argument of the mlflow.spark.save_model() or mlflow.spark.log_model() method (recommended). For more details see Spark MLlib.

The mlflow.mleap module also defines save_model() and log_model() methods for saving MLeap models in MLflow format, but these methods do not include the python_function flavor in the models they produce. Models with the mleap flavor can also be saved in R with mlflow_save_model and loaded with mlflow_load_model, with mlflow_save_model requiring sample_input to be specified as a sample Spark DataFrame.

A companion module for loading MLflow Models with the MLeap flavor is available in the mlflow/java package.

For more information, see mlflow.spark, mlflow.mleap, and the MLeap documentation.
PyTorch (pytorch)

The pytorch model flavor enables logging and loading PyTorch models.

The mlflow.pytorch module defines utilities for saving and loading MLflow Models with the pytorch flavor. You can use the mlflow.pytorch.save_model() and mlflow.pytorch.log_model() methods to save PyTorch models in MLflow format; both of these functions use the torch.save() method to serialize PyTorch models. Additionally, you can use the mlflow.pytorch.load_model() method to load MLflow Models with the pytorch flavor as PyTorch model objects. Models produced by mlflow.pytorch.save_model() and mlflow.pytorch.log_model() contain the python_function flavor, so they can be loaded as generic Python functions for inference via mlflow.pyfunc.load_model().

Note

When using the PyTorch flavor, if a GPU is available at prediction time, the default GPU will be used to run inference. To disable this behavior, users can use the MLFLOW_DEFAULT_PREDICTION_DEVICE environment variable or pass in a device with the device parameter for the predict function.

Note

In case of multi-GPU training, ensure to save the model only with the global rank 0 GPU. This avoids logging multiple copies of the same model.

PyTorch pyfunc usage

For a minimal PyTorch model, an example configuration for the pyfunc predict() method is:

import numpy as np
import mlflow
import torch
from torch import nn

net = nn.Linear(6, 1)
loss_function = nn.L1Loss()
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)

X = torch.randn(6)
y = torch.randn(1)

epochs = 5
for epoch in range(epochs):
    optimizer.zero_grad()
    outputs = net(X)
    loss = loss_function(outputs, y)
    loss.backward()
    optimizer.step()

with mlflow.start_run() as run:
    model_info = mlflow.pytorch.log_model(net, "model")

pytorch_pyfunc = mlflow.pyfunc.load_model(model_uri=model_info.model_uri)

predictions = pytorch_pyfunc.predict(torch.randn(6).numpy())
print(predictions)

For more information, see mlflow.pytorch.
Scikit-learn (sklearn)

The sklearn model flavor enables logging and loading scikit-learn models. The mlflow.sklearn module defines save_model() and log_model() functions that save scikit-learn models in MLflow format, using either Python's pickle module (Pickle) or CloudPickle for model serialization. These functions produce MLflow Models with the python_function flavor, allowing them to be loaded as generic Python functions for inference via mlflow.pyfunc.load_model(). This loaded PyFunc model can only be scored with DataFrame input. Finally, you can use the mlflow.sklearn.load_model() method to load MLflow Models with the sklearn flavor as scikit-learn model objects.

Scikit-learn pyfunc usage

For a Scikit-learn LogisticRegression model, an example configuration for the pyfunc predict() method is:

import mlflow
import numpy as np
from sklearn.linear_model import LogisticRegression

with mlflow.start_run():
    X = np.array([-2, -1, 0, 1, 2, 1]).reshape(-1, 1)
    y = np.array([0, 0, 1, 1, 1, 0])
    lr = LogisticRegression()
    lr.fit(X, y)
    model_info = mlflow.sklearn.log_model(sk_model=lr, artifact_path="model")

sklearn_pyfunc = mlflow.pyfunc.load_model(model_uri=model_info.model_uri)

data = np.array([-4, 1, 0, 10, -2, 1]).reshape(-1, 1)
predictions = sklearn_pyfunc.predict(data)

For more information, see mlflow.sklearn.
Spark MLlib (spark)

The spark model flavor enables exporting Spark MLlib models as MLflow Models.

The mlflow.spark module defines:

save_model() to save a Spark MLlib model to a DBFS path.

log_model() to upload a Spark MLlib model to the tracking server.

mlflow.spark.load_model() to load MLflow Models with the spark flavor as Spark MLlib pipelines.

MLflow Models produced by these functions contain the python_function flavor, allowing you to load them as generic Python functions via mlflow.pyfunc.load_model(). This loaded PyFunc model can only be scored with DataFrame input. When a model with the spark flavor is loaded as a Python function via mlflow.pyfunc.load_model(), a new SparkContext is created for model inference; additionally, the function converts all Pandas DataFrame inputs to Spark DataFrames before scoring. While this initialization overhead and format translation latency is not ideal for high-performance use cases, it enables you to easily deploy any MLlib PipelineModel to any production environment supported by MLflow (SageMaker, AzureML, etc).

Spark MLlib pyfunc usage

from pyspark.ml.classification import LogisticRegression
from pyspark.ml.linalg import Vectors
from pyspark.sql import SparkSession
import mlflow

# Prepare training data from a list of (label, features) tuples.
spark = SparkSession.builder.appName("LogisticRegressionExample").getOrCreate()
training = spark.createDataFrame(
    [
        (1.0, Vectors.dense([0.0, 1.1, 0.1])),
        (0.0, Vectors.dense([2.0, 1.0, -1.0])),
        (0.0, Vectors.dense([2.0, 1.3, 1.0])),
        (1.0, Vectors.dense([0.0, 1.2, -0.5])),
    ],
    ["label", "features"],
)

# Create and fit a LogisticRegression instance
lr = LogisticRegression(maxIter=10, regParam=0.01)
lr_model = lr.fit(training)

# Serialize the Model
with mlflow.start_run():
    model_info = mlflow.spark.log_model(lr_model, "spark-model")

# Load saved model
lr_model_saved = mlflow.pyfunc.load_model(model_info.model_uri)

# Make predictions on test data.
# The DataFrame used in the predict method must be a Pandas DataFrame
test = spark.createDataFrame(
    [
        (1.0, Vectors.dense([-1.0, 1.5, 1.3])),
        (0.0, Vectors.dense([3.0, 2.0, -0.1])),
        (1.0, Vectors.dense([0.0, 2.2, -1.5])),
    ],
    ["label", "features"],
).toPandas()

prediction = lr_model_saved.predict(test)

Note

Note that when the sample_input parameter is provided to log_model() or save_model(), the Spark model is automatically saved as an mleap flavor by invoking mlflow.mleap.add_to_model().

For example, the following code block:

from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import HashingTF, Tokenizer

training_df = spark.createDataFrame(
    [
        (0, "a b c d e spark", 1.0),
        (1, "b d", 0.0),
        (2, "spark f g h", 1.0),
        (3, "hadoop mapreduce", 0.0),
    ],
    ["id", "text", "label"],
)

tokenizer = Tokenizer(inputCol="text", outputCol="words")
hashingTF = HashingTF(inputCol=tokenizer.getOutputCol(), outputCol="features")
lr = LogisticRegression(maxIter=10, regParam=0.001)
pipeline = Pipeline(stages=[tokenizer, hashingTF, lr])
model = pipeline.fit(training_df)

mlflow.spark.log_model(model, "spark-model", sample_input=training_df)

results in the following directory structure logged to the MLflow Experiment:

# Directory written with the addition of mlflow.mleap.add_to_model(model, "spark-model", training_df)
# Note the addition of the mleap directory
spark-model/
├── mleap
├── sparkml
├── MLmodel
├── conda.yaml
├── python_env.yaml
└── requirements.txt

For more information, see mlflow.mleap.

For more information, see mlflow.spark.

TensorFlow (tensorflow)

The tensorflow model flavor enables TensorFlow models to be logged in MLflow format via the mlflow.tensorflow.save_model() and mlflow.tensorflow.log_model() methods. These methods also add the python_function flavor to the MLflow Models that they produce, allowing the models to be interpreted as generic Python functions for inference via mlflow.pyfunc.load_model(). This loaded PyFunc model can be scored with both DataFrame input and numpy array input. Finally, you can use the mlflow.tensorflow.load_model() method to load MLflow Models with the tensorflow flavor as TensorFlow objects.

For more information, see mlflow.tensorflow.
ONNX (onnx)

The onnx model flavor enables logging of ONNX models in MLflow format via the mlflow.onnx.save_model() and mlflow.onnx.log_model() methods. These methods also add the python_function flavor to the MLflow Models that they produce, allowing the models to be interpreted as generic Python functions for inference via mlflow.pyfunc.load_model(). This loaded PyFunc model can be scored with both DataFrame input and numpy array input. The python_function representation of an MLflow ONNX model uses the ONNX Runtime execution engine for evaluation. Finally, you can use the mlflow.onnx.load_model() method to load MLflow Models with the onnx flavor in native ONNX format.

For more information, see mlflow.onnx and http://onnx.ai/.

ONNX pyfunc usage example

For an ONNX model, an example configuration that uses pytorch to train a dummy model, converts it to ONNX, logs to mlflow and makes a prediction using the pyfunc predict() method is:

import numpy as np
import mlflow
import onnx
import torch
from torch import nn

# define a torch model
net = nn.Linear(6, 1)
loss_function = nn.L1Loss()
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)

X = torch.randn(6)
y = torch.randn(1)

# run model training
epochs = 5
for epoch in range(epochs):
    optimizer.zero_grad()
    outputs = net(X)
    loss = loss_function(outputs, y)
    loss.backward()
    optimizer.step()

# convert model to ONNX and load it
torch.onnx.export(net, X, "model.onnx")
onnx_model = onnx.load_model("model.onnx")

# log the model into a mlflow run
with mlflow.start_run():
    model_info = mlflow.onnx.log_model(onnx_model, "model")

# load the logged model and make a prediction
onnx_pyfunc = mlflow.pyfunc.load_model(model_info.model_uri)

predictions = onnx_pyfunc.predict(X.numpy())
print(predictions)

MXNet Gluon (gluon)

The gluon model flavor enables logging of Gluon models in MLflow format via the mlflow.gluon.save_model() and mlflow.gluon.log_model() methods. These methods also add the python_function flavor to the MLflow Models that they produce, allowing the models to be interpreted as generic Python functions for inference via mlflow.pyfunc.load_model(). This loaded PyFunc model can be scored with both DataFrame input and numpy array input. You can also use the mlflow.gluon.load_model() method to load MLflow Models with the gluon flavor in native Gluon format.

For more information, see mlflow.gluon.
XGBoost (xgboost)

The xgboost model flavor enables logging of XGBoost models in MLflow format via the mlflow.xgboost.save_model() and mlflow.xgboost.log_model() methods in Python, and mlflow_save_model and mlflow_log_model in R respectively. These methods also add the python_function flavor to the MLflow Models that they produce, allowing the models to be interpreted as generic Python functions for inference via mlflow.pyfunc.load_model(). This loaded PyFunc model can only be scored with DataFrame input. You can also use the mlflow.xgboost.load_model() method to load MLflow Models with the xgboost flavor in native XGBoost format.

Note that the xgboost model flavor only supports an instance of xgboost.Booster, not models that implement the scikit-learn API.

XGBoost pyfunc usage

The example below

Loads the IRIS dataset from scikit-learn

Trains an XGBoost Classifier

Logs the model and params using mlflow

Loads the logged model and makes predictions

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier
import mlflow

data = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    data["data"], data["target"], test_size=0.2
)

xgb_classifier = XGBClassifier(
    n_estimators=10,
    max_depth=3,
    learning_rate=1,
    objective="binary:logistic",
    random_state=123,
)

# log fitted model and XGBClassifier parameters
with mlflow.start_run():
    xgb_classifier.fit(X_train, y_train)
    clf_params = xgb_classifier.get_xgb_params()
    mlflow.log_params(clf_params)
    model_info = mlflow.xgboost.log_model(xgb_classifier, "iris-classifier")

# Load saved model and make predictions
xgb_classifier_saved = mlflow.pyfunc.load_model(model_info.model_uri)
y_pred = xgb_classifier_saved.predict(X_test)

For more information, see mlflow.xgboost.
LightGBM (lightgbm)

The lightgbm model flavor enables logging of LightGBM models in MLflow format via the mlflow.lightgbm.save_model() and mlflow.lightgbm.log_model() methods. These methods also add the python_function flavor to the MLflow Models that they produce, allowing the models to be interpreted as generic Python functions for inference via mlflow.pyfunc.load_model(). You can also use the mlflow.lightgbm.load_model() method to load MLflow Models with the lightgbm flavor in native LightGBM format.

Note that the scikit-learn API for LightGBM is now supported. For more information, see mlflow.lightgbm.

LightGBM pyfunc usage

The example below

Loads the IRIS dataset from scikit-learn

Trains a LightGBM LGBMClassifier

Logs the model and feature importances using mlflow

Loads the logged model and makes predictions

from lightgbm import LGBMClassifier
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
import mlflow

data = load_iris()

# Remove special characters from feature names to be able to use them as keys for mlflow metrics
feature_names = [
    name.replace(" ", "_").replace("(", "").replace(")", "")
    for name in data["feature_names"]
]
X_train, X_test, y_train, y_test = train_test_split(
    data["data"], data["target"], test_size=0.2
)

# create model instance
lgb_classifier = LGBMClassifier(
    n_estimators=10,
    max_depth=3,
    learning_rate=1,
    objective="binary:logistic",
    random_state=123,
)

# Fit and save model and LGBMClassifier feature importances as mlflow metrics
with mlflow.start_run():
    lgb_classifier.fit(X_train, y_train)
    feature_importances = dict(zip(feature_names, lgb_classifier.feature_importances_))
    feature_importance_metrics = {
        f"feature_importance_{feature_name}": imp_value
        for feature_name, imp_value in feature_importances.items()
    }
    mlflow.log_metrics(feature_importance_metrics)
    model_info = mlflow.lightgbm.log_model(lgb_classifier, "iris-classifier")

# Load saved model and make predictions
lgb_classifier_saved = mlflow.pyfunc.load_model(model_info.model_uri)
y_pred = lgb_classifier_saved.predict(X_test)
print(y_pred)

CatBoost (catboost)

The catboost model flavor enables logging of CatBoost models in MLflow format via the mlflow.catboost.save_model() and mlflow.catboost.log_model() methods. These methods also add the python_function flavor to the MLflow Models that they produce, allowing the models to be interpreted as generic Python functions for inference via mlflow.pyfunc.load_model(). You can also use the mlflow.catboost.load_model() method to load MLflow Models with the catboost flavor in native CatBoost format.

For more information, see mlflow.catboost.

CatBoost pyfunc usage

For a CatBoost Classifier model, an example configuration for the pyfunc predict() method is:

import mlflow
from catboost import CatBoostClassifier
from sklearn import datasets

# prepare data
X, y = datasets.load_wine(as_frame=False, return_X_y=True)

# train the model
model = CatBoostClassifier(
    iterations=5,
    loss_function="MultiClass",
    allow_writing_files=False,
)
model.fit(X, y)

# log the model into a mlflow run
with mlflow.start_run():
    model_info = mlflow.catboost.log_model(model, "model")

# load the logged model and make a prediction
catboost_pyfunc = mlflow.pyfunc.load_model(model_uri=model_info.model_uri)

print(catboost_pyfunc.predict(X[:5]))
Spacy (spaCy)

The spacy model flavor enables logging of spaCy models in MLflow format via the mlflow.spacy.save_model() and mlflow.spacy.log_model() methods. Additionally, these methods add the python_function flavor to the MLflow Models that they produce, allowing the models to be interpreted as generic Python functions for inference via mlflow.pyfunc.load_model(). This loaded PyFunc model can only be scored with DataFrame input. You can also use the mlflow.spacy.load_model() method to load MLflow Models with the spacy flavor in native spaCy format.

For more information, see mlflow.spacy.

Spacy pyfunc usage

The example below shows how to train a Spacy TextCategorizer model, log the model artifact and metrics to the mlflow tracking server, and then load the saved model to make predictions. For this example, we will be using the Polarity 2.0 dataset available in the nltk package. This dataset consists of 10000 positive and 10000 negative short movie reviews.

First we convert the texts and sentiment labels ("pos" or "neg") from NLTK native format to Spacy's DocBin format:

import nltk
import pandas as pd
import spacy
from nltk.corpus import movie_reviews
from spacy.tokens import DocBin

nltk.download("movie_reviews")


def get_sentences(sentiment_type: str) -> pd.DataFrame:
    """Reconstruct the sentences from the word lists for each review record for a specific ``sentiment_type``
    as a pandas DataFrame with two columns: 'sentence' and 'sentiment'.
    """
    file_ids = movie_reviews.fileids(sentiment_type)
    sent_df = []
    for file_id in file_ids:
        sentence = " ".join(movie_reviews.words(file_id))
        sent_df.append({"sentence": sentence, "sentiment": sentiment_type})
    return pd.DataFrame(sent_df)


def convert(data_df: pd.DataFrame, target_file: str):
    """Convert a DataFrame with 'sentence' and 'sentiment' columns to a
    spacy DocBin object and save it to 'target_file'.
    """
    nlp = spacy.blank("en")
    sentiment_labels = data_df.sentiment.unique()
    spacy_doc = DocBin()

    for _, row in data_df.iterrows():
        sent_tokens = nlp.make_doc(row["sentence"])
        # To train a Spacy TextCategorizer model, the label must be attached to the "cats" dictionary of the "Doc"
        # object, e.g. {"pos": 1.0, "neg": 0.0} for a "pos" label.
        for label in sentiment_labels:
            sent_tokens.cats[label] = 1.0 if label == row["sentiment"] else 0.0
        spacy_doc.add(sent_tokens)

    spacy_doc.to_disk(target_file)


# Build a single DataFrame with both positive and negative reviews, one row per review
review_data = [get_sentences(sentiment_type) for sentiment_type in ("pos", "neg")]
review_data = pd.concat(review_data, axis=0)

# Split the DataFrame into a train and a dev set
train_df = review_data.groupby("sentiment", group_keys=False).apply(
    lambda x: x.sample(frac=0.7, random_state=100)
)
dev_df = review_data.loc[review_data.index.difference(train_df.index), :]

# Save the train and dev data files to the current directory as "corpora.train" and "corpora.dev", respectively
convert(train_df, "corpora.train")
convert(dev_df, "corpora.dev")
{\"pos\": 1.0, \"neg\": 0.0} for a \"pos\" label.\n\nfor\n\nlabel\n\nin\n\nsentiment_labels\n\nsent_tokens\n\ncats\n\nlabel\n\n1.0\n\nif\n\nlabel\n\n==\n\nrow\n\n\"sentiment\"\n\nelse\n\n0.0\n\nspacy_doc\n\nadd\n\nsent_tokens\n\nspacy_doc\n\nto_disk\n\ntarget_file\n\n# Build a single DataFrame with both positive and negative reviews, one row per review\n\nreview_data\n\nget_sentences\n\nsentiment_type\n\nfor\n\nsentiment_type\n\nin\n\n\"pos\"\n\n\"neg\"\n\n)]\n\nreview_data\n\npd\n\nconcat\n\nreview_data\n\naxis\n\n# Split the DataFrame into a train and a dev set\n\ntrain_df\n\nreview_data\n\ngroupby\n\n\"sentiment\"\n\ngroup_keys\n\nFalse\n\napply\n\nlambda\n\nsample\n\nfrac\n\n0.7\n\nrandom_state\n\n100\n\ndev_df\n\nreview_data\n\nloc\n\nreview_data\n\nindex\n\ndifference\n\ntrain_df\n\nindex\n\n),\n\n:]\n\n# Save the train and dev data files to the current directory as \"corpora.train\" and \"corpora.dev\", respectively\n\nconvert\n\ntrain_df\n\n\"corpora.train\"\n\nconvert\n\ndev_df\n\n\"corpora.dev\"\n\nTo set up the training job, we first need to generate a configuration file as described in the Spacy Documentation\nFor simplicity, we will only use a TextCategorizer in the pipeline.", "metadata": {"url": "https://mlflow.org/docs/latest/models.html"}} {"id": "fe0d5088bc70-30", "text": "python -m spacy init config --pipeline textcat --lang en mlflow-textcat.cfg\n\nChange the default train and dev paths in the config file to the current directory:\n\n[paths]\n\ntrain = null\n\ndev = null\n\n+ train = \".\"\n\n+ dev = \".\"\n\nIn Spacy, the training loop is defined internally in Spacy\u2019s code. Spacy provides a \u201clogging\u201d extension point where\nwe can use mlflow. To do this,\n\nWe have to define a function to write metrics / model input to mlfow\n\nRegister it as a logger in Spacy\u2019s component registry\n\nChange the default console logger in the Spacy\u2019s configuration file (mlflow-textcat.cfg)\n\nfrom\n\ntyping\n\nimport\n\nIO\n\nCallable\n\nTuple\n\nDict\n\nAny\n\nOptional\n\nimport\n\nspacy\n\nfrom\n\nspacy\n\nimport\n\nLanguage\n\nimport\n\nmlflow\n\n@spacy\n\nregistry\n\nloggers\n\n\"mlflow_logger.v1\"\n\ndef\n\nmlflow_logger\n\n():\n\n\"\"\"Returns a function, ``setup_logger`` that returns two functions:\n\n``log_step`` is called internally by Spacy for every evaluation step. We can log the intermediate train and\n\nvalidation scores to the mlflow tracking server here.\n\n``finalize``: is called internally by Spacy after training is complete. 
Fastai (fastai)

The fastai model flavor enables logging of fastai Learner models in MLflow format via the mlflow.fastai.save_model() and mlflow.fastai.log_model() methods. Additionally, these methods add the python_function flavor to the MLflow Models that they produce, allowing the models to be interpreted as generic Python functions for inference via mlflow.pyfunc.load_model(). This loaded PyFunc model can only be scored with DataFrame input.
You can also use the mlflow.fastai.load_model() method to load MLflow Models with the fastai flavor in native fastai format.

The interface for utilizing a fastai model loaded as a pyfunc type for generating predictions uses a Pandas DataFrame argument.

This example runs the fastai tabular tutorial, logs the experiments, saves the model in fastai format and loads the model to get predictions using a fastai data loader:

from fastai.data.external import URLs, untar_data
from fastai.tabular.core import Categorify, FillMissing, Normalize, TabularPandas
from fastai.tabular.data import TabularDataLoaders
from fastai.tabular.learner import tabular_learner
from fastai.data.transforms import RandomSplitter
from fastai.metrics import accuracy
from fastcore.basics import range_of
import pandas as pd
import mlflow
import mlflow.fastai


def print_auto_logged_info(r):
    tags = {k: v for k, v in r.data.tags.items() if not k.startswith("mlflow.")}
    artifacts = [
        f.path for f in mlflow.MlflowClient().list_artifacts(r.info.run_id, "model")
    ]
    print("run_id: {}".format(r.info.run_id))
    print("artifacts: {}".format(artifacts))
    print("params: {}".format(r.data.params))
    print("metrics: {}".format(r.data.metrics))
    print("tags: {}".format(tags))


def main(epochs=5, learning_rate=0.01):
    path = untar_data(URLs.ADULT_SAMPLE)
    path.ls()

    df = pd.read_csv(path / "adult.csv")

    dls = TabularDataLoaders.from_csv(
        path / "adult.csv",
        path=path,
        y_names="salary",
        cat_names=[
            "workclass",
            "education",
            "marital-status",
            "occupation",
            "relationship",
            "race",
        ],
        cont_names=["age", "fnlwgt", "education-num"],
        procs=[Categorify, FillMissing, Normalize],
    )

    splits = RandomSplitter(valid_pct=0.2)(range_of(df))

    to = TabularPandas(
        df,
        procs=[Categorify, FillMissing, Normalize],
        cat_names=[
            "workclass",
            "education",
            "marital-status",
            "occupation",
            "relationship",
            "race",
        ],
        cont_names=["age", "fnlwgt", "education-num"],
        y_names="salary",
        splits=splits,
    )
"splits\n\nsplits\n\ndls\n\nto\n\ndataloaders\n\nbs\n\n64\n\nmodel\n\ntabular_learner\n\ndls\n\nmetrics\n\naccuracy\n\nmlflow\n\nfastai\n\nautolog\n\n()\n\nwith\n\nmlflow\n\nstart_run\n\n()\n\nas\n\nrun\n\nmodel\n\nfit\n\n0.01\n\nmlflow\n\nfastai\n\nlog_model\n\nmodel\n\n\"model\"\n\nprint_auto_logged_info\n\nmlflow\n\nget_run\n\nrun_id\n\nrun\n\ninfo\n\nrun_id\n\n))\n\nmodel_uri\n\n\"runs:/\n\n{}\n\n/model\"\n\nformat\n\nrun\n\ninfo\n\nrun_id\n\nloaded_model\n\nmlflow\n\nfastai\n\nload_model\n\nmodel_uri\n\ntest_df\n\ndf\n\ncopy\n\n()\n\ntest_df\n\ndrop\n\n([\n\n\"salary\"\n\n],\n\naxis\n\ninplace\n\nTrue\n\ndl\n\nlearn\n\ndls\n\ntest_dl\n\ntest_df\n\npredictions\n\nloaded_model\n\nget_preds\n\ndl\n\ndl\n\npx\n\npd\n\nDataFrame\n\npredictions\n\nastype\n\n\"float\"\n\npx\n\nhead\n\nmain\n\n()\n\nOutput (Pandas DataFrame):\n\nIndex\n\nProbability of first class\n\nProbability of second class\n\n0.545088\n\n0.454912\n\n0.503172\n\n0.496828\n\n0.962663\n\n0.037337\n\n0.206107\n\n0.793893\n\n0.807599\n\n0.192401\n\nAlternatively, when using the python_function flavor, get predictions from a DataFrame.\n\nfrom\n\nfastai.data.external\n\nimport\n\nURLs\n\nuntar_data\n\nfrom\n\nfastai.tabular.core\n\nimport\n\nCategorify\n\nFillMissing\n\nNormalize\n\nTabularPandas\n\nfrom\n\nfastai.tabular.data\n\nimport\n\nTabularDataLoaders\n\nfrom\n\nfastai.tabular.learner\n\nimport\n\ntabular_learner\n\nfrom\n\nfastai.data.transforms\n\nimport\n\nRandomSplitter\n\nfrom\n\nfastai.metrics\n\nimport\n\naccuracy\n\nfrom", "metadata": {"url": "https://mlflow.org/docs/latest/models.html"}} {"id": "fe0d5088bc70-35", "text": "import\n\nRandomSplitter\n\nfrom\n\nfastai.metrics\n\nimport\n\naccuracy\n\nfrom\n\nfastcore.basics\n\nimport\n\nrange_of\n\nimport\n\npandas\n\nas\n\npd\n\nimport\n\nmlflow\n\nimport\n\nmlflow.fastai\n\nmodel_uri\n\n...\n\npath\n\nuntar_data\n\nURLs\n\nADULT_SAMPLE\n\ndf\n\npd\n\nread_csv\n\npath\n\n\"adult.csv\"\n\ntest_df\n\ndf\n\ncopy\n\n()\n\ntest_df\n\ndrop\n\n([\n\n\"salary\"\n\n],\n\naxis\n\ninplace\n\nTrue\n\nloaded_model\n\nmlflow\n\npyfunc\n\nload_model\n\nmodel_uri\n\nloaded_model\n\npredict\n\ntest_df\n\nOutput (Pandas DataFrame):\n\nIndex\n\nProbability of first class, Probability of second class\n\n[0.5450878, 0.45491222]\n\n[0.50317234, 0.49682766]\n\n[0.9626626, 0.037337445]\n\n[0.20610662, 0.7938934]\n\n[0.8075987, 0.19240129]\n\nFor more information, see mlflow.fastai.\n\nStatsmodels (statsmodels)\n\nstatsmodels\n\nStatsmodels models in MLflow format via the\n\nmlflow.statsmodels.save_model()\nand\n\nmlflow.statsmodels.log_model() methods.\nThese methods also add the\n\npython_function\n\nmlflow.pyfunc.load_model(). This loaded PyFunc model can only be scored with DataFrame input.\nYou can also use the\n\nmlflow.statsmodels.load_model()\nmethod to load MLflow Models with the\n\nstatsmodels\n\nAs for now, automatic logging is restricted to parameters, metrics and models generated by a call to fit\non a statsmodels model.\n\nFor more information, see mlflow.statsmodels.\n\nProphet (prophet)\n\nprophet\n\nProphet models in MLflow format via the\n\nmlflow.prophet.save_model()\nand", "metadata": {"url": "https://mlflow.org/docs/latest/models.html"}} {"id": "fe0d5088bc70-36", "text": "Prophet models in MLflow format via the\n\nmlflow.prophet.save_model()\nand\n\nmlflow.prophet.log_model() methods.\nThese methods also add the\n\npython_function\n\nmlflow.pyfunc.load_model(). 
For more information, see mlflow.statsmodels.

Prophet (prophet)

The prophet model flavor enables logging of Prophet models in MLflow format via the mlflow.prophet.save_model() and mlflow.prophet.log_model() methods. These methods also add the python_function flavor to the MLflow Models that they produce, allowing the models to be interpreted as generic Python functions for inference via mlflow.pyfunc.load_model(). This loaded PyFunc model can only be scored with DataFrame input. You can also use the mlflow.prophet.load_model() method to load MLflow Models with the prophet flavor in native prophet format.

Prophet pyfunc usage

This example uses a time series dataset from Prophet's GitHub repository, containing the log number of daily views to Peyton Manning's Wikipedia page over several years. A sample of the dataset is as follows:

ds            y
2007-12-10    9.59076113897809
2007-12-11    8.51959031601596
2007-12-12    8.18367658262066
2007-12-13    8.07246736935477

import numpy as np
import pandas as pd
from prophet import Prophet
from prophet.diagnostics import cross_validation, performance_metrics
import mlflow

# starts on 2007-12-10, ends on 2016-01-20
train_df = pd.read_csv(
    "https://raw.githubusercontent.com/facebook/prophet/main/examples/example_wp_log_peyton_manning.csv"
)

# Create a "test" DataFrame with the "ds" column containing 10 days after the end date in train_df
test_dates = pd.date_range(start="2016-01-21", end="2016-01-31", freq="D")
test_df = pd.Series(data=test_dates.values, name="ds").to_frame()

prophet_model = Prophet(changepoint_prior_scale=0.5, uncertainty_samples=7)

with mlflow.start_run():
    prophet_model.fit(train_df)

    # extract and log parameters such as changepoint_prior_scale in the mlflow run
    model_params = {
        name: value for name, value in vars(prophet_model).items() if np.isscalar(value)
    }
    mlflow.log_params(model_params)

    # cross validate with 900 days of data initially, predictions for next 30 days
    # walk forward by 30 days
    cv_results = cross_validation(
        prophet_model, initial="900 days", period="30 days", horizon="30 days"
    )

    # Calculate metrics from cv_results, then average each metric across all backtesting windows and log to mlflow
    cv_metrics = ["mse", "rmse", "mape"]
    metrics_results = performance_metrics(cv_results, metrics=cv_metrics)
    average_metrics = metrics_results.loc[:, cv_metrics].mean(axis=0).to_dict()
    mlflow.log_metrics(average_metrics)

    model_info = mlflow.prophet.log_model(prophet_model, "prophet-model")

# Load saved model
prophet_model_saved = mlflow.pyfunc.load_model(model_info.model_uri)
predictions = prophet_model_saved.predict(test_df)

Output (Pandas DataFrame):

Index    ds            yhat        yhat_upper    yhat_lower
0        2016-01-21    8.526513    8.827397      8.328563
1        2016-01-22    8.541355    9.434994      8.112758
2        2016-01-23    8.308332    8.633746      8.201323
3        2016-01-24    8.676326    9.534593      8.020874
4        2016-01-25    8.983457    9.430136      8.121798

For more information, see mlflow.prophet.
Pmdarima (pmdarima)

The pmdarima model flavor enables logging of pmdarima models in MLflow format via the mlflow.pmdarima.save_model() and mlflow.pmdarima.log_model() methods. These methods also add the python_function flavor to the MLflow Models that they produce, allowing the models to be interpreted as generic Python functions for inference via mlflow.pyfunc.load_model(). This loaded PyFunc model can only be scored with a DataFrame input. You can also use the mlflow.pmdarima.load_model() method to load MLflow Models with the pmdarima flavor in native pmdarima format.

The interface for utilizing a pmdarima model loaded as a pyfunc type for generating forecast predictions uses a single-row Pandas DataFrame configuration argument. The following columns in this configuration Pandas DataFrame are supported:

n_periods (required) - specifies the number of future periods to generate, starting from the last datetime value of the training dataset and utilizing the frequency of the input training series when the model was trained. (For example, if the training data series elements represent one value per hour, then in order to forecast 3 days of future data, set the column n_periods to 72.)

X (optional) - exogenous regressor values (only supported in pmdarima version >= 1.8.0) - a 2D array of values for future time period events. For more information, read the underlying library explanation.

return_conf_int (optional) - a boolean (Default: False) for whether to return confidence interval values. See the note below.

alpha (optional) - the significance value for calculating confidence intervals. (Default: 0.05)

An example configuration for the pyfunc predict of a pmdarima model is shown below, with a future period prediction count of 100, a confidence interval calculation generation, no exogenous regressor elements, and a default alpha of 0.05:

Index    n_periods    return_conf_int
0        100          True

Warning

The Pandas DataFrame passed to a pmdarima pyfunc flavor must only contain 1 row.

Note

When predicting with a pmdarima model loaded as a pyfunc, the shape of the output DataFrame depends on return_conf_int. If return_conf_int is False or None, the output Pandas DataFrame contains a single column: ["yhat"]. If it is True, the DataFrame contains the columns ["yhat", "yhat_lower", "yhat_upper"], where yhat_lower and yhat_upper are the lower and upper bounds of the confidence interval around yhat.

Example usage of a pmdarima artifact loaded as a pyfunc with confidence intervals calculated:

import pmdarima
import mlflow
import pandas as pd

data = pmdarima.datasets.load_airpassengers()

with mlflow.start_run():
    model = pmdarima.auto_arima(data, seasonal=True)
    mlflow.pmdarima.save_model(model, "/tmp/model.pmd")

loaded_pyfunc = mlflow.pyfunc.load_model("/tmp/model.pmd")

prediction_conf = pd.DataFrame(
    [{"n_periods": 4, "return_conf_int": True, "alpha": 0.1}]
)

predictions = loaded_pyfunc.predict(prediction_conf)

Output (Pandas DataFrame):

Index    yhat          yhat_lower    yhat_upper
0        467.573731    423.30995     511.83751
1        490.494467    416.17449     564.81444
2        509.138684    420.56255     597.71117
3        492.554714    397.30634     587.80309

Warning

Signature logging for pmdarima will not function correctly if return_conf_int is set to True from a non-pyfunc artifact. The output of the native ARIMA.predict() when returning confidence intervals is not a recognized signature type.
Diviner (diviner)

The diviner model flavor enables logging of diviner models in MLflow format via the mlflow.diviner.save_model() and mlflow.diviner.log_model() methods. These methods also add the python_function flavor to the MLflow Models that they produce, allowing the models to be interpreted as generic Python functions for inference via mlflow.pyfunc.load_model(). This loaded PyFunc model can only be scored with a DataFrame input. You can also use the mlflow.diviner.load_model() method to load MLflow Models with the diviner flavor in native diviner format.

Diviner Types

Diviner is a library that provides an orchestration framework for performing time series forecasting on groups of related series. Forecasting in diviner is accomplished through wrapping popular open source libraries such as prophet and pmdarima. The diviner library offers a simplified set of APIs to simultaneously generate distinct time series forecasts for multiple data groupings using a single input DataFrame and a unified high-level API.

Metrics and Parameters logging for Diviner

Unlike other flavors that are supported in MLflow, Diviner has the concept of grouped models. As a collection of many (perhaps thousands) of individual forecasting models, the burden to the tracking server to log individual metrics and parameters for each of these models is significant. For this reason, metrics and parameters are exposed for retrieval from Diviner's APIs as Pandas DataFrames, rather than discrete primitive values.

To illustrate, let us assume we are forecasting hourly electricity consumption from major cities around the world. A sample of our input data looks like this:

country    city          datetime               watts
US         NewYork       2022-03-01 00:01:00    23568.9
US         NewYork       2022-03-01 00:02:00    22331.7
US         Boston        2022-03-01 00:01:00    14220.1
US         Boston        2022-03-01 00:02:00    14183.4
CA         Toronto       2022-03-01 00:01:00    18562.2
CA         Toronto       2022-03-01 00:02:00    17681.6
MX         MexicoCity    2022-03-01 00:01:00    19946.8
MX         MexicoCity    2022-03-01 00:02:00    19444.0

If we were to fit a model on this data, supplying the grouping keys as:

grouping_keys = ["country", "city"]

We will have a model generated for each of the grouping keys that have been supplied:

[("US", "NewYork"), ("US", "Boston"), ("CA", "Toronto"), ("MX", "MexicoCity")]

With a model constructed for each of these, entering each of their metrics and parameters wouldn't be an issue for the MLflow tracking server. What would become a problem, however, is if we modeled each major city on the planet and ran this forecasting scenario every day. If we were to adhere to the conditions of the World Bank, that would mean just over 10,000 models as of 2022. After a mere few weeks of running this forecasting every day, we would have a very large metrics table.
After a mere few weeks of running this forecasting every day we would have a very large metrics table.

To eliminate this issue for large-scale forecasting, the metrics and parameters for diviner are extracted as a grouping key indexed Pandas DataFrame, as shown in the example below (float values truncated for visibility):

grouping_key_columns      country   city         mse          rmse     mae      mape   mdape   smape
"('country', 'city')"     CA        Toronto      8276851.6    2801.7   2417.7   0.16   0.16    0.159
"('country', 'city')"     MX        MexicoCity   3548872.4    1833.8   1584.5   0.15   0.16    0.159
"('country', 'city')"     US        NewYork      3167846.4    1732.4   1498.2   0.15   0.16    0.158
"('country', 'city')"     US        Boston       14082666.4   3653.2   3156.2   0.15   0.16    0.159

There are two recommended means of logging the metrics and parameters from a diviner model:

Writing the DataFrames to local storage and using mlflow.log_artifacts()

import os
import mlflow
import tempfile

with tempfile.TemporaryDirectory() as tmpdir:
    params = model.extract_model_params()
    metrics = model.cross_validate_and_score(
        horizon="72 hours",
        period="240 hours",
        initial="480 hours",
        parallel="threads",
        rolling_window=0.1,
        monthly=False,
    )
    params.to_csv(f"{tmpdir}/params.csv", index=False, header=True)
    metrics.to_csv(f"{tmpdir}/metrics.csv", index=False, header=True)
    mlflow.log_artifacts(tmpdir, artifact_path="data")

Writing directly as a JSON artifact using mlflow.log_dict()

Note

The parameters extracted from diviner models may require casting (or dropping of columns) if using the pd.DataFrame.to_dict() approach due to the inability of this method to serialize objects.

import mlflow

params = model.extract_model_params()
metrics = model.cross_validate_and_score(
    horizon="72 hours",
    period="240 hours",
    initial="480 hours",
    parallel="threads",
    rolling_window=0.1,
    monthly=False,
)
params["t_scale"] = params["t_scale"].astype(str)
params["start"] = params["start"].astype(str)
params = params.drop("stan_backend", axis=1)

mlflow.log_dict(params.to_dict(), "params.json")
mlflow.log_dict(metrics.to_dict(), "metrics.json")

Logging of the model artifact is shown in the pyfunc example below.

Diviner pyfunc usage

The MLflow Diviner flavor includes an implementation of the pyfunc interface for Diviner models.
To control prediction behavior, you can specify configuration arguments in the first row of a Pandas DataFrame input.

As this configuration is dependent upon the underlying model type (i.e., the diviner.GroupedProphet.forecast() method has a different signature than does diviner.GroupedPmdarima.predict()), the Diviner pyfunc implementation attempts to coerce arguments to the types expected by the underlying model.

Note

Diviner models support both "full group" and "partial group" forecasting. If a column named "groups" is present in the configuration DataFrame submitted to the pyfunc flavor, the grouping key values in the first row will be used to generate a subset of forecast predictions. This functionality removes the need to filter a subset from the full output of all groups forecasts if the results of only a few (or one) groups are needed.

For a GroupedPmdarima model, an example configuration for the pyfunc predict() method is:

import mlflow
import pandas as pd
from pmdarima.arima.auto import AutoARIMA
from diviner import GroupedPmdarima

with mlflow.start_run():
    base_model = AutoARIMA(out_of_sample_size=96, maxiter=200)
    model = GroupedPmdarima(model_template=base_model).fit(
        df=df,
        group_key_columns=["country", "city"],
        y_col="watts",
        datetime_col="datetime",
        silence_warnings=True,
    )
    mlflow.diviner.save_model(diviner_model=model, path="/tmp/diviner_model")

diviner_pyfunc = mlflow.pyfunc.load_model(model_uri="/tmp/diviner_model")

predict_conf = pd.DataFrame(
    {
        "n_periods": 120,
        "groups": [
            ("US", "NewYork"),
            ("CA", "Toronto"),
            ("MX", "MexicoCity"),
        ],  # NB: List of tuples required.
        "predict_col": "wattage_forecast",
        "alpha": 0.1,
        "return_conf_int": True,
        "on_error": "warn",
    },
    index=[0],
)

subset_forecasts = diviner_pyfunc.predict(predict_conf)

Note

There are several instances in which a configuration DataFrame submitted to the pyfunc predict() method will cause an MlflowException to be raised:

If neither horizon nor n_periods are provided.

If the value of n_periods or horizon is not an integer.

If the model is of type GroupedProphet, frequency as a string type must be provided.

If both horizon and n_periods are provided with different values.

Model Evaluation

After building and training your MLflow Model, you can use the mlflow.evaluate() API to evaluate its performance on one or more datasets of your choosing. mlflow.evaluate() currently supports evaluation of MLflow Models with the python_function (pyfunc) model flavor for classification and regression tasks, computing a variety of task-specific performance metrics, model performance plots, and model explanations.
Evaluation results are logged to MLflow Tracking.

The following example from the MLflow GitHub Repository uses mlflow.evaluate() to evaluate the performance of a classifier on the UCI Adult Data Set, logging a comprehensive collection of MLflow Metrics and Artifacts that provide insight into model performance and behavior:

import xgboost
import shap
import mlflow
from sklearn.model_selection import train_test_split

# Load the UCI Adult Dataset
X, y = shap.datasets.adult()

# Split the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)

# Fit an XGBoost binary classifier on the training data split
model = xgboost.XGBClassifier().fit(X_train, y_train)

# Build the Evaluation Dataset from the test set
eval_data = X_test
eval_data["label"] = y_test

with mlflow.start_run() as run:
    # Log the baseline model to MLflow
    mlflow.sklearn.log_model(model, "model")
    model_uri = mlflow.get_artifact_uri("model")

    # Evaluate the logged model
    result = mlflow.evaluate(
        model_uri,
        eval_data,
        targets="label",
        model_type="classifier",
        evaluators=["default"],
    )

Evaluating with Custom Metrics

You can supply custom_metrics and custom_artifacts arguments to mlflow.evaluate() to produce custom metrics and artifacts for the model(s) that you're evaluating. The following short example from the MLflow GitHub Repository uses mlflow.evaluate() with a custom metric function to evaluate the performance of a regressor on the California Housing Dataset.

from sklearn.linear_model import LinearRegression
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
import numpy as np
import mlflow
from mlflow.models import make_metric
import os
import matplotlib.pyplot as plt

# loading the California housing dataset
cali_housing = fetch_california_housing(as_frame=True)

# split the dataset into train and test partitions
X_train, X_test, y_train, y_test = train_test_split(
    cali_housing.data, cali_housing.target, test_size=0.2, random_state=123
)

# train the model
lin_reg = LinearRegression().fit(X_train, y_train)

# creating the evaluation dataframe
eval_data = X_test.copy()
eval_data["target"] = y_test


def squared_diff_plus_one(eval_df, _builtin_metrics):
    """
    This example custom metric function creates a metric based on the ``prediction`` and
    ``target`` columns in ``eval_df``.
    """
    return np.sum(np.abs(eval_df["prediction"] - eval_df["target"] + 1) ** 2)


def sum_on_target_divided_by_two(_eval_df, builtin_metrics):
    """
    This example custom metric function creates a metric derived from existing metrics
    in ``builtin_metrics``.
    """
    return builtin_metrics["sum_on_target"] / 2


def prediction_target_scatter(eval_df, _builtin_metrics, artifacts_dir):
    """
    This example custom artifact generates and saves a scatter plot to ``artifacts_dir`` that
    visualizes the relationship between the predictions and targets for the given model to a
    file as an image artifact.
    """
    plt.scatter(eval_df["prediction"], eval_df["target"])
    plt.xlabel("Targets")
    plt.ylabel("Predictions")
    plt.title("Targets vs. Predictions")
    plot_path = os.path.join(artifacts_dir, "example_scatter_plot.png")
    plt.savefig(plot_path)
    return {"example_scatter_plot_artifact": plot_path}


with mlflow.start_run() as run:
    mlflow.sklearn.log_model(lin_reg, "model")
    model_uri = mlflow.get_artifact_uri("model")
    result = mlflow.evaluate(
        model=model_uri,
        data=eval_data,
        targets="target",
        model_type="regressor",
        evaluators=["default"],
        custom_metrics=[
            make_metric(eval_fn=squared_diff_plus_one, greater_is_better=False),
            make_metric(eval_fn=sum_on_target_divided_by_two, greater_is_better=True),
        ],
        custom_artifacts=[prediction_target_scatter],
    )

print(f"metrics:\n{result.metrics}")
print(f"artifacts:\n{result.artifacts}")

For a more comprehensive custom metrics usage example, refer to this example from the MLflow GitHub Repository.

Performing Model Validation

You can also use the mlflow.evaluate() API to perform some checks on the metrics generated during model evaluation to validate the quality of your model.
By specifying a validation_thresholds argument, a dictionary mapping metric names to mlflow.models.MetricThreshold objects, you can specify value thresholds that your model's evaluation metrics must exceed, as well as absolute and relative gains your model must have in comparison to a specified baseline_model. If your model fails to clear the specified thresholds, mlflow.evaluate() will throw a ModelValidationFailedException.

import xgboost
import shap
from sklearn.model_selection import train_test_split
from sklearn.dummy import DummyClassifier
import mlflow
from mlflow.models import MetricThreshold

# load UCI Adult Data Set; segment it into training and test sets
X, y = shap.datasets.adult()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)

# train a candidate XGBoost model
candidate_model = xgboost.XGBClassifier().fit(X_train, y_train)

# train a baseline dummy model
baseline_model = DummyClassifier(strategy="uniform").fit(X_train, y_train)

# construct an evaluation dataset from the test set
eval_data = X_test
eval_data["label"] = y_test

# Define criteria for model to be validated against
thresholds = {
    "accuracy_score": MetricThreshold(
        threshold=0.8,  # accuracy should be >=0.8
        min_absolute_change=0.05,  # accuracy should be at least 0.05 greater than baseline model accuracy
        min_relative_change=0.05,  # accuracy should be at least 5 percent greater than baseline model accuracy
        higher_is_better=True,
    ),
}

with mlflow.start_run() as run:
    candidate_model_uri = mlflow.sklearn.log_model(candidate_model, "candidate_model").model_uri
    baseline_model_uri = mlflow.sklearn.log_model(baseline_model, "baseline_model").model_uri

    mlflow.evaluate(
        candidate_model_uri,
        eval_data,
        targets="label",
        model_type="classifier",
        validation_thresholds=thresholds,
        baseline_model=baseline_model_uri,
    )

Refer to mlflow.models.MetricThreshold to see details on how the thresholds are specified and checked. For a more comprehensive demonstration on how to use mlflow.evaluate() to perform model validation, refer to the Model Validation example from the MLflow GitHub Repository.

The logged output within the MLflow UI for the comprehensive example is shown below. Note the two model artifacts that have been logged: 'baseline_model' and 'candidate_model' for comparison purposes in the example.

Note

Limitations (when the default evaluator is used):

Model validation results are not included in the active MLflow run.

No metrics are logged nor artifacts produced for the baseline model in the active MLflow run.

Additional information about model evaluation behaviors and outputs is available in the mlflow.evaluate() API docs.

Note

Differences in the computation of Area under Curve Precision Recall score (metric name precision_recall_auc) between multi and binary classifiers:

Multiclass classifier models, when evaluated, utilize the standard scoring metric from sklearn, sklearn.metrics.roc_auc_score, to calculate the area under the precision recall curve.
This algorithm performs a linear interpolation calculation utilizing the trapezoidal rule to estimate the area under the precision recall curve. It is well-suited for use in evaluating multi-class classification models to provide a single numeric value of the quality of fit.

Binary classifier models, on the other hand, use sklearn.metrics.average_precision_score to avoid the shortcomings of the roc_auc_score implementation when applied to heavily imbalanced classes in binary classification. Usage of the roc_auc_score for imbalanced datasets can give a misleading result (optimistically better than the model's actual ability to accurately predict the minority class membership).

For additional information on the topic of why different algorithms are employed for this, as well as links to the papers that informed the implementation of these metrics within the sklearn.metrics module, refer to the documentation.

For simplicity, the evaluation metric results of both methodologies (whether for multi-class or binary classification) are unified in the single metric: precision_recall_auc.
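The following short sketch (not part of the original documentation; the synthetic labels and scores are illustrative assumptions) shows why average_precision_score is preferred for heavily imbalanced binary data: roc_auc_score tends to look optimistic, while average_precision_score reflects minority-class performance more directly.

import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

rng = np.random.default_rng(42)

# Heavily imbalanced synthetic labels: 95% negative, 5% positive
y_true = np.array([0] * 95 + [1] * 5)

# Scores from a mediocre classifier: positives only slightly higher on average
y_score = np.concatenate([rng.uniform(0.0, 0.7, 95), rng.uniform(0.3, 1.0, 5)])

print("roc_auc_score:          ", roc_auc_score(y_true, y_score))
print("average_precision_score:", average_precision_score(y_true, y_score))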
Model Customization

While MLflow's built-in model persistence utilities are convenient for packaging models from various popular ML libraries in MLflow Model format, they do not cover every use case. For example, you may want to use a model from an ML library that is not explicitly supported by MLflow's built-in flavors. Alternatively, you may want to package custom inference code and data to create an MLflow Model. Fortunately, MLflow provides two solutions that can be used to accomplish these tasks: Custom Python Models and Custom Flavors.

In this section:

Custom Python Models

Example: Creating a custom "add n" model
Example: Saving an XGBoost model in MLflow format

Custom Flavors

Custom Python Models

The mlflow.pyfunc module provides save_model() and log_model() utilities for creating MLflow Models with the python_function flavor that contain user-specified code and artifact (file) dependencies. These artifact dependencies may include serialized models produced by any Python ML library.

Because these custom models contain the python_function flavor, they can be deployed to any of MLflow's supported production environments, such as SageMaker, AzureML, or local REST endpoints.

The following examples demonstrate how you can use the mlflow.pyfunc module to create custom Python models. For additional information about model customization with MLflow's python_function utilities, see the python_function custom models documentation.

Example: Creating a custom "add n" model

This example defines a class for a custom model that adds a specified numeric value, n, to all columns of a Pandas DataFrame input. Then, it uses the mlflow.pyfunc APIs to save an instance of this model with n = 5 in MLflow Model format. Finally, it loads the model in python_function format and uses it to evaluate a sample input.

import mlflow.pyfunc


# Define the model class
class AddN(mlflow.pyfunc.PythonModel):
    def __init__(self, n):
        self.n = n

    def predict(self, context, model_input):
        return model_input.apply(lambda column: column + self.n)


# Construct and save the model
model_path = "add_n_model"
add5_model = AddN(n=5)
mlflow.pyfunc.save_model(path=model_path, python_model=add5_model)

# Load the model in `python_function` format
loaded_model = mlflow.pyfunc.load_model(model_path)

# Evaluate the model
import pandas as pd

model_input = pd.DataFrame([range(10)])
model_output = loaded_model.predict(model_input)
assert model_output.equals(pd.DataFrame([range(5, 15)]))

Example: Saving an XGBoost model in MLflow format

This example begins by training and saving a gradient boosted tree model using the XGBoost library. Next, it defines a wrapper class around the XGBoost model that conforms to MLflow's python_function inference API. Then, it uses the wrapper class and the saved XGBoost model to construct an MLflow Model that performs inference using the gradient boosted tree. Finally, it loads the MLflow Model in python_function format and uses it to evaluate test data.

# Load training and test datasets
from sys import version_info
import xgboost as xgb
from sklearn import datasets
from sklearn.model_selection import train_test_split

PYTHON_VERSION = "{major}.{minor}.{micro}".format(
    major=version_info.major, minor=version_info.minor, micro=version_info.micro
)
iris = datasets.load_iris()
x = iris.data[:, 2:]
y = iris.target
x_train, x_test, y_train, _ = train_test_split(x, y, test_size=0.2, random_state=42)
dtrain = xgb.DMatrix(x_train, label=y_train)

# Train and save an XGBoost model
xgb_model = xgb.train(params={"max_depth": 10}, dtrain=dtrain, num_boost_round=10)
xgb_model_path = "xgb_model.pth"
xgb_model.save_model(xgb_model_path)

# Create an `artifacts` dictionary that assigns a unique name to the saved XGBoost model file.
# This dictionary will be passed to `mlflow.pyfunc.save_model`, which will copy the model file
# into the new MLflow Model's directory.
artifacts = {"xgb_model": xgb_model_path}

# Define the model class
import mlflow.pyfunc


class XGBWrapper(mlflow.pyfunc.PythonModel):
    def load_context(self, context):
        import xgboost as xgb

        self.xgb_model = xgb.Booster()
        self.xgb_model.load_model(context.artifacts["xgb_model"])

    def predict(self, context, model_input):
        input_matrix = xgb.DMatrix(model_input.values)
        return self.xgb_model.predict(input_matrix)


# Create a Conda environment for the new MLflow Model that contains all necessary
# dependencies.
import cloudpickle

conda_env = {
    "channels": ["defaults"],
    "dependencies": [
        "python={}".format(PYTHON_VERSION),
        "pip",
        {
            "pip": [
                "mlflow",
                "xgboost=={}".format(xgb.__version__),
                "cloudpickle=={}".format(cloudpickle.__version__),
            ],
        },
    ],
    "name": "xgb_env",
}

# Save the MLflow Model
mlflow_pyfunc_model_path = "xgb_mlflow_pyfunc"
mlflow.pyfunc.save_model(
    path=mlflow_pyfunc_model_path,
    python_model=XGBWrapper(),
    artifacts=artifacts,
    conda_env=conda_env,
)

# Load the model in `python_function` format
loaded_model = mlflow.pyfunc.load_model(mlflow_pyfunc_model_path)

# Evaluate the model
import pandas as pd

test_predictions = loaded_model.predict(pd.DataFrame(x_test))
print(test_predictions)

Custom Flavors

You can also create custom MLflow Models by writing a custom flavor.

As discussed in the Model API and Storage Format sections, an MLflow Model is defined by a directory of files that contains an MLmodel configuration file. This MLmodel file describes various model attributes, including the flavors in which the model can be interpreted. The MLmodel file contains an entry for each flavor name; each entry is a YAML-formatted collection of flavor-specific attributes.

To create a new flavor to support a custom model, you define the set of flavor-specific attributes to include in the MLmodel configuration file, as well as the code that can interpret the contents of the model directory and the flavor's attributes.

As a reference, consider the mlflow.pytorch module corresponding to MLflow's pytorch flavor. In the mlflow.pytorch.save_model() method, a PyTorch model is saved to a specified output directory. Additionally, mlflow.pytorch.save_model() leverages the mlflow.models.Model.add_flavor() and mlflow.models.Model.save() functions to produce an MLmodel configuration containing the pytorch flavor and its flavor-specific attributes, such as pytorch_version. To interpret model directories produced by save_model(), the mlflow.pytorch module also defines a load_model() method. mlflow.pytorch.load_model() reads the MLmodel configuration from a specified model directory and uses the attributes of the pytorch flavor to load and return the model.

Built-In Deployment Tools

MLflow provides tools for deploying MLflow models on a local machine and to several production environments. Not all deployment methods are available for all model flavors.

In this section:

Deploy MLflow models

Deploy a python_function model on Microsoft Azure ML

Deploy a python_function model on Amazon SageMaker

Export a python_function model as an Apache Spark UDF

Deploy MLflow models

MLflow can deploy models locally as local REST API endpoints or to directly score files. In addition, MLflow can package models as self-contained Docker images with the REST API endpoint. The image can be used to safely deploy the model to various environments such as Kubernetes.

You deploy an MLflow model locally or generate a Docker image using the CLI interface to the mlflow.models module.

The REST API defines 4 endpoints:

/ping used for health check

/health (same as /ping)

/version used for getting the mlflow version

/invocations used for scoring
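A quick sketch of exercising the non-scoring endpoints (this snippet is not from the original docs; it assumes a model is already being served locally on port 5000, for instance via mlflow models serve):

import requests

# Assumes a model is already being served locally, e.g.:
#   mlflow models serve -m <model-uri> -p 5000
print(requests.get("http://127.0.0.1:5000/ping").status_code)  # 200 once the server is up
print(requests.get("http://127.0.0.1:5000/version").text)      # MLflow version running the server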
The REST API server accepts csv or json input. The input format must be specified in the Content-Type header. The value of the header must be either application/json or application/csv.

The csv input must be a valid pandas.DataFrame csv representation. For example, data = pandas_df.to_csv().

The json input must be a dictionary with exactly one of the following fields that further specify the type and encoding of the input data:

dataframe_split field with pandas DataFrames in the split orientation. For example, data = {"dataframe_split": pandas_df.to_dict(orient='split')}.

dataframe_records field with pandas DataFrames in the records orientation. For example, data = {"dataframe_records": pandas_df.to_dict(orient='records')}. We do not recommend using this format because it is not guaranteed to preserve column ordering.

instances field with tensor input formatted as described in TF Serving's API docs where the provided inputs will be cast to Numpy arrays.

inputs field with tensor input formatted as described in TF Serving's API docs where the provided inputs will be cast to Numpy arrays.

Note

Since JSON loses type information, MLflow will cast the JSON input to the input type specified in the model's schema if available. If your model is sensitive to input types, it is recommended that a schema is provided for the model to ensure that type mismatch errors do not occur at inference time. In particular, DL models are typically strict about input types and will need a model schema in order for the model to score correctly. For complex data types, see Encoding complex data below.

Example requests:

# split-oriented DataFrame input
curl http://127.0.0.1:5000/invocations -H 'Content-Type: application/json' -d '{
    "dataframe_split": {
        "columns": ["a", "b", "c"],
        "data": [[1, 2, 3], [4, 5, 6]]
    }
}'

# record-oriented DataFrame input (fine for vector rows, loses ordering for JSON records)
curl http://127.0.0.1:5000/invocations -H 'Content-Type: application/json' -d '{
    "dataframe_records": [
        {"a": 1, "b": 2, "c": 3},
        {"a": 4, "b": 5, "c": 6}
    ]
}'

# numpy/tensor input using TF serving's "instances" format
curl http://127.0.0.1:5000/invocations -H 'Content-Type: application/json' -d '{
    "instances": [
        {"a": "s1", "b": 1, "c": [1, 2, 3]},
        {"a": "s2", "b": 2, "c": [4, 5, 6]},
        {"a": "s3", "b": 3, "c": [7, 8, 9]}
    ]
}'

# numpy/tensor input using TF serving's "inputs" format
curl http://127.0.0.1:5000/invocations -H 'Content-Type: application/json' -d '{
    "inputs": {"a": ["s1", "s2", "s3"], "b": [1, 2, 3], "c": [[1, 2, 3], [4, 5, 6], [7, 8, 9]]}
}'

For more information about serializing pandas DataFrames, see pandas.DataFrame.to_json.

For more information about serializing tensor inputs using the TF serving format, see TF serving's request format docs.

Serving with MLServer

Python models can be deployed using Seldon's MLServer as an alternative inference server. MLServer is integrated with two leading open source model deployment tools, Seldon Core and KServe (formerly known as KFServing), and can be used to test and deploy models using these frameworks. This is especially powerful when building docker images
since the docker image built with MLServer can be deployed directly with both of these frameworks.

MLServer exposes the same scoring API through the /invocations endpoint. In addition, it supports the standard V2 Inference Protocol.

Note

To use MLServer with MLflow, please install mlflow as:

pip install mlflow[extras]

To serve an MLflow model using MLServer, you can use the --enable-mlserver flag, such as:

mlflow models serve -m my_model --enable-mlserver

Similarly, to build a Docker image built with MLServer you can use the --enable-mlserver flag, such as:

mlflow models build-docker -m my_model --enable-mlserver -n my-model

To read more about the integration between MLflow and MLServer, please check the end-to-end example in the MLServer documentation or visit the MLServer docs.

Encoding complex data

Complex data types, such as dates or binary, do not have a native JSON representation. If you include a model signature, MLflow can automatically decode supported data types from JSON. The following data type conversions are supported:

binary: data is expected to be base64 encoded; MLflow will automatically base64 decode.

datetime: data is expected as a string according to the ISO 8601 specification. MLflow will parse this into the appropriate datetime representation on the given platform.

Example requests:

# record-oriented DataFrame input with binary column "b"
curl http://127.0.0.1:5000/invocations -H 'Content-Type: application/json' -d '[
    {"a": 0, "b": "dGVzdCBiaW5hcnkgZGF0YSAw"},
    {"a": 1, "b": "dGVzdCBiaW5hcnkgZGF0YSAx"},
    {"a": 2, "b": "dGVzdCBiaW5hcnkgZGF0YSAy"}
]'

# record-oriented DataFrame input with datetime column "b"
curl http://127.0.0.1:5000/invocations -H 'Content-Type: application/json' -d '[
    {"a": 0, "b": "2020-01-01T00:00:00Z"},
    {"a": 1, "b": "2020-02-01T12:34:56Z"},
    {"a": 2, "b": "2021-03-01T00:00:00Z"}
]'

Command Line Interface

MLflow also has a CLI that supports the following commands:

serve deploys the model as a local REST API server.

build_docker packages a REST API endpoint serving the model as a docker image.

predict uses the model to generate a prediction for a local CSV or JSON file. Note that this method only supports DataFrame input.

For more info, see:

mlflow models --help
mlflow models serve --help
mlflow models predict --help
mlflow models build-docker --help

Environment Management Tools

MLflow currently supports the following environment management tools to restore model environments:

local - use the local environment. No extra tools are required.

virtualenv - create environments using virtualenv and pyenv (for python version management). Virtualenv and pyenv (for Linux and macOS) or pyenv-win (for Windows) must be installed for this mode of environment reconstruction.

virtualenv installation instructions
pyenv installation instructions
pyenv-win installation instructions

conda - create environments using conda.
Conda must be installed for this mode of environment reconstruction.

Warning

By using conda, you're responsible for adhering to Anaconda's terms of service.

conda installation instructions

The mlflow models CLI commands provide an optional --env-manager argument that selects a specific environment management configuration to be used, as shown below:

# Use virtualenv
mlflow models predict ... --env-manager=virtualenv

# Use conda
mlflow models serve ... --env-manager=conda

Deploy a python_function model on Microsoft Azure ML

The MLflow plugin azureml-mlflow can deploy models to Azure ML, either to Azure Kubernetes Service (AKS) or Azure Container Instances (ACI) for real-time serving.

The resulting deployment accepts the following data formats as input:

JSON-serialized pandas DataFrames in the split orientation. For example, data = pandas_df.to_json(orient='split'). This format is specified using a Content-Type request header value of application/json.

Warning

The TensorSpec input format is not fully supported for deployments on Azure Machine Learning at the moment. Be aware that many autolog() implementations may use TensorSpec for model's signatures when logging models and hence those deployments will fail in Azure ML.

Deployments can be generated using both the Python API or MLflow CLI. In both cases, a JSON configuration file can be indicated with the details of the deployment you want to achieve. If not indicated, then a default deployment is done using Azure Container Instances (ACI) and a minimal configuration. The full specification of this configuration file can be checked at the Deployment configuration schema. You will also need the Azure ML MLflow Tracking URI of your particular Azure ML Workspace where you want to deploy your model. You can obtain this URI in several ways:

Through the Azure ML Studio:

Navigate to Azure ML Studio and select the workspace you are working on.
Click on the name of the workspace at the upper right corner of the page.
Click "View all properties in Azure Portal" on the pane popup.
Copy the MLflow tracking URI value from the properties section.

Programmatically, using the Azure ML SDK with the method Workspace.get_mlflow_tracking_uri(), as sketched below. If you are running inside Azure ML Compute, like for instance a Compute Instance, you can also get this value from the environment variable os.environ["MLFLOW_TRACKING_URI"].

Manually, for a given Subscription ID, Resource Group and Azure ML Workspace, the URI is as follows: azureml://eastus.api.azureml.ms/mlflow/v1.0/subscriptions//resourceGroups//providers/Microsoft.MachineLearningServices/workspaces/
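The following is a minimal sketch of the programmatic route (not from the original documentation); it assumes the azureml-core and azureml-mlflow packages are installed and that a workspace config.json is available locally for Workspace.from_config():

import mlflow
from azureml.core import Workspace

# Load the Azure ML workspace (assumes a local config.json downloaded from the Azure portal)
ws = Workspace.from_config()

# Point the MLflow client at the workspace's tracking server before creating deployments
mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())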
Configuration example for ACI deployment

{
    "computeType": "aci",
    "containerResourceRequirements": {
        "cpu": ...,
        "memoryInGB": ...
    },
    "location": "eastus2"
}

If containerResourceRequirements is not indicated, a deployment with minimal compute configuration is applied (cpu: 0.1 and memory: 0.5). If location is not indicated, it defaults to the location of the workspace.

Configuration example for an AKS deployment

{
    "computeType": "aks",
    "computeTargetName": "aks-mlflow"
}

In the above example, aks-mlflow is the name of an Azure Kubernetes Cluster registered/created in Azure Machine Learning.

The following examples show how to create a deployment in ACI. Please ensure you have azureml-mlflow installed before continuing.

Example: Workflow using the Python API

import json
from mlflow.deployments import get_deploy_client

# Create the deployment configuration.
# If no deployment configuration is provided, then the deployment happens on ACI.
deploy_config = {"computeType": "aci"}

# Write the deployment configuration into a file.
deployment_config_path = "deployment_config.json"
with open(deployment_config_path, "w") as outfile:
    outfile.write(json.dumps(deploy_config))

# Set the tracking uri in the deployment client.
client = get_deploy_client("<azureml-mlflow-tracking-url>")

# MLflow requires the deployment configuration to be passed as a dictionary.
config = {"deploy-config-file": deployment_config_path}
model_name = "mymodel"
model_version = 1

# Define the model path; the name is the service name.
# If the model is not registered, it gets registered automatically and a name is
# autogenerated using the "name" parameter below.
webservice = client.create_deployment(
    model_uri=f"models:/{model_name}/{model_version}",
    config=config,
    name="mymodel-aci-deployment",
)

# After the model deployment completes, requests can be posted via HTTP to the new ACI
# webservice's scoring URI.
print("Scoring URI is: %s" % webservice.scoring_uri)

# The following example posts a sample input from the wine dataset
# used in the MLflow ElasticNet example:
# https://github.com/mlflow/mlflow/tree/master/examples/sklearn_elasticnet_wine

import requests
import json

# `sample_input` is a JSON-serialized pandas DataFrame with the `split` orientation
sample_input = {
    "columns": [
        "alcohol",
        "chlorides",
        "citric acid",
        "density",
        "fixed acidity",
        "free sulfur dioxide",
        "pH",
        "residual sugar",
        "sulphates",
        "total sulfur dioxide",
        "volatile acidity",
    ],
    "data": [
        [8.8, 0.045, 0.36, 1.001, 7, 45, 3, 20.7, 0.45, 170, 0.27]
    ],
}

response = requests.post(
    url=webservice.scoring_uri,
    data=json.dumps(sample_input),
    headers={"Content-type": "application/json"},
)
response_json = json.loads(response.text)
print(response_json)

Example: Workflow using the MLflow CLI

echo "{ computeType: aci }" > deployment_config.json
mlflow deployments create --name <deployment-name> -m models:/<model-name>/<model-version> -t <azureml-mlflow-tracking-url> --deploy-config-file deployment_config.json

# After the deployment completes, requests can be posted via HTTP to the new ACI
# webservice's scoring URI.
scoring_uri=$(az ml service show --name <deployment-name> -v | jq -r ".scoringUri")

# The following example posts a sample input from the wine dataset
# used in the MLflow ElasticNet example:
# https://github.com/mlflow/mlflow/tree/master/examples/sklearn_elasticnet_wine

# `sample_input` is a JSON-serialized pandas DataFrame with the `split` orientation
sample_input='{
    "columns": [
        "alcohol",
        "chlorides",
        "citric acid",
        "density",
        "fixed acidity",
        "free sulfur dioxide",
        "pH",
        "residual sugar",
        "sulphates",
        "total sulfur dioxide",
        "volatile acidity"
    ],
    "data": [
        [8.8, 0.045, 0.36, 1.001, 7, 45, 3, 20.7, 0.45, 170, 0.27]
    ]
}'

echo $sample_input | curl -s -X POST $scoring_uri \
    -H 'Cache-Control: no-cache' \
    -H 'Content-Type: application/json' \
    -d @-

You can also test your deployments locally first using the option run-local:

mlflow deployments run-local --name <deployment-name> -m models:/<model-name>/<model-version> -t <azureml-mlflow-tracking-url>

For more info, see:

mlflow deployments help -t azureml

Deploy a python_function model on Amazon SageMaker

The mlflow.deployments and mlflow.sagemaker modules can deploy python_function models locally in a Docker container with a SageMaker compatible environment and remotely on SageMaker. To deploy remotely to SageMaker you need to set up your environment and user accounts. To export a custom model to SageMaker, you need an MLflow-compatible Docker image to be available on Amazon ECR. MLflow provides a default Docker image definition; however, it is up to you to build the image and upload it to ECR. MLflow includes the utility function build_and_push_container to perform this step. Once built and uploaded, you can use the MLflow container for all MLflow Models. Model webservers deployed using the mlflow.deployments module accept the following data formats as input, depending on the deployment flavor:

python_function: For this deployment flavor, the endpoint accepts the same formats described in the local model deployment documentation.

mleap: For this deployment flavor, the endpoint accepts only JSON-serialized pandas DataFrames in the split orientation. For example, data = pandas_df.to_json(orient='split').
This format is specified using a Content-Type request header value of application/json.

Commands

mlflow deployments run-local -t sagemaker deploys the model locally in a Docker container. The image and the environment should be identical to how the model would be run remotely and it is therefore useful for testing the model prior to deployment.

mlflow sagemaker build-and-push-container builds an MLflow Docker image and uploads it to ECR. The caller must have the correct permissions set up. The image is built locally and requires Docker to be present on the machine that performs this step.

mlflow deployments create -t sagemaker deploys the model on Amazon SageMaker. MLflow uploads the Python Function model into S3 and starts an Amazon SageMaker endpoint serving the model.

Example workflow using the MLflow CLI

mlflow sagemaker build-and-push-container  # build the container (only needs to be called once)
mlflow deployments run-local -t sagemaker --name <deployment-name> -m <path-to-model>  # test the model locally
mlflow deployments create -t sagemaker --name <deployment-name> -m <path-to-model>  # deploy the model remotely

For more info, see:

mlflow sagemaker --help
mlflow sagemaker build-and-push-container --help
mlflow deployments run-local --help
mlflow deployments help -t sagemaker

Export a python_function model as an Apache Spark UDF

You can output a python_function model as an Apache Spark UDF, which can be uploaded to a Spark cluster and used to score the model.

Example

from pyspark.sql.functions import struct
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

pyfunc_udf = mlflow.pyfunc.spark_udf(spark, "<path-to-model>")
df = spark_df.withColumn("prediction", pyfunc_udf(struct([...])))

If a model contains a signature, the UDF can be called without specifying column name arguments. In this case, the UDF will be called with column names from the signature, so the evaluation dataframe's column names must match the model signature's column names.

Example

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

pyfunc_udf = mlflow.pyfunc.spark_udf(spark, "<path-to-model-with-signature>")
df = spark_df.withColumn("prediction", pyfunc_udf())

If a model contains a signature with tensor spec inputs, you will need to pass a column of array type as a corresponding UDF argument. The values in this column must be comprised of one-dimensional arrays. The UDF will reshape the array values to the required shape with 'C' order (i.e. read / write the elements using C-like index order) and cast the values as the required tensor spec type. For example, assume a model requires input 'a' of shape (-1, 2, 3) and input 'b' of shape (-1, 4, 5). In order to perform inference on this data, we need to prepare a Spark DataFrame with column 'a' containing arrays of length 6 and column 'b' containing arrays of length 20.
We can then invoke the UDF as in the following example code:

Example

from pyspark.sql import SparkSession
from pyspark.sql.functions import struct

spark = SparkSession.builder.getOrCreate()

# Assuming the model requires input 'a' of shape (-1, 2, 3) and input 'b' of shape (-1, 4, 5)
model_path = "<path-to-model-requiring-multidimensional-inputs>"
pyfunc_udf = mlflow.pyfunc.spark_udf(spark, model_path)

# The `spark_df` has column 'a' containing arrays of length 6 and
# column 'b' containing arrays of length 20
df = spark_df.withColumn("prediction", pyfunc_udf(struct("a", "b")))

The resulting UDF is based on Spark's Pandas UDF and is currently limited to producing either a single value, an array of values, or a struct containing multiple field values of the same type per observation. By default, we return the first numeric column as a double. You can control what result is returned by supplying the result_type argument. The following values are supported:

'int' or IntegerType: The leftmost integer that can fit in an int32 result is returned, or an exception is raised if there are none.

'long' or LongType: The leftmost long integer that can fit in an int64 result is returned, or an exception is raised if there are none.

ArrayType (IntegerType | LongType): Return all integer columns that can fit into the requested size.

'float' or FloatType: The leftmost numeric result cast to float32 is returned, or an exception is raised if there are no numeric columns.

'double' or DoubleType: The leftmost numeric result cast to double is returned, or an exception is raised if there are no numeric columns.

ArrayType (FloatType | DoubleType): Return all numeric columns cast to the requested type.
An exception is raised if there are no numeric columns.

'string' or StringType: Result is the leftmost column cast as string.

ArrayType (StringType): Return all columns cast as string.

'bool' or 'boolean' or BooleanType: The leftmost column cast to bool is returned, or an exception is raised if the values cannot be coerced.

'field1 FIELD1_TYPE, field2 FIELD2_TYPE, ...': A struct type containing multiple fields separated by commas; each field type must be one of the types listed above.

Example

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Suppose the PyFunc model `predict` method returns a dict like:
# `{'prediction': 1-dim_array, 'probability': 2-dim_array}`
# You can supply result_type to be a struct type containing
# 2 fields 'prediction' and 'probability' like following.
pyfunc_udf = mlflow.pyfunc.spark_udf(
    spark, "<path-to-model>", result_type="prediction float, probability: array<float>"
)
df = spark_df.withColumn("prediction", pyfunc_udf())

Example

from pyspark.sql.types import ArrayType, FloatType
from pyspark.sql.functions import struct
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

pyfunc_udf = mlflow.pyfunc.spark_udf(spark, "path/to/model", result_type=ArrayType(FloatType()))

# The prediction column will contain all the numeric columns returned by the model as floats
df = spark_df.withColumn("prediction", pyfunc_udf(struct("name", "age")))

If you want to use conda to restore the python environment that was used to train the model, set the env_manager argument when calling mlflow.pyfunc.spark_udf().

Example

from pyspark.sql.types import ArrayType, FloatType
from pyspark.sql.functions import struct
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

pyfunc_udf = mlflow.pyfunc.spark_udf(
    spark,
    "path/to/model",
    result_type=ArrayType(FloatType()),
    env_manager="conda",  # Use conda to restore the environment used in training
)
df = spark_df.withColumn("prediction", pyfunc_udf(struct("name", "age")))

Deployment to Custom Targets

In addition to the built-in deployment tools, MLflow provides a pluggable mlflow.deployments Python API and mlflow deployments CLI for deploying models to custom targets and environments. To deploy to a custom target, you must first install an appropriate third-party Python plugin. See the list of known community-maintained plugins here.

Commands

The mlflow deployments CLI contains the following commands, which can also be invoked programmatically using the mlflow.deployments Python API (a short sketch of the Python API follows the list below):

Create: Deploy an MLflow model to a specified custom target

Delete: Delete a deployment

Update: Update an existing deployment, for example to deploy a new model version or change the deployment's configuration (e.g. increase replica count)

List: List IDs of all deployments

Get: Print a detailed description of a particular deployment

Run Local: Deploy the model locally for testing

Help: Show the help string for the specified target
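A minimal sketch of the programmatic equivalent (not from the original docs; the target name "custom-target" and the model URIs are placeholders for whichever third-party plugin and registered model you actually use):

from mlflow.deployments import get_deploy_client

# "custom-target" is a placeholder for an installed third-party deployment plugin
client = get_deploy_client("custom-target")

# Create, inspect, update, and remove a deployment programmatically
client.create_deployment(name="example", model_uri="models:/mymodel/1")
print(client.list_deployments())
print(client.get_deployment("example"))
client.update_deployment(name="example", model_uri="models:/mymodel/2")
client.delete_deployment("example")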
For more info, see:

mlflow deployments --help
mlflow deployments create --help
mlflow deployments delete --help
mlflow deployments update --help
mlflow deployments list --help
mlflow deployments get --help
mlflow deployments run-local --help
mlflow deployments help --help

Community Model Flavors

Other useful MLflow flavors are developed and maintained by the MLflow community, enabling you to use MLflow Models with an even broader ecosystem of machine learning libraries. For more information, check out the description of each community-developed flavor below.

MLflow VizMod

BigML (bigmlflow)

Sktime

MLflow VizMod

The mlflow-vizmod project allows data scientists to be more productive with their visualizations. We treat visualizations as models - just like ML models - thus being able to use the same infrastructure as MLflow to track, create projects, register, and deploy visualizations.

Installation:

pip install mlflow-vizmod

Example:

from sklearn.datasets import load_iris
import altair as alt
import mlflow_vismod

df_iris = load_iris(as_frame=True)

viz_iris = (
    alt.Chart(df_iris)
    .mark_circle(size=60)
    .encode("x", "y", color="z:N")
    .properties(height=375, width=575)
    .interactive()
)

mlflow_vismod.log_model(
    model=viz_iris,
    artifact_path="viz",
    style="vegalite",
    input_example=df_iris.head(),
)

BigML (bigmlflow)

The bigmlflow library implements the bigml model flavor for BigML supervised models and offers the save_model(), log_model() and load_model() methods.

Installing bigmlflow

BigMLFlow can be installed from PyPI as follows:

pip install bigmlflow

BigMLFlow usage

The bigmlflow module defines the flavor that implements the save_model() and log_model() methods.
They can be used to save BigML models and their related information in MLflow Model format.

import json
import mlflow
import bigmlflow

MODEL_FILE = "logistic_regression.json"

with mlflow.start_run():
    with open(MODEL_FILE) as handler:
        model = json.load(handler)
    bigmlflow.log_model(
        model, artifact_path="model", registered_model_name="my_model"
    )

These methods also add the python_function flavor to the MLflow Models that they produce, allowing the models to be interpreted as generic Python functions for inference via mlflow.pyfunc.load_model(). This loaded PyFunc model can only be scored with DataFrame inputs.

# saving the model
bigmlflow.save_model(model, path=model_path)

# retrieving model
pyfunc_model = mlflow.pyfunc.load_model(model_path)
pyfunc_predictions = pyfunc_model.predict(dataframe)

You can also use the bigmlflow.load_model() method to load MLflow Models with the bigmlflow model flavor as a BigML SupervisedModel.

For more information, see the BigMLFlow documentation and BigML's blog.

Sktime

The sktime model flavor enables saving of sktime models in MLflow format via the save_model() and log_model() methods. These methods also add the python_function flavor to the MLflow Models that they produce, allowing the models to be interpreted as generic Python functions for inference via mlflow.pyfunc.load_model(). This loaded PyFunc model can only be scored with a DataFrame input. You can also use the load_model() method to load MLflow Models with the sktime model flavor in native sktime format.

Installing Sktime

Install sktime with the mlflow dependency:

pip install sktime[mlflow]

Usage example

Refer to the sktime mlflow documentation for details on the interface for utilizing sktime models loaded as a pyfunc type and an example notebook for extended code usage examples.

import pandas as pd

from sktime.datasets import load_airline
from sktime.forecasting.arima import AutoARIMA
from sktime.utils import mlflow_sktime

airline = load_airline()
model_path = "model"

auto_arima_model = AutoARIMA(sp=12, max_p=2, max_q=2, suppress_warnings=True).fit(
    airline, fh=[1, 2, 3]
)

mlflow_sktime.save_model(sktime_model=auto_arima_model, path=model_path)
loaded_model = mlflow_sktime.load_model(model_uri=model_path)
loaded_pyfunc = mlflow_sktime.pyfunc.load_model(model_uri=model_path)

print(loaded_model.predict())
print(loaded_pyfunc.predict(pd.DataFrame()))

MLflow Model Registry

The MLflow Model Registry component is a centralized model store, set of APIs, and UI, to collaboratively manage the full lifecycle of an MLflow Model.
It provides model lineage (which MLflow experiment and run produced the model), model versioning, stage transitions (for example from staging to production), and annotations.

Table of Contents

Concepts

Model Registry Workflows

UI Workflow

Registering a Model
Using the Model Registry

API Workflow

Adding an MLflow Model to the Model Registry
Fetching an MLflow Model from the Model Registry
Serving an MLflow Model from the Model Registry
Adding or Updating an MLflow Model Description
Renaming an MLflow Model
Transitioning an MLflow Model's Stage
Listing and Searching MLflow Models
Archiving an MLflow Model
Deleting MLflow Models
Registering a Saved Model
Registering an Unsupported Machine Learning Model

Concepts

The Model Registry introduces a few concepts that describe and facilitate the full lifecycle of an MLflow Model.

An MLflow Model is created from an experiment or run that is logged with one of the model flavor's mlflow.<model_flavor>.log_model() methods. Once logged, this model can then be registered with the Model Registry.

An MLflow Model can be registered with the Model Registry. A registered model has a unique name, contains versions, associated transitional stages, model lineage, and other metadata.

Each registered model can have one or many versions. When a new model is added to the Model Registry, it is added as version 1. Each new model registered to the same model name increments the version number.

Each distinct model version can be assigned one stage at any given time. MLflow provides predefined stages for common use-cases such as Staging, Production or Archived. You can transition a model version from one stage to another stage.

You can annotate the top-level model and each version individually using Markdown, including description and any relevant information useful for the team such as algorithm descriptions, dataset employed or methodology.

Model Registry Workflows

If running your own MLflow server, you must use a database-backed backend store in order to access the model registry via the UI or API. See here for more information.

Before you can add a model to the Model Registry, you must log it using the log_model methods of the corresponding model flavors. Once a model has been logged, you can add, modify, update, transition, or delete the model in the Model Registry through the UI or the API.

UI Workflow

Registering a Model

From the MLflow Runs detail page, select a logged MLflow Model in the Artifacts section.

Click the Register Model button.

In the Model Name field, if you are adding a new model, specify a unique name to identify the model. If you are registering a new version to an existing model, pick the existing model name from the dropdown.

Using the Model Registry

Navigate to the Registered Models page and view the model properties.

Go to the Artifacts section of the run detail page, click the model, and then click the model version at the top right to view the version you just created.

Each model has an overview page that shows the active versions.

Click a version to navigate to the version detail page.

On the version detail page you can see model version details and the current stage of the model version.
Before you can add a model to the Model Registry, you must log it using the log_model methods of the corresponding model flavors. Once a model has been logged, you can add, modify, update, transition, or delete the model in the Model Registry through the UI or the API.

UI Workflow

Registering a Model

From the MLflow Runs detail page, select a logged MLflow Model in the Artifacts section.

Click the Register Model button.

In the Model Name field, if you are adding a new model, specify a unique name to identify the model. If you are registering a new version of an existing model, pick the existing model name from the dropdown.

Using the Model Registry

Navigate to the Registered Models page and view the model properties.

Go to the Artifacts section of the run detail page, click the model, and then click the model version at the top right to view the version you just created.

Each model has an overview page that shows the active versions.

Click a version to navigate to the version detail page.

On the version detail page you can see model version details and the current stage of the model version. Click the Stage drop-down at the top right to transition the model version to one of the other valid stages.

API Workflow

An alternative way to interact with the Model Registry is to use the MLflow model flavor or the MLflow Client Tracking API interface. In particular, you can register a model during an MLflow experiment run or after all your experiment runs.

Adding an MLflow Model to the Model Registry

There are three programmatic ways to add a model to the registry. First, you can use the mlflow.<model_flavor>.log_model() method. For example, in your code:

from random import random, randint
from sklearn.ensemble import RandomForestRegressor

import mlflow
import mlflow.sklearn

with mlflow.start_run(run_name="YOUR_RUN_NAME") as run:
    params = {"n_estimators": 5, "random_state": 42}
    sk_learn_rfr = RandomForestRegressor(**params)

    # Log parameters and metrics using the MLflow APIs
    mlflow.log_params(params)
    mlflow.log_param("param_1", randint(0, 100))
    mlflow.log_metrics({"metric_1": random(), "metric_2": random()})

    # Log the sklearn model and register as version 1
    mlflow.sklearn.log_model(
        sk_model=sk_learn_rfr,
        artifact_path="sklearn-model",
        registered_model_name="sk-learn-random-forest-reg-model",
    )

In the above code snippet, if a registered model with the name doesn't exist, the method registers a new model and creates Version 1. If a registered model with the name exists, the method creates a new model version.

The second way is to use the mlflow.register_model() method, after all your experiment runs complete and when you have decided which model is most suitable to add to the registry. For this method, you will need the run_id as part of the runs:/ URI argument.

result = mlflow.register_model(
    "runs:/d16076a3ec534311817565e6527539c0/sklearn-model",
    "sk-learn-random-forest-reg",
)

If a registered model with the name doesn't exist, the method registers a new model, creates Version 1, and returns a ModelVersion MLflow object. If a registered model with the name exists, the method creates a new model version and returns the version object.
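Since mlflow.register_model() returns a ModelVersion entity, you can inspect the newly created version directly. A minimal sketch, continuing from the result object above:

# Inspect the ModelVersion entity returned by mlflow.register_model()
print(result.name)     # registered model name
print(result.version)  # version number assigned by the registry
print(result.status)   # e.g. "READY" once the version has been created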
And finally, you can use the create_registered_model() method to create a new registered model. If the model name exists, this method will throw an MlflowException because creating a new registered model requires a unique name.

from mlflow import MlflowClient

client = MlflowClient()
client.create_registered_model("sk-learn-random-forest-reg-model")

While the method above creates an empty registered model with no versions associated, the method below creates a new version of the model.

client = MlflowClient()
result = client.create_model_version(
    name="sk-learn-random-forest-reg-model",
    source="mlruns/0/d16076a3ec534311817565e6527539c0/artifacts/sklearn-model",
    run_id="d16076a3ec534311817565e6527539c0",
)

Fetching an MLflow Model from the Model Registry

After you have registered an MLflow model, you can fetch that model using mlflow.<model_flavor>.load_model(), or more generally, load_model().

Fetch a specific model version

To fetch a specific model version, just supply that version number as part of the model URI.

import mlflow.pyfunc

model_name = "sk-learn-random-forest-reg-model"
model_version = 1

model = mlflow.pyfunc.load_model(model_uri=f"models:/{model_name}/{model_version}")

model.predict(data)

Fetch the latest model version in a specific stage

To fetch a model version by stage, simply provide the model stage as part of the model URI, and it will fetch the most recent version of the model in that stage.

import mlflow.pyfunc

model_name = "sk-learn-random-forest-reg-model"
stage = "Staging"

model = mlflow.pyfunc.load_model(model_uri=f"models:/{model_name}/{stage}")

model.predict(data)

Serving an MLflow Model from the Model Registry

After you have registered an MLflow model, you can serve the model as a service on your host.

#!/usr/bin/env sh

# Set environment variable for the tracking URL where the Model Registry resides
export MLFLOW_TRACKING_URI=http://localhost:5000

# Serve the production model from the model registry
mlflow models serve -m "models:/sk-learn-random-forest-reg-model/Production"
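Once the model is being served, you can score it over HTTP. Below is a minimal sketch, assuming a recent MLflow version whose /invocations endpoint accepts the "dataframe_split" JSON payload, and assuming the model is served on port 1234 (for example by adding -p 1234 to the mlflow models serve command above, since port 5000 is already taken by the tracking server); the feature names and values are illustrative.

import requests

# Hypothetical two-feature input in MLflow's "dataframe_split" payload format.
payload = {
    "dataframe_split": {
        "columns": ["feature_1", "feature_2"],
        "data": [[1.0, 2.0]],
    }
}

# Score against the locally served registry model (the port is an assumption, see above).
response = requests.post("http://127.0.0.1:1234/invocations", json=payload)
print(response.json())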
Adding or Updating an MLflow Model Description

At any point in a model's lifecycle, you can update a model version's description using update_model_version().

client = MlflowClient()
client.update_model_version(
    name="sk-learn-random-forest-reg-model",
    version=1,
    description="This model version is a scikit-learn random forest containing 100 decision trees",
)

Renaming an MLflow Model

As well as adding or updating a description of a specific version of the model, you can rename an existing registered model using rename_registered_model().

client = MlflowClient()
client.rename_registered_model(
    name="sk-learn-random-forest-reg-model",
    new_name="sk-learn-random-forest-reg-model-100",
)

Transitioning an MLflow Model's Stage

Over the course of its lifecycle, a model evolves from development to staging to production. You can transition a registered model version to one of the stages Staging, Production, or Archived.

client = MlflowClient()
client.transition_model_version_stage(
    name="sk-learn-random-forest-reg-model", version=3, stage="Production"
)

The accepted values for <stage> are: Staging|Archived|Production|None.
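When promoting a new version to Production, you often want the previously deployed versions archived at the same time; transition_model_version_stage() accepts an archive_existing_versions flag for this. A short sketch (the version number here is illustrative):

client = MlflowClient()
# Promote version 4 and archive whichever versions are currently in Production.
client.transition_model_version_stage(
    name="sk-learn-random-forest-reg-model",
    version=4,
    stage="Production",
    archive_existing_versions=True,
)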
Listing and Searching MLflow Models

You can fetch a list of registered models in the registry with a simple method.

from pprint import pprint

client = MlflowClient()
for rm in client.search_registered_models():
    pprint(dict(rm), indent=4)

This outputs:

{   'creation_timestamp': 1582671933216,
    'description': None,
    'last_updated_timestamp': 1582671960712,
    'latest_versions': [...],
    'name': 'sk-learn-random-forest-reg-model'}

With hundreds of models, it can be cumbersome to peruse the results returned from this call. A more efficient approach is to search for a specific model name and list its version details using the search_model_versions() method, providing a filter string such as "name='sk-learn-random-forest-reg-model'".

client = MlflowClient()
for mv in client.search_model_versions("name='sk-learn-random-forest-reg-model'"):
    pprint(dict(mv), indent=4)

This outputs:

{   'creation_timestamp': 1582671933246,
    'current_stage': 'Production',
    'description': 'A random forest model containing 100 decision trees '
                   'trained in scikit-learn',
    'last_updated_timestamp': 1582671960712,
    'name': 'sk-learn-random-forest-reg-model',
    'run_id': 'ae2cc01346de45f79a44a320aab1797b',
    'source': './mlruns/0/ae2cc01346de45f79a44a320aab1797b/artifacts/sklearn-model',
    'status': 'READY',
    'status_message': None,
    'user_id': None,
    'version': 1}

{   'creation_timestamp': 1582671960628,
    'current_stage': 'None',
    'description': None,
    'last_updated_timestamp': 1582671960628,
    'name': 'sk-learn-random-forest-reg-model',
    'run_id': 'd994f18d09c64c148e62a785052e6723',
    'source': './mlruns/0/d994f18d09c64c148e62a785052e6723/artifacts/sklearn-model',
    'status': 'READY',
    'status_message': None,
    'user_id': None,
    'version': 2}

Archiving an MLflow Model

You can move model versions out of the Production stage into the Archived stage. At a later point, if that archived model is not needed, you can delete it.

# Archive model version 3 from Production into Archived
client = MlflowClient()
client.transition_model_version_stage(
    name="sk-learn-random-forest-reg-model", version=3, stage="Archived"
)

Deleting MLflow Models

Note

Deleting registered models or model versions is irrevocable, so use it judiciously.

You can either delete specific versions of a registered model or you can delete a registered model and all its versions.

# Delete versions 1, 2, and 3 of the model
client = MlflowClient()
versions = [1, 2, 3]
for version in versions:
    client.delete_model_version(
        name="sk-learn-random-forest-reg-model", version=version
    )

# Delete a registered model along with all its versions
client.delete_registered_model(name="sk-learn-random-forest-reg-model")

While the above workflow API demonstrates interactions with the Model Registry, two exceptional cases require attention. One is when you have an existing ML model that was trained without MLflow and serialized on disk in sklearn's pickled format, and you want to register it with the Model Registry. The second is when you use an ML framework without built-in MLflow model flavor support, for instance vaderSentiment, and want to register the model.

Registering a Saved Model

Not everyone will start their model training with MLflow, so you may have some models trained before adopting it. Instead of retraining those models, all you want to do is register your saved models with the Model Registry.

This code snippet creates a sklearn model, which we assume you had previously created and saved in native pickle format.

Note

The sklearn library and pickle versions with which the model was saved should be compatible with those supported by the current MLflow built-in sklearn model flavor.

import numpy as np
import pickle

from sklearn import datasets, linear_model
from sklearn.metrics import mean_squared_error, r2_score

# source: https://scikit-learn.org/stable/auto_examples/linear_model/plot_ols.html

# Load the diabetes dataset
diabetes_X, diabetes_y = datasets.load_diabetes(return_X_y=True)

# Use only one feature
diabetes_X = diabetes_X[:, np.newaxis, 2]

# Split the data into training/testing sets
diabetes_X_train = diabetes_X[:-20]
diabetes_X_test = diabetes_X[-20:]

# Split the targets into training/testing sets
diabetes_y_train = diabetes_y[:-20]
diabetes_y_test = diabetes_y[-20:]


def print_predictions(m, y_pred):
    # The coefficients
    print("Coefficients: \n", m.coef_)
    # The mean squared error
    print("Mean squared error: %.2f" % mean_squared_error(diabetes_y_test, y_pred))
    # The coefficient of determination: 1 is perfect prediction
    print("Coefficient of determination: %.2f" % r2_score(diabetes_y_test, y_pred))


# Create linear regression object
lr_model = linear_model.LinearRegression()

# Train the model using the training sets
lr_model.fit(diabetes_X_train, diabetes_y_train)

# Make predictions using the testing set
diabetes_y_pred = lr_model.predict(diabetes_X_test)
print_predictions(lr_model, diabetes_y_pred)

# save the model in the native sklearn format
filename = "lr_model.pkl"
pickle.dump(lr_model, open(filename, "wb"))

Coefficients:
 [938.23786125]
Mean squared error: 2548.07
Coefficient of determination: 0.47

Once saved in pickled format, we can load the sklearn model into memory using the pickle API and register the loaded model with the Model Registry.
import mlflow

# load the model into memory
loaded_model = pickle.load(open(filename, "rb"))

# log and register the model using MLflow scikit-learn API
mlflow.set_tracking_uri("sqlite:///mlruns.db")
reg_model_name = "SklearnLinearRegression"

print("--")
mlflow.sklearn.log_model(
    loaded_model,
    "sk_learn",
    serialization_format="cloudpickle",
    registered_model_name=reg_model_name,
)

--
Successfully registered model 'SklearnLinearRegression'.
2021/04/02 16:30:57 INFO mlflow.tracking._model_registry.client: Waiting up to 300 seconds for model version to finish creation.
Model name: SklearnLinearRegression, version 1
Created version '1' of model 'SklearnLinearRegression'.

Now, using MLflow fluent APIs, we reload the model from the Model Registry and score.

# load the model from the Model Registry and score
model_uri = f"models:/{reg_model_name}/1"
loaded_model = mlflow.sklearn.load_model(model_uri)
print("--")

# Make predictions using the testing set
diabetes_y_pred = loaded_model.predict(diabetes_X_test)
print_predictions(loaded_model, diabetes_y_pred)

--
Coefficients:
 [938.23786125]
Mean squared error: 2548.07
Coefficient of determination: 0.47

Registering an Unsupported Machine Learning Model

In some cases, you might use a machine learning framework without built-in MLflow Model flavor support. For instance, the vaderSentiment library is a standard Natural Language Processing (NLP) library used for sentiment analysis. Since it lacks a built-in MLflow Model flavor, you cannot log or register the model using MLflow Model fluent APIs.

To work around this problem, you can create an instance of an mlflow.pyfunc model flavor and embed your NLP model inside it, allowing you to save, log, or register the model. Once registered, you can load the model from the Model Registry and score using the predict function.

The code sections below demonstrate how to create a PythonModel subclass with a vaderSentiment model embedded in it; save, log, and register the model; and then load it from the Model Registry and score it.

Note

To use this example, you will need to pip install vaderSentiment.
from sys import version_info

import cloudpickle
import pandas as pd

import mlflow.pyfunc
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

# Good and readable paper from the authors of this package
# http://comp.social.gatech.edu/papers/icwsm14.vader.hutto.pdf

INPUT_TEXTS = [
    {"text": "This is a bad movie. You don't want to see it! :-)"},
    {"text": "Ricky Gervais is smart, witty, and creative!!!!!! :D"},
    {"text": "LOL, this guy fell off a chair while sleeping and snoring in a meeting"},
    {"text": "Men shoots himself while trying to steal a dog, OMG"},
    {"text": "Yay!! Another good phone interview. I nailed it!!"},
    {"text": "This is INSANE! I can't believe it. How could you do such a horrible thing?"},
]

PYTHON_VERSION = "{major}.{minor}.{micro}".format(
    major=version_info.major, minor=version_info.minor, micro=version_info.micro
)


def score_model(model):
    # Use inference to predict output from the customized PyFunc model
    for i, text in enumerate(INPUT_TEXTS):
        text = INPUT_TEXTS[i]["text"]
        m_input = pd.DataFrame([text])
        scores = model.predict(m_input)
        print(f"<{text}> -- {str(scores[0])}")


# Define a class and extend from PythonModel
class SocialMediaAnalyserModel(mlflow.pyfunc.PythonModel):
    def __init__(self):
        super().__init__()
        # embed your vader model instance
        self._analyser = SentimentIntensityAnalyzer()

    # preprocess the input with prediction from the vader sentiment model
    def _score(self, txt):
        prediction_scores = self._analyser.polarity_scores(txt)
        return prediction_scores

    def predict(self, context, model_input):
        # Apply the preprocess function from the vader model to score
        model_output = model_input.apply(lambda col: self._score(col))
        return model_output


model_path = "vader"
reg_model_name = "PyFuncVaderSentiments"
vader_model = SocialMediaAnalyserModel()

# Set the tracking URI to use local SQLAlchemy db file and start the run
# Log MLflow entities and save the model
mlflow.set_tracking_uri("sqlite:///mlruns.db")

# Save the conda environment for this model.
conda_env = {
    "channels": ["defaults", "conda-forge"],
    "dependencies": ["python={}".format(PYTHON_VERSION), "pip"],
    "pip": [
        "mlflow",
        "cloudpickle=={}".format(cloudpickle.__version__),
        "vaderSentiment==3.3.2",
    ],
    "name": "mlflow-env",
}

# Save the model
with mlflow.start_run(run_name="Vader Sentiment Analysis") as run:
    model_path = f"{model_path}-{run.info.run_uuid}"
    mlflow.log_param("algorithm", "VADER")
    mlflow.log_param("total_sentiments", len(INPUT_TEXTS))
    mlflow.pyfunc.save_model(path=model_path, python_model=vader_model, conda_env=conda_env)

# Use the saved model path to log and register into the model registry
mlflow.pyfunc.log_model(
    artifact_path=model_path,
    python_model=vader_model,
    registered_model_name=reg_model_name,
    conda_env=conda_env,
)

# Load the model from the model registry and score
model_uri = f"models:/{reg_model_name}/1"
loaded_model = mlflow.pyfunc.load_model(model_uri)
score_model(loaded_model)

Successfully registered model 'PyFuncVaderSentiments'.
2021/04/05 10:34:15 INFO mlflow.tracking._model_registry.client: Waiting up to 300 seconds for model version to finish creation.
Created version '1' of model 'PyFuncVaderSentiments'.
<This is a bad movie. You don't want to see it! :-)> -- {'neg': 0.307, 'neu': 0.552, 'pos': 0.141, 'compound': -0.4047}
<Ricky Gervais is smart, witty, and creative!!!!!! :D> -- {'neg': 0.0, 'neu': 0.316, 'pos': 0.684, 'compound': 0.8957}
<LOL, this guy fell off a chair while sleeping and snoring in a meeting> -- {'neg': 0.0, 'neu': 0.786, 'pos': 0.214, 'compound': 0.5473}
<Men shoots himself while trying to steal a dog, OMG> -- {'neg': 0.262, 'neu': 0.738, 'pos': 0.0, 'compound': -0.4939}
<Yay!! Another good phone interview. I nailed it!!> -- {'neg': 0.0, 'neu': 0.446, 'pos': 0.554, 'compound': 0.816}
<This is INSANE! I can't believe it. How could you do such a horrible thing?> -- {'neg': 0.357, 'neu': 0.643, 'pos': 0.0, 'compound': -0.8034}