{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "fe75d580-e99a-4dce-947c-39de1a913f71",
   "metadata": {},
   "source": [
    "<p align=\"left\"><img src=\"_static/images/MLRun-logo.png\" alt=\"MLRun logo\" width=\"150\"/></p>\n",
    "\n",
    "\n",
    "# Welcome to Your MLRun CE Environment\n",
    "\n",
    "MLRun is an open-source AI orchestration platform for quickly building and managing continuous GenAI/ML projects across their lifecycles. It integrates into your development and CI/CD environments and automates the delivery of your production use cases.\n",
    "MLRun significantly reduces engineering efforts, time to production, and computation resources, by breaking the silos between data, ML, software, and DevOps/MLOps teams, enabling collaboration and fast continuous improvements.\n",
    "\n",
    "All the executions in MLRun are based on serverless functions. Functions are essentially Python code that can be executed locally or on a Kubernetes cluster. MLRun functions are used to run jobs, deploy models, create pipelines, and more.\n",
    "\n",
    "The different function runtimes automatically transform the code and spec into fully managed, elastic services over Kubernetes, which saves significant operational overhead, addresses scalability, and reduces infrastructure costs. The function parameters and capabilities are explained in more detail in [Create and use functions](https://docs.mlrun.org/en/stable/runtimes/create-and-use-functions.html#create-and-use-functions).\n",
    "\n",
    "MLRun enables users to manage and log artifacts that are data objects produced and/or consumed by functions, jobs, or pipelines.\n",
    "Each artifact includes metadata attributes such as tags, data previews, and model metrics, providing essential insights for managing machine learning (ML) and large language model (LLM) projects. These attributes help users track, review, and organize different artifacts efficiently.\n",
    "\n",
    "**For further details, check the [MLRun Documentation](https://docs.mlrun.org/en/stable/index.html), including:**\n",
    " * [Projects](https://docs.mlrun.org/en/stable/projects/project.html)\n",
    " * [Functions](https://docs.mlrun.org/en/stable/runtimes/functions.html)\n",
    " * [Artifacts](https://docs.mlrun.org/en/stable/store/artifacts.html)\n",
    " * [Quick start tutorial for machine learning](https://docs.mlrun.org/en/stable/tutorials/01-mlrun-basics.html)\n",
    " * [MLRun cheat sheet](https://docs.mlrun.org/en/stable/cheat-sheet.html#)"
   ]
  },
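  {
   "cell_type": "markdown",
   "id": "5a1b2c3d-4e5f-4a6b-8c7d-9e0f1a2b3c4d",
   "metadata": {},
   "source": [
    "The concepts above - projects, serverless functions, runs, and logged results - can be sketched in a few lines. This is a minimal, hedged example (the project and function names are illustrative, and it assumes the `mlrun` package is installed and an MLRun service is reachable):\n",
    "\n",
    "```python\n",
    "import mlrun\n",
    "\n",
    "# A handler is plain Python; MLRun injects a context for logging results and artifacts\n",
    "def train(context, n: int = 3):\n",
    "    context.log_result(\"n_squared\", n * n)\n",
    "\n",
    "# Create (or load) a project that groups functions, runs, and artifacts\n",
    "project = mlrun.get_or_create_project(\"quick-start\", context=\"./\")\n",
    "\n",
    "# Run the handler as a job, locally first; local=False targets the Kubernetes cluster\n",
    "run = mlrun.new_function(name=\"trainer\", kind=\"job\").run(\n",
    "    handler=train, params={\"n\": 4}, local=True\n",
    ")\n",
    "\n",
    "print(run.outputs)\n",
    "```\n",
    "\n",
    "The [Quick Start Tutorial](./tutorials/01-mlrun-basics.ipynb) walks through a complete, working version of this flow."
   ]
  },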
  {
   "cell_type": "markdown",
   "id": "01bf99e0",
   "metadata": {},
   "source": [
    "## Tutorials and Examples\n",
    "\n",
    "The following tutorials provide a hands-on introduction to using MLRun to implement a data science workflow and automate machine-learning operations (MLOps).\n",
    "\n",
    "Start with the [**Quick Start Tutorial**](./tutorials/01-mlrun-basics.ipynb) to understand the basics before going through the other notebooks.\n",
    "\n",
    "- [**Quick Start Tutorial**](./tutorials/01-mlrun-basics.ipynb)\n",
    "- [**Train, Compare, and Register Models**](./tutorials/02-model-training.ipynb)\n",
    "- [**Serving ML/DL Models**](./tutorials/03-model-serving.ipynb)\n",
    "- [**Projects and Automated ML Pipeline**](./tutorials/04-pipeline.ipynb)\n",
    "\n",
    "Different end-to-end demos are found in the [**demos folder**](/lab/tree/demos).\n",
    "\n",
    "> **Note** - Demos that require additional components will not work with the default installation. Please make sure to enable the relevant components before running the examples."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "248231ff",
   "metadata": {},
   "source": [
    "## MLRun CE components"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ddeb6a5c",
   "metadata": {},
   "source": [
    "### Default components\n",
    "\n",
    "The MLRun CE (Community Edition) chart, by default, includes the following components:\n",
    "* [MLRun](https://github.com/mlrun/mlrun)\n",
    "  - MLRun API\n",
    "  - MLRun UI\n",
    "  - MLRun DB (MySQL)\n",
    "* [Nuclio](https://github.com/nuclio/nuclio)\n",
    "* [Jupyter](https://github.com/jupyter/notebook)\n",
    "* [MPI Operator](https://github.com/kubeflow/mpi-operator)\n",
    "* [MinIO](https://github.com/minio/minio/tree/master/helm/minio)\n",
    "* [Spark Operator](https://github.com/GoogleCloudPlatform/spark-on-k8s-operator)\n",
    "* [Pipelines](https://github.com/kubeflow/pipelines)\n",
    "* [Prometheus stack](https://github.com/prometheus-community/helm-charts)\n",
    "  - Prometheus\n",
    "  - Grafana\n",
    "* MLRun model monitoring requires installing the following two applications. See more details in [Configuring TDengine and Kafka for model monitoring](https://docs.mlrun.org/en/stable/install/kubernetes.html#configuring-tdengine-and-kafka-for-model-monitoring).\n",
    "    * [TDengine](https://taosdata.github.io/TDengine-Operator/en/2.2-tdengine-with-helm.html) (3.3.2.0)\n",
    "    * [Kafka](https://github.com/bitnami/charts/tree/main/bitnami/kafka) (3.2.3)\n",
    "    > In MLRun CE 0.8.0, TDengine and Kafka are installed by default. If you are using an older chart version, install those components manually along with the MLRun chart."
   ]
  },
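  {
   "cell_type": "markdown",
   "id": "c7d8e9f0-2a3b-4c5d-8e6f-1a2b3c4d5e6f",
   "metadata": {},
   "source": [
    "If you are installing the CE chart yourself rather than using a pre-provisioned environment, it is typically deployed with Helm. A hedged sketch (the namespace and registry URL below are illustrative; see the [Kubernetes installation guide](https://docs.mlrun.org/en/stable/install/kubernetes.html) for the authoritative commands):\n",
    "\n",
    "```sh\n",
    "# Add the MLRun CE chart repository and refresh the local index\n",
    "helm repo add mlrun-ce https://mlrun.github.io/ce\n",
    "helm repo update\n",
    "\n",
    "# Install the chart into a dedicated namespace;\n",
    "# global.registry.url is the container registry your cluster can push function images to\n",
    "helm install mlrun-ce mlrun-ce/mlrun-ce \\\\\n",
    "    --namespace mlrun --create-namespace \\\\\n",
    "    --set global.registry.url=<registry-url> \\\\\n",
    "    --wait --timeout 960s\n",
    "```"
   ]
  },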
  {
   "cell_type": "markdown",
   "id": "7aeb1ea8",
   "metadata": {},
   "source": [
    "### Additional components\n",
    "\n",
    "In addition to the default components, you can also enable the following components, if required:\n",
    "\n",
    "* The real-time feature store requires a [Redis](https://redis.io/docs/latest/operate/oss_and_stack/install/) installation as its real-time key-value (KV) database.\n",
    "    * See also - [Configuring the online feature store](https://docs.mlrun.org/en/stable/install/kubernetes.html#configuring-the-online-feature-store)\n",
    "\n",
    "* Ingress for Nuclio functions (disabled by default) can be configured in the [`values.yaml`](https://github.com/nuclio/nuclio/blob/development/hack/k8s/helm/nuclio/values.yaml) file.\n",
    "\n",
    "**Note** - The application runtime is not supported in MLRun CE."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "37671239",
   "metadata": {},
   "source": [
    "## Connect Your IDE to Your Environment\n",
    "\n",
    "This Jupyter Notebook server works out of the box in the **local** deployment mode, and you can also connect your MLRun CE environment to your IDE (VSCode or PyCharm).<br> \n",
    "To work from your IDE, edit and save the [**~/mlrun.env**](./mlrun.env) file in your local file system, as shown here:\n",
    "\n",
    "```sh\n",
    "# Set remote MLRun service address, username, and access-key\n",
    "MLRUN_DBPATH=https://<service-address>  # MLRun API service URL\n",
    "```\n",
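    "\n",
    "Once the env file is saved, you can initialize a remote session from your IDE. A minimal, hedged sketch (it assumes the `mlrun` package is installed in your local Python environment and that the file path matches where you saved the env file):\n",
    "\n",
    "```python\n",
    "import mlrun\n",
    "\n",
    "# Load MLRUN_DBPATH and credentials from the env file saved above\n",
    "mlrun.set_env_from_file(\"~/mlrun.env\")\n",
    "\n",
    "# Subsequent calls, e.g. mlrun.get_or_create_project(...), now target the remote service\n",
    "```\n",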
    "\n",
    "See related information in the [configure remote environment documents](https://docs.mlrun.org/en/stable/install/remote.html#configure-remote-environment)."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
