{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "0",
   "metadata": {},
   "source": [
    "## Installation\n",
    "\n",
    "LightlyTrain can be installed directly via `pip`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1",
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install lightly-train"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2",
   "metadata": {},
   "source": [
     "> **Important**: LightlyTrain is officially supported on:\n",
     "> - Linux: CPU or CUDA\n",
     "> - macOS: CPU only\n",
     "> - Windows (experimental): CPU or CUDA\n",
     ">\n",
     "> MPS support for macOS is planned.\n",
     ">\n",
     "> See the [installation instructions](https://docs.lightly.ai/train/stable/installation.html) for more details."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3",
   "metadata": {},
   "source": [
    "## Prediction using LightlyTrain's model weights\n",
    "\n",
    "### Download an example image\n",
    "\n",
    "Download an example image for inference with the following command:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4",
   "metadata": {},
   "outputs": [],
   "source": [
    "!wget -O image.jpg http://images.cocodataset.org/val2017/000000039769.jpg"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5",
   "metadata": {},
   "source": [
    "### Load the model weights\n",
    "\n",
    "Then load the model weights with LightlyTrain's `load_model` function:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6",
   "metadata": {},
   "outputs": [],
   "source": [
    "import lightly_train\n",
    "\n",
    "model = lightly_train.load_model(\"dinov3/vits16-eomt-inst-coco\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7",
   "metadata": {},
   "source": [
    "### Predict the instances\n",
    "\n",
    "Run `model.predict` on the image. The method accepts file paths, URLs, PIL Images, or tensors as input."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8",
   "metadata": {},
   "outputs": [],
   "source": [
    "prediction = model.predict(\"image.jpg\", threshold=0.8)"
   ]
  },
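  {
   "cell_type": "markdown",
   "id": "20",
   "metadata": {},
   "source": [
    "The `threshold` argument presumably keeps only instances whose confidence score reaches the given value. That filtering step can be sketched with plain tensors; `dummy_scores` and `dummy_masks` below are made-up stand-ins for real model output:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "21",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "# Dummy per-instance scores and boolean masks standing in for raw model output.\n",
    "dummy_scores = torch.tensor([0.95, 0.60, 0.85])\n",
    "dummy_masks = torch.zeros(3, 4, 4, dtype=torch.bool)\n",
    "\n",
    "# Keep only instances scoring at least the threshold, as threshold=0.8 would.\n",
    "keep = dummy_scores >= 0.8\n",
    "print(keep.tolist())  # [True, False, True]\n",
    "print(dummy_masks[keep].shape)  # torch.Size([2, 4, 4])"
   ]
  },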
  {
   "cell_type": "markdown",
   "id": "9",
   "metadata": {},
   "source": [
    "### Visualize the results\n",
    "\n",
    "Visualize the image and predicted instance masks to inspect the segmentation output."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "10",
   "metadata": {},
   "outputs": [],
   "source": [
    "import matplotlib.pyplot as plt\n",
    "from torchvision.io import read_image\n",
    "from torchvision.utils import draw_segmentation_masks\n",
    "\n",
     "image = read_image(\"image.jpg\")\n",
     "masks = prediction[\"masks\"]  # Boolean instance masks, shape (N, height, width).\n",
     "labels = prediction[\"labels\"]  # Class index of each instance.\n",
     "scores = prediction[\"scores\"]  # Confidence score of each instance.\n",
     "image_with_masks = draw_segmentation_masks(\n",
     "    image,\n",
     "    masks=masks,\n",
     "    alpha=1.0,  # Fully opaque mask overlay.\n",
     ")\n",
    "plt.imshow(image_with_masks.permute(1, 2, 0))\n",
    "plt.axis(\"off\")\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "11",
   "metadata": {},
   "source": [
    "The predicted masks are returned as tensors with shape `(N, height, width)` and coordinates aligned with the input image."
   ]
  },
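  {
   "cell_type": "markdown",
   "id": "22",
   "metadata": {},
   "source": [
    "Given this `(N, height, width)` layout, per-instance statistics follow from reductions over the spatial dimensions. A minimal sketch, using a made-up boolean tensor `demo_masks` in place of the real prediction:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "23",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "# Two dummy instance masks on a 4x4 image.\n",
    "demo_masks = torch.zeros(2, 4, 4, dtype=torch.bool)\n",
    "demo_masks[0, :2, :2] = True  # First instance covers a 2x2 block.\n",
    "demo_masks[1, 2:, :] = True  # Second instance covers the bottom two rows.\n",
    "\n",
    "# Summing over height and width gives the pixel area of each instance.\n",
    "areas = demo_masks.sum(dim=(1, 2))\n",
    "print(areas.tolist())  # [4, 8]"
   ]
  },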
  {
   "cell_type": "markdown",
   "id": "12",
   "metadata": {},
   "source": [
    "## Train an instance segmentation model\n",
    "\n",
    "Training your own instance segmentation model is straightforward with LightlyTrain."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "13",
   "metadata": {},
   "source": [
    "### Download dataset\n",
    "\n",
    "First download a dataset in YOLO segmentation format."
   ]
  },
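  {
   "cell_type": "markdown",
   "id": "24",
   "metadata": {},
   "source": [
    "In YOLO segmentation format, each image has a matching `.txt` label file in which every line describes one instance: a class index followed by the normalized polygon coordinates of its mask outline. An illustrative, made-up label line:\n",
    "\n",
    "```\n",
    "0 0.481 0.335 0.492 0.338 0.504 0.351 0.498 0.367\n",
    "```"
   ]
  },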
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "14",
   "metadata": {},
   "outputs": [],
   "source": [
    "!wget -O coco128-seg.zip https://github.com/ultralytics/assets/releases/download/v0.0.0/coco128-seg.zip && unzip coco128-seg.zip"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "15",
   "metadata": {},
   "source": [
    "Then start the training with the `train_instance_segmentation` function. You can specify various training parameters such as the model architecture, number of training steps, batch size, learning rate, and more."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "16",
   "metadata": {},
   "outputs": [],
   "source": [
    "lightly_train.train_instance_segmentation(\n",
    "    out=\"out/my_experiment\",\n",
    "    model=\"dinov3/vits16-eomt-inst-coco\",\n",
    "    steps=100,  # Small number of steps for demonstration, default is 90_000.\n",
    "    batch_size=4,  # Small batch size for demonstration, default is 16.\n",
    "    data={\n",
    "        \"path\": \"coco128-seg\",\n",
    "        \"train\": \"images/train2017\",\n",
    "        \"val\": \"images/val2017\",\n",
    "        \"names\": {\n",
    "            0: \"person\",\n",
    "            1: \"bicycle\",\n",
    "            2: \"car\",\n",
    "            3: \"motorcycle\",\n",
    "            4: \"airplane\",\n",
    "            5: \"bus\",\n",
    "            6: \"train\",\n",
    "            7: \"truck\",\n",
    "            8: \"boat\",\n",
    "            9: \"traffic light\",\n",
    "            10: \"fire hydrant\",\n",
    "            11: \"stop sign\",\n",
    "            12: \"parking meter\",\n",
    "            13: \"bench\",\n",
    "            14: \"bird\",\n",
    "            15: \"cat\",\n",
    "            16: \"dog\",\n",
    "            17: \"horse\",\n",
    "            18: \"sheep\",\n",
    "            19: \"cow\",\n",
    "            20: \"elephant\",\n",
    "            21: \"bear\",\n",
    "            22: \"zebra\",\n",
    "            23: \"giraffe\",\n",
    "            24: \"backpack\",\n",
    "            25: \"umbrella\",\n",
    "            26: \"handbag\",\n",
    "            27: \"tie\",\n",
    "            28: \"suitcase\",\n",
    "            29: \"frisbee\",\n",
    "            30: \"skis\",\n",
    "            31: \"snowboard\",\n",
    "            32: \"sports ball\",\n",
    "            33: \"kite\",\n",
    "            34: \"baseball bat\",\n",
    "            35: \"baseball glove\",\n",
    "            36: \"skateboard\",\n",
    "            37: \"surfboard\",\n",
    "            38: \"tennis racket\",\n",
    "            39: \"bottle\",\n",
    "            40: \"wine glass\",\n",
    "            41: \"cup\",\n",
    "            42: \"fork\",\n",
    "            43: \"knife\",\n",
    "            44: \"spoon\",\n",
    "            45: \"bowl\",\n",
    "            46: \"banana\",\n",
    "            47: \"apple\",\n",
    "            48: \"sandwich\",\n",
    "            49: \"orange\",\n",
    "            50: \"broccoli\",\n",
    "            51: \"carrot\",\n",
    "            52: \"hot dog\",\n",
    "            53: \"pizza\",\n",
    "            54: \"donut\",\n",
    "            55: \"cake\",\n",
    "            56: \"chair\",\n",
    "            57: \"couch\",\n",
    "            58: \"potted plant\",\n",
    "            59: \"bed\",\n",
    "            60: \"dining table\",\n",
    "            61: \"toilet\",\n",
    "            62: \"tv\",\n",
    "            63: \"laptop\",\n",
    "            64: \"mouse\",\n",
    "            65: \"remote\",\n",
    "            66: \"keyboard\",\n",
    "            67: \"cell phone\",\n",
    "            68: \"microwave\",\n",
    "            69: \"oven\",\n",
    "            70: \"toaster\",\n",
    "            71: \"sink\",\n",
    "            72: \"refrigerator\",\n",
    "            73: \"book\",\n",
    "            74: \"clock\",\n",
    "            75: \"vase\",\n",
    "            76: \"scissors\",\n",
    "            77: \"teddy bear\",\n",
    "            78: \"hair drier\",\n",
    "            79: \"toothbrush\",\n",
    "        },\n",
    "    },\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "17",
   "metadata": {},
   "source": [
     "Once training completes, the final model checkpoint is saved to `out/my_experiment/exported_models/exported_last.pt`. If you train with a validation dataset, the checkpoint with the best validation mask mAP is additionally saved to `out/my_experiment/exported_models/exported_best.pt`.\n",
     "\n",
     "Load the exported checkpoint with `load_model` and run prediction on the example image just as before:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "18",
   "metadata": {},
   "outputs": [],
   "source": [
    "model = lightly_train.load_model(\"out/my_experiment/exported_models/exported_last.pt\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "19",
   "metadata": {},
   "outputs": [],
   "source": [
    "prediction = model.predict(\"image.jpg\")\n",
    "\n",
    "image = read_image(\"image.jpg\")\n",
    "masks = prediction[\"masks\"]\n",
    "image_with_masks = draw_segmentation_masks(\n",
    "    image,\n",
    "    masks=masks,\n",
    "    alpha=1.0,\n",
    ")\n",
    "plt.imshow(image_with_masks.permute(1, 2, 0))\n",
    "plt.axis(\"off\")\n",
    "plt.show()"
   ]
  }
 ],
 "metadata": {},
 "nbformat": 4,
 "nbformat_minor": 5
}
