{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "view-in-github"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/github/Arindam200/awesome-ai-apps/blob/main/fine_tuning/Fine_tuning.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "dj-AOm2-I5cQ"
      },
      "source": [
        "# Fine-tune Open-Source LLMs on <a href=\"https://tokenfactory.nebius.com/\"><picture><source media=\"(prefers-color-scheme: dark)\" srcset=\"https://mintcdn.com/nebius-723e8b65/jsgY7B_gdaTjMC6y/logo/Main-logo-TF-Dark.svg?fit=max&auto=format&n=jsgY7B_gdaTjMC6y&q=85&s=92ebc07d32d93f3918de2f7ec4a0754a\"><source media=\"(prefers-color-scheme: light)\" srcset=\"https://mintcdn.com/nebius-723e8b65/jsgY7B_gdaTjMC6y/logo/Main-logo-TF-Light.svg?fit=max&auto=format&n=jsgY7B_gdaTjMC6y&q=85&s=48ceb3cd949e5160c884634bbaf1af59\"><img alt=\"Nebius Token Factory\" src=\"https://mintcdn.com/nebius-723e8b65/jsgY7B_gdaTjMC6y/logo/Main-logo-TF-Light.svg?fit=max&auto=format&n=jsgY7B_gdaTjMC6y&q=85&s=48ceb3cd949e5160c884634bbaf1af59\" width=\"200\"></picture></a>\n",
        "\n",
        "Learn how to fine-tune & deploy open models like Llama 3.1 directly from your dataset using [Nebius Token Factory](https://dub.sh/nebius), an all-in-one platform for working with large language models (LLMs).\n",
        "\n",
        "Before you begin, get your API key from the [Dashboard](https://tokenfactory.nebius.com/?modals=create-api-key).\n",
        "\n",
        "Press Runtime → Run all to start fine-tuning on a free Google Colab instance."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "BgvqoOcuKUjx"
      },
      "source": [
        "## Step 1: Installation & Setup"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "M9cHHBQWN62P"
      },
      "outputs": [],
      "source": [
        "!pip install -qq openai datasets"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "7QoDSA-YOMZ3"
      },
      "source": [
        "Before running, store your key in Colab Secrets as `NEBIUS_API_KEY` or export it as an environment variable."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "T8SuI35BPjjZ"
      },
      "outputs": [],
      "source": [
        "import os, json, time\n",
        "from openai import OpenAI\n",
        "from datasets import load_dataset\n",
        "import requests\n",
        "\n",
        "try:\n",
        "    from google.colab import userdata\n",
        "    nebius_api_key = userdata.get('NEBIUS_API_KEY')\n",
        "except Exception:\n",
        "    nebius_api_key = os.getenv(\"NEBIUS_API_KEY\")\n",
        "\n",
        "assert nebius_api_key, \"⚠️ Please set your NEBIUS_API_KEY via Colab or environment.\"\n",
        "\n",
        "client = OpenAI(\n",
        "    base_url=\"https://api.tokenfactory.nebius.com/v1/\",\n",
        "    api_key=nebius_api_key,\n",
        ")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "FG4ObyzxPX75"
      },
      "source": [
        "## Step 2: Prepare your dataset\n",
        "\n",
        "Fine-tuning works best with conversational data (the OpenAI-style format with a `messages` list).\n",
        "We’ll use a sample [dataset](https://huggingface.co/datasets/olathepavilion/Conversational-datasets-json) from Hugging Face to keep things simple.\n",
        "\n",
        "You can learn more about preparing datasets [here](https://docs.tokenfactory.nebius.com/fine-tuning/datasets)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 254,
          "referenced_widgets": [
            "f9567fd4459b49e88590a6dfb653fcd8",
            "7c2326238d2647ec82ec38872ae6a605",
            "42cee0ec2af8489bae794e851379defb",
            "4fc7ae3b5f394d02abc19f1236d968f9",
            "7fde4813fd8348028f3f1e116eb23add",
            "43cae3cb7768402e8c9ddad01a181076",
            "ffb71826bad84b2b9efbddfcdf0a1ef1",
            "150a88e567514ffdbb4be84412a983f0",
            "c219c17fe72a4802b37e1f0a66ca5afa",
            "5cc65138b52d4412b546903bc9926a05",
            "f0e0efdd99f6498fb18e3fb17173816b",
            "abd5558192f542aba3a9b811d0a47870",
            "00e50b83f2644b8b87993f3d014c1859",
            "12d503b924cb4412a94dabfdb54e511b",
            "6d8a50513c9c471b89c0ae674adb9d59",
            "c7ec1a2d2c6e4cb092e43d2b51176f09",
            "18b5d9b6115a489896348d31b0a349a5",
            "caae42eb6aa040ca986c767a5f62ada1",
            "30ee645fef944c5ea8490ec9c6653914",
            "90e23c249e1b4806aaf6e2821f71cc8f",
            "df90c788f3124ca2a43b963233e1141d",
            "90ac4605d9c9490484ec8c056f0b6d4e",
            "b912b7956f974bd8bfd1bd8c84c18b02",
            "613a90a1b2064e74a46ea62d8ee0a159",
            "9c8760f1f08f4fd49ea320b8fe31b47c",
            "0f268df1029c4037b61041a7592d3502",
            "70e363f6c1f34f0abb677c40d17a27c6",
            "7e1c6e2cbe8f4a29bbeeea39aaaaa9f0",
            "dd04ce2adf92432f98365af12161da64",
            "95ebd9b59d46422b9c358c7ea1d585d7",
            "2e126c2540b948648b9f75f631b616fc",
            "0dd080c3ede244cd86661f6b936f7f5c",
            "7b19e6e699a94f478125bd81ca830131"
          ]
        },
        "id": "4gxUoUCcPrBt",
        "outputId": "bab8a508-cc91-43f6-8cc7-62b9ca9c4a67"
      },
      "outputs": [
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "/usr/local/lib/python3.12/dist-packages/huggingface_hub/utils/_auth.py:94: UserWarning: \n",
            "The secret `HF_TOKEN` does not exist in your Colab secrets.\n",
            "To authenticate with the Hugging Face Hub, create a token in your settings tab (https://huggingface.co/settings/tokens), set it as secret in your Google Colab and restart your session.\n",
            "You will be able to reuse this secret in all of your notebooks.\n",
            "Please note that authentication is recommended but still optional to access public models or datasets.\n",
            "  warnings.warn(\n"
          ]
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "f9567fd4459b49e88590a6dfb653fcd8",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "README.md:   0%|          | 0.00/27.0 [00:00<?, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "abd5558192f542aba3a9b811d0a47870",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Validation.jsonl: 0.00B [00:00, ?B/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "application/vnd.jupyter.widget-view+json": {
              "model_id": "b912b7956f974bd8bfd1bd8c84c18b02",
              "version_major": 2,
              "version_minor": 0
            },
            "text/plain": [
              "Generating train split:   0%|          | 0/1000 [00:00<?, ? examples/s]"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Saved 1000 samples to training_data.jsonl\n"
          ]
        }
      ],
      "source": [
        "dataset = load_dataset(\"olathepavilion/Conversational-datasets-json\", split=\"train\")\n",
        "\n",
        "formatted_data = [{\"messages\": entry[\"messages\"]} for entry in dataset]\n",
        "\n",
        "data_path = \"training_data.jsonl\"\n",
        "with open(data_path, \"w\", encoding=\"utf-8\") as f:\n",
        "    for ex in formatted_data:\n",
        "        f.write(json.dumps(ex, ensure_ascii=False) + \"\\n\")\n",
        "\n",
        "print(f\"Saved {len(formatted_data)} samples to {data_path}\")"
      ]
    },
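    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Optional sanity check: before uploading, it can help to confirm that every line of the JSONL file parses and carries an OpenAI-style `messages` list. The helper below is a minimal sketch; it only checks the `role`/`content` keys used in this guide."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import json, os\n",
        "\n",
        "def validate_messages_jsonl(path):\n",
        "    \"\"\"Return the record count; raise AssertionError on a malformed record.\"\"\"\n",
        "    count = 0\n",
        "    with open(path, encoding=\"utf-8\") as f:\n",
        "        for i, line in enumerate(f, start=1):\n",
        "            msgs = json.loads(line).get(\"messages\")\n",
        "            assert isinstance(msgs, list) and msgs, f\"line {i}: missing 'messages'\"\n",
        "            for m in msgs:\n",
        "                assert \"role\" in m and \"content\" in m, f\"line {i}: malformed message\"\n",
        "            count += 1\n",
        "    return count\n",
        "\n",
        "if os.path.exists(\"training_data.jsonl\"):\n",
        "    print(\"Valid records:\", validate_messages_jsonl(\"training_data.jsonl\"))"
      ]
    },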
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ek_wzH1nP0_m"
      },
      "source": [
        "## Step 3: Upload your dataset to Token Factory\n",
        "\n",
        "Next, we’ll upload the dataset so Nebius can access it for training."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "MCsPC0DwP0aV",
        "outputId": "2739f711-8a4f-4007-ab83-9c6022001312"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Uploaded file ID: file-019a635b-2dca-7a23-92b3-74646ebf092a\n"
          ]
        }
      ],
      "source": [
        "with open(data_path, \"rb\") as f:\n",
        "    upload = client.files.create(file=f, purpose=\"fine-tune\")\n",
        "\n",
        "training_file_id = upload.id\n",
        "print(\"Uploaded file ID:\", training_file_id)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "aqX39GW5P-fs"
      },
      "source": [
        "Keep that `training_file_id` handy; it’s used in the fine-tuning request."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "o_VnyrSMQImC"
      },
      "source": [
        "## Step 4: Create and start your fine-tuning job\n",
        "\n",
        "We’ll fine-tune Llama 3.1 8B Instruct using LoRA, which is efficient and much faster than full fine-tuning."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "HZfbqJZePqGn",
        "outputId": "37dd0915-a043-4584-fa09-8bfa22d12c7b"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Job created: ftjob-821d852c7f8f4a28909ca5f56c5c84eb | status: running\n"
          ]
        }
      ],
      "source": [
        "job = client.fine_tuning.jobs.create(\n",
        "    model=\"meta-llama/Llama-3.1-8B-Instruct\",\n",
        "    suffix=\"demo-run\",\n",
        "    training_file=training_file_id,\n",
        "    hyperparameters={\n",
        "        \"batch_size\": 16,\n",
        "        \"learning_rate_multiplier\": 2e-4,\n",
        "        \"n_epochs\": 1,\n",
        "        \"warmup_ratio\": 0.03,\n",
        "        \"weight_decay\": 0,\n",
        "        \"lora\": True,\n",
        "        \"packing\": True,\n",
        "    },\n",
        ")\n",
        "\n",
        "print(\"Job created:\", job.id, \"| status:\", job.status)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "XbrcRlWWRHkF"
      },
      "source": [
        "## Step 5: Monitor job progress\n",
        "\n",
        "When you create a fine-tuning job, its initial status is usually `running`.\n",
        "The script below polls the job every 15 seconds until it reaches a terminal state.\n",
        "\n",
        "If the job fails, Nebius returns an error message explaining what went wrong and how to fix it. If you get a 500 error, simply resubmit the job.\n",
        "\n",
        "Training is complete when the event log shows either `Dataset 'training' processed successfully` or `Training completed successfully`."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "Wqg-jFJLQd4h",
        "outputId": "4e10753e-8481-4d32-8f5f-5b4eb5ed09e1"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Current Status: running\n",
            "Current Status: running\n",
            "Current Status: running\n",
            "Current Status: running\n",
            "Current Status: running\n",
            "Current Status: running\n",
            "Current Status: running\n",
            "Current Status: running\n",
            "Current Status: running\n",
            "Current Status: running\n",
            "Current Status: running\n",
            "Current Status: running\n",
            "Current Status: running\n",
            "Current Status: succeeded\n",
            "Final status: succeeded\n"
          ]
        }
      ],
      "source": [
        "active = {\"validating_files\", \"queued\", \"running\"}\n",
        "while job.status in active:\n",
        "    time.sleep(15)\n",
        "    job = client.fine_tuning.jobs.retrieve(job.id)\n",
        "    print(\"Current Status:\", job.status)\n",
        "\n",
        "print(\"Final status:\", job.status)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "IpPgjDryRYO-"
      },
      "source": [
        "Check job events:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "iqxzIN67RZTq",
        "outputId": "10502b67-aea6-4b7b-cda9-e222e5b7d6e0"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "[1762603610] info: Job is submitted\n",
            "[1762603639] info: Dataset 'training' processed successfully\n",
            "[1762603789] info: Training completed successfully\n"
          ]
        }
      ],
      "source": [
        "events = client.fine_tuning.jobs.list_events(job.id)\n",
        "for e in events.data:\n",
        "    print(f\"[{e.created_at}] {e.level}: {e.message}\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "R69WkAuFRm05"
      },
      "source": [
        "This is the best way to confirm that your fine-tune finished successfully."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "-mFxCRqvRpJB"
      },
      "source": [
        "## Step 6: Download your checkpoints\n",
        "\n",
        "After every epoch, Nebius saves a checkpoint: a snapshot of the model at that stage. You’ll get all of them; for the final model, grab the last one.\n",
        "\n",
        "The code below creates a folder for each checkpoint and saves all the files there."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "XIKjNEPqRogE",
        "outputId": "caec40b5-7d26-4dad-ba95-367aba6fb328"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "SyncCursorPage[FineTuningJobEvent](data=[FineTuningJobEvent(id='3c4a9b92-37a0-4553-820d-90c089ce3066', created_at=1762603610, level='info', message='Job is submitted', object='fine_tuning.job.event', data=None, type=None, source='api', job_uuid='ftjob-821d852c7f8f4a28909ca5f56c5c84eb'), FineTuningJobEvent(id='fa1d272e-c27a-4856-976b-abc399145d65', created_at=1762603639, level='info', message=\"Dataset 'training' processed successfully\", object='fine_tuning.job.event', data=None, type=None, source='datasets', job_uuid='ftjob-821d852c7f8f4a28909ca5f56c5c84eb'), FineTuningJobEvent(id='c4ff1e70-d021-4955-bf17-7b122cf140f9', created_at=1762603789, level='info', message='Training completed successfully', object='fine_tuning.job.event', data=None, type=None, source='training', job_uuid='ftjob-821d852c7f8f4a28909ca5f56c5c84eb')], has_more=False)\n",
            "Checkpoint ID: ftckpt_17e89a2f-dc43-4341-aae3-73cd235e9542\n"
          ]
        }
      ],
      "source": [
        "if job.status == \"succeeded\":\n",
        "    # Check the job events\n",
        "    events = client.fine_tuning.jobs.list_events(job.id)\n",
        "    print(events)\n",
        "\n",
        "    for checkpoint in client.fine_tuning.jobs.checkpoints.list(job.id).data:\n",
        "        print(\"Checkpoint ID:\", checkpoint.id)\n",
        "\n",
        "        # Create a directory for every checkpoint\n",
        "        os.makedirs(checkpoint.id, exist_ok=True)\n",
        "\n",
        "        for model_file_id in checkpoint.result_files:\n",
        "            # Get the name of a model file\n",
        "            filename = client.files.retrieve(model_file_id).filename\n",
        "\n",
        "            # Retrieve the contents of the file\n",
        "            file_content = client.files.content(model_file_id)\n",
        "\n",
        "            # Save the contents into the checkpoint's directory\n",
        "            file_content.write_to_file(os.path.join(checkpoint.id, filename))"
      ]
    },
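    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "If you only care about the final model, a small helper like the one below can pick the most recent checkpoint. This is a sketch; it assumes each checkpoint object exposes a `created_at` timestamp, as the OpenAI-compatible checkpoint objects do."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "def latest_checkpoint(checkpoints):\n",
        "    \"\"\"Return the checkpoint with the largest created_at timestamp.\"\"\"\n",
        "    return max(checkpoints, key=lambda c: c.created_at)\n",
        "\n",
        "# Example (assumes `client` and `job` from the cells above):\n",
        "# final = latest_checkpoint(client.fine_tuning.jobs.checkpoints.list(job.id).data)"
      ]
    },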
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ow2c-XVuH0pJ"
      },
      "source": [
        "## Step 7: Deploy Your LoRA Adapter\n",
        "\n",
        "Now that your fine-tune is complete, you can deploy the **LoRA adapter** directly on **Nebius Token Factory** for inference.  \n",
        "This lets you use your fine-tuned model as a hosted endpoint, ready for API calls, experiments, or integration into your own applications."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "qkeENsP-5_Er",
        "outputId": "e8ad354f-5d92-4a4a-a04a-404d65b5d490"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Creating LoRA model from job ftjob-821d852c7f8f4a28909ca5f56c5c84eb and checkpoint ftckpt_17e89a2f-dc43-4341-aae3-73cd235e9542...\n",
            "LoRA model creation request sent. Response: {'name': 'meta-llama/Meta-Llama-3.1-8B-Instruct-LoRa:demo-arindam-hnuw', 'base_model': 'meta-llama/Meta-Llama-3.1-8B-Instruct', 'source': 'ftjob-821d852c7f8f4a28909ca5f56c5c84eb:ftckpt_17e89a2f-dc43-4341-aae3-73cd235e9542', 'description': 'description', 'created_at': 1764528337, 'status': 'validating'}\n",
            "Generated LoRA model name: meta-llama/Meta-Llama-3.1-8B-Instruct-LoRa:demo-arindam-hnuw\n",
            "Waiting for validation of LoRA model 'meta-llama/Meta-Llama-3.1-8B-Instruct-LoRa:demo-arindam-hnuw'...\n",
            "Current status for 'meta-llama/Meta-Llama-3.1-8B-Instruct-LoRa:demo-arindam-hnuw': validating\n",
            "Current status for 'meta-llama/Meta-Llama-3.1-8B-Instruct-LoRa:demo-arindam-hnuw': active\n",
            "LoRA model 'meta-llama/Meta-Llama-3.1-8B-Instruct-LoRa:demo-arindam-hnuw' is active. Getting a sample completion...\n",
            "Requesting completion from model 'meta-llama/Meta-Llama-3.1-8B-Instruct-LoRa:demo-arindam-hnuw'...\n",
            "Completion received for model 'meta-llama/Meta-Llama-3.1-8B-Instruct-LoRa:demo-arindam-hnuw'.\n",
            "Hello How can I assist you today.\n"
          ]
        }
      ],
      "source": [
        "import requests, time\n",
        "\n",
        "api_url = \"https://api.tokenfactory.nebius.com\"\n",
        "base_model = \"meta-llama/Meta-Llama-3.1-8B-Instruct\"\n",
        "\n",
        "# Create a LoRA model from a fine-tuning job and checkpoint\n",
        "def create_lora_from_job(name, ft_job, ft_checkpoint, base_model):\n",
        "    print(f\"Creating LoRA model from job {ft_job} and checkpoint {ft_checkpoint}...\")\n",
        "    fine_tuning_result = ft_job + \":\" + ft_checkpoint\n",
        "    lora_creation_request = {\n",
        "        \"source\": fine_tuning_result,\n",
        "        \"base_model\": base_model,\n",
        "        \"name\": name,\n",
        "        \"description\": \"Example LoRA model deployment\"\n",
        "    }\n",
        "    response = requests.post(\n",
        "        f\"{api_url}/v0/models\",\n",
        "        json=lora_creation_request,\n",
        "        headers={\n",
        "            \"Content-Type\": \"application/json\",\n",
        "            \"Authorization\": f\"Bearer {nebius_api_key}\"\n",
        "        }\n",
        "    )\n",
        "    print(f\"LoRA model creation request sent. Response: {response.json()}\")\n",
        "    return response.json()\n",
        "\n",
        "# Wait for validation of the deployed model\n",
        "def wait_for_validation(name, delay=5):\n",
        "    print(f\"Waiting for validation of LoRA model '{name}'...\")\n",
        "    while True:\n",
        "        time.sleep(delay)\n",
        "        lora_info = requests.get(\n",
        "            f\"{api_url}/v0/models/{name}\",\n",
        "            headers={\n",
        "                \"Content-Type\": \"application/json\",\n",
        "                \"Authorization\": f\"Bearer {nebius_api_key}\"\n",
        "            }\n",
        "        ).json()\n",
        "        current_status = lora_info.get(\"status\", \"unknown\")\n",
        "        print(f\"Current status for '{name}': {current_status}\")\n",
        "        if current_status in {\"active\", \"error\"}:\n",
        "            return lora_info\n",
        "\n",
        "# Send a test completion request\n",
        "def get_completion(model):\n",
        "    print(f\"Requesting completion from model '{model}'...\")\n",
        "    completion = client.chat.completions.create(\n",
        "        model=model,\n",
        "        messages=[{\"role\": \"user\", \"content\": \"Hello\"}],\n",
        "    )\n",
        "    print(f\"Completion received for model '{model}'.\")\n",
        "    return completion.choices[0].message.content\n",
        "\n",
        "# Deploy a LoRA adapter model using the fine-tuning job and checkpoint IDs\n",
        "lora_name = create_lora_from_job(\"demo-arindam\", job.id, checkpoint.id, base_model).get(\"name\")\n",
        "print(f\"Generated LoRA model name: {lora_name}\")\n",
        "\n",
        "# Check model validation status\n",
        "lora_info = wait_for_validation(lora_name)\n",
        "\n",
        "# If validation passes, test inference\n",
        "if lora_info.get(\"status\") == \"active\":\n",
        "    print(f\"LoRA model '{lora_name}' is active. Getting a sample completion...\")\n",
        "    print(get_completion(lora_name))\n",
        "elif lora_info.get(\"status\") == \"error\":\n",
        "    print(f\"An error occurred during validation: {lora_info.get('status_reason', 'unknown')}\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "eXswWM0eLnKX"
      },
      "source": [
        "Once the model status becomes active, you can send chat completions just like any OpenAI-compatible model."
      ]
    },
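    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a sketch, a small wrapper like this can send follow-up prompts to the deployed adapter (it assumes the `client` and `lora_name` variables from the cells above):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "def chat(client, model, prompt):\n",
        "    \"\"\"Send a single-turn chat completion and return the reply text.\"\"\"\n",
        "    response = client.chat.completions.create(\n",
        "        model=model,\n",
        "        messages=[{\"role\": \"user\", \"content\": prompt}],\n",
        "    )\n",
        "    return response.choices[0].message.content\n",
        "\n",
        "# Example (runs against the hosted endpoint):\n",
        "# print(chat(client, lora_name, \"Summarize LoRA fine-tuning in one sentence.\"))"
      ]
    },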
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "h4VnOCCk44us"
      },
      "source": [
        "And that’s it!\n",
        "\n",
        "You’ve just fine-tuned, deployed, and run inference with your own LoRA model, all using Nebius Token Factory.\n",
        "\n",
        "If you want to go further, here are a few next steps worth exploring:\n",
        "\n",
        "- [Track Fine-Tuning Jobs](https://tokenfactory.nebius.com/fine-tuning): Monitor progress, view logs, and check model checkpoints  \n",
        "- [Deploy Your Custom Model](https://docs.tokenfactory.nebius.com/fine-tuning/deploy-custom-model): Set up inference endpoints and integrate your fine-tuned model into applications  \n",
        "- [Fine-Tuning Docs](https://docs.tokenfactory.nebius.com/fine-tuning/overview): Learn about hyperparameters, LoRA configurations, and advanced options  \n",
        "- [Nebius Token Factory Dashboard](https://tokenfactory.nebius.com/): Manage models, datasets, and deployments visually  \n",
        "\n",
        "**Start tracking and deploying your fine-tuned models today at [Nebius Token Factory](https://tokenfactory.nebius.com/).**\n"
      ]
    }
  ],
  "metadata": {
    "colab": {
      "authorship_tag": "ABX9TyM3w5GDr5djpV7LcBTiS8MS",
      "include_colab_link": true,
      "provenance": []
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
