{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Multipart Upload with AIStore SDK\n",
        "\n",
        "This notebook demonstrates how to use AIStore's multipart upload functionality to efficiently upload large files by splitting them into smaller parts that can be uploaded concurrently.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "### Setup: Create a client and bucket\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Using bucket: multipart-demo-bck\n"
          ]
        }
      ],
      "source": [
        "from aistore import Client\n",
        "\n",
        "# Connect to AIStore cluster\n",
        "ais_url = \"http://localhost:8080\"\n",
        "client = Client(ais_url)\n",
        "\n",
        "# Create or get a bucket for our multipart upload examples\n",
        "bucket = client.bucket(\"multipart-demo-bck\").create(exist_ok=True)\n",
        "print(f\"Using bucket: {bucket.name}\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Basic Multipart Upload Workflow\n",
        "\n",
        "A multipart upload consists of four main steps:\n",
        "1. **Create** a multipart upload session\n",
        "2. **Add parts** by uploading content to each part\n",
        "3. **Complete** the upload to assemble all parts into the final object\n",
        "4. **Abort** (optional) to cancel an in-progress upload and discard any uploaded parts\n"
      ]
    },
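    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Step 4 has no dedicated example below, so here is a minimal sketch of aborting an upload. It assumes the session object exposes an `abort()` method mirroring step 4 of the workflow above; the object name `abort-demo-object` is illustrative.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Sketch: cancel an in-progress multipart upload\n",
        "obj = bucket.object(\"abort-demo-object\")\n",
        "mpu = obj.multipart_upload().create()\n",
        "mpu.add_part(1).put_content(b\"partial data that will be discarded\")\n",
        "\n",
        "# Abort discards all uploaded parts; no final object is created\n",
        "mpu.abort()\n"
      ]
    },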
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "### Example 1: Basic Multipart Upload\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 2,
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Created multipart upload with ID: N3eXTrPE4d\n",
            "Multipart upload completed with status: 200\n",
            "Final object size: 92 bytes\n"
          ]
        }
      ],
      "source": [
        "# Get a reference to the object we want to upload\n",
        "obj = bucket.object(\"my-multipart-object\")\n",
        "\n",
        "# Step 1: Create a multipart upload session\n",
        "mpu = obj.multipart_upload().create()\n",
        "print(f\"Created multipart upload with ID: {mpu.upload_id}\")\n",
        "\n",
        "# Step 2: Add parts (part numbers must start from 1)\n",
        "part1_content = b\"This is the content of part 1. \"\n",
        "part2_content = b\"This is the content of part 2. \"\n",
        "part3_content = b\"This is the content of part 3.\"\n",
        "\n",
        "# Upload each part\n",
        "mpu.add_part(1).put_content(part1_content)\n",
        "mpu.add_part(2).put_content(part2_content)\n",
        "mpu.add_part(3).put_content(part3_content)\n",
        "\n",
        "# Step 3: Complete the upload\n",
        "response = mpu.complete()\n",
        "print(f\"Multipart upload completed with status: {response.status_code}\")\n",
        "\n",
        "# Verify the final object content\n",
        "final_content = obj.get_reader().read_all()\n",
        "print(f\"Final object size: {len(final_content)} bytes\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "### Example 2: Parallel Part Upload\n",
        "\n",
        "For better performance with large files, you can upload parts in parallel using threading.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 3,
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Parallel upload completed: [1, 2, 3, 5, 4]\n",
            "Final object size: 25000 bytes\n"
          ]
        }
      ],
      "source": [
        "import concurrent.futures\n",
        "\n",
        "def upload_part(mpu, part_number, content):\n",
        "    \"\"\"Upload a single part and return its part number.\"\"\"\n",
        "    mpu.add_part(part_number).put_content(content)\n",
        "    return part_number\n",
        "\n",
        "# Prepare parts for parallel upload\n",
        "obj = bucket.object(\"parallel-upload-object\")\n",
        "mpu = obj.multipart_upload().create()\n",
        "\n",
        "# Create parts with substantial content\n",
        "parts_data = []\n",
        "for i in range(1, 6):  # 5 parts\n",
        "    content = f\"Parallel part {i} content: \" * 200  # 5,000 bytes per part\n",
        "    parts_data.append((i, content.encode()))\n",
        "\n",
        "# Upload parts in parallel using ThreadPoolExecutor\n",
        "with concurrent.futures.ThreadPoolExecutor(max_workers=3) as executor:\n",
        "    # Submit all upload tasks\n",
        "    futures = [\n",
        "        executor.submit(upload_part, mpu, part_num, content)\n",
        "        for part_num, content in parts_data\n",
        "    ]\n",
        "    \n",
        "    # Collect results\n",
        "    results = []\n",
        "    for future in concurrent.futures.as_completed(futures):\n",
        "        part_num = future.result()\n",
        "        results.append(part_num)\n",
        "\n",
        "# Complete the upload\n",
        "mpu.complete()\n",
        "\n",
        "print(f\"Parallel upload completed: {results}\")\n",
        "print(f\"Final object size: {len(obj.get_reader().read_all())} bytes\")\n"
      ]
    },
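    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "In practice, parts come from slicing a large file into fixed-size chunks rather than hand-written byte strings. The helper below is a plain-Python sketch (the function name `iter_parts` and the chosen part size are illustrative, not SDK API); each yielded `(part_number, chunk)` pair can be fed to `mpu.add_part(part_number).put_content(chunk)` exactly as in the examples above.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import io\n",
        "\n",
        "def iter_parts(fileobj, part_size):\n",
        "    \"\"\"Yield (part_number, chunk) pairs, numbering parts from 1.\"\"\"\n",
        "    part_number = 1\n",
        "    while True:\n",
        "        chunk = fileobj.read(part_size)\n",
        "        if not chunk:\n",
        "            break\n",
        "        yield part_number, chunk\n",
        "        part_number += 1\n",
        "\n",
        "# Demonstrate on an in-memory \"file\" that splits into 2.5 parts\n",
        "data = io.BytesIO(b\"x\" * 2500)\n",
        "for part_number, chunk in iter_parts(data, part_size=1000):\n",
        "    print(f\"part {part_number}: {len(chunk)} bytes\")\n"
      ]
    },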
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "### Example 3: Part Number Guidelines and Out-of-Order Upload\n",
        "\n",
        "- Part numbers must be positive consecutive integers starting with 1 (1, 2, 3, ...)\n",
        "- Parts are assembled in part number order, not upload order\n",
        "- You can upload parts out of order\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 4,
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Uploaded part 3\n",
            "Uploaded part 1\n",
            "Uploaded part 2\n",
            "Parts uploaded in order: [3, 1, 2]\n",
            "Final content: 'This should be first. This should be second. This should be third. '\n",
            "Content is assembled in part number order (1, 2, 3), not upload order!\n"
          ]
        }
      ],
      "source": [
        "# Demonstrate out-of-order upload\n",
        "obj = bucket.object(\"out-of-order-object\")\n",
        "mpu = obj.multipart_upload().create()\n",
        "\n",
        "# Upload parts in reverse order\n",
        "parts_content = {\n",
        "    3: b\"This should be third. \",\n",
        "    1: b\"This should be first. \",\n",
        "    2: b\"This should be second. \"\n",
        "}\n",
        "\n",
        "# Upload in order: 3, 1, 2\n",
        "for part_num in [3, 1, 2]:\n",
        "    writer = mpu.add_part(part_num)\n",
        "    writer.put_content(parts_content[part_num])\n",
        "    print(f\"Uploaded part {part_num}\")\n",
        "\n",
        "print(f\"Parts uploaded in order: {mpu.parts}\")\n",
        "\n",
        "# Complete and verify order\n",
        "mpu.complete()\n",
        "final_content = obj.get_reader().read_all().decode()\n",
        "print(f\"Final content: '{final_content}'\")\n",
        "print(\"Content is assembled in part number order (1, 2, 3), not upload order!\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Summary\n",
        "\n",
        "This notebook demonstrated the key aspects of multipart uploads in AIStore:\n",
        "\n",
        "1. **Basic workflow**: create → add parts → complete\n",
        "2. **Parallel uploads** for better performance with threading\n",
        "3. **Part number rules**: consecutive positive integers starting at 1, assembled in part number order regardless of upload sequence"
      ]
    }
  ],
  "metadata": {
    "kernelspec": {
      "display_name": "Python 3",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.10.12"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 2
}
