{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "b2d864c2",
   "metadata": {},
   "source": [
    "# Batch Requests with AIStore: Full Tutorial"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "70439db4",
   "metadata": {},
   "source": [
    "The GetBatch API is a high-performance data retrieval interface that allows clients to efficiently fetch data from multiple objects in a single HTTP request rather than making individual requests for each object. This batching approach is particularly valuable for applications that need to retrieve large numbers of objects and/or files within archive contents.\n",
    "\n",
    "The API works by accepting a batch request containing multiple object specifications (including optional parameters like archive paths for extracting specific files from archives, byte ranges for partial retrieval, and custom metadata), then processing these requests on the server side. The response can be delivered as either a streaming archive containing all requested files, or as a structured multipart response that includes both metadata about each object and the actual file contents, allowing clients to process results efficiently while maintaining detailed information about each retrieved item, including error handling for missing or inaccessible objects."
   ]
  },
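  {
   "cell_type": "markdown",
   "id": "c4e8a1d7",
   "metadata": {},
   "source": [
    "To build intuition for the streaming response format: when results come back as a single `.tar` stream, consuming them amounts to iterating archive members one at a time. The following is a conceptual, standard-library-only sketch of that consumption pattern (the `payload` buffer stands in for the HTTP response body); it is not the SDK's actual implementation.\n",
    "\n",
    "```python\n",
    "import tarfile\n",
    "from io import BytesIO\n",
    "\n",
    "# Stand-in for a streamed .tar batch response containing two objects\n",
    "payload = BytesIO()\n",
    "with tarfile.open(fileobj=payload, mode='w') as tar:\n",
    "    for name, data in [('obj-1', b'first'), ('obj-2', b'second')]:\n",
    "        info = tarfile.TarInfo(name=name)\n",
    "        info.size = len(data)\n",
    "        tar.addfile(info, BytesIO(data))\n",
    "payload.seek(0)\n",
    "\n",
    "# Consume the stream member by member, as a batch client would\n",
    "with tarfile.open(fileobj=payload, mode='r|*') as tar:\n",
    "    for member in tar:\n",
    "        print(member.name, tar.extractfile(member).read())\n",
    "```"
   ]
  },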
  {
   "cell_type": "markdown",
   "id": "9dfbe149",
   "metadata": {},
   "source": [
    "### 0. Ensure the AIStore SDK is installed and running"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "631c5327",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Name: aistore\n",
      "Version: 1.15.2\n"
     ]
    }
   ],
   "source": [
    "! pip show aistore | grep -E \"Name|Version\""
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8b231487",
   "metadata": {},
   "source": [
    "### 1. Initialize your Client and configure your Bucket"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "1bce86b3",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "from aistore.sdk import Client\n",
    "\n",
    "DEFAULT_ENDPOINT = \"http://localhost:8080\"\n",
    "BCK_NAME = \"get_batch_bck\"\n",
    "\n",
    "# Get endpoint url for AIS cluster\n",
    "ais_url = os.getenv(\"AIS_ENDPOINT\", DEFAULT_ENDPOINT)\n",
    "\n",
    "# Create client and ensure bucket is created\n",
    "# If you get retries, cannot access the cluster\n",
    "client = Client(ais_url)\n",
    "\n",
    "# Clean bucket before creation\n",
    "bucket = client.bucket(BCK_NAME).delete(missing_ok=True)\n",
    "bucket = client.bucket(BCK_NAME).create(exist_ok=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8ac67b4c",
   "metadata": {},
   "source": [
    "### 2. Populate bucket with a couple basic objects"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "id": "875958f8",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "test-obj-1 37\n",
      "test-obj-2 37\n",
      "test-obj-3 37\n",
      "test-obj-4 37\n",
      "test-obj-5 37\n"
     ]
    }
   ],
   "source": [
    "OBJECT_NAME = \"test-obj\"\n",
    "OBJECT_DATA = b\"This is the data stored in test-obj-\"\n",
    "NUM_OBJECTS = 5\n",
    "\n",
    "objects = []\n",
    "\n",
    "# Create basic test objects\n",
    "for i in range(1, NUM_OBJECTS + 1):\n",
    "    obj = bucket.object(f\"{OBJECT_NAME}-{i}\")\n",
    "    obj.get_writer().put_content(OBJECT_DATA + str(i).encode())\n",
    "\n",
    "    objects.append(obj)\n",
    "\n",
    "# Validate object PUT was successful\n",
    "for entry in bucket.list_all_objects():\n",
    "    print(entry.name, entry.size)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "48c10e08",
   "metadata": {},
   "source": [
    "### 3. Create a Batch and add objects"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "6d4417ed",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Batch(objects=5, format=.tar)\n"
     ]
    }
   ],
   "source": [
    "\"\"\"\n",
    "Create a Batch instance with your desired configuration:\n",
    "\n",
    "* Output format: `.tar` archive\n",
    "* Continue on errors: skip missing objects instead of failing\n",
    "* Use streaming: return a streamable `.tar` instead of multipart content\n",
    "\n",
    "You can add objects directly in the constructor or via the add() method.\n",
    "\"\"\"\n",
    "\n",
    "# Create batch with objects\n",
    "batch = client.batch(objects)\n",
    "\n",
    "# Verify length of batch and other details\n",
    "print(batch)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "04068687",
   "metadata": {},
   "source": [
    "### 4. Execute the batch request and iterate through results"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "id": "7e0ed543",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Name: test-obj-1, Content: b'This is the data stored in test-obj-1'\n",
      "Name: test-obj-2, Content: b'This is the data stored in test-obj-2'\n",
      "Name: test-obj-3, Content: b'This is the data stored in test-obj-3'\n",
      "Name: test-obj-4, Content: b'This is the data stored in test-obj-4'\n",
      "Name: test-obj-5, Content: b'This is the data stored in test-obj-5'\n"
     ]
    }
   ],
   "source": [
    "# Execute batch request and receive the data\n",
    "# get() returns a generator of (MossOut metadata, content bytes) tuples\n",
    "batch_iter = batch.get()\n",
    "\n",
    "# Iterate through results\n",
    "for obj_info, data in batch_iter:\n",
    "    # obj_info contains metadata like obj_name, size, errors, etc.\n",
    "    print(f\"Name: {obj_info.obj_name}, Content: {data}\")"
   ]
  },
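  {
   "cell_type": "markdown",
   "id": "e7b2f4a9",
   "metadata": {},
   "source": [
    "Because `get()` yields results lazily, each item can be written straight to disk as it arrives instead of buffering the whole batch in memory. A minimal sketch of that pattern, using plain `(name, bytes)` pairs in place of the real `(MossOut, bytes)` tuples:\n",
    "\n",
    "```python\n",
    "import os\n",
    "import tempfile\n",
    "\n",
    "# Stand-in pairs; in this tutorial they would come from batch.get()\n",
    "results = [('test-obj-1', b'first payload'), ('test-obj-2', b'second payload')]\n",
    "\n",
    "out_dir = tempfile.mkdtemp()\n",
    "for name, data in results:\n",
    "    # Write each object as it streams in, keeping memory usage flat\n",
    "    with open(os.path.join(out_dir, name), 'wb') as f:\n",
    "        f.write(data)\n",
    "\n",
    "print(sorted(os.listdir(out_dir)))\n",
    "```"
   ]
  },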
  {
   "cell_type": "markdown",
   "id": "1b65cd88",
   "metadata": {},
   "source": [
    "### 5. Uploading data across multiple archive objects in another bucket"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "id": "300b398a",
   "metadata": {},
   "outputs": [],
   "source": [
    "BCK_NAME_ARCH = \"get_batch_bck_arch\"\n",
    "\n",
    "# Create second bucket\n",
    "arch_bucket = client.bucket(BCK_NAME_ARCH).delete(missing_ok=True)\n",
    "arch_bucket = client.bucket(BCK_NAME_ARCH).create(exist_ok=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "id": "91b5d079",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "archive-1.tar\n",
      "- archive-1.tar/file_1.txt\n",
      "- archive-1.tar/file_2.txt\n",
      "- archive-1.tar/file_3.txt\n",
      "- archive-1.tar/file_4.txt\n",
      "- archive-1.tar/file_5.txt\n",
      "archive-2.tar\n",
      "- archive-2.tar/file_1.txt\n",
      "- archive-2.tar/file_2.txt\n",
      "- archive-2.tar/file_3.txt\n",
      "- archive-2.tar/file_4.txt\n",
      "- archive-2.tar/file_5.txt\n",
      "archive-3.tar\n",
      "- archive-3.tar/file_1.txt\n",
      "- archive-3.tar/file_2.txt\n",
      "- archive-3.tar/file_3.txt\n",
      "- archive-3.tar/file_4.txt\n",
      "- archive-3.tar/file_5.txt\n"
     ]
    }
   ],
   "source": [
    "import tarfile\n",
    "from io import BytesIO\n",
    "\n",
    "# Create tarfile archives\n",
    "NUM_ARCHIVES = 3\n",
    "NUM_FILES = 5\n",
    "\n",
    "ARCH_NAME = \"archive\"\n",
    "FILE_NAME = \"file\"\n",
    "FILE_DATA = b\"This is the data stored in file_\"\n",
    "\n",
    "archive_objs = []\n",
    "\n",
    "for arch_i in range(1, NUM_ARCHIVES + 1):\n",
    "    tar_buffer = BytesIO()\n",
    "\n",
    "    # For each archive, create 5 text files\n",
    "    with tarfile.open(fileobj=tar_buffer, mode=\"w\") as tar:\n",
    "        for file_i in range(1, NUM_FILES + 1):\n",
    "            tarinfo = tarfile.TarInfo(name=f\"{FILE_NAME}_{file_i}.txt\")\n",
    "            tarinfo.size = len(FILE_DATA + str(file_i).encode())\n",
    "            tar.addfile(tarinfo, BytesIO(FILE_DATA + str(file_i).encode()))\n",
    "\n",
    "    # Put archive in AIStore\n",
    "    obj = arch_bucket.object(f\"{ARCH_NAME}-{arch_i}.tar\")\n",
    "    obj.get_writer().put_content(tar_buffer.getvalue())\n",
    "\n",
    "    archive_objs.append(obj)\n",
    "\n",
    "# Validate archives have been PUT\n",
    "for obj in archive_objs:\n",
    "    print(obj.name)\n",
    "    for entry in arch_bucket.list_archive(obj.name):\n",
    "        print(\"-\", entry.name)"
   ]
  },
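  {
   "cell_type": "markdown",
   "id": "f9c0d3b5",
   "metadata": {},
   "source": [
    "As a sanity check, an in-memory archive can be read back with `tarfile` before it is PUT anywhere. A self-contained, standard-library-only sketch mirroring the construction above:\n",
    "\n",
    "```python\n",
    "import tarfile\n",
    "from io import BytesIO\n",
    "\n",
    "data = b'This is the data stored in file_'\n",
    "\n",
    "# Build one archive the same way the tutorial does\n",
    "buf = BytesIO()\n",
    "with tarfile.open(fileobj=buf, mode='w') as tar:\n",
    "    for i in range(1, 4):\n",
    "        payload = data + str(i).encode()\n",
    "        info = tarfile.TarInfo(name=f'file_{i}.txt')\n",
    "        info.size = len(payload)\n",
    "        tar.addfile(info, BytesIO(payload))\n",
    "\n",
    "# Read it back and confirm member names and sizes before any PUT\n",
    "buf.seek(0)\n",
    "with tarfile.open(fileobj=buf, mode='r') as tar:\n",
    "    for member in tar.getmembers():\n",
    "        print(member.name, member.size)\n",
    "```"
   ]
  },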
  {
   "cell_type": "markdown",
   "id": "76df2c72",
   "metadata": {},
   "source": [
    "### 6. Add archive files to the batch"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "id": "90be7c1d",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Batch now contains 8 objects (including archives)\n",
      "['test-obj-1', 'test-obj-2', 'test-obj-3', 'test-obj-4', 'test-obj-5', 'archive-1.tar', 'archive-2.tar', 'archive-3.tar']\n"
     ]
    }
   ],
   "source": [
    "import random\n",
    "\n",
    "random.seed(42)\n",
    "\n",
    "# Add archives to the existing batch with archpath to extract specific files\n",
    "for obj in archive_objs:\n",
    "    # Get random text file from each archive\n",
    "    random_file_i = random.randint(1, NUM_FILES)\n",
    "    batch.add(obj, archpath=f\"{FILE_NAME}_{random_file_i}.txt\")\n",
    "\n",
    "# Verify Batch has expected contents\n",
    "print(f\"Batch now contains {len(batch)} objects (including archives)\")\n",
    "print([moss_in.obj_name for moss_in in batch.request.moss_in])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d4ec9796",
   "metadata": {},
   "source": [
    "### 7. Fetching data across multiple buckets + types (object AND archive)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "id": "c220366f",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Name: test-obj-1, Archpath: , Content: b'This is the data stored in test-obj-1'\n",
      "Name: test-obj-2, Archpath: , Content: b'This is the data stored in test-obj-2'\n",
      "Name: test-obj-3, Archpath: , Content: b'This is the data stored in test-obj-3'\n",
      "Name: test-obj-4, Archpath: , Content: b'This is the data stored in test-obj-4'\n",
      "Name: test-obj-5, Archpath: , Content: b'This is the data stored in test-obj-5'\n",
      "Name: archive-1.tar, Archpath: file_1.txt, Content: b'This is the data stored in file_1'\n",
      "Name: archive-2.tar, Archpath: file_1.txt, Content: b'This is the data stored in file_1'\n",
      "Name: archive-3.tar, Archpath: file_3.txt, Content: b'This is the data stored in file_3'\n"
     ]
    }
   ],
   "source": [
    "# Execute the batch request again with all objects (including archives)\n",
    "batch_iter = batch.get()\n",
    "\n",
    "# Iterate through results\n",
    "for obj_info, data in batch_iter:\n",
    "    # obj_info contains metadata including archpath for archive extractions\n",
    "    print(\n",
    "        f\"Name: {obj_info.obj_name}, Archpath: {obj_info.archpath or ''}, Content: {data}\"\n",
    "    )"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "python_aistore",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
