{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Working with Archive Objects in AIStore\n",
    "\n",
    "This notebook demonstrates how to extract files from tar archives stored in AIStore. As our example we use the WebDataset format - a common ML dataset convention in which the related files for one sample (image, label, metadata) share a base key."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 1. Initialize Client and Create a Bucket"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Using bucket: wds-demo\n"
     ]
    }
   ],
   "source": [
    "import os\n",
    "import tarfile\n",
    "from io import BytesIO\n",
    "from pathlib import Path\n",
    "from aistore import Client\n",
    "from aistore.sdk.archive_config import ArchiveConfig, ArchiveMode\n",
    "\n",
    "# Create client and bucket\n",
    "client = Client(os.getenv(\"AIS_ENDPOINT\", \"http://172.25.0.7:51080\"))\n",
    "bucket = client.bucket(\"wds-demo\").create(exist_ok=True)\n",
    "print(f\"Using bucket: {bucket.name}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2. Create a WebDataset Archive"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "WebDataset archive uploaded\n"
     ]
    }
   ],
   "source": [
    "# Create a sample WebDataset-style tar archive\n",
    "wds_path = Path(\"webdataset.tar\")\n",
    "\n",
    "def add_file_to_tar(tar, filename, content):\n",
    "    \"\"\"Helper to add a file to tar archive.\"\"\"\n",
    "    data = content.encode() if isinstance(content, str) else content\n",
    "    info = tarfile.TarInfo(name=filename)\n",
    "    info.size = len(data)\n",
    "    tar.addfile(info, BytesIO(data))\n",
    "\n",
    "with tarfile.open(wds_path, \"w\") as tar:\n",
    "    for i in range(3):\n",
    "        base_name = f\"sample_{i:03d}\"\n",
    "        add_file_to_tar(tar, f\"{base_name}.jpg\", f\"Image data for {base_name}\\n\")\n",
    "        add_file_to_tar(tar, f\"{base_name}.txt\", f\"Caption for {base_name}\\n\")\n",
    "        add_file_to_tar(tar, f\"{base_name}.json\", f'{{\"id\": {i}, \"label\": \"class_{i}\"}}\\n')\n",
    "\n",
    "# Upload archive to AIStore\n",
    "wds_obj = bucket.object(\"webdataset.tar\")\n",
    "wds_obj.get_writer().put_file(wds_path)\n",
    "wds_path.unlink()\n",
    "print(\"WebDataset archive uploaded\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3. List Files in the Archive"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Files in archive:\n",
      "  webdataset.tar/sample_000.jpg (26 bytes)\n",
      "  webdataset.tar/sample_000.json (30 bytes)\n",
      "  webdataset.tar/sample_000.txt (23 bytes)\n",
      "  webdataset.tar/sample_001.jpg (26 bytes)\n",
      "  webdataset.tar/sample_001.json (30 bytes)\n",
      "  webdataset.tar/sample_001.txt (23 bytes)\n",
      "  webdataset.tar/sample_002.jpg (26 bytes)\n",
      "  webdataset.tar/sample_002.json (30 bytes)\n",
      "  webdataset.tar/sample_002.txt (23 bytes)\n"
     ]
    }
   ],
   "source": [
    "print(\"Files in archive:\")\n",
    "for entry in bucket.list_archive(\"webdataset.tar\", props=\"name,size\"):\n",
    "    print(f\"  {entry.name} ({entry.size} bytes)\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 4. Extract a Single File\n",
    "\n",
    "`ObjectFileReader` (obtained via `as_file()`) provides resilient streaming: if the connection drops mid-read, it automatically resumes the stream, retrying up to `max_resume` times."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Extracted: Image data for sample_000\n",
      "\n"
     ]
    }
   ],
   "source": [
    "# Extract a single file by path\n",
    "config = ArchiveConfig(archpath=\"sample_000.jpg\")\n",
    "\n",
    "with wds_obj.get_reader(archive_config=config).as_file(max_resume=3) as f:\n",
    "    content = f.read()\n",
    "    print(f\"Extracted: {content.decode()}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 5. Extract All Files for a WebDataset Key\n",
    "\n",
    "Use `ArchiveMode.WDSKEY` to extract all files (jpg, txt, json) that share a sample's base key. The matching files come back as a single tar stream, which we read with `tarfile`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Files for sample_001:\n",
      "  sample_001.jpg: Image data for sample_001\n",
      "  sample_001.txt: Caption for sample_001\n",
      "  sample_001.json: {\"id\": 1, \"label\": \"class_1\"}\n"
     ]
    }
   ],
   "source": [
    "# Extract all files for key \"sample_001\"\n",
    "config = ArchiveConfig(regex=\"sample_001\", mode=ArchiveMode.WDSKEY)\n",
    "\n",
    "with wds_obj.get_reader(archive_config=config).as_file(max_resume=3) as f:\n",
    "    with tarfile.open(fileobj=f, mode=\"r|*\") as tar:\n",
    "        print(\"Files for sample_001:\")\n",
    "        for member in tar:\n",
    "            if member.isfile():\n",
    "                content = tar.extractfile(member).read()\n",
    "                print(f\"  {member.name}: {content.decode().strip()}\")"
   ]
  },
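  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In training code it is often convenient to collect a key's files into a single dict keyed by file extension. Here is a minimal sketch of that pattern (the `load_sample` helper is our own illustration, not part of the SDK):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def load_sample(obj, key):\n",
    "    \"\"\"Stream all files for a WebDataset key; return {extension: bytes}.\"\"\"\n",
    "    cfg = ArchiveConfig(regex=key, mode=ArchiveMode.WDSKEY)\n",
    "    sample = {}\n",
    "    with obj.get_reader(archive_config=cfg).as_file(max_resume=3) as f:\n",
    "        with tarfile.open(fileobj=f, mode=\"r|*\") as tar:\n",
    "            for member in tar:\n",
    "                if member.isfile():\n",
    "                    sample[Path(member.name).suffix] = tar.extractfile(member).read()\n",
    "    return sample\n",
    "\n",
    "sample = load_sample(wds_obj, \"sample_001\")\n",
    "print(sorted(sample))"
   ]
  },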
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 6. Extract Files by Name Prefix (PREFIX)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Files with prefix 'sample_00':\n",
      "  sample_000.jpg\n",
      "  sample_000.txt\n",
      "  sample_000.json\n",
      "  sample_001.jpg\n",
      "  sample_001.txt\n",
      "  sample_001.json\n",
      "  sample_002.jpg\n",
      "  sample_002.txt\n",
      "  sample_002.json\n"
     ]
    }
   ],
   "source": [
    "# Extract all files starting with \"sample_00\"\n",
    "config = ArchiveConfig(regex=\"sample_00\", mode=ArchiveMode.PREFIX)\n",
    "\n",
    "with wds_obj.get_reader(archive_config=config).as_file(max_resume=3) as f:\n",
    "    with tarfile.open(fileobj=f, mode=\"r|*\") as tar:\n",
    "        print(\"Files with prefix 'sample_00':\")\n",
    "        for member in tar:\n",
    "            if member.isfile():\n",
    "                print(f\"  {member.name}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 7. Extract Files by Suffix (SUFFIX)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "All JSON files:\n",
      "  sample_000.json: {\"id\": 0, \"label\": \"class_0\"}\n",
      "  sample_001.json: {\"id\": 1, \"label\": \"class_1\"}\n",
      "  sample_002.json: {\"id\": 2, \"label\": \"class_2\"}\n"
     ]
    }
   ],
   "source": [
    "# Extract all JSON files\n",
    "config = ArchiveConfig(regex=\".json\", mode=ArchiveMode.SUFFIX)\n",
    "\n",
    "with wds_obj.get_reader(archive_config=config).as_file(max_resume=3) as f:\n",
    "    with tarfile.open(fileobj=f, mode=\"r|*\") as tar:\n",
    "        print(\"All JSON files:\")\n",
    "        for member in tar:\n",
    "            if member.isfile():\n",
    "                content = tar.extractfile(member).read()\n",
    "                print(f\"  {member.name}: {content.decode().strip()}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 8. Extract Files by Substring (SUBSTR)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Files containing '002':\n",
      "  sample_002.jpg\n",
      "  sample_002.txt\n",
      "  sample_002.json\n"
     ]
    }
   ],
   "source": [
    "# Extract files containing \"002\" anywhere in the name\n",
    "config = ArchiveConfig(regex=\"002\", mode=ArchiveMode.SUBSTR)\n",
    "\n",
    "with wds_obj.get_reader(archive_config=config).as_file(max_resume=3) as f:\n",
    "    with tarfile.open(fileobj=f, mode=\"r|*\") as tar:\n",
    "        print(\"Files containing '002':\")\n",
    "        for member in tar:\n",
    "            print(f\"  {member.name}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 9. Extract Files by Regular Expression (REGEXP)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Matching text files:\n",
      "  sample_001.txt: Caption for sample_001\n",
      "  sample_002.txt: Caption for sample_002\n"
     ]
    }
   ],
   "source": [
    "# Extract text files matching a pattern\n",
    "config = ArchiveConfig(regex=\"sample_00[1-2]\\\\.txt$\", mode=ArchiveMode.REGEXP)\n",
    "\n",
    "with wds_obj.get_reader(archive_config=config).as_file(max_resume=3) as f:\n",
    "    with tarfile.open(fileobj=f, mode=\"r|*\") as tar:\n",
    "        print(\"Matching text files:\")\n",
    "        for member in tar:\n",
    "            if member.isfile():\n",
    "                content = tar.extractfile(member).read()\n",
    "                print(f\"  {member.name}: {content.decode().strip()}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 10. Cleanup"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Cleanup complete\n"
     ]
    }
   ],
   "source": [
    "bucket.delete()\n",
    "print(\"Cleanup complete\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Summary\n",
    "\n",
    "### ArchiveMode Options\n",
    "\n",
    "| Mode | Description | Example |\n",
    "|------|-------------|-------|\n",
    "| `archpath` | Extract a single file by exact path (an `ArchiveConfig` parameter, not a mode) | `ArchiveConfig(archpath=\"sample_000.jpg\")` |\n",
    "| `PREFIX` | Match names starting with the value | `regex=\"sample_00\"` matches all files |\n",
    "| `SUFFIX` | Match names ending with the value | `regex=\".json\"` matches all JSON files |\n",
    "| `SUBSTR` | Match names containing the value | `regex=\"002\"` matches `sample_002.*` |\n",
    "| `REGEXP` | Full regular-expression match | `regex=\"sample_00[1-2]\\\\.txt$\"` matches samples 001-002 |\n",
    "| `WDSKEY` | WebDataset key matching | `regex=\"sample_001\"` matches all `sample_001.*` files |\n",
    "\n",
    "\n",
    "### Benefits for ML Training\n",
    "\n",
    "- Extract only the samples you need, without downloading the entire dataset\n",
    "- Resilient streaming via `ObjectFileReader`, which automatically resumes reads interrupted by network errors"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.0"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
