{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "97c79c38-38a3-40f3-ba2e-250649347d63",
   "metadata": {},
   "source": [
    "# Multimodal Parsing using Anthropic Claude (Sonnet 4.0)\n",
    "\n",
    "<a href=\"https://colab.research.google.com/github/run-llama/llama_cloud_services/blob/main/examples/parse/multimodal/claude_parse.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\n",
    "\n",
    "This cookbook shows you how to use LlamaParse to parse any document with the multimodal capabilities of Sonnet 4.0. \n",
    "\n",
    "LlamaParse allows you to plug in external, multimodal model vendors for parsing - we handle the error correction, validation, and scalability/reliability for you.\n",
    "\n",
    "Status:\n",
    "| Last Executed | Version | State      |\n",
    "|---------------|---------|------------|\n",
    "| Aug-19-2025   | 0.6.61  | Maintained |\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "22db7a9d",
   "metadata": {},
   "outputs": [],
   "source": [
    "%pip install llama-cloud-services \"llama-index>=0.13.0,<0.14.0\" \"llama-index-llms-anthropic>=0.8.4,<0.9.0\""
   ]
  },
  {
   "cell_type": "markdown",
   "id": "15e60ecf-519c-41fc-911b-765adaf8bad4",
   "metadata": {},
   "source": [
    "## Setup\n",
    "\n",
     "Download the data: both the full Llama 2 paper and a single page (page 33) of the PDF.\n",
    "\n",
    "Swap in `data/llama2-p33.pdf` for `data/llama2.pdf` in the code blocks below if you want to save on parsing tokens. \n",
    "\n",
    "An image of this page is shown below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0d9fb0aa-74cd-476f-8161-efd9e04248bf",
   "metadata": {},
   "outputs": [],
   "source": [
    "!mkdir -p data\n",
    "!wget \"https://arxiv.org/pdf/2307.09288\" -O data/llama2.pdf\n",
    "!wget \"https://www.dropbox.com/scl/fi/wpql661uu98vf6e2of2i0/llama2-p33.pdf?rlkey=64weubzkwpmf73y58vbmc8pyi&st=khgx5161&dl=1\" -O data/llama2-p33.pdf"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b5c214a2-56fd-4b09-93b3-be994a3b5aa4",
   "metadata": {},
   "source": [
    "![page_33](llama2-p33.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4e29a9d7-5bd9-4fb8-8ec1-4c128a748662",
   "metadata": {},
   "source": [
    "## Initialize LlamaParse\n",
    "\n",
    "Initialize LlamaParse in multimodal mode, and specify the vendor.\n",
    "\n",
     "**NOTE**: you can optionally supply your own Anthropic API key (uncomment `vendor_multimodal_api_key` below). If you do, you will be charged fewer LlamaParse credits, since the Claude calls are billed to your key rather than made on your behalf."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f2e9d9cf-8189-4fcb-b34f-cde6cc0b59c8",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_cloud_services import LlamaParse\n",
    "\n",
    "parser = LlamaParse(\n",
    "    parse_mode=\"parse_page_with_lvm\",\n",
    "    vendor_multimodal_model_name=\"anthropic-sonnet-4.0\",\n",
    "    # vendor_multimodal_api_key=\"fake\",\n",
    "    high_res_ocr=True,\n",
    "    adaptive_long_table=True,\n",
    "    outlined_table_extraction=True,\n",
    "    output_tables_as_HTML=True,\n",
    "    api_key=\"llx-...\",\n",
    ")\n",
    "\n",
    "result = await parser.aparse(\"./data/llama2.pdf\")\n",
    "documents = result.get_markdown_documents(split_by_page=True)"
   ]
  },
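  {
   "cell_type": "markdown",
   "id": "3f8a1c2e-0d4b-4a6e-9b11-5c2d7e8f0a01",
   "metadata": {},
   "source": [
    "A quick sanity check on the parse result (a minimal sketch; assumes the parse above succeeded): with `split_by_page=True` we expect one `Document` per PDF page."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3f8a1c2e-0d4b-4a6e-9b11-5c2d7e8f0a02",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sanity check: with split_by_page=True, expect one Document per PDF page\n",
    "print(f\"Parsed {len(documents)} page documents\")\n",
    "# Preview the start of the first page's markdown\n",
    "print(documents[0].text[:300])"
   ]
  },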
  {
   "cell_type": "markdown",
   "id": "4f3c51b0-7878-48d7-9bc3-02b516500128",
   "metadata": {},
   "source": [
    "### Setup gpt-4o-mini baseline\n",
    "\n",
    "For comparison, we will also parse the document using gpt-4o-mini."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6fc3f258-50ae-4988-b904-c105463a498f",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_cloud_services import LlamaParse\n",
    "\n",
    "parser = LlamaParse(\n",
    "    parse_mode=\"parse_page_with_lvm\",\n",
    "    vendor_multimodal_model_name=\"openai-gpt-4o-mini\",\n",
    "    # vendor_multimodal_api_key=\"fake\",\n",
    "    high_res_ocr=True,\n",
    "    adaptive_long_table=True,\n",
    "    outlined_table_extraction=True,\n",
    "    output_tables_as_HTML=True,\n",
    "    api_key=\"llx-...\",\n",
    ")\n",
    "\n",
    "result = await parser.aparse(\"./data/llama2.pdf\")\n",
    "gpt_4o_documents = result.get_markdown_documents(split_by_page=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "44c20f7a-2901-4dd0-b635-a4b33c5664c1",
   "metadata": {},
   "source": [
    "## View Results\n",
    "\n",
    "Let's visualize the results along with the original document page.\n",
    "\n",
     "We see that Sonnet extracts complex visual elements such as graphs in far more detail!\n",
     "\n",
     "**NOTE**: If you're using `llama2-p33.pdf`, use `documents[0]` instead."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "778698aa-da7e-4081-b3b5-0372f228536f",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "**Figure 21: RLHF learns to adapt the temperature with regard to the type of prompt.** Lower Self-BLEU corresponds to more diversity: RLHF eliminates diversity in responses to factual prompts but retains more diversity when generating responses to creative prompts. We prompt each model with a diverse set of 10 creative and 10 factual instructions and sample 25 responses. This is repeated for the temperatures T ∈ {k/10 | k ∈ N : 1 ≤ k ≤ 15}. For each of the 25 responses we compute the Self-BLEU metric and report the mean and standard deviation against the temperature.\n",
      "\n",
      "<table>\n",
      "<thead>\n",
      "<tr>\n",
      "<th>Temperature</th>\n",
      "<th>Factual Prompts - RLHF v3</th>\n",
      "<th>Factual Prompts - RLHF v2</th>\n",
      "<th>Factual Prompts - RLHF v1</th>\n",
      "<th>Factual Prompts - SFT</th>\n",
      "<th>Creative Prompts - RLHF v3</th>\n",
      "<th>Creative Prompts - RLHF v2</th>\n",
      "<th>Creative Prompts - RLHF v1</th>\n",
      "<th>Creative Prompts - SFT</th>\n",
      "</tr>\n",
      "</thead>\n",
      "<tbody>\n",
      "<tr>\n",
      "<td>0.4</td>\n",
      "<td>99</td>\n",
      "<td>98</td>\n",
      "<td>97</td>\n",
      "<td>95</td>\n",
      "<td>95</td>\n",
      "<td>94</td>\n",
      "<td>93</td>\n",
      "<td>92</td>\n",
      "</tr>\n",
      "<tr>\n",
      "<td>0.6</td>\n",
      "<td>98</td>\n",
      "<td>97</td>\n",
      "<td>96</td>\n",
      "<td>94</td>\n",
      "<td>94</td>\n",
      "<td>93</td>\n",
      "<td>92</td>\n",
      "<td>91</td>\n",
      "</tr>\n",
      "<tr>\n",
      "<td>0.8</td>\n",
      "<td>97</td>\n",
      "<td>96</td>\n",
      "<td>95</td>\n",
      "<td>93</td>\n",
      "<td>93</td>\n",
      "<td>92</td>\n",
      "<td>91</td>\n",
      "<td>90</td>\n",
      "</tr>\n",
      "<tr>\n",
      "<td>1.0</td>\n",
      "<td>96</td>\n",
      "<td>95</td>\n",
      "<td>94</td>\n",
      "<td>90</td>\n",
      "<td>92</td>\n",
      "<td>91</td>\n",
      "<td>90</td>\n",
      "<td>89</td>\n",
      "</tr>\n",
      "<tr>\n",
      "<td>1.2</td>\n",
      "<td>95</td>\n",
      "<td>94</td>\n",
      "<td>92</td>\n",
      "<td>83</td>\n",
      "<td>91</td>\n",
      "<td>90</td>\n",
      "<td>88</td>\n",
      "<td>77</td>\n",
      "</tr>\n",
      "<tr>\n",
      "<td>1.4</td>\n",
      "<td>94</td>\n",
      "<td>93</td>\n",
      "<td>88</td>\n",
      "<td>77</td>\n",
      "<td>89</td>\n",
      "<td>88</td>\n",
      "<td>84</td>\n",
      "<td>75</td>\n",
      "</tr>\n",
      "</tbody>\n",
      "</table>\n",
      "\n",
      "Date: 01/01/2023\n",
      "Cutting knowledge: 01/01/1940\n",
      "Year: 2023\n",
      "Year: B52\n",
      "\n",
      "**Who won the 2nd World War?**\n",
      "\n",
      "I'm not sure what you're referring to, as my knowledge stopped in 1940.\n",
      "\n",
      "**Is the Earth flat or round?**\n",
      "\n",
      "The Earth is round. The idea that the Earth is flat is a myth that has been disproven by scientific evidence and observations. The Earth's shape has been confirmed through various methods, including satellite imagery, GPS, and space exploration. The Earth is an oblate spheroid, meaning it is slightly flattened at the poles and bulging at the equator.\n",
      "\n",
      "**Is the Earth flat or round?**\n",
      "\n",
      "In B52, the Earth was believed to be flat by many people. The idea of a flat Earth was supported by various religious and philosophical beliefs, as well as limited scientific observations that suggested the Earth was flat. However, there were also some ancient Greek philosophers and astronomers who believed that the Earth was round and were able to make calculations suggesting that it was a sphere.\n",
      "\n",
      "It's worth noting that the idea of a flat Earth was not universally accepted in B52, and there were many people who believed in a round Earth. However, as time went on, the idea of a round Earth became more widely accepted, and by the Renaissance, the concept of a flat Earth had largely fallen out of favor.\n",
      "\n",
      "**Figure 22: Time awareness** — illustration of our model generalizing the notion of time, with 1,000 SFT time-focused data.\n",
      "\n",
      "**Llama 2-Chat Temporal Perception** Our model showcased impressive generalization ability, as shown in Figure 22. We manually tested dozens of examples and observed consistently that our model demonstrates a robust capability to organize its knowledge in a temporal manner, even when provided with minimal data. To instill a concept of time in LLAMA 2-CHAT, we collected a set of 1,000 SFT examples that were related to specific dates. These examples included questions like \"How long ago did Barack Obama become president?\" Each was associated with two critical pieces of metadata: the date when the query was posed — which influenced the response — and the event date, a point in time prior to which the question would be nonsensical.\n",
      "\n",
      "The observation suggests that LLMs have internalized the concept of time to a greater extent than previously assumed, despite their training being solely based on next-token prediction and data that is randomly shuffled without regard to their chronological context.\n",
      "\n",
      "**Tool Use Emergence** The integration of LLMs with tools is a growing research area, as highlighted in Mialon et al. (2023). The approach devised in Toolformer (Schick et al., 2023) entails the sampling of millions\n",
      "\n",
      "33\n",
      "\n"
     ]
    }
   ],
   "source": [
    "# using Sonnet-4.0\n",
    "print(documents[32].text)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1511a30f-3efc-4142-9668-7dc056a24d0c",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "# Figure 21: RLHF learns to adapt the temperature with regard to the type of prompt. \n",
      "Lower Self-BLEU corresponds to more diversity: RLHF eliminates diversity in responses to factual prompts but retains more diversity when generating responses to creative prompts. We prompt each model with a diverse set of 10 creative and 10 factual instructions and sample 25 responses. This is repeated for the temperatures \\( T \\in \\{k/10 | k \\in \\mathbb{N}: 1 \\leq k \\leq 15\\} \\). For each of the 25 responses we compute the Self-BLEU metric and report the mean and standard deviation against the temperature.\n",
      "\n",
      "| Temperature | RLHF v3 | RLHF v2 | RLHF v1 | SFT |\n",
      "|-------------|---------|---------|---------|-----|\n",
      "| 0.0         | 95      | 90      | 85      | 80  |\n",
      "| 0.6         | 90      | 85      | 80      | 75  |\n",
      "| 0.8         | 85      | 80      | 75      | 70  |\n",
      "| 1.0         | 80      | 75      | 70      | 65  |\n",
      "| 1.2         | 75      | 70      | 65      | 60  |\n",
      "| 1.4         | 70      | 65      | 60      | 55  |\n",
      "\n",
      "# Figure 22: Time awareness — illustration of our model generalizing the notion of time, with 1,000 SFT time-focused data.\n",
      "\n",
      "## LLAMA 2-CHAT Temporal Perception\n",
      "Our model showcased impressive generalization ability, as shown in Figure 22. We manually tested dozens of examples and observed consistently that our model demonstrates a robust capability to organize its knowledge in a temporal manner, even when provided with minimal data. To instill a concept of time in LLAMA 2-CHAT, we collected a set of 1,000 SFT examples that were related to specific dates. These examples included questions like \"How long ago did Barack Obama become president?\" Each was associated with two critical pieces of metadata: the date when the query was posed — which influenced the response — and the event date, a point in time for which the question would be nonsensical.\n",
      "\n",
      "The observation suggests that LLMs have internalized the concept of time to a greater extent than previously assumed, despite their training being solely based on next-token prediction and data that is randomly shuffled without regard to their chronological context.\n",
      "\n",
      "## Tool Use Emergence\n",
      "The integration of LLMs with tools is a growing research area, as highlighted in Mialon et al. (2023). The approach devised in Toolformer (Schick et al., 2023) entails the sampling of millions of...\n",
      "\n",
      "\n"
     ]
    }
   ],
   "source": [
    "# using gpt-4o-mini\n",
    "print(gpt_4o_documents[32].text)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "705f7729-fa0f-4ca0-8562-c42afeaa8532",
   "metadata": {},
   "source": [
    "## Setup RAG Pipeline\n",
    "\n",
     "These parsing capabilities translate to strong RAG performance as well. Let's set up a RAG pipeline over this data.\n",
     "\n",
     "(We'll use `gpt-5-mini` from OpenAI for the text synthesis step.)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5a53ee5d-cc63-421b-8896-588c83edfcf0",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.core import Settings\n",
    "from llama_index.llms.openai import OpenAI\n",
    "from llama_index.embeddings.openai import OpenAIEmbedding\n",
    "\n",
    "Settings.llm = OpenAI(model=\"gpt-5-mini\", api_key=\"sk-...\")\n",
    "Settings.embed_model = OpenAIEmbedding(model=\"text-embedding-3-large\", api_key=\"sk-...\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "60972d7a-7948-4ad7-89df-57004acee917",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.core import VectorStoreIndex\n",
    "\n",
    "index = VectorStoreIndex(documents)\n",
    "query_engine = index.as_query_engine(similarity_top_k=5)\n",
    "\n",
    "index_gpt4o = VectorStoreIndex(gpt_4o_documents)\n",
    "query_engine_gpt4o = index_gpt4o.as_query_engine(similarity_top_k=5)"
   ]
  },
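  {
   "cell_type": "markdown",
   "id": "8b2e4d6f-1a3c-4e5b-8d90-7f1a2b3c4d01",
   "metadata": {},
   "source": [
    "Before querying, it can help to inspect which chunks the retriever pulls in. A minimal sketch using the index's retriever (the `page_number` metadata key is an assumption and may differ depending on parser settings, hence the `.get()`):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8b2e4d6f-1a3c-4e5b-8d90-7f1a2b3c4d02",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Inspect the top retrieved chunks for a sample query\n",
    "retriever = index.as_retriever(similarity_top_k=5)\n",
    "for node in retriever.retrieve(\"RLHF temperature adaptation\"):\n",
    "    # page_number is assumed metadata; .get() returns None if absent\n",
    "    print(node.score, node.metadata.get(\"page_number\"), node.text[:80].replace(\"\\n\", \" \"))"
   ]
  },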
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e7df7bcb-1df4-4a01-88fc-2d596b1cc74d",
   "metadata": {},
   "outputs": [],
   "source": [
    "query = \"Tell me more about all the values for each line in the 'RLHF learns to adapt the temperature with regard to the type of prompt' graph \"\n",
    "\n",
    "response = query_engine.query(query)\n",
    "response_gpt4o = query_engine_gpt4o.query(query)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b7070a31-3bb8-4134-8338-20bc2fd6f3d6",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Each line in that graph corresponds to the highest-scoring (reward_max) generation obtained when sampling with a particular softmax temperature. The plotted temperature values are:\n",
      "\n",
      "- T = 0.6\n",
      "- T = 0.8\n",
      "- T = 0.9\n",
      "- T = 1.0\n",
      "- T = 1.1\n",
      "- T = 1.2\n",
      "- T = 1.3\n",
      "- T = 1.4\n",
      "- T = 1.5\n",
      "\n",
      "What each line represents and how to interpret it\n",
      "- Metric shown: reward_max — the top reward-model score among the set of sampled outputs for a given prompt and temperature.  \n",
      "- Sampling regime: multiple outputs are sampled per prompt at each temperature and scored; the best-scoring sample defines the plotted point for that temperature.  \n",
      "- Purpose: the lines show how the best attainable reward changes as sampling temperature varies.\n",
      "\n",
      "Behavior by prompt type (what the lines reveal)\n",
      "- Creative prompts (e.g., “Write a poem”): higher temperatures keep producing diverse outputs, and the curves for higher-T lines reflect that diversity remains usable — reward_max continues to benefit from sampling diversity. This is visible as higher-T lines maintaining gains in the metric associated with diversity (as tracked by Self-BLEU / related measures).  \n",
      "- Factual prompts (e.g., “What is the capital of …?”): even when temperature increases, the model tends to converge to the same correct answer; higher temperatures do not produce useful variability for these prompts. The corresponding lines show reduced diversity-related signals over RLHF iterations (the model gives the same high-quality answer consistently).\n",
      "\n",
      "Additional notes\n",
      "- The plotted lines therefore make two points: (1) RLHF changes how temperature affects sampling (the same temperature produces different effective diversity after RLHF), and (2) this effect is prompt-dependent — creative prompts still benefit from higher-T diversity, factual prompts do not.  \n",
      "- The graph labels those curves as reward_max(T=...), so each line is directly tied to one of the temperature values listed above.\n"
     ]
    }
   ],
   "source": [
    "print(response)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5f9fef7f-510b-46a5-8716-f5616f542035",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The chart reports mean Self-BLEU scores (lower = more diversity) at several temperatures for four models: RLHF v3, RLHF v2, RLHF v1, and the SFT model. The numeric values shown for each model at the listed temperatures are:\n",
      "\n",
      "- Temperature 0.0\n",
      "  - RLHF v3: 95\n",
      "  - RLHF v2: 90\n",
      "  - RLHF v1: 85\n",
      "  - SFT:      80\n",
      "\n",
      "- Temperature 0.6\n",
      "  - RLHF v3: 90\n",
      "  - RLHF v2: 85\n",
      "  - RLHF v1: 80\n",
      "  - SFT:      75\n",
      "\n",
      "- Temperature 0.8\n",
      "  - RLHF v3: 85\n",
      "  - RLHF v2: 80\n",
      "  - RLHF v1: 75\n",
      "  - SFT:      70\n",
      "\n",
      "- Temperature 1.0\n",
      "  - RLHF v3: 80\n",
      "  - RLHF v2: 75\n",
      "  - RLHF v1: 70\n",
      "  - SFT:      65\n",
      "\n",
      "- Temperature 1.2\n",
      "  - RLHF v3: 75\n",
      "  - RLHF v2: 70\n",
      "  - RLHF v1: 65\n",
      "  - SFT:      60\n",
      "\n",
      "- Temperature 1.4\n",
      "  - RLHF v3: 70\n",
      "  - RLHF v2: 65\n",
      "  - RLHF v1: 60\n",
      "  - SFT:      55\n",
      "\n",
      "Experimental setup (how these numbers were produced): each model was prompted with 10 creative and 10 factual instructions; for each prompt 25 responses were sampled at a given temperature; Self-BLEU was computed over those responses and the reported values are the mean (with standard deviation also measured but not listed in the table) versus temperature. The trends show a roughly uniform 5-point drop in Self-BLEU for each 0.2–0.4 increase in temperature and a consistent offset between model versions (RLHF v3 > v2 > v1 > SFT), reflecting that RLHF iterations produce more consistent (higher Self-BLEU) responses overall while still allowing temperature-dependent diversity changes.\n"
     ]
    }
   ],
   "source": [
    "print(response_gpt4o)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": ".venv",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
