{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "DOAPVFAaE3Kq"
      },
      "source": [
        "# AI-Powered Transaction Compliance Monitoring System with Document Ingestion"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "zNE-HnWvTmRE"
      },
      "source": [
        "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/mongodb-developer/GenAI-Showcase/blob/main/partners/ada/transaction_compliance_monitoring_system_with_document_ingestion.ipynb)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "L9CQoOyFTmRE"
      },
      "source": [
        "## Use Case Overview\n",
        "\n",
        "In today's global financial ecosystem, institutions face the daunting challenge of ensuring every cross-border transaction complies with an increasingly complex web of international regulations. Manual compliance checks create bottlenecks, increase operational costs, and leave organizations vulnerable to costly violations and reputational damage.\n",
        "\n",
        "This notebook showcases the foundation of a compliance monitoring system that leverages MongoDB Atlas Vector Search, Voyage AI embedding models, and the ShieldGemma 9B LLM to automate regulatory checks on financial transactions. The implementation demonstrates how to build a scalable transaction compliance checker from the following components:\n",
        "\n",
        "### Core Components:\n",
        "1. **Document Ingestion Pipeline**\n",
        "   * PDF, DOC, DOCX, and structured text document processing\n",
        "   * Automated metadata tagging based on document content\n",
        "2. **Data Layer: MongoDB Atlas (Operational and Vector Database)**\n",
        "   * Storage for transaction data and regulatory policies with vector embeddings\n",
        "   * Vector search index for semantic matching between transactions and applicable regulations\n",
        "   * Checkpoint storage for LangGraph state management\n",
        "   * Schema validation using Pydantic models\n",
        "3. **NLP Processing Pipeline**\n",
        "   * Text embedding generation via Voyage AI\n",
        "   * Recursive character-based chunking with overlap\n",
        "4. **Compliance Assessment Engine**\n",
        "   * ShieldGemma 9B model for transaction compliance evaluation against policies\n",
        "   * Confidence scoring system for violation probability using softmax normalization\n",
        "   * Threshold-based classification (Violation, Reporting Required, Compliant)\n",
        "5. **Agent Orchestration Framework**\n",
        "   * LangGraph-based workflow for agent coordination and state management\n",
        "   * Tool-calling pattern for modular assessment capabilities\n",
        "   * Asynchronous processing with MongoDB checkpointing"
      ]
    },
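    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Before building the assessment engine, it helps to see the scoring idea in isolation: the model's yes/no logits are turned into a violation probability via softmax, and that probability is mapped onto the three statuses. A minimal standalone sketch (the threshold values here are illustrative placeholders, not the ones used by the engine):\n",
        "\n",
        "```python\n",
        "import math\n",
        "\n",
        "\n",
        "def softmax(logits):\n",
        "    # Numerically stable softmax over a list of logits\n",
        "    exps = [math.exp(x - max(logits)) for x in logits]\n",
        "    total = sum(exps)\n",
        "    return [e / total for e in exps]\n",
        "\n",
        "\n",
        "def classify(violation_logit, no_violation_logit, violation_threshold=0.7, reporting_threshold=0.4):\n",
        "    # Probability that the transaction violates policy\n",
        "    p_violation = softmax([violation_logit, no_violation_logit])[0]\n",
        "    if p_violation >= violation_threshold:\n",
        "        return \"Violation\", p_violation\n",
        "    if p_violation >= reporting_threshold:\n",
        "        return \"Reporting Required\", p_violation\n",
        "    return \"Compliant\", p_violation\n",
        "\n",
        "\n",
        "status, score = classify(2.0, -1.0)\n",
        "```"
      ]
    },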
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "eGYCoT_mFDQU"
      },
      "outputs": [],
      "source": [
        "%pip install --quiet datasets pymongo langchain-mongodb langgraph-checkpoint-mongodb langchain-core langchain-text-splitters langchain-huggingface langgraph pypdf python-docx unstructured pydantic voyageai transformers torch accelerate"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 2,
      "metadata": {
        "id": "G23CzSyYFMrN"
      },
      "outputs": [],
      "source": [
        "import getpass\n",
        "import os\n",
        "\n",
        "\n",
        "# Function to securely get and set environment variables\n",
        "def set_env_securely(var_name, prompt):\n",
        "    value = getpass.getpass(prompt)\n",
        "    os.environ[var_name] = value"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "02KB8N_hTmRF"
      },
      "source": [
        "## Setup Environment Variables\n",
        "\n",
        "First, we need to set up our environment variables for connecting to MongoDB Atlas and various AI services. You'll need to provide your own API keys and connection strings."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 3,
      "metadata": {
        "id": "gD5MievyTmRF"
      },
      "outputs": [],
      "source": [
        "import os\n",
        "from datetime import datetime\n",
        "from typing import Any, Dict, List, Optional, Union\n",
        "\n",
        "# Set your MongoDB Atlas connection string\n",
        "set_env_securely(\"MONGODB_URI\", \"Enter your MongoDB Atlas connection string: \")\n",
        "\n",
        "# Set your Voyage AI API key for embeddings\n",
        "set_env_securely(\"VOYAGE_API_KEY\", \"Enter your Voyage AI API key: \")\n",
        "\n",
        "# Set your Hugging Face API key for the ShieldGemma 9B model\n",
        "# shieldgemma-9b is a gated model, so you will need to accept the terms of service\n",
        "# You can get a free API key from https://huggingface.co/settings/tokens\n",
        "# - Make sure to enable \"Read access to contents of all public gated repos you can access\"\n",
        "set_env_securely(\"HUGGINGFACE_API_KEY\", \"Enter your Hugging Face API key: \")\n",
        "\n",
        "# Database configuration\n",
        "DB_NAME = \"compliance_monitoring_dev\"\n",
        "TRANSACTIONS_COLLECTION = \"transactions\"\n",
        "REGULATIONS_COLLECTION = \"regulations\"\n",
        "VECTOR_INDEX_NAME = \"vector_index\"\n",
        "CHECKPOINTS_COLLECTION = \"checkpoints\"\n",
        "CHECKPOINT_WRITES_COLLECTION = \"checkpoint_writes\""
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "_IQRstA_TmRF"
      },
      "source": [
        "## MongoDB Atlas Connection\n",
        "\n",
        "Let's establish a connection to MongoDB Atlas and set up our collections."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 4,
      "metadata": {
        "id": "2XTDRfIBTmRF"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Successfully connected to MongoDB!\n",
            "Created transactions collection with schema validation\n",
            "Created regulations collection\n",
            "Created checkpoints collection\n",
            "Created checkpoint_writes collection\n",
            "New search index named 'vector_index' is building.\n",
            "Polling to check if the index is ready. This may take up to a minute.\n",
            "vector_index is ready for querying.\n"
          ]
        }
      ],
      "source": [
        "import time\n",
        "\n",
        "from pymongo import MongoClient\n",
        "from pymongo.operations import SearchIndexModel\n",
        "from pymongo.server_api import ServerApi\n",
        "\n",
        "# Create a new client and connect to the server\n",
        "client = MongoClient(os.environ[\"MONGODB_URI\"], server_api=ServerApi(\"1\"))\n",
        "\n",
        "# Send a ping to confirm a successful connection\n",
        "try:\n",
        "    client.admin.command(\"ping\")\n",
        "    print(\"Successfully connected to MongoDB!\")\n",
        "except Exception as e:\n",
        "    print(f\"Failed to connect to MongoDB: {e}\")\n",
        "\n",
        "# Access the database and collections\n",
        "db = client[DB_NAME]\n",
        "transactions_collection = db[TRANSACTIONS_COLLECTION]\n",
        "regulations_collection = db[REGULATIONS_COLLECTION]\n",
        "checkpoints_collection = db[CHECKPOINTS_COLLECTION]\n",
        "checkpoint_writes_collection = db[CHECKPOINT_WRITES_COLLECTION]\n",
        "\n",
        "\n",
        "# Create collections with validation if they don't exist\n",
        "def create_collections():\n",
        "    # Get list of existing collections\n",
        "    existing_collections = db.list_collection_names()\n",
        "\n",
        "    # Create transactions collection with schema validation if it doesn't exist\n",
        "    if TRANSACTIONS_COLLECTION not in existing_collections:\n",
        "        db.create_collection(\n",
        "            TRANSACTIONS_COLLECTION,\n",
        "            validator={\n",
        "                \"$jsonSchema\": {\n",
        "                    \"bsonType\": \"object\",\n",
        "                    \"required\": [\n",
        "                        \"transaction_id\",\n",
        "                        \"amount\",\n",
        "                        \"currency\",\n",
        "                        \"sender\",\n",
        "                        \"receiver\",\n",
        "                        \"transaction_date\",\n",
        "                    ],\n",
        "                    \"properties\": {\n",
        "                        \"transaction_id\": {\"bsonType\": \"string\"},\n",
        "                        \"amount\": {\"bsonType\": \"double\", \"minimum\": 0},\n",
        "                        \"currency\": {\"bsonType\": \"string\"},\n",
        "                        \"sender\": {\"bsonType\": \"object\"},\n",
        "                        \"receiver\": {\"bsonType\": \"object\"},\n",
        "                        \"compliance_status\": {\"bsonType\": \"string\"},\n",
        "                    },\n",
        "                }\n",
        "            },\n",
        "            validationLevel=\"moderate\",\n",
        "        )\n",
        "        print(f\"Created {TRANSACTIONS_COLLECTION} collection with schema validation\")\n",
        "\n",
        "    # Create regulations collection if it doesn't exist\n",
        "    if REGULATIONS_COLLECTION not in existing_collections:\n",
        "        db.create_collection(REGULATIONS_COLLECTION)\n",
        "        print(f\"Created {REGULATIONS_COLLECTION} collection\")\n",
        "\n",
        "    # Create checkpoints collection if it doesn't exist\n",
        "    if CHECKPOINTS_COLLECTION not in existing_collections:\n",
        "        db.create_collection(CHECKPOINTS_COLLECTION)\n",
        "        print(f\"Created {CHECKPOINTS_COLLECTION} collection\")\n",
        "\n",
        "    # Create checkpoint_writes collection if it doesn't exist\n",
        "    if CHECKPOINT_WRITES_COLLECTION not in existing_collections:\n",
        "        db.create_collection(CHECKPOINT_WRITES_COLLECTION)\n",
        "        print(f\"Created {CHECKPOINT_WRITES_COLLECTION} collection\")\n",
        "\n",
        "\n",
        "# Call function to create collections\n",
        "create_collections()\n",
        "\n",
        "\n",
        "# Create vector search index if it doesn't exist\n",
        "def create_vector_search_index():\n",
        "    # Check if index already exists\n",
        "    try:\n",
        "        existing_indexes = regulations_collection.list_search_indexes()\n",
        "        for index in existing_indexes:\n",
        "            if index[\"name\"] == VECTOR_INDEX_NAME:\n",
        "                print(f\"Vector search index '{VECTOR_INDEX_NAME}' already exists.\")\n",
        "                return\n",
        "    except Exception as e:\n",
        "        print(f\"Could not list search indexes: {e}\")\n",
        "        return\n",
        "\n",
        "    # Create vector search index\n",
        "    search_index_model = SearchIndexModel(\n",
        "        definition={\n",
        "            \"fields\": [\n",
        "                {\n",
        "                    \"type\": \"vector\",\n",
        "                    \"path\": \"embedding\",\n",
        "                    \"numDimensions\": 1024,\n",
        "                    \"similarity\": \"cosine\",\n",
        "                }\n",
        "            ]\n",
        "        },\n",
        "        name=VECTOR_INDEX_NAME,\n",
        "        type=\"vectorSearch\",\n",
        "    )\n",
        "\n",
        "    try:\n",
        "        result = regulations_collection.create_search_index(model=search_index_model)\n",
        "        print(f\"New search index named '{result}' is building.\")\n",
        "    except Exception as e:\n",
        "        print(f\"Error creating vector search index: {e}\")\n",
        "        return\n",
        "\n",
        "    # Wait for initial sync to complete\n",
        "    print(\"Polling to check if the index is ready. This may take up to a minute.\")\n",
        "    predicate = lambda index: index.get(\"queryable\") is True\n",
        "\n",
        "    while True:\n",
        "        try:\n",
        "            indices = list(regulations_collection.list_search_indexes(result))\n",
        "            if indices and predicate(indices[0]):\n",
        "                break\n",
        "            time.sleep(5)\n",
        "        except Exception as e:\n",
        "            print(f\"Error checking index readiness: {e}\")\n",
        "            time.sleep(5)\n",
        "\n",
        "    print(f\"{result} is ready for querying.\")\n",
        "\n",
        "\n",
        "# Call the function to create the vector search index\n",
        "create_vector_search_index()"
      ]
    },
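    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "For reference, once the index is queryable it is searched with a `$vectorSearch` aggregation stage. A minimal sketch of the pipeline shape used against the regulations collection (the `numCandidates` value and the zero-vector query below are placeholders for illustration; a real query embeds the transaction text first):\n",
        "\n",
        "```python\n",
        "def build_vector_search_pipeline(query_embedding, index_name=\"vector_index\", limit=3):\n",
        "    \"\"\"Build a $vectorSearch pipeline returning the best-matching regulations.\"\"\"\n",
        "    return [\n",
        "        {\n",
        "            \"$vectorSearch\": {\n",
        "                \"index\": index_name,\n",
        "                \"path\": \"embedding\",\n",
        "                \"queryVector\": query_embedding,\n",
        "                \"numCandidates\": 100,  # candidates scored before the final ranking\n",
        "                \"limit\": limit,\n",
        "            }\n",
        "        },\n",
        "        {\"$project\": {\"title\": 1, \"jurisdiction\": 1, \"score\": {\"$meta\": \"vectorSearchScore\"}}},\n",
        "    ]\n",
        "\n",
        "\n",
        "# Usage against the collection above (requires a live Atlas connection):\n",
        "# results = list(regulations_collection.aggregate(build_vector_search_pipeline(embedding)))\n",
        "pipeline = build_vector_search_pipeline([0.0] * 1024)\n",
        "```"
      ]
    },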
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "mbfMVEdaTmRF"
      },
      "source": [
        "## Document Ingestion Pipeline\n",
        "\n",
        "Now we'll create a document ingestion pipeline that can process various document formats (PDF, DOC, DOCX, and text) and extract their content for further processing."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 5,
      "metadata": {
        "id": "VdiqFYyrTmRF"
      },
      "outputs": [],
      "source": [
        "import io\n",
        "import re\n",
        "\n",
        "from docx import Document\n",
        "from pydantic import BaseModel, Field\n",
        "from pypdf import PdfReader\n",
        "\n",
        "\n",
        "class RegulationDocument(BaseModel):\n",
        "    \"\"\"Schema for regulatory documents\"\"\"\n",
        "\n",
        "    id: Optional[str] = None\n",
        "    title: str\n",
        "    content: str\n",
        "    source: str\n",
        "    document_type: str\n",
        "    jurisdiction: str\n",
        "    publication_date: str\n",
        "    tags: List[str] = Field(default_factory=list)\n",
        "    embedding: Optional[List[float]] = None\n",
        "    chunks: Optional[List[Dict[str, Any]]] = None\n",
        "\n",
        "    def to_dict(self):\n",
        "        return self.model_dump(exclude_none=True)\n",
        "\n",
        "\n",
        "class DocumentProcessor:\n",
        "    \"\"\"Processes different document formats and extracts text\"\"\"\n",
        "\n",
        "    @staticmethod\n",
        "    def extract_text_from_pdf(file_path_or_bytes):\n",
        "        \"\"\"Extract text from PDF files\"\"\"\n",
        "        if isinstance(file_path_or_bytes, str):\n",
        "            # It's a file path\n",
        "            reader = PdfReader(file_path_or_bytes)\n",
        "        else:\n",
        "            # It's bytes\n",
        "            reader = PdfReader(io.BytesIO(file_path_or_bytes))\n",
        "\n",
        "        text = \"\"\n",
        "        for page in reader.pages:\n",
        "            # extract_text() can return None for pages without extractable text\n",
        "            text += (page.extract_text() or \"\") + \"\\n\"\n",
        "        return text\n",
        "\n",
        "    @staticmethod\n",
        "    def extract_text_from_docx(file_path_or_bytes):\n",
        "        \"\"\"Extract text from DOCX files\"\"\"\n",
        "        if isinstance(file_path_or_bytes, str):\n",
        "            # It's a file path\n",
        "            doc = Document(file_path_or_bytes)\n",
        "        else:\n",
        "            # It's bytes\n",
        "            doc = Document(io.BytesIO(file_path_or_bytes))\n",
        "\n",
        "        text = \"\"\n",
        "        for para in doc.paragraphs:\n",
        "            text += para.text + \"\\n\"\n",
        "        return text\n",
        "\n",
        "    @staticmethod\n",
        "    def extract_text_from_txt(file_path_or_bytes):\n",
        "        \"\"\"Extract text from TXT files\"\"\"\n",
        "        if isinstance(file_path_or_bytes, str):\n",
        "            # It's a file path\n",
        "            with open(file_path_or_bytes, encoding=\"utf-8\") as f:\n",
        "                return f.read()\n",
        "        else:\n",
        "            # It's bytes\n",
        "            return file_path_or_bytes.decode(\"utf-8\")\n",
        "\n",
        "    @staticmethod\n",
        "    def process_document(file_path, metadata=None):\n",
        "        \"\"\"Process a document and extract its text based on file extension\"\"\"\n",
        "        if metadata is None:\n",
        "            metadata = {}\n",
        "\n",
        "        file_extension = file_path.split(\".\")[-1].lower()\n",
        "\n",
        "        if file_extension == \"pdf\":\n",
        "            text = DocumentProcessor.extract_text_from_pdf(file_path)\n",
        "            doc_type = \"pdf\"\n",
        "        elif file_extension == \"docx\":\n",
        "            text = DocumentProcessor.extract_text_from_docx(file_path)\n",
        "            doc_type = \"docx\"\n",
        "        elif file_extension == \"txt\":\n",
        "            text = DocumentProcessor.extract_text_from_txt(file_path)\n",
        "            doc_type = \"txt\"\n",
        "        else:\n",
        "            raise ValueError(f\"Unsupported file format: {file_extension}\")\n",
        "\n",
        "        # Extract title from filename if not provided\n",
        "        if \"title\" not in metadata:\n",
        "            title = os.path.basename(file_path).rsplit(\".\", 1)[0]\n",
        "            metadata[\"title\"] = title\n",
        "\n",
        "        # Set document type if not provided\n",
        "        if \"document_type\" not in metadata:\n",
        "            metadata[\"document_type\"] = doc_type\n",
        "\n",
        "        # Create regulation document\n",
        "        regulation = RegulationDocument(\n",
        "            title=metadata.get(\"title\", \"\"),\n",
        "            content=text,\n",
        "            source=metadata.get(\"source\", file_path),\n",
        "            document_type=metadata.get(\"document_type\", doc_type),\n",
        "            jurisdiction=metadata.get(\"jurisdiction\", \"Unknown\"),\n",
        "            publication_date=metadata.get(\n",
        "                \"publication_date\", datetime.now().strftime(\"%Y-%m-%d\")\n",
        "            ),\n",
        "            tags=metadata.get(\"tags\", []),\n",
        "        )\n",
        "\n",
        "        return regulation\n",
        "\n",
        "    @staticmethod\n",
        "    def extract_metadata_from_content(content):\n",
        "        \"\"\"Extract metadata from document content using regex patterns\"\"\"\n",
        "        metadata = {}\n",
        "\n",
        "        # Extract jurisdiction (capture stops at the end of the line)\n",
        "        jurisdiction_pattern = r\"(?i)jurisdiction[: \\t]+(\\w+(?:[ \\t]+\\w+)*)\"\n",
        "        jurisdiction_match = re.search(jurisdiction_pattern, content)\n",
        "        if jurisdiction_match:\n",
        "            metadata[\"jurisdiction\"] = jurisdiction_match.group(1).strip()\n",
        "\n",
        "        # Extract date\n",
        "        date_pattern = r\"(?i)(?:date|published)[:\\s]+(\\d{1,2}[/-]\\d{1,2}[/-]\\d{2,4}|\\d{4}[/-]\\d{1,2}[/-]\\d{1,2})\"\n",
        "        date_match = re.search(date_pattern, content)\n",
        "        if date_match:\n",
        "            metadata[\"publication_date\"] = date_match.group(1).strip()\n",
        "\n",
        "        # Extract tags (capture stops at the end of the line)\n",
        "        tags_pattern = r\"(?i)(?:keywords|tags)[: \\t]+([^\\n]+)\"\n",
        "        tags_match = re.search(tags_pattern, content)\n",
        "        if tags_match:\n",
        "            tags = [tag.strip() for tag in tags_match.group(1).split(\",\")]\n",
        "            metadata[\"tags\"] = tags\n",
        "\n",
        "        return metadata"
      ]
    },
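    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A quick standalone check of the metadata-extraction idea above, with the patterns inlined for illustration (written here to anchor each match to a single line of the document header):\n",
        "\n",
        "```python\n",
        "import re\n",
        "\n",
        "header = \"\"\"SANCTIONS COMPLIANCE FRAMEWORK\n",
        "Jurisdiction: United States\n",
        "Keywords: sanctions, OFAC, compliance\n",
        "\"\"\"\n",
        "\n",
        "jur = re.search(r\"(?i)jurisdiction[: \\t]+(\\w+(?:[ \\t]+\\w+)*)\", header)\n",
        "tags = re.search(r\"(?i)(?:keywords|tags)[: \\t]+([^\\n]+)\", header)\n",
        "\n",
        "jurisdiction = jur.group(1).strip() if jur else None\n",
        "tag_list = [t.strip() for t in tags.group(1).split(\",\")] if tags else []\n",
        "```"
      ]
    },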
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "SeNyzV2KTmRG"
      },
      "source": [
        "## Text Chunking and Embedding Generation\n",
        "\n",
        "Now we'll implement text chunking strategies and generate embeddings using Voyage AI."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Ffh6JV1wTmRG"
      },
      "outputs": [],
      "source": [
        "import voyageai\n",
        "from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
        "\n",
        "\n",
        "class TextProcessor:\n",
        "    \"\"\"Handles text chunking and embedding generation\"\"\"\n",
        "\n",
        "    # Track the last time we called the Voyage API so we don't hit the rate limit\n",
        "    last_voyage_call = 0\n",
        "    _instance = None\n",
        "\n",
        "    def __new__(cls, *args, **kwargs):\n",
        "        if cls._instance is None:\n",
        "            cls._instance = super().__new__(cls)\n",
        "            cls._instance._initialized = False\n",
        "        return cls._instance\n",
        "\n",
        "    def __init__(self, chunk_size=1000, chunk_overlap=200):\n",
        "        if not hasattr(self, \"_initialized\") or not self._initialized:\n",
        "            self.chunk_size = chunk_size\n",
        "            self.chunk_overlap = chunk_overlap\n",
        "            self.text_splitter = RecursiveCharacterTextSplitter(\n",
        "                chunk_size=chunk_size,\n",
        "                chunk_overlap=chunk_overlap,\n",
        "                length_function=len,\n",
        "                separators=[\"\\n\\n\", \"\\n\", \".\", \"!\", \"?\", \",\", \" \", \"\"],\n",
        "            )\n",
        "            self.voyage_client = voyageai.Client(api_key=os.environ[\"VOYAGE_API_KEY\"])\n",
        "            self.model_name = \"voyage-3\"\n",
        "            self._initialized = True\n",
        "\n",
        "    def chunk_text(self, text):\n",
        "        \"\"\"Split text into chunks\"\"\"\n",
        "        return self.text_splitter.split_text(text)\n",
        "\n",
        "    def generate_embeddings(self, texts):\n",
        "        \"\"\"Generate embeddings for a list of texts using Voyage AI\"\"\"\n",
        "        if not texts:\n",
        "            return []\n",
        "\n",
        "        # Check time since last API call to respect rate limits (3 RPM)\n",
        "        current_time = time.time()\n",
        "        time_since_last_call = current_time - self.last_voyage_call\n",
        "\n",
        "        # If less than 20 seconds have passed since last call, wait\n",
        "        if time_since_last_call < 20:\n",
        "            wait_time = 20 - time_since_last_call\n",
        "            print(\n",
        "                f\"Rate limiting: waiting {wait_time:.2f} seconds before next API call\"\n",
        "            )\n",
        "            time.sleep(wait_time)\n",
        "\n",
        "        # Make the API call\n",
        "        embeddings = self.voyage_client.embed(texts, model=self.model_name).embeddings\n",
        "\n",
        "        # Update the last call timestamp\n",
        "        self.last_voyage_call = time.time()\n",
        "\n",
        "        return embeddings\n",
        "\n",
        "    def process_document(self, regulation_doc):\n",
        "        \"\"\"Process a regulation document: chunk text and generate embeddings\"\"\"\n",
        "        # Chunk the document content\n",
        "        chunks = self.chunk_text(regulation_doc.content)\n",
        "\n",
        "        # Generate embeddings for each chunk\n",
        "        chunk_embeddings = self.generate_embeddings(chunks)\n",
        "\n",
        "        # Create chunk objects with embeddings\n",
        "        processed_chunks = []\n",
        "        for i, (chunk, embedding) in enumerate(zip(chunks, chunk_embeddings)):\n",
        "            processed_chunks.append(\n",
        "                {\n",
        "                    \"chunk_id\": f\"{regulation_doc.id or 'doc'}_{i}\",\n",
        "                    \"content\": chunk,\n",
        "                    \"embedding\": embedding,\n",
        "                }\n",
        "            )\n",
        "\n",
        "        # Generate embedding for the entire document (using title + first chunk)\n",
        "        doc_text = f\"{regulation_doc.title}\\n{chunks[0] if chunks else ''}\"\n",
        "        doc_embedding = self.generate_embeddings([doc_text])[0]\n",
        "\n",
        "        # Update the regulation document\n",
        "        regulation_doc.embedding = doc_embedding\n",
        "        regulation_doc.chunks = processed_chunks\n",
        "\n",
        "        return regulation_doc\n",
        "\n",
        "    def store_regulation(self, regulation_doc):\n",
        "        \"\"\"Store a processed regulation document in MongoDB\"\"\"\n",
        "        # Convert to dictionary for MongoDB storage\n",
        "        regulation_dict = regulation_doc.to_dict()\n",
        "\n",
        "        # Insert into MongoDB\n",
        "        result = regulations_collection.insert_one(regulation_dict)\n",
        "        print(f\"Stored regulation document with ID: {result.inserted_id}\")\n",
        "\n",
        "        return result.inserted_id"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "03upL_GTTmRG"
      },
      "source": [
        "## Sample Regulatory Documents\n",
        "\n",
        "Let's create some sample regulatory documents to demonstrate the system."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 7,
      "metadata": {
        "collapsed": true,
        "id": "J0f4rsoiTmRG"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Rate limiting: waiting 20.00 seconds before next API call\n",
            "Stored regulation document with ID: 680a57118cf2059167f7f13a\n",
            "Processed and stored regulation: Anti-Money Laundering Directive\n",
            "Rate limiting: waiting 19.84 seconds before next API call\n",
            "Rate limiting: waiting 20.00 seconds before next API call\n",
            "Stored regulation document with ID: 680a573a8cf2059167f7f13b\n",
            "Processed and stored regulation: Sanctions Compliance Framework\n"
          ]
        }
      ],
      "source": [
        "# Sample regulatory texts\n",
        "sample_regulations = [\n",
        "    {\n",
        "        \"title\": \"Anti-Money Laundering Directive\",\n",
        "        \"content\": \"\"\"ANTI-MONEY LAUNDERING DIRECTIVE\n",
        "Jurisdiction: European Union\n",
        "Date: 2021-06-15\n",
        "Keywords: AML, KYC, financial crime, cross-border\n",
        "\n",
        "Section 1: Scope and Definitions\n",
        "1.1 This directive applies to all financial institutions operating within the European Union that process cross-border transactions.\n",
        "1.2 'Cross-border transaction' refers to any financial transfer that originates in one country and terminates in another.\n",
        "1.3 'High-risk jurisdiction' refers to countries identified by the Financial Action Task Force (FATF) as having strategic deficiencies in their AML/CFT regimes.\n",
        "\n",
        "Section 2: Due Diligence Requirements\n",
        "2.1 Enhanced due diligence must be performed for all transactions exceeding €10,000 that involve high-risk jurisdictions.\n",
        "2.2 Financial institutions must verify the identity of both the sender and recipient for all cross-border transactions exceeding €3,000.\n",
        "2.3 For transactions with sanctioned countries, prior approval must be obtained from the compliance department.\n",
        "\n",
        "Section 3: Reporting Requirements\n",
        "3.1 All suspicious transactions must be reported to the national Financial Intelligence Unit within 24 hours of detection.\n",
        "3.2 Monthly reports must be submitted detailing all cross-border transactions exceeding €50,000.\n",
        "3.3 Failure to report suspicious activities may result in fines of up to €5 million or 10% of annual turnover.\n",
        "\"\"\",\n",
        "        \"source\": \"EU Financial Regulatory Authority\",\n",
        "        \"document_type\": \"directive\",\n",
        "        \"jurisdiction\": \"European Union\",\n",
        "        \"publication_date\": \"2021-06-15\",\n",
        "        \"tags\": [\"AML\", \"KYC\", \"financial crime\", \"cross-border\"],\n",
        "    },\n",
        "    {\n",
        "        \"title\": \"Sanctions Compliance Framework\",\n",
        "        \"content\": \"\"\"SANCTIONS COMPLIANCE FRAMEWORK\n",
        "Jurisdiction: United States\n",
        "Date: 2022-03-10\n",
        "Keywords: sanctions, OFAC, restricted parties, compliance\n",
        "\n",
        "Section 1: Overview\n",
        "1.1 This framework outlines compliance requirements for financial institutions regarding transactions subject to sanctions administered by the Office of Foreign Assets Control (OFAC).\n",
        "1.2 All US financial institutions and their foreign branches must comply with these requirements.\n",
        "\n",
        "Section 2: Prohibited Transactions\n",
        "2.1 No financial institution shall process transactions involving entities listed on the Specially Designated Nationals (SDN) list.\n",
        "2.2 Transactions with entities in comprehensively sanctioned countries including Iran, North Korea, Syria, Cuba, and the Crimea region are prohibited without specific OFAC authorization.\n",
        "2.3 Transactions that attempt to circumvent sanctions through third-party intermediaries are strictly prohibited and subject to severe penalties.\n",
        "\n",
        "Section 3: Screening Requirements\n",
        "3.1 All parties to a transaction must be screened against the most current OFAC sanctions lists prior to processing.\n",
        "3.2 Screening must include beneficial owners with 25% or greater ownership interest.\n",
        "3.3 Institutions must implement real-time screening for all international wire transfers regardless of amount.\n",
        "\n",
        "Section 4: Penalties for Non-Compliance\n",
        "4.1 Civil penalties may reach the greater of $1,000,000 per violation or twice the value of the transaction.\n",
        "4.2 Criminal penalties for willful violations may include fines up to $20 million and imprisonment up to 30 years.\n",
        "4.3 Financial institutions may be subject to regulatory actions including restrictions on activities or loss of licenses.\n",
        "\"\"\",\n",
        "        \"source\": \"US Department of Treasury\",\n",
        "        \"document_type\": \"framework\",\n",
        "        \"jurisdiction\": \"United States\",\n",
        "        \"publication_date\": \"2022-03-10\",\n",
        "        \"tags\": [\"sanctions\", \"OFAC\", \"restricted parties\", \"compliance\"],\n",
        "    },\n",
        "]\n",
        "\n",
        "# Process and store sample regulations\n",
        "text_processor = TextProcessor()\n",
        "\n",
        "for reg_data in sample_regulations:\n",
        "    # Create regulation document\n",
        "    regulation = RegulationDocument(**reg_data)\n",
        "\n",
        "    # Process document (chunk and generate embeddings)\n",
        "    processed_regulation = text_processor.process_document(regulation)\n",
        "\n",
        "    # Store in MongoDB\n",
        "    regulation_id = text_processor.store_regulation(processed_regulation)\n",
        "    print(f\"Processed and stored regulation: {regulation.title}\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "cmZLgIKkTmRG"
      },
      "source": [
        "## Transaction Data Model\n",
        "\n",
        "Let's define the data model for financial transactions that will be assessed for compliance."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 8,
      "metadata": {
        "id": "gahvZOryTmRG"
      },
      "outputs": [],
      "source": [
        "from enum import Enum\n",
        "from typing import Any, Dict, Optional\n",
        "\n",
        "from pydantic import BaseModel, field_validator\n",
        "\n",
        "\n",
        "class ComplianceStatus(str, Enum):\n",
        "    \"\"\"Enum for compliance status\"\"\"\n",
        "\n",
        "    COMPLIANT = \"Compliant\"\n",
        "    REPORTING_REQUIRED = \"Reporting Required\"\n",
        "    VIOLATION = \"Violation\"\n",
        "    PENDING = \"Pending Assessment\"\n",
        "\n",
        "\n",
        "class TransactionParty(BaseModel):\n",
        "    \"\"\"Model for a party in a transaction (sender or receiver)\"\"\"\n",
        "\n",
        "    name: str\n",
        "    country: str\n",
        "    account_number: str\n",
        "    institution: str\n",
        "    is_sanctioned: bool = False\n",
        "    risk_score: Optional[float] = None\n",
        "\n",
        "\n",
        "class Transaction(BaseModel):\n",
        "    \"\"\"Model for a financial transaction\"\"\"\n",
        "\n",
        "    id: Optional[str] = None\n",
        "    transaction_id: str\n",
        "    amount: float\n",
        "    currency: str\n",
        "    sender: TransactionParty\n",
        "    receiver: TransactionParty\n",
        "    transaction_date: str\n",
        "    transaction_type: str\n",
        "    description: str\n",
        "    compliance_status: ComplianceStatus = ComplianceStatus.PENDING\n",
        "    compliance_details: Optional[Dict[str, Any]] = None\n",
        "\n",
        "    @field_validator(\"amount\")\n",
        "    @classmethod\n",
        "    def amount_must_be_positive(cls, v):\n",
        "        if v <= 0:\n",
        "            raise ValueError(\"Amount must be positive\")\n",
        "        return v\n",
        "\n",
        "    def to_dict(self):\n",
        "        return self.model_dump(exclude_none=True)\n",
        "\n",
        "    def to_prompt(self):\n",
        "        \"\"\"Convert transaction to a prompt-friendly format\"\"\"\n",
        "        return f\"\"\"Transaction Details:\n",
        "- Transaction ID: {self.transaction_id}\n",
        "- Amount: {self.amount} {self.currency}\n",
        "- Date: {self.transaction_date}\n",
        "- Type: {self.transaction_type}\n",
        "- Description: {self.description}\n",
        "\n",
        "Sender Information:\n",
        "- Name: {self.sender.name}\n",
        "- Country: {self.sender.country}\n",
        "- Institution: {self.sender.institution}\n",
        "- Sanctioned: {self.sender.is_sanctioned}\n",
        "\n",
        "Receiver Information:\n",
        "- Name: {self.receiver.name}\n",
        "- Country: {self.receiver.country}\n",
        "- Institution: {self.receiver.institution}\n",
        "- Sanctioned: {self.receiver.is_sanctioned}\n",
        "\"\"\""
      ]
    },
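    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick sanity check, the `field_validator` above rejects non-positive amounts at construction time, so malformed transactions fail fast before any compliance assessment runs. A minimal standalone sketch (using a simplified stand-in model rather than the full `Transaction`):\n",
        "\n",
        "```python\n",
        "from pydantic import BaseModel, ValidationError, field_validator\n",
        "\n",
        "\n",
        "class Payment(BaseModel):\n",
        "    \"\"\"Simplified stand-in for the Transaction model above\"\"\"\n",
        "\n",
        "    amount: float\n",
        "\n",
        "    @field_validator(\"amount\")\n",
        "    @classmethod\n",
        "    def amount_must_be_positive(cls, v):\n",
        "        if v <= 0:\n",
        "            raise ValueError(\"Amount must be positive\")\n",
        "        return v\n",
        "\n",
        "\n",
        "try:\n",
        "    Payment(amount=-100.0)\n",
        "except ValidationError as e:\n",
        "    # Pydantic wraps the validator's ValueError in a ValidationError\n",
        "    print(\"Rejected:\", e.errors()[0][\"msg\"])\n",
        "```"
      ]
    },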
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "6ap2XMGJTmRG"
      },
      "source": [
        "## Compliance Assessment Engine\n",
        "\n",
        "Now we'll implement the compliance assessment engine that evaluates transactions against regulatory policies."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 9,
      "metadata": {
        "id": "GaT-t6yhTmRG"
      },
      "outputs": [],
      "source": [
        "import numpy as np\n",
        "import torch\n",
        "from huggingface_hub import login\n",
        "from langchain_core.output_parsers import JsonOutputParser\n",
        "from langchain_core.prompts import ChatPromptTemplate\n",
        "from langchain_huggingface import HuggingFacePipeline\n",
        "from torch.nn.functional import softmax\n",
        "from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline\n",
        "\n",
        "\n",
        "class ComplianceEngine:\n",
        "    \"\"\"Engine for assessing transaction compliance against regulations using LangChain with ShieldGemma\"\"\"\n",
        "\n",
        "    # Can use \"google/shieldgemma-2b\" or \"google/shieldgemma-9b\"\n",
        "    MODEL = \"google/shieldgemma-2b\"\n",
        "\n",
        "    def __init__(self):\n",
        "        # Initialize ShieldGemma model\n",
        "        login(token=os.environ[\"HUGGINGFACE_API_KEY\"])\n",
        "        self.tokenizer = AutoTokenizer.from_pretrained(self.MODEL)\n",
        "        model = AutoModelForCausalLM.from_pretrained(\n",
        "            self.MODEL,\n",
        "            torch_dtype=torch.bfloat16,\n",
        "            device_map=\"auto\",  # Remove if not using GPU\n",
        "        )\n",
        "\n",
        "        # Create a pipeline for text generation\n",
        "        text_generation_pipeline = pipeline(\n",
        "            \"text-generation\",\n",
        "            model=model,\n",
        "            tokenizer=self.tokenizer,\n",
        "            max_new_tokens=1024,\n",
        "            do_sample=False,\n",
        "            pad_token_id=self.tokenizer.eos_token_id,\n",
        "        )\n",
        "\n",
        "        # Create LangChain HF Pipeline\n",
        "        self.llm = HuggingFacePipeline(pipeline=text_generation_pipeline)\n",
        "\n",
        "        self.text_processor = TextProcessor()\n",
        "\n",
        "        # Define compliance assessment prompt template using LangChain\n",
        "        self.assessment_prompt = ChatPromptTemplate.from_template(\n",
        "            \"\"\"You are a financial compliance expert with extensive knowledge of regulatory frameworks. Your task is to evaluate whether the following transaction complies with the specified regulations.\n",
        "    \n",
        "    Transaction Details:\n",
        "    {transaction}\n",
        "\n",
        "    Relevant Regulations:\n",
        "    {regulations}\n",
        "\n",
        "    Compliance Assessment Framework:\n",
        "    - Compliant: Transaction fully adheres to all applicable regulations with no reporting requirements\n",
        "    - Reporting Required: Transaction is legal but requires mandatory reporting to regulatory authorities\n",
        "    - Violation: Transaction directly contravenes one or more regulatory requirements\n",
        "\n",
        "    Step-by-step Analysis Process:\n",
        "    1. Identify the transaction type and key participants\n",
        "    2. Determine which specific regulations apply to this transaction\n",
        "    3. Assess compliance with each applicable regulation\n",
        "    4. Evaluate if reporting requirements exist\n",
        "    5. Determine final compliance status\n",
        "\n",
        "    Provide your assessment in the following JSON format:\n",
        "    {{\n",
        "        \"status\": \"Compliant\" | \"Reporting Required\" | \"Violation\",\n",
        "        \"confidence\": <float between 0 and 1>,\n",
        "        \"reasoning\": \"<concise explanation with specific regulatory references>\",\n",
        "        \"applicable_regulations\": [\"<specific regulation sections that apply>\"],\n",
        "        \"recommended_actions\": [\"<actionable steps for compliance>\"],\n",
        "        \"risk_factors\": [\"<key risk elements identified>\"]\n",
        "    }}\n",
        "\n",
        "    Return ONLY the JSON object. No additional text, explanations, or formatting. YOU WILL BE PENALIZED IF YOU RETURN ANYTHING OTHER THAN THE JSON.\n",
        "    \"\"\"\n",
        "        )\n",
        "\n",
        "        # Output parser\n",
        "        self.parser = JsonOutputParser()\n",
        "\n",
        "        # Create the chain\n",
        "        self.chain = self.assessment_prompt | self.llm | self.parser\n",
        "\n",
        "    def retrieve_relevant_regulations(self, transaction):\n",
        "        \"\"\"Retrieve relevant regulations for a transaction using vector search\"\"\"\n",
        "        # Generate embedding for the transaction\n",
        "        transaction_text = transaction.to_prompt()\n",
        "        transaction_embedding = self.text_processor.generate_embeddings(\n",
        "            [transaction_text]\n",
        "        )[0]\n",
        "\n",
        "        # Perform vector search in MongoDB\n",
        "        # Define the vector search stage\n",
        "        vector_search_stage = {\n",
        "            \"$vectorSearch\": {\n",
        "                \"index\": VECTOR_INDEX_NAME,\n",
        "                \"queryVector\": transaction_embedding,\n",
        "                \"path\": \"embedding\",\n",
        "                \"numCandidates\": 150,  # Number of candidate matches to consider\n",
        "                \"limit\": 5,  # Return top 5 matches\n",
        "            }\n",
        "        }\n",
        "\n",
        "        project_stage = {\n",
        "            \"$project\": {\n",
        "                \"embedding\": 0,  # Remove embedding from top-level documents\n",
        "                \"_id\": 0,  # Remove _id\n",
        "                \"chunks\": 0,  # Remove chunks\n",
        "            }\n",
        "        }\n",
        "\n",
        "        # Define the aggregate pipeline with the vector search stage and additional stages\n",
        "        pipeline = [vector_search_stage, project_stage]\n",
        "\n",
        "        results = list(regulations_collection.aggregate(pipeline))\n",
        "\n",
        "        # Format regulations for prompt\n",
        "        regulations_text = \"\"\n",
        "        for i, reg in enumerate(results, 1):\n",
        "            regulations_text += f\"Regulation {i}: {reg['title']} ({reg['jurisdiction']}, {reg['publication_date']})\\n\"\n",
        "            regulations_text += f\"{reg['content']}\\n\\n\"\n",
        "\n",
        "        return regulations_text\n",
        "\n",
        "    def apply_softmax_normalization(self, assessment):\n",
        "        \"\"\"Apply softmax normalization to confidence scores\"\"\"\n",
        "        # Calculate confidence scores for each status\n",
        "        status_scores = {\"Compliant\": 0.0, \"Reporting Required\": 0.0, \"Violation\": 0.0}\n",
        "\n",
        "        # Set the score for the predicted status\n",
        "        status_scores[assessment[\"status\"]] = assessment[\"confidence\"]\n",
        "\n",
        "        # Convert to array and apply softmax\n",
        "        scores_array = np.array(list(status_scores.values()))\n",
        "        normalized_scores = softmax(torch.tensor(scores_array), dim=0).numpy()\n",
        "\n",
        "        # Update the confidence with normalized score\n",
        "        assessment[\"confidence\"] = float(\n",
        "            normalized_scores[list(status_scores.keys()).index(assessment[\"status\"])]\n",
        "        )\n",
        "        assessment[\"confidence_details\"] = {\n",
        "            status: float(score)\n",
        "            for status, score in zip(status_scores.keys(), normalized_scores)\n",
        "        }\n",
        "\n",
        "        return assessment\n",
        "\n",
        "    def assess_transaction(self, transaction):\n",
        "        \"\"\"Assess a transaction for compliance with softmax normalization\"\"\"\n",
        "        try:\n",
        "            # Retrieve relevant regulations\n",
        "            regulations = self.retrieve_relevant_regulations(transaction)\n",
        "\n",
        "            # Prepare inputs\n",
        "            inputs = {\n",
        "                \"transaction\": transaction.to_prompt(),\n",
        "                \"regulations\": regulations,\n",
        "            }\n",
        "\n",
        "            # Run assessment\n",
        "            assessment = self.chain.invoke(inputs)\n",
        "\n",
        "            # Apply softmax normalization\n",
        "            assessment = self.apply_softmax_normalization(assessment)\n",
        "\n",
        "            # Update transaction with assessment results\n",
        "            transaction.compliance_status = ComplianceStatus(assessment[\"status\"])\n",
        "            transaction.compliance_details = assessment\n",
        "\n",
        "            # Store updated transaction in MongoDB\n",
        "            if transaction.id:\n",
        "                from bson import ObjectId  # stored _id is an ObjectId, not a string\n",
        "\n",
        "                transactions_collection.update_one(\n",
        "                    {\"_id\": ObjectId(transaction.id)},\n",
        "                    {\"$set\": transaction.to_dict()},\n",
        "                )\n",
        "            else:\n",
        "                result = transactions_collection.insert_one(transaction.to_dict())\n",
        "                transaction.id = str(result.inserted_id)\n",
        "\n",
        "            return assessment\n",
        "\n",
        "        except Exception as e:\n",
        "            # Fallback if LangChain parsing fails\n",
        "            print(f\"Error during LangChain processing: {e}\")\n",
        "            # Create a default assessment\n",
        "            assessment = {\n",
        "                \"status\": \"Reporting Required\",\n",
        "                \"confidence\": 0.5,\n",
        "                \"reasoning\": f\"Error during assessment: {e!s}. Please review the transaction manually.\",\n",
        "                \"applicable_regulations\": [],\n",
        "                \"recommended_actions\": [\"Review transaction manually\"],\n",
        "                \"risk_factors\": [\"Assessment processing failure\"],\n",
        "            }\n",
        "            return assessment"
      ]
    },
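    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The `apply_softmax_normalization` step above builds a raw score vector (the LLM's confidence for the predicted status, 0.0 for the other two) and passes it through softmax. A minimal sketch of that arithmetic in plain NumPy (the engine uses `torch.nn.functional.softmax`, but the math is identical):\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "\n",
        "def softmax(x):\n",
        "    e = np.exp(x - np.max(x))  # shift by the max for numerical stability\n",
        "    return e / e.sum()\n",
        "\n",
        "\n",
        "statuses = [\"Compliant\", \"Reporting Required\", \"Violation\"]\n",
        "raw = np.array([0.0, 0.9, 0.0])  # predicted \"Reporting Required\", raw confidence 0.9\n",
        "\n",
        "normalized = softmax(raw)\n",
        "for status, score in zip(statuses, normalized):\n",
        "    print(f\"{status}: {score:.4f}\")\n",
        "```\n",
        "\n",
        "Because the two non-predicted statuses keep raw scores of 0.0, softmax compresses the reported confidence into roughly the 0.33–0.70 band: a raw confidence of 0.9 normalizes to about 0.55, which is why the demonstration below prints confidences near 0.55 rather than the raw LLM values."
      ]
    },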
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "t7HJ786MTmRG"
      },
      "source": [
        "## Agent Orchestration with LangGraph\n",
        "\n",
        "Now we'll implement the agent orchestration framework using LangGraph to coordinate the compliance assessment workflow."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 10,
      "metadata": {
        "id": "bB1B_JbvTmRG"
      },
      "outputs": [],
      "source": [
        "from typing import Any, Dict, List, Optional, TypedDict, Union\n",
        "\n",
        "import langgraph.graph as lg\n",
        "from langchain_core.messages import AIMessage, HumanMessage\n",
        "from langgraph.checkpoint.mongodb import MongoDBSaver\n",
        "\n",
        "\n",
        "# Define state for the graph\n",
        "class ComplianceState(TypedDict):\n",
        "    \"\"\"State for the compliance assessment workflow\"\"\"\n",
        "\n",
        "    transaction: Dict[str, Any]  # Transaction data\n",
        "    regulations: Optional[str]  # Retrieved regulations, formatted as prompt text\n",
        "    assessment: Optional[Dict[str, Any]]  # Compliance assessment results\n",
        "    messages: List[Union[HumanMessage, AIMessage]]  # Conversation history\n",
        "    errors: Optional[List[str]]  # Any errors encountered\n",
        "\n",
        "\n",
        "class ComplianceWorkflow:\n",
        "    \"\"\"Orchestrates the compliance assessment workflow using LangGraph\"\"\"\n",
        "\n",
        "    def __init__(self):\n",
        "        self.compliance_engine = ComplianceEngine()\n",
        "        self.text_processor = TextProcessor()\n",
        "\n",
        "        # Create MongoDB checkpointer\n",
        "        self.checkpoint_store = MongoDBSaver(client, DB_NAME, CHECKPOINTS_COLLECTION)\n",
        "\n",
        "        # Build the graph\n",
        "        self.workflow = self._build_graph()\n",
        "\n",
        "    def _parse_transaction(self, state: ComplianceState) -> ComplianceState:\n",
        "        \"\"\"Parse transaction data and create Transaction object\"\"\"\n",
        "        try:\n",
        "            # Create Transaction object from state data\n",
        "            transaction_data = state[\"transaction\"]\n",
        "            transaction = Transaction(**transaction_data)\n",
        "\n",
        "            # Update state with parsed transaction\n",
        "            state[\"transaction\"] = transaction.to_dict()\n",
        "            state[\"messages\"].append(\n",
        "                AIMessage(\n",
        "                    content=f\"Transaction {transaction.transaction_id} parsed successfully.\"\n",
        "                )\n",
        "            )\n",
        "\n",
        "        except Exception as e:\n",
        "            error_msg = f\"Error parsing transaction: {e!s}\"\n",
        "            state[\"errors\"] = state.get(\"errors\", []) + [error_msg]\n",
        "            state[\"messages\"].append(AIMessage(content=error_msg))\n",
        "\n",
        "        return state\n",
        "\n",
        "    def _retrieve_regulations(self, state: ComplianceState) -> ComplianceState:\n",
        "        \"\"\"Retrieve relevant regulations for the transaction\"\"\"\n",
        "        try:\n",
        "            # Create Transaction object from state\n",
        "            transaction = Transaction(**state[\"transaction\"])\n",
        "\n",
        "            # Retrieve relevant regulations\n",
        "            regulations_text = self.compliance_engine.retrieve_relevant_regulations(\n",
        "                transaction\n",
        "            )\n",
        "\n",
        "            # Update state with retrieved regulations\n",
        "            state[\"regulations\"] = regulations_text\n",
        "            state[\"messages\"].append(\n",
        "                AIMessage(\n",
        "                    content=\"Retrieved relevant regulations for compliance assessment.\"\n",
        "                )\n",
        "            )\n",
        "\n",
        "        except Exception as e:\n",
        "            error_msg = f\"Error retrieving regulations: {e!s}\"\n",
        "            state[\"errors\"] = state.get(\"errors\", []) + [error_msg]\n",
        "            state[\"messages\"].append(AIMessage(content=error_msg))\n",
        "\n",
        "        return state\n",
        "\n",
        "    def _assess_compliance(self, state: ComplianceState) -> ComplianceState:\n",
        "        \"\"\"Assess transaction compliance against regulations\"\"\"\n",
        "        try:\n",
        "            # Create Transaction object from state\n",
        "            transaction = Transaction(**state[\"transaction\"])\n",
        "\n",
        "            # Assess compliance\n",
        "            assessment = self.compliance_engine.assess_transaction(transaction)\n",
        "\n",
        "            # Update state with assessment results\n",
        "            state[\"assessment\"] = assessment\n",
        "            state[\"transaction\"] = (\n",
        "                transaction.to_dict()\n",
        "            )  # Update with compliance status\n",
        "\n",
        "            # Add message with assessment summary\n",
        "            summary = f\"Compliance assessment complete. Status: {assessment['status']} (Confidence: {assessment['confidence']:.2f})\\n\"\n",
        "            summary += f\"Reasoning: {assessment['reasoning']}\\n\"\n",
        "            if assessment.get(\"recommended_actions\"):\n",
        "                summary += f\"Recommended actions: {', '.join(assessment['recommended_actions'])}\\n\"\n",
        "\n",
        "            state[\"messages\"].append(AIMessage(content=summary))\n",
        "\n",
        "        except Exception as e:\n",
        "            error_msg = f\"Error assessing compliance: {e!s}\"\n",
        "            state[\"errors\"] = state.get(\"errors\", []) + [error_msg]\n",
        "            state[\"messages\"].append(AIMessage(content=error_msg))\n",
        "\n",
        "        return state\n",
        "\n",
        "    def _should_retry(self, state: ComplianceState) -> str:\n",
        "        \"\"\"Determine if workflow should retry or end based on errors\"\"\"\n",
        "        if state.get(\"errors\") and len(state[\"errors\"]) < 3:\n",
        "            return \"retry\"\n",
        "        return \"end\"\n",
        "\n",
        "    def _build_graph(self):\n",
        "        \"\"\"Build the LangGraph workflow\"\"\"\n",
        "        # Define the graph\n",
        "        builder = lg.StateGraph(ComplianceState)\n",
        "\n",
        "        # Add nodes\n",
        "        builder.add_node(\"parse_transaction\", self._parse_transaction)\n",
        "        builder.add_node(\"retrieve_regulations\", self._retrieve_regulations)\n",
        "        builder.add_node(\"assess_compliance\", self._assess_compliance)\n",
        "\n",
        "        # Define edges\n",
        "        builder.add_edge(\"parse_transaction\", \"retrieve_regulations\")\n",
        "        builder.add_edge(\"retrieve_regulations\", \"assess_compliance\")\n",
        "\n",
        "        # Add conditional edge for error handling\n",
        "        builder.add_conditional_edges(\n",
        "            \"assess_compliance\",\n",
        "            self._should_retry,\n",
        "            {\"retry\": \"parse_transaction\", \"end\": lg.END},\n",
        "        )\n",
        "\n",
        "        # Set entry point\n",
        "        builder.set_entry_point(\"parse_transaction\")\n",
        "\n",
        "        # Compile the graph with MongoDB checkpointing\n",
        "        return builder.compile(checkpointer=self.checkpoint_store)\n",
        "\n",
        "    def process_transaction(self, transaction_data: Dict[str, Any]) -> Dict[str, Any]:\n",
        "        \"\"\"Process a transaction through the compliance workflow\"\"\"\n",
        "        # Initialize state\n",
        "        initial_state = ComplianceState(\n",
        "            transaction=transaction_data,\n",
        "            regulations=None,\n",
        "            assessment=None,\n",
        "            messages=[\n",
        "                HumanMessage(\n",
        "                    content=f\"Process transaction {transaction_data.get('transaction_id', 'unknown')}\"\n",
        "                )\n",
        "            ],\n",
        "            errors=None,\n",
        "        )\n",
        "\n",
        "        # Run the workflow, checkpointing each transaction under its own thread\n",
        "        config = {\n",
        "            \"configurable\": {\n",
        "                \"thread_id\": transaction_data.get(\"transaction_id\", \"default\")\n",
        "            }\n",
        "        }\n",
        "        final_state = self.workflow.invoke(initial_state, config)\n",
        "\n",
        "        return final_state"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "3dfLIxhITmRH"
      },
      "source": [
        "## Demonstration: Processing Sample Transactions\n",
        "\n",
        "Let's demonstrate the system by processing some sample transactions."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 11,
      "metadata": {
        "id": "FyBVi8tkTmRH"
      },
      "outputs": [
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "Loading checkpoint shards: 100%|██████████| 2/2 [00:04<00:00,  2.24s/it]\n",
            "Device set to use mps\n"
          ]
        },
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\n",
            "Processing transaction TX123456789...\n",
            "System: Transaction TX123456789 parsed successfully.\n",
            "System: Retrieved relevant regulations for compliance assessment.\n",
            "System: Compliance assessment complete. Status: Reporting Required (Confidence: 0.55)\n",
            "Reasoning: The transaction exceeds €10,000 and involves a cross-border transfer to a high-risk jurisdiction (United States). Therefore, enhanced due diligence is required.\n",
            "Recommended actions: European Trading Ltd must perform enhanced due diligence on the transaction, including verifying the identity of both the sender and recipient., The transaction should be reported to the relevant authorities in both Germany and the United States.\n",
            "\n",
            "\n",
            "Final Assessment for TX123456789:\n",
            "Status: Reporting Required\n",
            "Confidence: 0.55\n",
            "Reasoning: The transaction exceeds €10,000 and involves a cross-border transfer to a high-risk jurisdiction (United States). Therefore, enhanced due diligence is required.\n",
            "Risk Factors: High-risk jurisdiction (United States)\n",
            "Applicable Regulations: Regulation 1: Anti-Money Laundering Directive, Regulation 2: Sanctions Compliance Framework\n",
            "Recommended Actions: European Trading Ltd must perform enhanced due diligence on the transaction, including verifying the identity of both the sender and recipient., The transaction should be reported to the relevant authorities in both Germany and the United States.\n",
            "--------------------------------------------------------------------------------\n",
            "\n",
            "Processing transaction TX987654321...\n",
            "System: Transaction TX987654321 parsed successfully.\n",
            "System: Retrieved relevant regulations for compliance assessment.\n",
            "System: Compliance assessment complete. Status: Violation (Confidence: 0.58)\n",
            "Reasoning: The transaction involves a transfer to a sanctioned entity (Tehran Trading Co) in Iran, which is prohibited by the sanctions compliance framework.  Section 2.2 of the sanctions compliance framework explicitly states that transactions with entities in comprehensively sanctioned countries are prohibited without specific OFAC authorization.\n",
            "Recommended actions: Obtain specific OFAC authorization for the transaction\n",
            "\n",
            "\n",
            "Final Assessment for TX987654321:\n",
            "Status: Violation\n",
            "Confidence: 0.58\n",
            "Reasoning: The transaction involves a transfer to a sanctioned entity (Tehran Trading Co) in Iran, which is prohibited by the sanctions compliance framework.  Section 2.2 of the sanctions compliance framework explicitly states that transactions with entities in comprehensively sanctioned countries are prohibited without specific OFAC authorization.\n",
            "Risk Factors: Significant financial penalties, potential reputational damage, legal action\n",
            "Applicable Regulations: Sanctions Compliance Framework, Regulation 1\n",
            "Recommended Actions: Obtain specific OFAC authorization for the transaction\n",
            "--------------------------------------------------------------------------------\n"
          ]
        }
      ],
      "source": [
        "# Sample transactions for demonstration\n",
        "sample_transactions = [\n",
        "    {\n",
        "        \"transaction_id\": \"TX123456789\",\n",
        "        \"amount\": 150000.00,\n",
        "        \"currency\": \"EUR\",\n",
        "        \"sender\": {\n",
        "            \"name\": \"European Trading Ltd\",\n",
        "            \"country\": \"Germany\",\n",
        "            \"account_number\": \"DE89370400440532013000\",\n",
        "            \"institution\": \"Deutsche Bank\",\n",
        "            \"is_sanctioned\": False,\n",
        "        },\n",
        "        \"receiver\": {\n",
        "            \"name\": \"Global Imports Inc\",\n",
        "            \"country\": \"United States\",\n",
        "            \"account_number\": \"US12345678901234567890\",\n",
        "            \"institution\": \"Bank of America\",\n",
        "            \"is_sanctioned\": False,\n",
        "        },\n",
        "        \"transaction_date\": \"2023-11-15\",\n",
        "        \"transaction_type\": \"International Wire Transfer\",\n",
        "        \"description\": \"Payment for machinery parts\",\n",
        "    },\n",
        "    {\n",
        "        \"transaction_id\": \"TX987654321\",\n",
        "        \"amount\": 75000.00,\n",
        "        \"currency\": \"USD\",\n",
        "        \"sender\": {\n",
        "            \"name\": \"American Exports LLC\",\n",
        "            \"country\": \"United States\",\n",
        "            \"account_number\": \"US98765432109876543210\",\n",
        "            \"institution\": \"JP Morgan Chase\",\n",
        "            \"is_sanctioned\": False,\n",
        "        },\n",
        "        \"receiver\": {\n",
        "            \"name\": \"Tehran Trading Co\",\n",
        "            \"country\": \"Iran\",\n",
        "            \"account_number\": \"IR123456789012345678901234\",\n",
        "            \"institution\": \"Bank Melli Iran\",\n",
        "            \"is_sanctioned\": True,\n",
        "        },\n",
        "        \"transaction_date\": \"2023-12-01\",\n",
        "        \"transaction_type\": \"International Wire Transfer\",\n",
        "        \"description\": \"Consulting services\",\n",
        "    },\n",
        "]\n",
        "\n",
        "# Initialize the compliance workflow\n",
        "workflow = ComplianceWorkflow()\n",
        "\n",
        "# Process each transaction\n",
        "results = []\n",
        "for tx_data in sample_transactions:\n",
        "    print(f\"\\nProcessing transaction {tx_data['transaction_id']}...\")\n",
        "    result = workflow.process_transaction(tx_data)\n",
        "    results.append(result)\n",
        "\n",
        "    # Display messages from the workflow\n",
        "    for message in result[\"messages\"]:\n",
        "        if isinstance(message, AIMessage):\n",
        "            print(f\"System: {message.content}\")\n",
        "\n",
        "    # Display final assessment\n",
        "    if result.get(\"assessment\"):\n",
        "        assessment = result[\"assessment\"]\n",
        "        print(f\"\\nFinal Assessment for {tx_data['transaction_id']}:\")\n",
        "        print(f\"Status: {assessment['status']}\")\n",
        "        print(f\"Confidence: {assessment['confidence']:.2f}\")\n",
        "        print(f\"Reasoning: {assessment['reasoning']}\")\n",
        "        if assessment.get(\"risk_factors\"):\n",
        "            print(f\"Risk Factors: {', '.join(assessment['risk_factors'])}\")\n",
        "        if assessment.get(\"applicable_regulations\"):\n",
        "            print(\n",
        "                f\"Applicable Regulations: {', '.join(assessment['applicable_regulations'])}\"\n",
        "            )\n",
        "        if assessment.get(\"recommended_actions\"):\n",
        "            print(\n",
        "                f\"Recommended Actions: {', '.join(assessment['recommended_actions'])}\"\n",
        "            )\n",
        "        print(\"-\" * 80)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "EDnowQ2ATmRH"
      },
      "source": [
        "## Conclusion\n",
        "\n",
        "In this notebook, we've demonstrated a comprehensive AI-powered transaction compliance monitoring system that leverages MongoDB's vector search capabilities, Voyage AI embeddings, and advanced LLMs to automate regulatory checks on financial transactions.\n",
        "\n",
        "The system includes:\n",
        "1. A document ingestion pipeline for processing regulatory documents\n",
        "2. A MongoDB Atlas data layer for storing transactions, regulations, and vector embeddings\n",
        "3. An NLP processing pipeline for text chunking and embedding generation\n",
        "4. A compliance assessment engine for evaluating transactions against regulations\n",
        "5. A LangGraph-based agent orchestration framework for workflow management\n",
        "\n",
        "This implementation provides a foundation that can be extended with additional features such as:\n",
        "- Real-time transaction monitoring\n",
        "- Integration with existing financial systems\n",
        "- Advanced risk scoring algorithms\n",
        "- Customizable compliance rules and thresholds\n",
        "- Audit trail and reporting capabilities\n",
        "\n",
        "By automating compliance checks, financial institutions can reduce operational costs, minimize human error, and ensure consistent application of regulatory requirements across all transactions."
      ]
    }
  ],
  "metadata": {
    "colab": {
      "collapsed_sections": [
        "jbg6qsphi0RC",
        "N-XJmokEi9OQ",
        "5cVYxfbSq7Ek",
        "zdujfkT0rCBy",
        "gbuA68uMsHtV",
        "E1VZ2I2nsKzj",
        "rmzk1RESsbMw"
      ],
      "provenance": []
    },
    "kernelspec": {
      "display_name": ".venv",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.12.2"
    },
    "widgets": {
      "application/vnd.jupyter.widget-state+json": {
        "state": {}
      }
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
