{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "7a54f60b",
   "metadata": {},
   "source": [
    "# 🧠 DevMentor Agent: Bringing It All Together\n",
    "\n",
    "In previous lessons, we explored how to store long-term memory in an AI agent:\n",
    "\n",
    "- First, we saw how to manage a single memory object like a **user profile**\n",
    "- Then, we looked at **collections of memories** using TrustCall — perfect for task lists, notes, or logs\n",
    "- We also learned how to extract and update these memories automatically based on conversation history\n",
    "\n",
    "Now it's time to combine all of that into a single working agent.\n",
    "\n",
    "---\n",
    "\n",
    "Today, we're building a real mini-application:  \n",
    "an assistant for developers called **DevMentor** 👨‍💻🧑‍💻\n",
    "\n",
    "This agent can:\n",
    "- Remember who you are and how you like to work (📁 `DevProfile`)\n",
    "- Track the architectural and design decisions you make along the way (🧩 `DecisionRecord`)\n",
    "- Learn how to better help you based on your feedback (⚙️ `Instructions`)\n",
    "\n",
    "![DevMentor Agent](images/devmentor-agent-2.png)\n",
    "\n",
    "We'll use:\n",
    "- **LangGraph** to orchestrate the flow\n",
    "- **TrustCall** to manage memory updates\n",
    "- A combination of profile, collection, and instruction memories\n",
    "\n",
    "By the end of this lesson, you'll see how to connect these pieces into a complete loop —  \n",
    "an AI agent that reasons, remembers, and evolves over time."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 214,
   "id": "5e381354",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAxwAAAD5CAIAAAAiBKTzAAAAAXNSR0IArs4c6QAAIABJREFUeJzs3WdcE1nXAPAbAqETmvQqVUEFDdgVRMG+IpZVRFBWUey97FpQ165rRUGxIVZQsaDYUbCidEHpPfRQAiH1/TD7ZnkAETFhAjn/D/5CMuUkzkxO7j33DoHH4yEAAAAAAPBrJPAOAAAAAACgO4CkCgAAAABAACCpAgAAAAAQAEiqAAAAAAAEAJIqAAAAAAABgKQKAAAAAEAAJPEOAAAgKkryGupruPW1bDaL19jAxTucH5OSJhCJBDklSTlFooa+tKQU/EoEAOCJAPNUASDm0uNrsxLp2Sl0w15yHDZPTlFSVZPEZHSFpEpWoqacVV/Drq/llOY36pjIGFvLW1AUpWWIeIcGABBHkFQBIL5SP9S8uVehbyFr1Fve2EpeSrprt/TkpdVnJ9OLcxqMessPGq+GdzgAALEDSRUA4qi6nPU4mKqiSRoySU1OsbuVAXx8XPnxceWYOZpmNop4xwIAECOQVAEgdjIT62Lulk/20VHuQcI7FmHhsHmvbpXJyBEHT4QmKwBAJ4GkCgDxUpjRkPCKNn6+Nt6BdIbYp5VMBnfIRHW8AwEAiIWuXUIBAPgpyTHVcS+rxCSjQghRRqtKkSQeXaTiHQgAQCxAUgWAuCjObkiLrZ34hw7egXQqO2dVsrpU7JNKvAMBAHR/kFQBIBaYDO6HR5XTVujhHQgOBk9Qq6/h5H6h4x0IAKCbg6QKALEQfafc1FYB7yhw03cE+dXtcryjAAB0c5BUAdD90cqYhZkNVoPIeAeCG+UeJF0T2ZR31XgHAgDoziCpAqD7S4yuHuEq7iPghk5Wy0yowzsKAEB3BkkVAN1f4qtqg15yeEeBM2k5IpvJK8xswDsQAEC3BUkVAN1cdjLdyEqOQCB05k5v3Lixbdu2Dqy4cePG8PBwIUSEEELGfeSzk6BcHQAgLJBUAdDNFWY2mPfv7Lu1fPnypZNXbA/TvgrlxY3C2z4AQMxBUgVAN1eax5AnC+vufjk5ORs3bhwzZszo0aNXr14dHx+PEFq4cOH9+/cfPHhAoVDS0tIQQtevX1+6dKmDg4OLi8umTZsKCgqw1a9du+bi4vLy5Ut7e/uDBw9SKJSioqKdO3c6ODgII1pFVamC9AYeF24jAQAQCkiqAOjm6ms5copEYWyZyWQuXLiQSCQeP3781KlTkpKSq1atYjAYgYGB1tbWEyZMiI2NtbS0jI+PP3DgQL9+/Q4ePOjn51dZWfnXX39hWyCRSHQ6PTQ0dMeOHTNmzIiJiUEIbdmy5eXLl8IIGCEkr0Sk13CEtHEAgJjrbnenBwA0Q69hyysJ5UzPzc2trKycNWuWpaUlQmjv3r2fP39ms9nNFuvTp8+NGzcMDAwkJSURQiwWa9WqVdXV1WQymUAgMBgMT09POzs7hFBjo9D75uSVJOk1bAVluPQBAAQPriwAdHMkWQkJoTRUIQMDAxUVle3bt48fP37AgAH9+vWjUCgtFyMSiQUFBYcOHUpOTqbT/60Tr6ysJJP/nTfLyspKKPG1RlpOgsfttL0BAMQLdP8B0M1JSBDqhdPhJS0tfebMmWHDhl25csXb23vKlCkREREtF4uKilq9enXv3r3PnDnz8ePHEydONFuARCIJI7xW0UpZckrCyTEBAGIPkioAujl5RUl6bfMuOUExMjJauXLl/fv3Dx8+bGpqunXrVqwyvanbt2/b2NgsWbLE3NycQCDU1tYKKZj2EF5nKAAAQFIFQDenaSjNqBNKj1dOTs7du3cRQjIyMiNGjNi3b5+kpGRqamqzxaqrqzU0NPh/Pn/+XBjBtAeDztEzlyVKduqUXQAA8QFJFQDdnIaBTHqcUBqHqqurd+zYceTIkfz8/Nzc3PPnz7PZ7H79+iGE9PX1k5OTP378WFlZaW5u/u7du9jYWDabHRISgq1b
XFzccoPS0tIaGhr8hQUecGZSnYKSlMA3CwAAGEiqAOjmjK3ls5OFMo14v379Nm/e/PDhQ1dXVzc3t7i4uNOnT/fs2RMhNHXqVAKBsGTJkvT0dF9f3yFDhqxevXrw4MFUKtXPz693797Lly9/9OhRy23Onz//48ePa9asaWgQ/P1kspPoxn3kBb5ZAADAEHg8mAcPgG7u2dWS3oOUtI1l8Q4ETzwe79aJwqlLdTv5jj0AAPEBLVUAdH+9Biq9uVeBdxQ4e/+w0sCis++BCAAQKzAKBoDuT6enrLSsRHYK3diq9c6vdevWffz4sdWX2Gw2NmlnS9u3bxfS/WQQQt/bMofD4fF43wvp6dOnrb7EbOQmRNF89pkIOkwAAPgPdP8BIBYqihs/Pq4c66nd6qv19fUcTutzWbWRVMnKyn7vpV/XxswLbYSkqNj6raM/PKpQUJHqPVBJcAECAEBzkFQBIC7SPtbkf2sY466JdyCdLfV9TWFWw+hZYvfGAQCdDGqqABAXlnZKsgrEmHvleAfSqfLS6InR1ZBRAQA6AbRUASBeEl/TamnsoZPU8Q6kM2Qn05NiaJN9dPEOBAAgFqClCgDx0ne4Mkla4kFQK3NvdjPxL6tS3lVDRgUA6DTQUgWAOMpKqnt5s9TWUcXWUQXvWAQvM7Huzb0KSztFO2dVvGMBAIgRSKoAEFMcNvftg8q0jzW2DspGveXVdKTxjuhX0avZ2cn0vK/1CKEhk9SUe5DwjggAIF4gqQJArDXUcRKjaZmJdBaDa2qrICFBkCcTlVRJXG4XuDIQiYS6aha9mkOvYZfmN9Kr2cbW8r3sFLWMxHrueAAAXiCpAgAghFBNJasoq6Guik2v5hAkUG2VgO9nnJSUZGFhQSIJsvVInkzkspE8mSivJKmhL61hICPAjQMAwM+CpAoA0BkmTZoUEBCgo6ODdyAAACAsMPoPAAAAAEAAIKkCAAAAABAASKoAAAAAAAQAkioAAAAAAAGApAoAAAAAQAAgqQIAAAAAEABIqgAAAAAABACSKgAAAAAAAYCkCgAAAABAACCpAgAAAAAQAEiqAAAAAAAEAJIqAAAAAAABgKQKAAAAAEAAIKkCAAAAABAASKoAAAAAAAQAkioAAAAAAAGApAoAAAAAQAAgqQIAAAAAEABIqgAAAAAABACSKgAAAAAAAYCkCgAAAABAACCpAgAAAAAQAEiqAAAAAAAEAJIqAEBn0NbWxjsEAAAQLkiqAACdobi4GO8QAABAuCCpAgAAAAAQAEiqAAAAAAAEAJIqAAAAAAABgKQKAAAAAEAAIKkCAAAAABAASKoAAAAAAAQAkioAAAAAAAGApAoAAAAAQAAgqQIAAAAAEABIqgAAAAAABACSKgAAAAAAAYCkCgAAAABAACCpAgAAAAAQAEiqAAAAAAAEAJIqAAAAAAABIPB4PLxjAAB0Wy4uLlJSUhISElQqVU1NjUgkIoTIZHJISAjeoQEAgIBJ4h0AAKA7IxKJVCoVe1xWVoYQIpFICxcuxDsuAAAQPOj+AwAIkb29fbPmcAMDg8mTJ+MXEQAACAskVQAAIfL09NTU1OT/SSKRZs2ahWtEAAAgLJBUAQCEyNjY2M7Ojv+noaHhb7/9hmtEAAAgLJBUAQCEy8vLC2usIpFIv//+O97hAACAsEBSBQAQLmNjYwqFwuPxoJkKANC9weg/ALqJqlJmdTmLy8U7jta4DJ+blUyf6DwxK5mOdyytICCevLKkqiZJUgp+ZwIAOg7mqQKgy8tOpse9rKqjcXTN5Og0Nt7hdD0kaUJVKYvL5VoMUKSMUcU7HABAVwVJFQBdW/YX+qentNFztIlEaGX5VR8flcnISQyZpIZ3IACALgmuwgB0YYWZDR8eVbp46kJGJRB2Y3s0MngfIivxDgQA0CXBhRiALuzzs6qhkzXbsSBoL4qzel5afX0t9KICAH4aJFUAdGG5qfXkHiS8o+iGqkpYeIcAAOh6IKkCoKuqrmBp
GcngHUU3pKolU1cFLVUAgJ8GSRUAXZWEBKEOxvoJAbORw4URPACAnwdJFQAAAACAAEBSBQAAAAAgAJBUAQAAAAAIACRVAAAAAAACAEkVAAAAAIAAQFIFAAAAACAAkFQBAAAAAAgAJFUAAAAAAAIASRUAAAAAgABAUgUAAAAAIACQVAEAAAAACAAkVQCILxqtytGJ8uLlE7wDAQCA7gCSKgBAF5Odnfn77Il4RwEAAM1BUgUA6GK+fvuCdwgAANAKSKoAEC/PnkfO8Zgyecqovfu3V1VVNn3pUeQ936Ve4yYM813qFRp2hcfjIYTOBp2cMGkEi8XiL3bt+qUxLoPq6+vb2MuUqaPvhN88cfKQoxPF1W3M/gM76uvr/9q6xtGJMtfL7fHjB/wlU1IS129YOvk3Rw/Pqf6n/qHT6djzfjs27ti56c2bV5OnjBrjMmjFqgWpqckIofMXTu/b71dSQnV0otwMDUEI1dfX79r917QZY13GDfFZNOdO+E1sC2G3rrlNd4mOeek0xv74yYOC/iABAKA5SKoAECNZWRl/7/7L2Xni5eA7Ls4Tj584wH/p6bNH+/b7mZtZXrl89w/vJaFhV074H0IIOTo419fXf/jwhr/k6+gXgwcNl5OTa2NHUlJS165fNDAwinz45g/vJQ8f3V21eqHTqLFPIt85Oow5cGhnbV0tQqigMH/tel9GI+PE8fM7/Q5mZaWvWr2QzWYjhCQlJVO+JD55GnH6VPDDB9HSJOk9+7YhhOZ5Lfp95lxNTa0Xz2KnT3NHCG3cvLyoqGDnjkM3rkWMGOF09Ni+1LQUhBCJRKqvp9+9G7pp4w7X32YI+aMFAABIqgAQJ+F3b2pqaM31+ENJUcnWhjJhgiv/pYiIO3372q5csVFFRbW/rd08z0V37tyoqqo0MTHT0dF7Hf0CW6yiovzLl6RRo1x+uC8zU8vJk9xIJJLDyDEIISurvo4OYyQlJR0dnNlsdl5uNkLo6dOHUpJSO/0OGhgYGRn1XLtmS3rG1+iYl9gWGurr163dqqOtKykp6TRqbH5+bsvmsXfvY5KS4tet2dLL0opMVnafPa9PH5uLlwIRQgQCgcFg/P6752insXp6BgL9IAEAoBWQVAEgRgoL842MTfh/WlpaYQ+4XG5ySoIdZTD/JVtbOy6Xm5gUhxAaM3rc6+jnHA4HIfTq9XNZWdlhQx1+uC8DAyPsgby8PELIyOjf/crKyiGEamtrEEIpKQmWllZksjL2kpaWto6OHrZThJC+gRG/PUxBQZG/VlPZ2RkyMjLGTd6UuVmvr1//K7qytLD6yQ8JAAA6SBLvAAAAnaemprppm42sjCz2gMlkslisoHP+Qef8my6PFV2Ndhp38dKZz3Ef7SiDoqNfDB8+SlLyx5cOAoHQ9E8JiVZ+wtXV1aZ9/eLoRPmfnVZWtLFKMxUV5TL//y4wcnJyDQ3/NWiRSKQfbgQAAAQCkioAxIiSEpnRyOD/WV//b1W4jIyMnJyc85gJI0Y4NV1eR1sPIaSnZ2BiYhYT89LcvFd8wqe9e44JKh5VNfU+fWzmeS1q+iRZSbn9W5CXl2cwGpo+Q6+nq6v1EFSEAADQfpBUASBGNDW137x9xeVysUagt+9e818yMTGvrau1tfm30YjFYhUXF2poaGJ/Ojo4379/y9Cwp5ISub+tnaDiMelp9vjJg359+/MbpXJysn6q/snCvDeDwUjP+GpmaoE9k5qa3LSLEwAAOg3UVAEgRhwcxtBoVcdPHODxeHHxsXfu3OC/tMB7aUzMy4iH4VwuNykpfsfOTavXLmIymfwVqSXFjx7ddXR0JhKJgopn2jR3Lpd7wv8Qg8HIz88NCDw2/4+ZWdkZba+lp2dQUVEeHf0yPz/X3n6Ijo7e4cN/p339UllZEXTOPzU1eeZ0D0FFCAAA7QdJFQBixI4yaJHPig8f3owabbdv//aNG/wQQth8VH362ASeDklMjHN1G7N2vS+dXrdr52FpaWls
RV0dPQvzXt/S05wcfzzur/2UFJWCzl6XlZH1WTxnrpdbfMKndWu3mJtZtr3WoIHD+ljbbNm29tnzSElJyV07DikpkX2XeM6eM/nT5w87dxzs08dGgEECAEA7EbDrKQCgy6mtYocdK3BbaYR3IN1NTHiJoaVsL3sl/jPFxcXv37+PiIgIDAzENTQAgEiDmioAAGhdYWHhhw8fnj59mpubW15e3tjYiHdEAACRBkkVAKAjJk3+7lRVGzZsb89EVqLs06dPJy+F5eXllZaWMplMrI6+2SQRAADQDCRVAHQZVVVVycnJGhoaFhYWly5dunXj4aT+u/AKJjDwyvdeUlFW7dxYBO/Zs2fxmTFYST5/ZKKMjAyTyYSJrwAA3wM1VQCIoqqqqsbGRi0trXfv3oWGhg4fPvy33347d+5cYmKil5eXjY1Nbm4uYsu+uFwPNVUCFxNeQtZqPB+6KzExEbsRIUZCQoJIJKqqqhoYGBgYGBgaGhr8PzzDBQCIDEiqAMAZj8cjEAiFhYWPHz/W1tYeO3bs1atXg4KClixZ4urqGhcXR6PRbG1tlZWbT4kJhepCwi9UP3PmTGhoaHl5Odbxp6urGx4eTqVS8/Ly8vLycnNzsQf5+fn6+vpYmmVoaIg91tLSwvt9AAA6GyRVAHQqBoORmJiIELK3t3///v3u3bvt7e3//PPPmJiYuLi4UaNG9e7du519TJBUCUn0nRJFjfph44wRQklJSTt27MjMzOTxeCNGjKirqwsKCmq5Cj/Nys3Nzc/Pz8vLq6qq0tfXb9qaZWhoqKKi0vlvBwDQaSCpAkBY6uvrKyoq9PX1CwoK/P39e/TosWrVqpiYmMuXL0+YMGHixIlYEbSenl4HNp6dnf3ofpR0+Yjpq3sKIXaxFn2HevtRQBkj3t7ePiUlpbq6uqKiorGxkcvlxsXFtXMjjY2N+fn5/NYsLOVis9ktuw6xG04DALoBKFQHQGDodPrdu3eZTKanp2dKSsqiRYvGjx+/adMmHo83cuRIKysrhNDQoUOHDh2KLa+hofFT28/IyLhx44a1tfXkyZM/ffpEJBJhPJowEAgEHo9XVVV18+ZNAoGAfcgEAkFCQiIsLGzo0KHt6dqTlpY2NTU1NTVt+mRtbS0/wYqKisIey8rK8nsP+SlXe25ZDQAQNdBSBcBP4/F4ycnJpaWlTk5OpaWlS5YskZKSunLlSkFBwbVr1ygUioODA4vFkpKS+vV9ZWZm+vv7GxkZLVu2LDo6uqSkZNSoUVgvEnT/CUlMeAm1OjHk7v6qqqqmz5NIpHHjxqmqqi5ZsiQqKio+Pn78+PFmZma/uLvy8nKsx5DfrJWbm6uhoYH1HmJFWoaGhh1r0QQAdCZIqgBoC4fDyc/PNzIyYjKZe/furaysPHLkSFlZ2bp162xsbFauXFlfX0+lUg0NDQVyR7yamholJaWcnBw/Pz8NDY19+/Z9+fKltLR04MCBsrKyzRaGpEpIsEL1emL6tm3bioqK+FMqYFmymZmZra2tiYkJlUrV1taeNGnS9evX4+LiPDw8rKyssGEHvx5DUVER1nuIFWnl5uYWFRU1Lc/CHvxsYycAQKggqQLgP9g34t27dzMzM1esWMHj8QYPHmxmZhYSEtLY2Pjo0SMTExNra2vB7jQnJ8fIyKikpGTBggVGRkbHjh0rKioqLy/v27dv2ytCUiUk/NF/paWlS5cuzcjIkJCQ4HK5nz9/RgilpKTExcXFx8d//vxZTU3N1ta2V69ebDbbyMjIzs7uwIEDCQkJGzdutLa2Li8vV1dXF1RUXC63aXkW9qCmpobflMUfeNhyoCgAoHNAUgXEV3Z2dnp6+rBhw+Tk5BYvXpySkhIREaGgoLB37149PT13d3chVSw1NDQkJiYOHDiwrq7OxcVlwIABx44dq6urq66u1tXVbf92IKkSkmb3/lu+fPmHDx9YLNanT5+aLZmVlRUXF4flWAghGxsbW1tbMplsampqZGR04MCB
R48enTx50tLS8suXLwYGBgoKCoINlcFg8Juy+AMPeTwef+Ah/4GcnJxgdw0AaAmSKiAWCgsLlZWV5eXlz507Fxsbu2XLFm1t7eXLl8vLy2/dulVWVvbbt2+6urrCG4dFpVI/ffo0atQoWVlZBweH/v37Hz58mM1ms9lsGRmZjm0TkiohaXlD5X379j169OjFixdtrEWlUuP+X0VFha2tra2trampqYWFhYqKyuHDh8PDwy9evGhkZBQZGWlkZGRhYSGk+Kurq/kDD/kP5OTkWg48hHJ4AAQLkirQ3bDZbElJyffv33/69Gn8+PFGRkbe3t5lZWX+/v56enqRkZHKysoDBgzohK+T9PT0t2/fOjs7a2lpLViwQFtb+6+//hLgTU7qqtlPr5Y6zdIR1AYBJvZJuYGZjEm/jrcqVVdXY81XcXFxX79+tf1/VlZWcnJyp0+ffvXq1fHjx9XU1AIDAy0sLEaOHCnQd9CK8vLyZl2HeXl5GhoazeYshXJ4AH4F/EwBXVt5eXlqair24/vYsWN3797FptNMS0uTlpbGiksCAwP5VeQuLi5CjSc5Ofnp06fOzs69e/e+efOmvLy8oqIiQujMmTMC35cCWbK8oLGBzpaVhxNZkAq+0fsNV2rHgt9FJpMdHBwcHBwQQiwWC2u+CgoKio+Pt7S0tLGx8fHxwcre5eTkwsPDR44cyWQyt2zZQqFQpk+fLri38h91dXV1dfX+/fs3fbKoqIg/Z2l0dHReXl5xcTG/x5DfptWjRw9hhARA9wMtVaDLqKys5HA4PXr0ePr06YMHDyZPnuzo6Hjo0KH8/PwlS5aYmZllZWWpqKh0/qTV8fHxt27dcnBwGDVqVHBwMELI1dVV4NUzrXp9u0xZS9aod2fsS0zU17Le3i2d4vsTxW0/JTk5md9L2KNHD6wFy8bGRktL6+nTpxkZGYsWLSouLl63bt3o0aO9vLw6+RbO2HBXrDWLX61VV1fXcuAhmUzutKgA6CogqQKiiMPhEInEzMzMqKgoc3PzYcOGnTx58s6dO5s3b3Z0dHz37h2Tyezfv3/nJC7NYBNQxcXFnTlzZujQoe7u7o8fP2axWA4ODrhMjX1xR87IGVpq2h0szALN3Dud5zJXS027M/KYzMxMfi8hgUDg9xIaGxunpqbm5uaOHTv269evvr6+bm5uvr6+VVVVUlJSnX/YNzQ0tBx4SCAQWs5Z2uECQQC6B0iqAP4aGhqSkpJkZGT69u37+PHjo0ePTp061dvbOzIyMiMjw8XFxdTUlMFg4Hi9rqysVFVVTU5O3r17N4VCWb169efPn1ksVufUZrWNw+aF7M2zHEhWVJZS0ZSGE7oDCARebRW7poL57kHZnE2GZHUBzNr6s4qLi/ktWFVVVVjzFVaGRaPRCgoKrK2tExMTly1b5ubmtnz58uzsbA6H02zG9s5Eo9FazlmqqKjI7zrkdyPyJ/oCoNuDpAp0KjqdXlVVpaenl5aWduHCBTMzM29v7/v37z948GDGjBmOjo7YXIvtuQ2IsGVlZfXs2TMtLW358uVOTk4bNmzIyclpbGwU3qCtXxH3oir/WwMPIRqViUsAPIQa6uvbGLff2NhIIpFE8746MgpESSmCtonMwLFqRCL+EdJoNKz5Ki4uLj09nd+CZWNjIykpiU1/9fnz53379g0fPnzp0qUfPnyor69vdYbYTlZWVtb0xtJYvqWjo9Os61BHB0ZXgO4JkiogXFVVVRERETIyMm5ubtHR0Zs3b3Z3d/fx8UlPT8/JyenXr5/oTAldWVn57du3QYMG5eXlTZ06derUqZs3b66srOTxeGpqanhHJ9KysrI2bdpUXl6+Z88ee3v7VpeZNGlSQEAAfJv+LCaTyZ8KKy4urlevXvwcS1FRERvrGh8fHxwcPGTIEDc3t/Dw8JqamgkTJqiqquId+78KCgqadR2Wlpa2nN9BgBOlAoAXSKqAYLDZ7JSUlNra2mHDhmVmZm7YsEFDQ8Pf
3z8tLS0iImLIkCGDBg1qbGyUlpbGO9L/kZmZmZSUNGXKlJqaGjc3N0dHx82bN+Pb1djlxMbG/v333/n5+fLy8tu3b3d0dGx1sWfPng0ePBimoPxFSUlJ/F5CLS0trIvQ1taW/+Pk69evDx8+tLe3HzJkyPHjxxsaGubPny9q+QqbzW45v0NDQwO/x5CfcmGDZwHoKiCpAj+Nw+EUFxfr6enRaLTDhw9zudxdu3Z9+/Zt7969Q4cO9fb2ptFoVVVVxsbGeEfauvj4+A8fPnh4eMjKys6ePdvGxmb9+vVcLhcqPzrg3r17gYGBxcXFCCEJCYmtW7dOnDgR76DERUZGBj/BkpKS4ncRGhn9Ox9sQUFBTExM//79zczMli9fjhDatm2bmppafZsdtXih0+n8HkN+ykUkEpvN72BgYCBqv80A4IOkCvwYm82+c+dOaWmpr69veXn5+PHjKRSKv78/jUaLiYkxNzc3MzPDO8YfePv2bVRU1Jw5c/T09LZu3aqrq+vt7Y17jXlXd+7cuStXrtBoNOxPHo+3YsWKuXPntrrwkSNHvLy84LZ0QlJYWMjvIqyuruZ3Efbq1QtbgE6nx8fHW1hYqKuru7u7NzY2XrhwQUFBITMz08TEBO/wv4tGo/Hnd+AnW8rKys16Dw0NDfGOFAAESRVo7uvXrzk5OS4uLlwud+7cuSUlJU+ePGloaDhy5Ii5ubmbmxs22QHeYbZLVFTUgwcPsLaowMBAFRWVyZMnw29cQdm9e3dkZCSdTuc/w+Fw3N3d165d2+ryUFPVaaqqqvgtWNnZ2TY2Nv3798c6CvnNsdnZ2dra2jIyMl5eXt++fXvz5g2LxYqOju7bt6/oVxCWlJS07D3U09NrlmaJwngXIG4gqRJfhYWFGhoaUlJS//zzT1pa2tGjR2VkZObNm6etrb17924ej5eWlmZsbNxVqouwQqioqKhLly65urpOnDjx3r17cnJyI0aMwKauBgI3cuTI2tpa/vc0l8udNGmSn58ihsJkAAAgAElEQVRfqwtDTRUuGAxGfHz858+fsUasPn368HMs/nxXWLEjm83euHFjUVHRlStXKioqIiMjBwwYIJpjXVvVcn6HyspKgyawfEt06vdBtwRJlVjAmpeioqISExNnzZqlrq7u6urK4XAuX76spKT04MEDTU3NAQMGiOZw9zZgY8ujoqIOHz48e/bsmTNnvn//Xlpa2sbGBu/QxMiQIUOYTCaWVI0cOfKff/7BOyLwXQkJCfwcS1tbm99L2OxGNPX19adOnaLRaDt37szIyAgLC3N0dPzeuE6RxWQy85rA8i0mk9ly4CEu0wiDbgmSqm6ISqV+/fq1V69eGhoaO3fufPnyZUBAgKmpqb+/v5yc3MyZM2VlZbGR2HhH+tMaGxvz8/NNTU3fvHmzYcOGP/74w9PTMyMjQ0ZGBm4Ei4va2tpJkya9fPnS1dWVSqX27NkzJCSk1SWhpkrUpKen83sJsZ8iWAtWs/qkhoaGe/fuNTQ0eHp6xsTEXL9+3c3NbeTIkV30GlJXV9ey65BEIjVLswwNDaGFG3QAJFVdW3l5uaSkpLKycnh4+PPnz728vGxtbbdt21ZbW7t27VodHZ1v375paGh06W8y7J6vgwYNSkhIWLx48bx58xYsWFBaWqqgoAB9Sbi7cOFCbW3tsmXLfrgk1FSJsoKCAn4LVm1tLb8Fy9LSsulibDb7/fv3DAbDyckpPDz88uXLCxcuHDNmTEVFhehXYrWhsrKyWZqVm5urpqbWtELLwMBAX18f50CByIOkqsvAxvynpKS8efOGQqHY2tr6+fm9efNm9+7dAwYMePnypaSkJIVC6SolUG1LTk7OysqaPHlyVlbWihUrpkyZ4u3tTafTcbm5HmhD+1MlqKnqKiorK/ktWDk5ObZNNKsQyMrKamhosLKyCgsLO3jwoJ+fn7Ozc3p6uqqqapfOsTBUKrVphVZeXl5BQUHLG0tramriHSkQIZBU
iaja2trk5GQVFRVLS8uwsLBz584tWLBgypQpt2/fLikpmThxop6eXjebo/LVq1fJycm+vr40Gm3FihUODg7z5s3rol0MYuLdu3fBwcEnT57EOxAgLAwGI66Jfv368XsJm/3CYTKZVVVVmpqaoaGhgYGBmzdvdnBwePXqFZlM7tevH37vQJB4PF7LG0vTaLSW8zt06c4B8CsgqcJfXV1dXV2dlpZWfHz8lStX7O3tp02bFhIS8vbtW09PTzs7u9zcXGlp6W45PPjRo0fv3r1bt26dvLz8unXrbGxs3N3d8Q4KtNe6devGjRs3atSo9iwMNVXdQHx8PL+XUFdXl9+C1XLG9oaGBllZ2bt37965c2fx4sV2dnYhISGqqqqjR4/uZrVKjY2NzdKs3NxcDofTrELLwMAAmmnFASRVOKBSqY8fP1ZXVx8/fvzdu3cPHTrk6+s7c+bMhISE8vJyW1vb7jroF+vBvHfv3pMnT1avXm1kZHTs2DFjY+MJEybAbOZdTlVV1fTp058+fdrO5aGmqpv59u0bvwVLRkaGn2AZGBi0uvyLFy+ePXvm7e1tbGy8a9cubW3tuXPndrMEi6+2trZZhVZeXp6cnFzL3kNoie9mIKkSIiaTmZyczGKxBg4cGBcXt2vXLisrqx07dnz8+PHNmzcODg79+vXrZl14LWE/WG/fvh0aGrpq1SoKhXLr1i1NTc3BgwdDItWlBQUFNTY2+vr6tnN5qKnqxgoKCvgJFp1Ot7W1xboIvzfN1adPn96/fz9nzhwlJSUPDw8TE5Pt27d3+1tFlZeXt+w91NDQaDbwEAYyd2mQVAkGi8UqKSnR09OjUqmnTp2Sl5dfv379+/fvz5496+zsPH369LKyMjqdzr8nV/eGTR9169atgICATZs2OTg4xMTEqKmpNRtJBLq08ePHnz9/Hqp0QTMVFRVxcXFYF2F+fj6/Bet7s8fl5OQkJSVNmjSJwWA4OjoOGzbswIEDDQ0NdDpd1O4DLQzY6OamzVpFRUXNKrQMDAyaTSQGRBYkVR3EYDDu3btHp9O9vLwyMjLmzJkzevToXbt2YXfgsrKyEtnbCQsDjUarrKzs2bPngwcP/Pz8tm3bNmHChK9fv6qpqYnDZVEMRUdH37x58+jRo+1fBWqqxFB9fT2/BSshIQG7VQ5GVla25fJMJjM1NbVfv37l5eXu7u4WFhbHjh0rKysrLCzs27dv927H4uNyuS3nd6itrW2WZhkYGJDJZJxjBS1AUvVjKSkpVCrVycmpurp68eLFHA7n+vXrVCr1woULNjY2Y8eOZbFY3bUyoA1ZWVkVFRV2dnbPnj3bvXv3smXLpkyZQqVSe/To0VVuDgg6bNWqVa6uriNGjGj/KlBTBZoOJDQ0NOQPJPze/AtYm3dhYeHWrVsVFBSOHj367du3rKwse3v77lp4+j0MBqPZ/A55eXkIoZazw7earYJOA0nV/8jPz9fX1+fxeHv27KFSqceOHaurq/P19bW2tl6/fj12WBsbG5NIJLwjxcenT5+oVOqECRM+fvy4f/9+d3f3KVOm1NXVwU0exArWihAZGflTa0FNFWgqLS2NP5BQXl6e34LVdkVRfn7+6dOn1dXVV61aFRMTk5qaOmbMmGZTwIuP6urqlrPDKygoNBt4qK+vDz90O434JlU8Ho9AIERGRn79+tXHx0daWnro0KE9evS4c+cOl8u9fft2z549bW1t8Q4TZxwO58mTJ/n5+QsWLEhPTz948OC4ceOmTJkC00eJs9OnTxOJxAULFuAdCOgm8vLy+C1YDAaDn2CZm5u3sRaVSr19+7aWlparq+uNGzdSU1Pd3d1NTU07MXBRVFZW1mx+h/z8fG1t7WYDD6HNWEjEJakqKCj49u0bhUJRUlJavXp1fHx8aGioqqrqgQMHNDQ05syZA4l8U9evX09OTt65cyeNRjtw4MDQoUPHjx+Pd1BAVDg7O1+9evVnp8yGmirQHuXl5fwEq7CwkN9F
2PZd0mk02qtXrzQ1NQcOHLh///7MzMzVq1dbWFjU1NQoKSl1YvgiqqCgoNnAw9LS0pbzO0AJ7K/rhklVaWmpnJycgoJCSEjI+/fvV61aZWxsvGLFChKJ9Ndff5HJ5LS0NG1tbSjx4+NwOEQi8eLFi2/evDl06JCCgsLhw4f79evn5OSEd2hA5ERFRYWHhx8+fPhnV4SaKvCz6HQ6v4swISGh6Q1z2piJhs1mJyQkqKio9OzZ08/P78OHD0ePHjU1Nc3MzOzZs2ezO+2ILTab3XJ+h/r6en6PIT/Tgqz0p3TtpArLBuLi4mJjYx0cHMzMzJYtW5aRkXH8+HFTU9PIyEgFBQV7e3sxrCL/IWx+rLNnz0ZERBw9elRfX//GjRsmJiYDBgzAOzQg0pYvXz5z5syhQ4f+7IpQUwV+BY/Ha1rnbmRkxE+w2i5ap1KpJBJJVVV1586d4eHhd+7c0dPTi4qKMjExgRmhmqmvr+f3GPIzLQkJiZY3lu7e0yv+iq6UVNFotJSUFF1dXSMjo6CgoNDQ0M2bNw8fPvzixYsNDQ3Tpk1TV1dnMpliW0XeNhaLVVNTo6amdvbs2eDg4BMnTvTp0ycqKsrIyEhsyzzBz6JSqd7e3g8ePMA7ECDu0tLS+AmWoqIiv5fwh3kS9nvy0KFDr169unTpEplMvnDhQq9evQYOHNhZsXcxNBqt2cDD/Px8MpnccuAhtAKKdFJVVlbWo0eP6Ojo8PBwFxeX0aNHHz9+PD09ffHixb169crKylJQUNDQ0MA7TJFWUlLCYDAMDQ3Pnz8fEBBw9OjRgQMHpqWl6enpwXg90AFHjx5VVFScP39+B9bdt2+fj48P1FQBgcvNzeX3EjY2NmLNV0OHDtXV1W17RWy40rlz52JjY/39/RkMxv79+ykUCpSQ/lBJSUnLgYd6enpNMy1ra+tmd90WByKaVIWFhVGp1CVLlrx//55Op2MF5ngH1fXs3r2bTCYvWbKESqV2y/sxg06TnZ0dHBxcUlLyzz//dKwx+Pjx45qamjNmzBBCdAD8q6ysLC4uLjg4uLy8/OHDhz+1Lo/Hu3v3bk5OzooVKzIyMmpra2EA+E/Jz8/nZ1rZ2dkIoYCAALyD6mwimlRduHBBR0fH2dkZ70C6vC9fvvTu3TsiIgJ+e4GOiY2NvXTpUlFRkYeHx2+//fYrm8LmgTt+/HjPnj0nTJgguBgB+FdgYGBgYKCPj88vTvlRV1e3cuXKcePGubm5CS46MfLu3bvg4OCTJ0/iHUhnE9Gphry8vPAOoZvo3bs39gvM0dHx+fPn0OcN2i8yMvLSpUsKCgpz587tQGV6S/r6+gihGTNmnDx50srKysjISDzvRgCE4cyZMwEBAQsWLIiNjf31rSkoKJw9ezY/Px/bsp2dXdtzOoBmUlNTe/XqhXcUOBDRWymVlpbSaDS8o+g+JkyYEB4ezuPx0tLSPn78iHc4QNRduXJl3LhxUVFRW7ZsCQgIEEhGxaepqbljxw4swRo+fPiFCxcEuHEghs6ePUuhUDgcTmxsrI+PjwC3jB2ljo6Ox48fr6ioYLPZAtx494Z1kuAdBQ5ENKm6cOHCz94EA7RNSUlJQkICGzj5s6UGQEzQaLRjx47Z29sXFxdfvHhx9+7dlpaWQtoXNt3uu3fvsGq/+Pj4L1++CGlfoLsKCgqyt7dnsVgfP35ctGiRkPZiamoaFBSkoKBAp9N9fX1zcnKEtKPuBJIq0aKhoaGiooJ3FN2QjIzM6dOnsVbZa9eucTgcvCMCIiEjI2Pbtm1ubm5kMvnt27dr1qzptKG1Y8eOxZqv9uzZ8+jRo87ZKejqgoKCBg4c2NjY+Pbt28WLF3dCYYO0tDSZTPb09AwPD8emFxH2HrsuGo3GYDDEc3SUiBaqg07w4sWLDRs2fPjwAe9AAJ7ev38fHBxcVlbm4eExceJEfIPBhqn+/fff1tbWv1gUD7qr8+fPnz592tPTc+HChTjegfTWrVsvXrzY
s2cPTE/T0ps3b65evXr8+HG8A8GBiCZVpaWlJBIJprTpHAkJCSUlJTDWUtxEREQEBwerqKh4eHgMHjwY73D+Q6VSAwMDFy9ejN1LCqbzBZgLFy6cPn16zpw5ixYtEoUbur9584ZMJltZWYltUfb3BAUFNTY2+vr64h0IDkS0+w9qqjqTlZXVixcv7t+/j3cgoJMEBwc7Ozu/ffvWz8/P399fpDIqhJCWltbWrVvV1dV5PN7IkSOvXr2Kd0QAZxcvXhw8eHBtbe3r16+XLl0qChkVQmjIkCFWVlbYHGy7d+/GOxwRIrYFVaKbVEFNVWeSlJTcs2ePnZ0dQuj06dPl5eV4RwSEory8/J9//qFQKBUVFVevXt25c6e5uTneQX0XgUCQlpZ++/attrY21iqQlJSEd1Cgs128eHHo0KHV1dVRUVHLli0TzQk4/P39XVxcsEndYLyFOM+nILpJlZeXF/RGdTJNTU2EEIVC8fT0xDsWIGDfvn3bsmWLu7t7jx49YmNjV65cqaamhndQ7eXg4IAQMjAwOHTo0OvXr/EOB3SSS5cuYenUs2fPli9fLuK9wNit6A0NDffs2RMTE4N3OHjC5p7AvlDEENRUgdZ9+PAhKyvr999/xzsQ8Evevn0bHBxcVVXl4eHRDWbVr6ioUFNTW79+/aBBg6ZOnYp3OEAoLl++fPr06enTp/v4+MjIyOAdzk8rKirS0dE5fPjwuHHjxLDBJiYm5vr168eOHcM7EHyIaEsV1FThzt7ePj8/PywsDO9AQAfdv39/5syZISEhnp6eV69e7QYZFUIIa2Bbs2ZNamoqk8msqalhMBh4BwUEJiQkZPjw4WVlZU+ePFmxYkVXzKgQQjo6OtiUoX///TdCSNwOUXEuqBLdpApqqkTBunXrsEKBPXv2fP36Fe9wQLtwOJyLFy86OTl9/Pjx77//PnHixMCBA/EOSsA0NTX//PNPEonE4/GcnJxu3bqFd0TgV4WEhIwYMaKkpCQyMnLVqlWysrJ4R/SrbG1tL1++jBAqKChYs2ZNWVkZ3hF1EkiqRBHUVIkIbAoWV1fXvXv3IoTgLg2irLS09NChQ4MHD66urg4LC/Pz8zM1NcU7KOEik8kxMTHYHIOPHz+Oj4/HOyLw065cuTJy5MiSkpKHDx+uXr1aTk4O74gEzNTUdNKkSdh9LEpKSvAOR+jEuUodaqrAz3n37t3nz5/Fc/YRUZaamhocHBwXF+fh4TF79my8w8FHTk7Ozp07Fy9eTKFQ8I4FtMvVq1cDAgImTZrk4+MjJlNonjhxIjc3d/fu3aI5jPHXlZeXu7u7i3P1jogmVfv37zc0NJw5cybegYDmgoKC5OTkZs2ahXcgACGEoqOjg4OD6XS6h4cH1lcr5qqrq8lkso+Pj4uLC1Syi6zr16+fPn16woQJPj4+ioqKeIfTqZ4/f967d29lZeXCwkITExO8wxGwV69e3b59+59//sE7ENyIxBRqLUFNlcjy9vbGOgE3btzo6ura/ep1uorw8PDg4GBdXd2FCxdiw7kB1iGIENqxY0dwcDDW26KoqNj9epS6ruvXrwcEBIwdOzY8PFxJSQnvcHAwatQorJRi06ZNY8eOnT9/Pt4RCZKYF1RBTRXoCGw642XLlt28eRMh1NDQ0GwBaDIRHhaLde7cOUdHx4SEhAMHDhw9ehQyqpY0NTXXrl2LzSDq4uISERGBd0QA3bhxw8nJKTc39/bt2+vXrxfPjIpPUlLyxo0b/fv3Rwg9e/YsOzu75TJz5szBI7RfIuYFVaKbVJWWltJoNLyjAG3R1dU9ePAgQigpKWn37t38GvYpU6aUlZWtXLkS7wC7GyqVun///uHDhzc0NISHh2/dutXY2BjvoESdhobG69evsUr28PDwz58/N1tg4MCB586dwyk6cXHz5s3Ro0dnZ2eHhYWtX78ea00ECCEbGxtsytB169alpaU1fWny5Mnp6enYNbYLgaRKRJMqmKeqC7G3t7ew
sLh9+zb2Z35+voSERFxcHNaOBX5dcnLyhg0bvL29DQ0N3717t2TJEjH/lf+zsPYAKyurU6dONf3qGjNmDIfDCQkJefXqFa4BdluhoaFjxozJzMy8efPmhg0bYOxRq0xNTUNDQ9XV1RFCu3fvLi4uxiq+ORxORETEs2fP8A6wvUpKSiQlJbvQ3RqEAWqqgAC4ublhDwYOHEggEBBCdDr94sWLw4YNw27cBjomKioqODiYxWJ5eHjs27cP73C6NlNT0zNnztDpdITQrFmzZs2aVVFRISEhUV1dvX//fgsLC7G9sYYwhIWFBQQEODo6Xr9+XVVVFe9wugAsqRo0aNDOnTsLCwuZTCZCqKam5ujRo3369NHQ0MA7wB9LS0uztLTEOwqciejoP9BF9e/fX0Li3+ZPHo9HoVACAgLwDqpLunXrVnBwsLGxsYeHh62tLd7hdDdlZWWurq78qa65XK6JiQm0rQrErVu3AgICRo4c6ePjI+aNFh02ePBgFouFPeZyuTY2Nl2ik9rf319aWtrb2xvvQPAkot1/UFPVFY0dO5afUWE1womJiWfPnsU1qC6GwWCcOXNmxIgRqampR48ePXz4MGRUwtCjR4/6+nr+nxISEpmZmcuWLcM1qC7v9u3bLi4uqampISEhmzdvhoyqY6ZOncrPqLCDMyUlBZt+WcRBQZXodv9duHAB5qn6ocYGLpPBxTuK/9TROHIkNR6Ph/UAYm7ffGRnO6Jnz564htYF1NbWBgcHP336dOrUqTev3sdmAait6uwp7Hk8pKQqopeF76mrZvN+8jyYP3++omzz/pSkuIzdO/6B1KoDIiMjr1y5QqFQAv2Dsc4+QR26BAJSUO5iB2RNJbvJJfCnVVcw5UjN89FnkW8sTB6NHj36V4MTppyMEgMdi86/anWC9h+HotX9N2rUqOrqan5IBAKBx+NpaWnBiOhmYp9UprytkZKWYIlSUsVksbD/u2b/ynX923h1AiaLhRAi4T3Psoq2dGF6vWk/+YHj1ZRURX3S59e3y75+quuhJ00rYf7UinV0OkII++LjIYT9DCAgRCAQusFd5zoZD6HGxkYSiSTxK6nEd6hokUrzGOYDFEdM7SHwjQsWvZr95kFFZnydrplcZVFjh7dT32KSGoyIX0h5CLHZbCnJLpYBt5OajnRRVoOpjcJwV3UpUltdfKL1/ocMGRIREdG0C0lCQmLSpEm4BiVyHl2kKqhKOXvqKiiL+nce6IrYLC6tlHnzaMHUJboqGiS8w2kdm8W9vDtvgLP6ZF8VWXnRuo4BwWLUc0rzGs78mTVvm5Fkm99nOKquYIUeKXD8XXvAmB5tf+mCLorJ4FRSG8/8mTV/u7GMPPF7i4lWS1VKSsqGDRuoVCr/GX19/fPnz8NAXL5HF6gq2tK9B8HQSCB0Nw9nT1uhJ5rtVZd25Y6crqmqJYN3IKCT1FSyHl8omOcninOz1deyQ/bm/b4eihzEwsXtGUv/+e696kUrobaysrK2tub/SSAQxo4dCxkVX84XupQsETIq0DkcZ2q/i6jEO4pWxL2oshxIhoxKrCipSvUdqfrxiSgekG/uV4z6HeaOEReOv2u9vlP+vVdFK6lCCM2dOxebrgMhpKenN2PGDLwjEiGl+Y1S0iL3Xwa6KxVN6Yz4WryjaEVBRoMCWRTbz4BQKapIFXxrvd4IX1lJdHIPEe0oBwJHViflfKF/71WR+4bu3bt33759scfjxo2DKUCbaqznqGtL4x0FEBdESYKBhTyt7OdqwDsBARFEttgLCI+yljRB5L6yUH0NW0NfRlr2u0U2oJshq5PkFCQ5nNZLp0TvCEXIy8tLTU1NS0sLmqmaoddw2Kx2LAeAgFSWMAlCGNL1i6pKmVwRqgUFnYWLKopELsVHBELFL4z1A10RNbfhexfGXx01U5RZX13Optey62s4XA5iswUywl9tmMVieXn52IeNCJX8+uakZSUIiCCnRJRTIqrpSPfQgcYeAAAAAAhY
B5Oq3FT6t891Wcl0FS1ZHo9AlCJKSBEliERBjSW07uuAEKr9bq/lz6mrJ3A5HE4hm8NksBjVLAbHpK+8JUVR0xAKXQEAAAAgGD+dVBVnN7y6XSElRyJISpsMVpGU6nodycwGdkU5PepOlawcGj5FTRkKDAEAAADwy34uqXp6tawoi6FmrCqv0oXbeEiykqr6ZIRQTSk97HhRL3vFIRPhHlUAAAAA+CXtLVRns7gXduQyONIG/XW6dEbVlJKGvMlg/VKqxO2ThXjHAgAAAICurV1JFYfNC9yUpd1bU0FNXvghdTZlXSUpstK1g/l4BwIAAACALuzHSRWXyzu1PrO3k7G0fLedbU9BTU5JV/Xirly8AwEAAABAV/XjpCpkT57ZEN1OCQZPcsoyqvrKD4KK8Q4EAAAAAF3SD5Kql2HlyvrK0vJiMT5OUUOBhaTjo2h4BwIAAACArqetpKqiqDE7ma7YQ6ET48GZsg45+k65oGbbAgAAAID4aCupenWnQt1YtRODEQla5iqv71TgHQUAAAAAupjvJlXUnAY2R0Kxh1znxtNe8UlP124ZWEevEviW1Y2UC7MaGxs4At9y1zXPe8aRo3u71n7Dbl1zGmPf9jK/uTpdCj7bse2LMrz+v7o3Gq3K0Yny4uUTYe+oPYcuXrrrKdMVvXj5xNGJQqMJ/ktQNPfbTllZGRs2LhvjMijkyvmmp9KUqaM759D9blKVkUAnELvtcL8fIEjkpNTjHUTX4+o2pqhYVGb86t3L2mPOH20vM3OGR98+tp0VERAL2dmZv8+e+CtbaM+h24bbd27s2bftVwJopul5DadMlyPw40HEPXv+KDEpzm/bfqdRY3/xVOqY786onplI1+ql0bnBiAo5Vfn0+DoLiiLegXQlVGqxSP126dXLulcv67aXmT3Lq7PCAeLi67cvv7iF9hy6bQXw9VcDaKrZeQ2nTJcj2ONB9NHpdVpaOkOGjEAIaWlp/8qp1DGtJ1VVpUxZRSnhDfrLyUt8/OJsfsEXBXmVXhbDnB3/kJGRRwjFvLv5JOrc4vmnLl3bVFKapa1pOmLILLv+//7su//oeGxChDRJzravi4a6gZBiQwgpacgVp9QIb/udJjUtxXeJp//Ji70srbBn5nhMGTJkpO/iVd/S03wWzfHbvv/ipcCsrAw1NXVHB+clvquxxXJysvbu25abl21jQ5n7v5n+rdvX3717nZqaTJKW7te3v7f3El0dvbj42NVrFiGE3Of8NnToyF07DrHZ7KBz/u/eR5eWUq2tbVx/mzFo0LAfBtzGflNSEi9eCkxLSyErqwweNNxz7kJ5+X+nos3Lyzn0z9+JiXE62rrDh4+aP28xiUQKu3XN/9ThZ08+YAucv3A6PuETj8ezsur7+4y5ffrYYH0ZblNnzfX4A1vmyNG939JTiURJI6OeXp4+tjYU7Hde8OWzRw4HbvNbn5OT1bOn6fRp7mNdJrX9RrZtX08kEjU1ta9dv+S3ff+I4aN+Nv42Qgq7de3K1fOrVm7atn39lCkzli1Z28bn9u59zPXrl9K+pqiqqltb91v4xzI1NfV2HDvdShsnwo2bl69cvbB29V+Hj+ym0ap0dPTmzvnD2XkCttiz55Hnz5+qqa0ZMmTEzOke/A3W1dXdDL384ePbnJxMNVX1IUNGzp+3WEZG5vyF01gXg6MTxXfxqunT3CsrK/xPHU5OSWAwGHZ2g+fO+UNf37DtaJseulOmjp7ntai6mnbxUqCsrKwdZfDSJWux/8FWj+qVqxcmJHxGCD1+/CDg9OWkpPimh8pop3Hf+xxaPQ5TviQ2O69/8ZTh8Xhht65GRt7PL8g1NDCmUAbNn7eYSOx6N5D9RdeuX7p4KfDhg2jsz5IS6u+zJ+7acWjo0JF/blktJSllaGh87folLpfb09h03dqtpqbm2JKnA44+fvJATlbOyWmsnt5/B8ggX6UAABoCSURBVFJ2dubde6Gf4z5SqUVGhj3Hj5/y2+Rp
CKFmx4O5mWUbF6I2fG+/CKFHkffu3gvLzs4wNjYd5ejsNnUWgUBYtsJbVkZ2/74T/MU2/bmyuprmf+JCG3uZOHnk7Fnzvn798ur1c3l5+T59bDdv2qmooJiVleG94Pc9fx85eHiXsrLK2cCrCKGYmKiLlwJz87LJZGVTU4sVyzZoamotW+GdnJyAnYB/eC+RkZHln0pNdexDaKfWu//qaGxGA1dQ+2imvCI/4MIyFqtx6cKznrP3FZeknzq3mMNhI4SIklINDbV3HhycMWXzgR3v+lqPunFnVxWNihB68yHszYfQqRPWrfA5r6ai8+RFkJDCQwgRCIS6Kha9hi28XeBOkiiJELp8OWjXzsORD98s8V0Tfvfmg4g7CCEWi7Vh07IePTQvnAv1WbD82vVLFRXl2FpJSfHHTxywsuq3Y8fBjRv8qqoq/979F0LI1oay5+8jCKGQy+G7dhxCCB07vj807IrrlJlXQu6NHOG0zW991KtnbYfUxn4LCvPXrvdlNDJOHD+/0+9gVlb6qtUL2Ww29kt66bJ5faxtDh08NXPm3GfPHx07vr/pZplM5srVC4lE4r69xw8dOCVJlPzzr1UMBqPpMlVVlUuXzdPQ0AoMuHLy+HkVZdWduzbX19cjhKSkpOrqao8d379uzZbnTz+OHDF6/4EdJSXUtt+LlJRUVnZGVnbG3zsP9+1j24H42wiJRCLV19Pv3g3dtHGH628z2vjcvqWnbdq8wtbW7sK50OXL1mdmftu3f/vPHyzdGZEoSafXPXv+KCQ4/M7tZ06jXPbu356fn4sVZ/y9+y9n54mXg++4OE88fuIAf61bt69duXph5gyP3X8f8fFZ8TLqycVLgQiheV6Lfp85V1NT68Wz2OnT3Dkczqo1PvEJn1at3Hzu7HUVZVXfJZ6FRQXtD09KSur69UsSEhJ3bj+7eD4sKTn+wsWANo7qI4cDe/Wydnae8OJZrLmZZbNDpY0dtXoctjyv+Tp2yty6de1yyLlpbrOvXbk/aZLbg4g7165f6tD/W7clSZSMi49FCD2KiLl4IUxVTf2vras5HA5CKPxuaPjdmyuWb/D3v6StrXsp+Ax/rZP+hz5+fLti+Ya9e46NHz/l6LF9797HIISaHQ9tXIja0MZ+nz57tG+/n7mZ5ZXLd//wXhIaduWE/yGEkOPIMZ8+f6DT6dhiDAYjNvbd6FFj294RkSh5MzRk4sSpz59+3L/3RF5eDnbSSUlJIYQuXT47c4bHmtV/IYRiP73fun2ds/OEG9citm3ZW1JSfOTYXoTQ8aNBv02eZmTU88WzWPfZ81rdS8c+hPZrPamqr+EQpYT16+FzwiNJopTXrH2aPYy0NHpO/+3PwuKvyalR2KscDmuM4x+G+n0IBALFZgKPxyss/oYQin57o6+VU1/rUXJySnb9J5r2pAgpPAxJhkiv7s5JFWb48FHaWjokEsnRYYyd3eBnzx4hhF69fl5aWrLEd42mppaRUc/ly9bX1dViy/fu3ed80A332fNsbSh2lEEzps9JTU2urqluttnGxsbIx/dnz/KaPMmNrEQeP+43p1Fjm56KrWpjv0+fPpSSlNrpd9DAwMjIqOfaNVvSM75Gx7xECIWGXZGWkZnntai/rd3kSW7e832xM5AvPz+3qqrSbeosczNLExOzbVv3+vkdaHYK3QwNIUlLr13zl462rp6ewbq1Wxsa6sPv3sReZbFYnnMX9u7dh0AguDhP5PF4GRlf234vBAKBSi3y27Z/yJARysoqHYi/jZAIBAKDwfj9d8/RTmP19Aza+NySk+JlZGTmuM/X1NQaaD/k0IFTs6D7pgU2mz3V9XdZWVklRSUvTx95OflnzyMRQuF3b2pqaM31+ENJUcnWhjJhgit/lRnT55wNvOowcrStDWX4MEdHB+cPH9+03HJSUnxeXs7mTTsH2g9RVVVbvGilElk5LOzKT4Wnq6s/x32+ooKimpq6HWXwt2+p7TyqWx4qbezlh+dR
Mx07ZRISP1tY9HZxmaisrDJxguvJExcG2g/9qU9DHDCZjR5z/iAQCDrauvO8FpWUUJOS4rFUfuSI0SNHOCkpKo11mdTf1o6/ypYtew4c8O9va2drQ/lt8jQL816tHpBtXIja0MZ+IyLu9O1ru3LFRhUV1f62dvM8F925c6OqqnLkyNFcLvd19HNsseiYl1wu18FhzA/fu6mJuR1lEIFA6N27z2+Tp718+YTFYhEIBISQHWXQ9GnuWCPrufOnRgwfNc1tNpmsbGXV13fx6nfvotPa19HZsQ+h/b6TVNWyiaTvllv9opy8RH293vLyytifqiraaqp62bnx/AUMdP9tmpaTVUIINTBqeTxeeWW+poYxfxk9HUshhYeRkiXWd+uWKoyZqQX/sa6Ofk5uFkKosDBfRkZGS0sbe15NTV1DQxN7TCQSi4oKNm1eMXHySEcnyua/ViGEaFWVzTb77Vsqk8m0owzmP2PTb0BWVkbL9KupNvabkpJgaWlFJv97zGhpaevo6CUmxSGEsrLSzcws+T0IY10mrVi+oelm9fQMlJVV9u7ffjnkXHJygoSEhK0NRUHhf2Zfy8rOMDOzlJT895iXl5fX1zPEvr0wlv/fXaKoqIQQ4mctbTA0MJaRkelw/D8OycLqh5+bdR8bBoOx6c+VN0NDCgrzyWRlrIMGNGNu3gt7QCAQdHT08vKysQ/WyNiEvwz/GMB+On+MfbvYd+4Yl0GOTpQbNy9XtTgLEEJJyfFSUlL8LyECgWDTb0BC4ueOxYYdfnR6XTuP6v8it7Bq9fmmfngeNV++Q6eMtXW/T5/e7z+w41Hkveqaal0dPX7HFuAzNjblf7B6ugYIody8bB6PV1iYb2TUk79Y0wMD8Xi3bl2b6+Xm6ERxdKKkff3S8rLc9oXoe9rYL5fLTU5JaHqdt7W143K5iUlxamrqNv0GvI5+gT0fE/NyQH97VVW1H7530//9SmKxWEX/37Jrbvbf+83KSm96PlqY90YIpaWl/HD7HfsQfsp3MycCEtYEmA2MuvzCL2u3DGz6ZE3tf1NDYWlpU4xGOpfLkZb+b34HEklWSOFhuByEWoTR/cjIyDZ5LINdr2tqqmVl/2cqDWnpf5ODmJiov7aucZ89z2fhChMTs9hP79dvWNpys9gFdNkK72bPV1VWkJXI3wumjf3W1dWmff3i6PQ/CUFVZQVWlqisrNLGe5SWlj76z5kHEXdCw64EnfPX0dHzmrtwzJjxTZeprCjX1dVv+oyMrGx9w38jQFsekz9EkpbmP+5A/D8MCau7avtzMzez3Lvn2KtXzwLPHPc/9c+A/vZenj7W1v1+9r10e9JN/rOkm5wITVt3ZJucLIFnjkdE3PHxWWFHGaypqXU26GTEw/CWm62rq2WxWM3+39s+XFtq9dhrz1HNxz9U2vDD86iZjp0y09xmy8nJx7yJ2rffT1JS0sFhjM+C5erqPdq/X3Eg8//nL3ZZxv536HQ6h8Npeqbzr95cLnfj5hUsFnPBH0ttbCiKCootr72YNi5E39PGfplMJovFCjrnH3TO/382WFWJEHJwGHPi5EEGg0EkEt++e7182fr2vHfppu9dVhZ770pK5KZX1Lq6usbGxqZLysnJIYTq6+nt2UUHPoSf0npSJackyWExWn3p1ykqqhkb2riMWtj0SXn5737XIoRkpOUlJIisJiE1MoU75QGHyZFXElZbHY7YnP9pfmva4sJgMLCzRUmJ3NDwPx8v/2C9H3G7Tx+bP7yXtFy9KTX1HgihNav/bHbZ1dDQaiO2Nvarqqbep4/NPK9FTV8lKykjhOTlFeg/OpcMDIwWL1o5z2vR588fHj66u3vvVkOjnuZm/zV2ysnLMxr/54BvqK/HfiMKRAfib39IbXxuCKGB9kMG2g+Z57Xo06f3Ybeubv5z5e1bT8WwNLiZZicCnU7nV6o2MhgqyqrYB9v0v4D/qfJ4vHv3w6a5zZ74/x2C3z0R1NRl/6+9O49r6sgD
AD45yEUAgcQAciPaet+46IoK2iqigLe13uuB7q4i3qJUxQPxKNrqalutrker9W7Fz9ZjVTxRooB4YgCVcAm5SPJy7R+PZiNNQoiBJOT3/fBHPnnvZSbDzMu8Oen01A07dN8kES2T+A3malNo08GUcqTLvCJDJBJHRMeNiI7j8QofPrx38NA+iUS88cP0cUAq9QfLIuJ1ehw++pNKpTk7O5NIJLlOmmtL/fMXT58+zU/f+m3PHnVrMonFIjZLz+R9IzciQ4yES6PRGAzG0CHRAwZE6l7i4+2LV6oydqXdun2dQqGo1eqBEQ33/dX/7lJpvSd/bbgIIZlM+v+raiUIIU8Pk6bgmJEIjaK/+4/hQlIpmmr1Sx9OaI2AHxzYvW1wT/yPyXRvzQo0cgmBQHBv5c0rztW+U/Asq4mih8NkKoar3f/wUClU3TIgFosrKyt0T+A+eqB9/fLls+CgtgghL463TCYrLHz5x/vPtVcJhQLdsnrjxhW94fq28ccf/bt364X/BQYEB/gH4c8ThhgJNyQ4tLyc37VLD+0Hurfy8PcPRAi1b98hP/+RdjTJ5SuXkpYk4OM6ccXFvIuZ5/CiGB4+IGXtFjKZrNtPgbceFxTkKRSKuq8pEhYVvw7S6fr5SGbE3/QoGUk3LvfB3Xu3EEIsFvuzz0bMT1gsEovq5QFH0GBByOHex1/I5fLiEh6ezhyOd0FBnlpdN2Xn9p0b+AuFQiGVSll/FAQMw27dvq433JCQdlKptHVrL+3/ncPx1u3gMJspufrPjKRDg+WoHvOKzKVLF16/foUQCgwMjo+fMDp+YoPDE1skJyeKXC7XpnZx0Wvdo68KXwgEdVvQ4v/T4OC2BAKBw/HOz3+sPe3O3brJg/jJ2jszj1fI4xXqDdfIjcgQI+HiOVwkFmk/rVPHrp4edcMP3Fzdevboc+/ercuXM/uFRxi/+Ws90vlJevHyGZlMrvdkjhAik8nt232qGyX8dXBIqClBmJEIjaK/UuXqQXaiNFXn14DwiWq1+tzFHRgmK68ounBp97bdk0rLXhq/qmunqNwnV7m5vyOErtw4VPQmr4mihxBSqzXMVuQW0FLl5xfgwnT57eJZjUajVCo3p63Fxzdo3c++jf/o3sy6lsPNjooahhAKD4+gUCjp2zfIZLLKyop1G1a4/tFn1zak3f3sOzncbKVSeeLkEfxNflkpQsjPPxAhdO3af54U5DEYjGlT5xw6vD83l4th2H+vX05amtDgGt9Gwh0z5gu1Wr37220ymaykpOhf+zJmzBpf+PolQih6eCyGYdt3bMx+cPfGzav7v9vlyWLrtsQIhYK0rev27N355m1JSUnRkaMHlEplp44fdIHFxIyWSMTbtqeWlfF5vMJNm9fQqLThw2It9H8wJ/6mR8lIuuXlP0r5aun5C6dqaqqfFOSdOn2cxWI7YG+L8YJAJBJPnTpeXMxTqVQ/HNgjl8sjB3+OP2rX1FTv2r1Vo9HkcLPPnPkZP59Cofj7B17MPPf23RuBoCYtfV3nTt1EIiE+18nX17+qqvLmzWslJUU9e/Tp0yc8PX19WRlfIKg5c/bE3HlfZmae+/hvZCRXt2njV1CQ9zDn/p+HeRlJB0P5ULdc636UeUXm8pXMNSlLbt26LhAK7ty5eePmlXol0UF06NBZo9FkXjqPr6dw9PgHCw24urpl7EoTioRCkfDQ4f0cjhe+4OqggUOu37iCr+l/7PiPT57UtTIEBgSTyeSffj4sFAnxGXO9e/XFb8v18oORG5ERhsJFCP1t5oKsrGu/XTyrVqtzc7nr1q9ITJqLYRh+NCIi6vHjhw8e3DVliDquorL8xMkjKpWquJh34ddTgwYN1e2a14qLHX8z69ovvxwTioQ53Oxv92zv0b13qGmPK+Ylgun0V6rcWBSlTCUTYZYKRheD4Zq04CjFib5z79S0jHGFvIdjY1c1OPA8KmJ6WM9RZ37blpQcVvAsa+SwhXg7fFPEUFgmcW/d
ElaTd3JySk7e9PRp/uCo3hO/iBkYMcTbu41uok2aMO37778ZFNlrbcrS+PgJ0cNjEUJMJnNj6k6VUjliZMS0GWPGjJ4UEFA3RWDGjISwPuGrkxOHfv6XsjL+8mVffdK+w/IV//j9cmYbH9/PP4s5cHDv/v27EEITxk9ZkrTm6PGDMaMGfp2xxcfbd/Hi1cZjayRcVxfX77/7iU6jz5k3ecq00dxHD5YkJeM9Hb6+/ps3ZXC52UuWzk/duDqsT78F85N0P7ZTp66Ji1b+fvnil1PipkwbnZubs33bXt1xlwgh3zZ+a9dsfv365YRJIxYmzkYIfb3zOwuuXGJG/E2PkpF0Gzd2cvTwuN3fpMeNHrIocTaD4bxj+z4H7PszXhAIBMK4sZMTk+ZGDQ07f+GX5UtT8KWkevfqO3fOP+/duzU4qveWtJTly77S3nOSV22kUWnTpo+ZPCW2Z48+s2YtoFFpcaOjSvnv+ob179ypW/LaJHwK4abUnRERUes2rIiNjzp1+nhU1LD4+Akf/42M5OqY6HgCgbBk6fxXhS9MTwdD+bBeudYyr8gsTlwdGBC8KjkxNi5y67b1/cIjEhet+vjUsDufftJx3tyF+/ZlDIrstW7DipnTE3R/zoKD2gYGhowbP2xU7GA+/92GddvxMjv5i5nRw2N37d46KLLX7Ts3EuYl4ldxOF6rVm54UpA7KnbwytWLZs2cP3LkmIKCvKnTx9TLD0ZuREYYChch1Llzt317jzx+nBM3ekjS0gSJRLxh/XZtNWhgxJCycr5SpewXHmFiyoyIjsvPfxw1NGzq9DEB/kF/X7BE72lDh0bPnJHw04nDo2IHb0lL6dK5+5rkTSYGYV4imI5gqF5y+9eqNzwNO7hxYypbhnf55b0jmaHdbW5F9cwf+T4hzKDO+uf4NAq+nNrXO/Z36QKbTgCDTu8qGjXXx41lW88Yh1OLBk/ycfWwQKx0V9oENk4qVp3fWzxzfZAJ5zafWpHqWFrxuCTLxGptylKxWLQtfY9FPs2+6C4ta+MOrXs5b2tbor5WKYN7/7Xt6qyx3HJY9oVAUAV1tFgrBQAAAAAcgcFhQ2xfGp2hEZRJ3Dj6qxc1gvL03RP1HqJTmVK5WO8hL3bwgtkNLALZKKtTIw0dUqmUJJKeL+jv23H21AxDV1UUVgd1oJMpBqubwGxHjx08dkz/NgUBgcG7M35o9hiZb8WqhXm5XL2Hhg+PnTd3YbPHCNgNyDzApsSMHGjo0LJlKf37GTzaKLm53JWrDObtfx8+Y5FQrM5g9x9CSFClOLnzbUh4/bH3OJVKKRCW6z2EYTIKhab3EJFIbuVmyX2a31e/M3QIU8gpTnrGuJHJFFcX/XMv1Sr102vFCekWm/ZlWRbs/rMKkVhkaPI5mURms+1pA++qqkpMoX/QIYPO0K4sZ+9afPefVThI5rE4R+j+s4pSvsGfUfdWHtpFjJs0IG8vH0uF0gyMdP8Zm+Dm5un0aRizqkLkwtYzuohEInu4Wz8VLBsHYalg4FiH22622bgwXVyYNjdSzTwOuC0xsBTIPMCmNFuFxr5qTuZpoJMrfASrtlJcW9NUC4HaFEGpkOms7hBmbBlSAAAAAAC9Gh45ND7RtziHr5C18EHrNXyx9L04apI99UABAAAAwHaYNBx7zpbgF1klLbi9SsAXI5lkQpL+0WMAAAAAAA0yqVJFIBAS0tsK374XlukfZWzXqkuqKQRp7LyW39cLAAAAgKbTiIUDJiT5eXqqCu+8EZY3Yt9NW1b9Vvj0WlFQe/KwacY2+gUAAAAAaFDjtrfrF+PZIczl+umqyle1GpKTK9uZ6mx/s5qlQrmoolYtl7N8nIanBFDpDrdrBwAAAAAsrtF7Bru3poya483nyV5wxa8el1EZZLWaQKKQSE4kIpmEUJNsxveRCASCUqFSY0olpsKkCiqdGNqN2a4HuxWbYu2oAQAAAKCFaHSlCucVSPMKpP01lvWejwkqFRKhUiJQ
qpRqldIWK1UUGoFIIjq7MhiuJFYbCtPN/lrXAAAAAGDjzKxUaXl4UTy8oL0HAAAAAI4OdrizJ85uZBK0soFm5OFFtcE+fQ8vKoFg7UiAZkcgILavnp3HrEyD2L4W28gF2AXvQLqhLf6gUmVP6M7Eyrdya8cCOAoFpn7zXOLGssGmaM17PhQEh1NVKteoba6Kz3AllZdIZRKVtSMCmklNuVwqUZFI+h/soFJlTzgBNIUcii5oJu/58tDutrhXo187urhGYe1YgOYmfI/5f8Kwdiz0COnKrC6HWr6jqCnHgjoZzIdQqbInfu0YRALKuVpl7YgAh3Dl6Lt+Iz2tHQs9uvy1FS9X9OZFC1kwD5iilFf7PFvQfZC7tSOiR/9RrMtHSq0dC9AcpGJl1tmy8BEG90QnGOoXBDbr+ukKBaYJ6eLq6QMd+cDyJEKloEJ+9Tj/y1X+zrY6VVat1vy8rSS0lxvHn26THZTAYgSVWMUbWX5W9aTl/kSijQ6mqxUrD6zlRU3ydmNTbLbUgI8hqlZUl8mvnyyblRrkRDHYIAWVKruUd1uQf0soq1XJpWprxwW0KOw21JpyLKizc78YlhPV1luy72ZWvXgopruQq0qh86VlYvtQxUJlaHdm32G22GiqS6XUZJ2rfPVY4s6hlJe02K1yHRPHj1ZTiYV0de4/km38TKhU2TGNBmEyqFQBS9KoNTRnO9tjQIFp1Cq4j7VMRCKy/cp9PbJaFQGmp7YsBIQodNP2SoZKFQAAAADAx7OzJwAAAAAAANsElSoAAAAAAAuAShUAAAAAgAVApQoAAAAAwAKgUgUAAAAAYAFQqQIAAAAAsID/ASgZ6mjhjL5sAAAAAElFTkSuQmCC",
      "text/plain": [
       "<IPython.core.display.Image object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "import uuid\n",
    "from IPython.display import Image, display\n",
    "from datetime import datetime\n",
    "from typing import Optional, Literal\n",
    "from pydantic import BaseModel, Field\n",
    "\n",
    "from langchain_core.runnables import RunnableConfig\n",
    "from langchain_core.messages import merge_message_runs, HumanMessage, SystemMessage\n",
    "from langgraph.checkpoint.memory import MemorySaver\n",
    "from langgraph.graph import StateGraph, MessagesState, END, START\n",
    "from langgraph.store.base import BaseStore\n",
    "from langgraph.store.memory import InMemoryStore\n",
    "from langchain_openai import ChatOpenAI\n",
    "from trustcall import create_extractor\n",
    "\n",
    "# Initialize the model\n",
    "model = ChatOpenAI(model=\"gpt-4o-mini\", temperature=0)\n",
    "\n",
    "# Node: Main reasoning\n",
    "################################################################################################\n",
    "MODEL_SYSTEM_MESSAGE = \"\"\"You are DevMentor, a helpful coding companion.\n",
    "You assist a developer by tracking and respecting three types of long-term memory:\n",
    "1. DevProfile — info about the developer and their work style\n",
    "2. DecisionRecord — architectural or design decisions and their implications\n",
    "3. Instructions — preferences for how to interact and help in the future\n",
    "\n",
    "When responding, you must:\n",
    "- Follow all active DecisionRecords (ADRs). Do not suggest actions that contradict them unless the user explicitly says the ADR is obsolete.\n",
    "- Respect all Instructions. They describe how the user wants you to communicate and assist.\n",
    "- Adapt your tone and suggestions to the DevProfile.\n",
    "\n",
    "<dev_profile>\n",
    "{dev_profile}\n",
    "</dev_profile>\n",
    "\n",
    "<adrs>\n",
    "{adrs}\n",
    "</adrs>\n",
    "\n",
    "<preferences>\n",
    "{preferences}\n",
    "</preferences>\n",
    "\n",
    "Use tool calls to update:\n",
    "- DevProfile when the user mentions anything about their work style or habits\n",
    "- DecisionRecord when architectural or policy decisions are discussed\n",
    "- Instructions when they give feedback about how you should behave\n",
    "\n",
    "Confirm updates to the user only for DecisionRecord changes; update DevProfile and Instructions silently.\n",
    "\"\"\"\n",
    "\n",
    "# Memory update signal tool\n",
    "class UpdateMemory(BaseModel):\n",
    "    update_type: Literal[\"user\", \"adr\", \"instructions\"] = Field(description=\"Which memory to update\")\n",
    "\n",
    "def dev_mentor(state: MessagesState, config: RunnableConfig, store: BaseStore):\n",
    "    user_id = config[\"configurable\"][\"user_id\"]\n",
    "\n",
    "    # Fetch single profile object\n",
    "    profile_obj = store.get((\"dev_profile\", user_id), \"profile\")\n",
    "    dev_profile = profile_obj.value if profile_obj else None\n",
    "\n",
    "    # Fetch ADRs as list\n",
    "    adrs = store.search((\"adrs\", user_id))\n",
    "    adr_dump = \"\\n\".join(str(item.value) for item in adrs)\n",
    "\n",
    "    # Fetch preferences (instructions)\n",
    "    prefs = store.search((\"preferences\", user_id))\n",
    "    preferences = \"\\n\".join(f\"- {p.value['instruction']}\" for p in prefs) if prefs else \"\"\n",
    "\n",
    "    # Prepare full system prompt\n",
    "    system_msg = MODEL_SYSTEM_MESSAGE.format(\n",
    "        dev_profile=dev_profile,\n",
    "        adrs=adr_dump,\n",
    "        preferences=preferences\n",
    "    )\n",
    "\n",
    "    # Run model and return\n",
    "    response = model.bind_tools([UpdateMemory], parallel_tool_calls=False).invoke(\n",
    "        [SystemMessage(content=system_msg)] + state[\"messages\"]\n",
    "    )\n",
    "\n",
    "    return {\"messages\": [response]}\n",
    "\n",
    "\n",
    "\n",
    "# Router\n",
    "################################################################################################\n",
    "def route(state: MessagesState, config: RunnableConfig, store: BaseStore) -> Literal[END, \"update_decision_records\", \"update_instructions\", \"update_dev_profile\"]:\n",
    "    calls = state[\"messages\"][-1].tool_calls\n",
    "    if not calls:\n",
    "        return END\n",
    "    t = calls[0]['args']['update_type']\n",
    "    if t == \"user\":\n",
    "        return \"update_dev_profile\"\n",
    "    elif t == \"adr\":\n",
    "        return \"update_decision_records\"\n",
    "    elif t == \"instructions\":\n",
    "        return \"update_instructions\"\n",
    "    else:\n",
    "        raise ValueError(f\"Unknown update_type: {t}\")\n",
    "\n",
    "\n",
    "# Prompt used in all 3 update nodes\n",
    "################################################################################################\n",
    "TRUSTCALL_INSTRUCTION = \"\"\"Reflect on this developer conversation. Update memory accordingly.\n",
    "System Time: {time}\"\"\"\n",
    "\n",
    "# Developer Profile\n",
    "################################################################################################\n",
    "class DevProfile(BaseModel):\n",
    "    # Note: in Pydantic v2, Optional[...] alone does not make a field optional —\n",
    "    # an explicit default=None is required, otherwise every field is mandatory.\n",
    "    name: Optional[str] = Field(default=None, description=\"Developer's name\")\n",
    "    language: Optional[str] = Field(default=None, description=\"Primary programming language\")\n",
    "    framework: Optional[str] = Field(default=None, description=\"Main framework or library used\")\n",
    "    experience_level: Optional[str] = Field(default=None, description=\"Developer seniority or self-perceived level\")\n",
    "    prefers: Optional[str] = Field(default=None, description=\"Short note about preferred explanation style or coding habits\")\n",
    "\n",
    "profile_extractor = create_extractor(model, tools=[DevProfile], tool_choice=\"DevProfile\")\n",
    "\n",
    "# Node: Update profile\n",
    "def update_dev_profile(state: MessagesState, config: RunnableConfig, store: BaseStore):\n",
    "    user_id = config[\"configurable\"][\"user_id\"]\n",
    "    namespace = (\"dev_profile\", user_id)\n",
    "    key = \"profile\"\n",
    "\n",
    "    # Retrieve current profile (if exists)\n",
    "    current_profile = store.get(namespace, key)\n",
    "    current_value = current_profile.value if current_profile else None\n",
    "\n",
    "    # Create system prompt for context\n",
    "    sys_msg = TRUSTCALL_INSTRUCTION.format(time=datetime.now().isoformat())\n",
    "    updated_messages = merge_message_runs([SystemMessage(content=sys_msg)] + state[\"messages\"][:-1])\n",
    "\n",
    "    # Run extraction with or without existing profile\n",
    "    if current_value:\n",
    "        result = profile_extractor.invoke({\n",
    "            \"messages\": updated_messages,\n",
    "            \"existing\": {\"DevProfile\": current_value}\n",
    "        })\n",
    "    else:\n",
    "        result = profile_extractor.invoke({\"messages\": updated_messages})\n",
    "\n",
    "    # Save profile as a single object under the same key\n",
    "    store.put(namespace, key, result[\"responses\"][0].model_dump(mode=\"json\"))\n",
    "\n",
    "    # Return tool call confirmation\n",
    "    tool_calls = state[\"messages\"][-1].tool_calls\n",
    "    return {\n",
    "        \"messages\": [{\n",
    "            \"role\": \"tool\",\n",
    "            \"content\": \"updated profile\",\n",
    "            \"tool_call_id\": tool_calls[0][\"id\"]\n",
    "        }]\n",
    "    }\n",
    "\n",
    "\n",
    "\n",
    "# Instruction\n",
    "################################################################################################\n",
    "class Instruction(BaseModel):\n",
    "    instruction: str = Field(\n",
    "        description=(\n",
    "            \"An independent, atomic instruction that describes one specific way the assistant should adapt its interaction to the user. \"\n",
    "            \"Each instruction should focus on a single behavior or style — do not combine multiple preferences into one. \"\n",
    "            \"Instructions must be phrased as imperatives. \"\n",
    "            \"Examples: \"\n",
    "            \"'Explain technical topics using a step-by-step format.', \"\n",
    "            \"'Use concise code examples.', \"\n",
    "            \"'Avoid adding explanations unless explicitly requested.', \"\n",
    "            \"'Include inline comments in code when requested.', \"\n",
    "            \"'Ask if the user wants a deeper explanation after giving an answer.'\"\n",
    "        )\n",
    "    )\n",
    "\n",
    "instruction_extractor = create_extractor(model, tools=[Instruction], tool_choice=\"Instruction\", enable_inserts=True)\n",
    "\n",
    "# Node: Update instructions\n",
    "def update_instructions(state: MessagesState, config: RunnableConfig, store: BaseStore):\n",
    "    user_id = config[\"configurable\"][\"user_id\"]\n",
    "    ns = (\"preferences\", user_id)\n",
    "    existing = store.search(ns)\n",
    "    existing_mem = [(item.key, \"Instruction\", item.value) for item in existing] if existing else None\n",
    "\n",
    "    sys_msg = TRUSTCALL_INSTRUCTION.format(time=datetime.now().isoformat())\n",
    "    updated_messages = merge_message_runs([SystemMessage(content=sys_msg)] + state[\"messages\"][:-1])\n",
    "\n",
    "    result = instruction_extractor.invoke({\n",
    "        \"messages\": updated_messages, \n",
    "        \"existing\": existing_mem\n",
    "    })\n",
    "    for r, meta in zip(result[\"responses\"], result[\"response_metadata\"]):\n",
    "        store.put(ns, meta.get(\"json_doc_id\", str(uuid.uuid4())), r.model_dump(mode=\"json\"))\n",
    "\n",
    "    tool_calls = state[\"messages\"][-1].tool_calls\n",
    "    return {\n",
    "        \"messages\": [{\n",
    "            \"role\": \"tool\", \n",
    "            \"content\": \"updated instructions\", \n",
    "            \"tool_call_id\": tool_calls[0]['id']\n",
    "        }]\n",
    "    }\n",
    "\n",
    "\n",
    "\n",
    "# Architectural Decision Record (ADR)\n",
    "################################################################################################\n",
    "class DecisionRecord(BaseModel):\n",
    "    decision: str = Field(description=\"The technical or architectural decision made, stated clearly and unambiguously.\")\n",
    "    date: datetime = Field(description=\"When this decision was made or last updated.\")\n",
    "    rationale: str = Field(description=\"The reasoning or motivation behind this decision, including any trade-offs considered.\")\n",
    "    consequences: str = Field(description=\"The consequences or guidelines resulting from this decision—what should or should not be done because of it.\")\n",
    "    status: Literal[\"active\", \"obsolete\", \"rejected\", \"archived\"] = Field(\n",
    "        default=\"active\",\n",
    "        description=\"Current relevance of this decision for future development.\"\n",
    "    )\n",
    "\n",
    "\n",
    "# Listener to capture tool calls\n",
    "class ToolsListener:\n",
    "    def __init__(self):\n",
    "        self.tools = []\n",
    "\n",
    "    def __call__(self, execution):\n",
    "        runs = [execution]\n",
    "        while runs:\n",
    "            run = runs.pop()\n",
    "            if run.child_runs:\n",
    "                runs.extend(run.child_runs)\n",
    "            if run.run_type == \"chat_model\" and run.outputs:\n",
    "                self.tools.append(\n",
    "                    run.outputs[\"generations\"][0][0][\"message\"][\"kwargs\"][\"tool_calls\"]\n",
    "                )\n",
    "\n",
    "# Extract tool info from listener\n",
    "def extract_tool_info(tool_calls, tool_name):\n",
    "    entries = []\n",
    "    for call_batch in tool_calls:\n",
    "        for call in call_batch:\n",
    "            if call[\"name\"] != tool_name:\n",
    "                continue\n",
    "            args = call[\"args\"]\n",
    "\n",
    "            decision = args.get(\"decision\", \"No decision text provided.\")\n",
    "            rationale = args.get(\"rationale\")\n",
    "            consequences = args.get(\"consequences\")\n",
    "            status = args.get(\"status\")\n",
    "            date = args.get(\"date\")\n",
    "\n",
    "            # Determine if it's a new or updated record\n",
    "            if \"updated_fields\" in args:\n",
    "                changed_fields = args[\"updated_fields\"]\n",
    "                lines = [f\"🔄 **Updated ADR**: *{decision}*\"]\n",
    "                if \"rationale\" in changed_fields:\n",
    "                    lines.append(f\"**Reason updated to**: {rationale}\")\n",
    "                if \"consequences\" in changed_fields:\n",
    "                    lines.append(f\"**Consequences updated to**: {consequences}\")\n",
    "                if \"status\" in changed_fields:\n",
    "                    lines.append(f\"**Status changed to**: {status}\")\n",
    "                if \"date\" in changed_fields or date:\n",
    "                    lines.append(f\"🕒 **Date**: {date}\")\n",
    "                entries.append(\"\\n\".join(lines))\n",
    "            else:\n",
    "                lines = [f\"✅ **New ADR**: *{decision}*\"]\n",
    "                if rationale:\n",
    "                    lines.append(f\"**Reason**: {rationale}\")\n",
    "                if consequences:\n",
    "                    lines.append(f\"**Consequences**: {consequences}\")\n",
    "                if status:\n",
    "                    lines.append(f\"**Status**: {status}\")\n",
    "                if date:\n",
    "                    lines.append(f\"🕒 **Date**: {date}\")\n",
    "                entries.append(\"\\n\".join(lines))\n",
    "\n",
    "    return \"\\n\\n\".join(entries) if entries else \"No ADR changes detected.\"\n",
    "\n",
    "\n",
    "# Node: Update ADRs\n",
    "def update_decision_records(state: MessagesState, config: RunnableConfig, store: BaseStore):\n",
    "    user_id = config[\"configurable\"][\"user_id\"]\n",
    "    ns = (\"adrs\", user_id)\n",
    "    existing = store.search(ns)\n",
    "    existing_mem = [(item.key, \"DecisionRecord\", item.value) for item in existing] if existing else None\n",
    "\n",
    "    sys_msg = TRUSTCALL_INSTRUCTION.format(time=datetime.now().isoformat())\n",
    "    updated_messages = merge_message_runs([SystemMessage(content=sys_msg)] + state[\"messages\"][:-1])\n",
    "\n",
    "    listener = ToolsListener()\n",
    "    extractor = create_extractor(model, tools=[DecisionRecord], tool_choice=\"DecisionRecord\", enable_inserts=True).with_listeners(on_end=listener)\n",
    "    result = extractor.invoke({\n",
    "        \"messages\": updated_messages, \n",
    "        \"existing\": existing_mem\n",
    "    })\n",
    "\n",
    "    for r, meta in zip(result[\"responses\"], result[\"response_metadata\"]):\n",
    "        store.put(ns, meta.get(\"json_doc_id\", str(uuid.uuid4())), r.model_dump(mode=\"json\"))\n",
    "\n",
    "    tool_calls = state[\"messages\"][-1].tool_calls\n",
    "    summary = extract_tool_info(listener.tools, \"DecisionRecord\")\n",
    "    return {\n",
    "        \"messages\": [{\n",
    "            \"role\": \"tool\", \n",
    "            \"content\": summary, \n",
    "            \"tool_call_id\": tool_calls[0]['id']\n",
    "        }]\n",
    "    }  \n",
    "\n",
    "\n",
    "\n",
    "\n",
    "# Build the graph\n",
    "################################################################################################\n",
    "builder = StateGraph(MessagesState)\n",
    "builder.add_node(\"dev_mentor\", dev_mentor)\n",
    "builder.add_node(\"update_decision_records\", update_decision_records)\n",
    "builder.add_node(\"update_dev_profile\", update_dev_profile)\n",
    "builder.add_node(\"update_instructions\", update_instructions)\n",
    "builder.add_edge(START, \"dev_mentor\")\n",
    "builder.add_conditional_edges(\"dev_mentor\", route)\n",
    "builder.add_edge(\"update_decision_records\", \"dev_mentor\")\n",
    "builder.add_edge(\"update_dev_profile\", \"dev_mentor\")\n",
    "builder.add_edge(\"update_instructions\", \"dev_mentor\")\n",
    "\n",
    "# Memory\n",
    "long_term = InMemoryStore()\n",
    "short_term = MemorySaver()\n",
    "graph = builder.compile(checkpointer=short_term, store=long_term)\n",
    "\n",
    "display(Image(graph.get_graph(xray=1).draw_mermaid_png()))"
   ]
  },
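  {
   "cell_type": "markdown",
   "id": "b1c2d3e4",
   "metadata": {},
   "source": [
    "Before we start chatting, it helps to see the store API the nodes above rely on, in isolation. Here is a minimal sketch (the `(\"demo\", \"alice\")` namespace and the values are illustrative, not what the agent stores): `store.put` writes a dict under a namespace tuple plus a string key, `store.get` returns an `Item` whose `.value` holds that dict (or `None` if missing), and `store.search` lists every item in a namespace.\n",
    "\n",
    "```python\n",
    "from langgraph.store.memory import InMemoryStore\n",
    "\n",
    "store = InMemoryStore()\n",
    "store.put((\"demo\", \"alice\"), \"profile\", {\"name\": \"Alice\"})\n",
    "\n",
    "item = store.get((\"demo\", \"alice\"), \"profile\")\n",
    "print(item.value)               # {'name': 'Alice'}\n",
    "\n",
    "items = store.search((\"demo\", \"alice\"))\n",
    "print([i.key for i in items])   # ['profile']\n",
    "```\n",
    "\n",
    "This is exactly the pattern `dev_mentor` uses: `get` for the single profile object, `search` for the ADR and preference collections.\n"
   ]
  },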
  {
   "cell_type": "code",
   "execution_count": 215,
   "id": "a07b6c59",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 1. Set up the session\n",
    "config = {\"configurable\": {\"thread_id\": \"1\", \"user_id\": \"evgeny\"}}"
   ]
  },
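  {
   "cell_type": "markdown",
   "id": "c4d5e6f7",
   "metadata": {},
   "source": [
    "The two IDs in the config play different roles: `thread_id` scopes short-term memory (the `MemorySaver` checkpointer keeps the message history per thread), while `user_id` scopes long-term memory (it is part of every store namespace). A sketch of what that implies (the thread ID here is illustrative):\n",
    "\n",
    "```python\n",
    "# Same user, fresh conversation: the message history resets,\n",
    "# but the profile, ADRs, and preferences are still found in the store.\n",
    "new_thread_config = {\"configurable\": {\"thread_id\": \"2\", \"user_id\": \"evgeny\"}}\n",
    "```\n"
   ]
  },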
  {
   "cell_type": "code",
   "execution_count": 216,
   "id": "0ac076fc",
   "metadata": {},
   "outputs": [],
   "source": [
    "def show_dev_profile():\n",
    "    print(\"\\n👤 Developer Profile:\")\n",
    "    profile_obj = long_term.get((\"dev_profile\", \"evgeny\"), \"profile\")\n",
    "    if not profile_obj:\n",
    "        print(\"No profile found.\")\n",
    "        return\n",
    "    profile = profile_obj.value\n",
    "    print(f\"🧑 Name: {profile.get('name', 'n/a')}\")\n",
    "    print(f\"💻 Language: {profile.get('language', 'n/a')}\")\n",
    "    print(f\"📚 Framework: {profile.get('framework', 'n/a')}\")\n",
    "    print(f\"📈 Experience: {profile.get('experience_level', 'n/a')}\")\n",
    "    print(f\"🧠 Prefers: {profile.get('prefers', 'n/a')}\")\n",
    "    print(\"-\" * 40)\n",
    "\n",
    "def show_decision_records():\n",
    "    print(\"\\n📁 Architecture Decision Records:\")\n",
    "    records = long_term.search((\"adrs\", \"evgeny\"))\n",
    "    if records:\n",
    "        for memory in records:\n",
    "            record = memory.value\n",
    "            print(f\"🔑 ID: {memory.key}\")\n",
    "            print(f\"🔖 Decision: {record.get('decision', 'n/a')}\")\n",
    "            print(f\"📅 Date: {record.get('date', 'n/a')}\")\n",
    "            print(f\"💭 Rationale: {record.get('rationale', 'n/a')}\")\n",
    "            print(f\"📌 Consequences: {record.get('consequences', 'n/a')}\")\n",
    "            print(f\"📂 Status: {record.get('status', 'n/a')}\")\n",
    "            print(\"-\" * 40)\n",
    "    else:\n",
    "        print(\"No decision records found.\")\n",
    "\n",
    "def show_user_instructions():\n",
    "    print(\"\\n📋 Preferences (Instructions):\")\n",
    "    prefs = long_term.search((\"preferences\", \"evgeny\"))\n",
    "    if prefs:\n",
    "        for memory in prefs:\n",
    "            print(f\"🔑 ID: {memory.key}\")\n",
    "            print(f\"📌 Instruction: {memory.value.get('instruction', 'n/a')}\")\n",
    "            print(\"-\" * 40)\n",
    "    else:\n",
    "        print(\"No instructions stored.\")\n",
    "    print(\"=\" * 40)\n",
    "\n",
    "def send_message(message: str):\n",
    "    input_messages = [HumanMessage(content=message)]\n",
    "    for chunk in graph.stream({\"messages\": input_messages}, config, stream_mode=\"values\"):\n",
    "        chunk[\"messages\"][-1].pretty_print()"
   ]
  },
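  {
   "cell_type": "markdown",
   "id": "d7e8f9a0",
   "metadata": {},
   "source": [
    "A note on `send_message`: with `stream_mode=\"values\"` the graph yields its full state after every step, so `chunk[\"messages\"][-1]` is always the newest message — human input, AI tool call, tool confirmation, or final answer. That is why a single turn below prints several messages in sequence. If you do not need the intermediate steps, a non-streaming sketch of the same call would be:\n",
    "\n",
    "```python\n",
    "# Run the whole graph once and inspect only the final state.\n",
    "result = graph.invoke({\"messages\": [HumanMessage(content=\"hi\")]}, config)\n",
    "result[\"messages\"][-1].pretty_print()\n",
    "```\n"
   ]
  },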
  {
   "cell_type": "code",
   "execution_count": 217,
   "id": "4c093262",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "================================\u001b[1m Human Message \u001b[0m=================================\n",
      "\n",
      "\n",
      "I'm Evgeny, and I'm currently learning how to build AI agents using Python and LangGraph.\n",
      "I appreciate advice that’s broken down into clear, manageable steps, with practical examples and explanations that help me understand new concepts.\n",
      "My main focus right now is improving my skills in agent development and understanding best practices in this area.\n",
      "\n",
      "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
      "Tool Calls:\n",
      "  UpdateMemory (call_sFflJVl2WhzqclOEpaiyTDDF)\n",
      " Call ID: call_sFflJVl2WhzqclOEpaiyTDDF\n",
      "  Args:\n",
      "    update_type: user\n",
      "=================================\u001b[1m Tool Message \u001b[0m=================================\n",
      "\n",
      "updated profile\n",
      "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
      "\n",
      "Great to meet you, Evgeny! It sounds like you're on an exciting journey with AI agents and LangGraph. I'll make sure to provide advice in clear, manageable steps with practical examples to help you along the way. If you have any specific questions or topics you'd like to explore, feel free to ask!\n"
     ]
    }
   ],
   "source": [
    "# Create a DevProfile\n",
    "send_message(\"\"\"\n",
    "I'm Evgeny, and I'm currently learning how to build AI agents using Python and LangGraph.\n",
    "I appreciate advice that’s broken down into clear, manageable steps, with practical examples and explanations that help me understand new concepts.\n",
    "My main focus right now is improving my skills in agent development and understanding best practices in this area.\n",
    "\"\"\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 218,
   "id": "5aa88e34",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "👤 Developer Profile:\n",
      "🧑 Name: Evgeny\n",
      "💻 Language: Python\n",
      "📚 Framework: LangGraph\n",
      "📈 Experience: Learning\n",
      "🧠 Prefers: Clear, manageable steps with practical examples and explanations.\n",
      "----------------------------------------\n"
     ]
    }
   ],
   "source": [
    "show_dev_profile()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 219,
   "id": "ce992670",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "================================\u001b[1m Human Message \u001b[0m=================================\n",
      "\n",
      "\n",
      "I’ve decided that, while I’m learning, all agent memory should use \n",
      "in-memory stores instead of databases. This is because setting up a \n",
      "database is time-consuming and could distract me from focusing on agent \n",
      "architecture and LangGraph itself. As a result, using a database for \n",
      "memory is not allowed in my current projects unless I explicitly decide otherwise.\n",
      "\n",
      "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
      "Tool Calls:\n",
      "  UpdateMemory (call_GCpMvZ6VM3mUODUdJxz983hM)\n",
      " Call ID: call_GCpMvZ6VM3mUODUdJxz983hM\n",
      "  Args:\n",
      "    update_type: adr\n",
      "=================================\u001b[1m Tool Message \u001b[0m=================================\n",
      "\n",
      "✅ **New ADR**: *Use in-memory stores for agent memory instead of databases during the learning phase.*\n",
      "**Reason**: Setting up a database is time-consuming and could distract from focusing on agent architecture and LangGraph.\n",
      "**Consequences**: All current projects will use in-memory stores for memory. Databases are not allowed unless explicitly decided otherwise.\n",
      "**Status**: active\n",
      "🕒 **Date**: 2025-06-08T17:31:07.816012\n",
      "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
      "\n",
      "I've recorded your decision to use in-memory stores for agent memory during your learning phase. This will help you stay focused on agent architecture and LangGraph without the distraction of setting up a database. If you have any questions or need assistance with implementing in-memory stores, just let me know!\n"
     ]
    }
   ],
   "source": [
    "send_message(\"\"\"\n",
    "I’ve decided that, while I’m learning, all agent memory should use \n",
    "in-memory stores instead of databases. This is because setting up a \n",
    "database is time-consuming and could distract me from focusing on agent \n",
    "architecture and LangGraph itself. As a result, using a database for \n",
    "memory is not allowed in my current projects unless I explicitly decide otherwise.\n",
    "\"\"\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 220,
   "id": "14e52f6f",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "📁 Architecture Decision Records:\n",
      "🔑 ID: 219563de-7b64-4671-b748-392e2050a2b4\n",
      "🔖 Decision: Use in-memory stores for agent memory instead of databases during the learning phase.\n",
      "📅 Date: 2025-06-08T17:31:07.816012\n",
      "💭 Rationale: Setting up a database is time-consuming and could distract from focusing on agent architecture and LangGraph.\n",
      "📌 Consequences: All current projects will use in-memory stores for memory. Databases are not allowed unless explicitly decided otherwise.\n",
      "📂 Status: active\n",
      "----------------------------------------\n"
     ]
    }
   ],
   "source": [
    "show_decision_records()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 221,
   "id": "872fcb40",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "================================\u001b[1m Human Message \u001b[0m=================================\n",
      "\n",
      "\n",
      "from langchain_openai import ChatOpenAI\n",
      "\n",
      "from langgraph.checkpoint.memory import MemorySaver\n",
      "from langgraph.graph import StateGraph, MessagesState, START, END\n",
      "from langchain_core.runnables.config import RunnableConfig\n",
      "from langgraph.store.base import BaseStore\n",
      "from langchain_core.messages import SystemMessage\n",
      "from langgraph.store.memory import InMemoryStore\n",
      "import configuration\n",
      "\n",
      "\n",
      "model = ChatOpenAI(model=\"gpt-4o-mini\")\n",
      "\n",
      "### Nodes\n",
      "\n",
      "def chat(state: MessagesState, config: RunnableConfig, store: BaseStore):\n",
      "    # Get the user ID from the config\n",
      "    configurable = configuration.Configuration.from_runnable_config(config)\n",
      "    user_id = configurable.user_id\n",
      "\n",
      "    # Retrieve memory from the store\n",
      "    user_details = store.get((\"memory\", user_id), \"user_details\")\n",
      "\n",
      "    # Extract the actual memory content if it exists and add a prefix\n",
      "    if user_details:\n",
      "        # Value is a dictionary with a memory key\n",
      "        user_details_content = user_details.value.get('memory')\n",
      "    else:\n",
      "        user_details_content = \"No existing details found.\"\n",
      "\n",
      "    # Format the memory in the system prompt\n",
      "    system_msg = f\"\"\"\n",
      "    You are a helpful assistant with memory capabilities.\n",
      "    If user-specific memory is available, use it to personalize \n",
      "    your responses based on what you know about the user.\n",
      "    \n",
      "    Your goal is to provide relevant, friendly, and tailored \n",
      "    assistance that reflects the user’s preferences, context, and past interactions.\n",
      "\n",
      "    If the user’s name or relevant personal context is available, always personalize your responses by:\n",
      "        – Addressing the user by name (e.g., \"Sure, Bob...\") when appropriate\n",
      "        – Referencing known projects, tools, or preferences (e.g., \"your MCP  server typescript based project\")\n",
      "        – Adjusting the tone to feel friendly, natural, and directly aimed at the user\n",
      "\n",
      "    Avoid generic phrasing when personalization is possible. For example, instead of \"In TypeScript apps...\" say \"Since your project is built with TypeScript...\"\n",
      "\n",
      "    Use personalization especially in:\n",
      "        – Greetings and transitions\n",
      "        – Help or guidance tailored to tools and frameworks the user uses\n",
      "        – Follow-up messages that continue from past context\n",
      "\n",
      "    Always ensure that personalization is based only on known user details and not assumed.\n",
      "    \n",
      "    The user’s memory (which may be empty) is provided as: {user_details_content}\n",
      "    \"\"\"\n",
      "    \n",
      "    response = model.invoke([SystemMessage(content=system_msg)] + state[\"messages\"])\n",
      "\n",
      "    return {\"messages\": response}\n",
      "\n",
      "\n",
      "def update_memory(state: MessagesState, config: RunnableConfig, store: BaseStore):\n",
      "    # Get the user ID from the config\n",
      "    configurable = configuration.Configuration.from_runnable_config(config)\n",
      "    user_id = configurable.user_id\n",
      "\n",
      "    namespace = (\"memory\", user_id)\n",
      "    key = \"user_details\"\n",
      "    user_details = store.get(namespace, key)\n",
      "        \n",
      "    if user_details:\n",
      "        user_details_content = user_details.value.get('memory')\n",
      "    else:\n",
      "        user_details_content = \"No existing details found.\"\n",
      "\n",
      "    # Format the memory in the system prompt\n",
      "    system_msg = f\"\"\"\n",
      "    You are responsible for updating and maintaining accurate user memory to enable personalized responses.\n",
      "\n",
      "    CURRENT USER DETAILS:\n",
      "    {user_details_content}\n",
      "\n",
      "    INSTRUCTIONS:\n",
      "    1. Carefully review the chat history below.\n",
      "    2. Identify any new, explicitly stated user information, such as:\n",
      "        - Personal details (e.g., name, location)\n",
      "        - Preferences (likes, dislikes)\n",
      "        - Interests and hobbies\n",
      "        - Experiences or background\n",
      "        - Goals and future plans\n",
      "    3. If no new information is present, do not output anything.\n",
      "    4. If new information is found:\n",
      "        - Merge it with the existing memory\n",
      "        - Format the updated memory as a clear, bulleted list\n",
      "        - Include only factual, user-stated details\n",
      "    5. If new information contradicts existing memory, keep the most recent version stated by the user.\n",
      "\n",
      "    Important:\n",
      "    - Do NOT include summaries like \"no update needed\".\n",
      "    - ONLY return output when actual new user information is added.\n",
      "\n",
      "    Your final output should either be a clean, updated bulleted list — or nothing at all.\n",
      "    \"\"\"\n",
      "    \n",
      "    new_memory = model.invoke([SystemMessage(content=system_msg)] + state['messages'])\n",
      "\n",
      "    if new_memory.content.strip():\n",
      "        store.put(namespace, key, {\"memory\": new_memory.content})\n",
      "\n",
      "\n",
      "# Define the graph\n",
      "builder = StateGraph(MessagesState)\n",
      "builder.add_node(\"chat\", chat)\n",
      "builder.add_node(\"update_memory\", update_memory)\n",
      "builder.add_edge(START, \"chat\")\n",
      "builder.add_edge(\"chat\", \"update_memory\")\n",
      "builder.add_edge(\"update_memory\", END)\n",
      "\n",
      "long_term_memory = InMemoryStore()\n",
      "short_term_memory = MemorySaver()\n",
      "\n",
      "graph = builder.compile()\n",
      "\n",
      "---\n",
      "\n",
      "Can you show me how the chatbot_long_term_memory.py agent saves conversation history?\n",
      "\n",
      "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
      "\n",
      "To save conversation history in your `chatbot_long_term_memory.py` agent, you can utilize the `MemorySaver` class from `langgraph.checkpoint.memory`. This class is designed to handle the storage of conversation history, allowing you to maintain a record of interactions.\n",
      "\n",
      "Here's a breakdown of how you can implement this:\n",
      "\n",
      "1. **Initialize the MemorySaver**: You already have an instance of `MemorySaver` called `short_term_memory`. This will be used to save the conversation history.\n",
      "\n",
      "2. **Update the MemorySaver**: You need to update the `MemorySaver` with the conversation history after each interaction. This can be done in the `chat` function after generating a response.\n",
      "\n",
      "3. **Store the Conversation**: You can append the current message and response to the `short_term_memory` to keep track of the conversation.\n",
      "\n",
      "Here’s how you can modify your `chat` function to save the conversation history:\n",
      "\n",
      "```python\n",
      "def chat(state: MessagesState, config: RunnableConfig, store: BaseStore):\n",
      "    # Get the user ID from the config\n",
      "    configurable = configuration.Configuration.from_runnable_config(config)\n",
      "    user_id = configurable.user_id\n",
      "\n",
      "    # Retrieve memory from the store\n",
      "    user_details = store.get((\"memory\", user_id), \"user_details\")\n",
      "\n",
      "    # Extract the actual memory content if it exists and add a prefix\n",
      "    if user_details:\n",
      "        user_details_content = user_details.value.get('memory')\n",
      "    else:\n",
      "        user_details_content = \"No existing details found.\"\n",
      "\n",
      "    # Format the memory in the system prompt\n",
      "    system_msg = f\"\"\"\n",
      "    You are a helpful assistant with memory capabilities.\n",
      "    ...\n",
      "    The user’s memory (which may be empty) is provided as: {user_details_content}\n",
      "    \"\"\"\n",
      "    \n",
      "    response = model.invoke([SystemMessage(content=system_msg)] + state[\"messages\"])\n",
      "\n",
      "    # Save the conversation history\n",
      "    short_term_memory.put(user_id, {\"messages\": state[\"messages\"] + [{\"role\": \"assistant\", \"content\": response.content}]})\n",
      "\n",
      "    return {\"messages\": response}\n",
      "```\n",
      "\n",
      "### Key Changes:\n",
      "- After generating the response, the current state of the conversation (including the user's messages and the assistant's response) is saved to `short_term_memory`.\n",
      "- The `put` method is used to store the conversation history under the user's ID.\n",
      "\n",
      "### Note:\n",
      "Make sure to handle the retrieval of conversation history from `short_term_memory` when needed, so you can provide context in future interactions.\n",
      "\n",
      "If you have any further questions or need more examples, feel free to ask!\n"
     ]
    }
   ],
   "source": [
    "send_message('''\n",
    "from langchain_openai import ChatOpenAI\n",
    "\n",
    "from langgraph.checkpoint.memory import MemorySaver\n",
    "from langgraph.graph import StateGraph, MessagesState, START, END\n",
    "from langchain_core.runnables.config import RunnableConfig\n",
    "from langgraph.store.base import BaseStore\n",
    "from langchain_core.messages import SystemMessage\n",
    "from langgraph.store.memory import InMemoryStore\n",
    "import configuration\n",
    "\n",
    "\n",
    "model = ChatOpenAI(model=\"gpt-4o-mini\")\n",
    "\n",
    "### Nodes\n",
    "\n",
    "def chat(state: MessagesState, config: RunnableConfig, store: BaseStore):\n",
    "    # Get the user ID from the config\n",
    "    configurable = configuration.Configuration.from_runnable_config(config)\n",
    "    user_id = configurable.user_id\n",
    "\n",
    "    # Retrieve memory from the store\n",
    "    user_details = store.get((\"memory\", user_id), \"user_details\")\n",
    "\n",
    "    # Extract the actual memory content if it exists and add a prefix\n",
    "    if user_details:\n",
    "        # Value is a dictionary with a memory key\n",
    "        user_details_content = user_details.value.get('memory')\n",
    "    else:\n",
    "        user_details_content = \"No existing details found.\"\n",
    "\n",
    "    # Format the memory in the system prompt\n",
    "    system_msg = f\"\"\"\n",
    "    You are a helpful assistant with memory capabilities.\n",
    "    If user-specific memory is available, use it to personalize \n",
    "    your responses based on what you know about the user.\n",
    "    \n",
    "    Your goal is to provide relevant, friendly, and tailored \n",
    "    assistance that reflects the user’s preferences, context, and past interactions.\n",
    "\n",
    "    If the user’s name or relevant personal context is available, always personalize your responses by:\n",
    "        – Addressing the user by name (e.g., \"Sure, Bob...\") when appropriate\n",
    "        – Referencing known projects, tools, or preferences (e.g., \"your TypeScript-based MCP server project\")\n",
    "        – Adjusting the tone to feel friendly, natural, and directly aimed at the user\n",
    "\n",
    "    Avoid generic phrasing when personalization is possible. For example, instead of \"In TypeScript apps...\" say \"Since your project is built with TypeScript...\"\n",
    "\n",
    "    Use personalization especially in:\n",
    "        – Greetings and transitions\n",
    "        – Help or guidance tailored to tools and frameworks the user uses\n",
    "        – Follow-up messages that continue from past context\n",
    "\n",
    "    Always ensure that personalization is based only on known user details and not assumed.\n",
    "    \n",
    "    The user’s memory (which may be empty) is provided as: {user_details_content}\n",
    "    \"\"\"\n",
    "    \n",
    "    response = model.invoke([SystemMessage(content=system_msg)] + state[\"messages\"])\n",
    "\n",
    "    return {\"messages\": response}\n",
    "\n",
    "\n",
    "def update_memory(state: MessagesState, config: RunnableConfig, store: BaseStore):\n",
    "    # Get the user ID from the config\n",
    "    configurable = configuration.Configuration.from_runnable_config(config)\n",
    "    user_id = configurable.user_id\n",
    "\n",
    "    namespace = (\"memory\", user_id)\n",
    "    key = \"user_details\"\n",
    "    user_details = store.get(namespace, key)\n",
    "        \n",
    "    if user_details:\n",
    "        user_details_content = user_details.value.get('memory')\n",
    "    else:\n",
    "        user_details_content = \"No existing details found.\"\n",
    "\n",
    "    # Format the memory in the system prompt\n",
    "    system_msg = f\"\"\"\n",
    "    You are responsible for updating and maintaining accurate user memory to enable personalized responses.\n",
    "\n",
    "    CURRENT USER DETAILS:\n",
    "    {user_details_content}\n",
    "\n",
    "    INSTRUCTIONS:\n",
    "    1. Carefully review the chat history below.\n",
    "    2. Identify any new, explicitly stated user information, such as:\n",
    "        - Personal details (e.g., name, location)\n",
    "        - Preferences (likes, dislikes)\n",
    "        - Interests and hobbies\n",
    "        - Experiences or background\n",
    "        - Goals and future plans\n",
    "    3. If no new information is present, do not output anything.\n",
    "    4. If new information is found:\n",
    "        - Merge it with the existing memory\n",
    "        - Format the updated memory as a clear, bulleted list\n",
    "        - Include only factual, user-stated details\n",
    "    5. If new information contradicts existing memory, keep the most recent version stated by the user.\n",
    "\n",
    "    Important:\n",
    "    - Do NOT include summaries like \"no update needed\".\n",
    "    - ONLY return output when actual new user information is added.\n",
    "\n",
    "    Your final output should either be a clean, updated bulleted list — or nothing at all.\n",
    "    \"\"\"\n",
    "    \n",
    "    new_memory = model.invoke([SystemMessage(content=system_msg)] + state['messages'])\n",
    "\n",
    "    if new_memory.content.strip():\n",
    "        store.put(namespace, key, {\"memory\": new_memory.content})\n",
    "\n",
    "\n",
    "# Define the graph\n",
    "builder = StateGraph(MessagesState)\n",
    "builder.add_node(\"chat\", chat)\n",
    "builder.add_node(\"update_memory\", update_memory)\n",
    "builder.add_edge(START, \"chat\")\n",
    "builder.add_edge(\"chat\", \"update_memory\")\n",
    "builder.add_edge(\"update_memory\", END)\n",
    "\n",
    "long_term_memory = InMemoryStore()\n",
    "short_term_memory = MemorySaver()\n",
    "\n",
    "graph = builder.compile(checkpointer=short_term_memory, store=long_term_memory)\n",
    "\n",
    "---\n",
    "\n",
    "Can you show me how the chatbot_long_term_memory.py agent saves conversation history?\n",
    "''')"
   ]
  },
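  {
   "cell_type": "markdown",
   "id": "f3a9c0d1",
   "metadata": {},
   "source": [
    "💡 **Note:** In LangGraph, thread-scoped conversation history is normally persisted by a **checkpointer** attached at compile time, not by manual `put` calls inside nodes. A minimal sketch of wiring both memories into the graph (assuming the standard `StateGraph.compile` signature):\n",
    "\n",
    "```python\n",
    "# The checkpointer persists message history per thread; the store is\n",
    "# injected into nodes as the `store` argument for long-term memory.\n",
    "graph = builder.compile(\n",
    "    checkpointer=short_term_memory,\n",
    "    store=long_term_memory,\n",
    ")\n",
    "\n",
    "# History is then keyed by thread_id in the run config\n",
    "# (the \"1\"/\"bob\" values here are placeholders):\n",
    "config = {\"configurable\": {\"thread_id\": \"1\", \"user_id\": \"bob\"}}\n",
    "```\n",
    "\n",
    "With the checkpointer in place, each `graph.invoke(..., config)` call automatically restores and extends the message history for that thread."
   ]
  },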
  {
   "cell_type": "code",
   "execution_count": 222,
   "id": "9a246836",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "================================\u001b[1m Human Message \u001b[0m=================================\n",
      "\n",
      "\n",
      "That explanation was a bit too verbose and confusing. Please, from now on, \n",
      "start with a short summary before the code, and keep explanations clear and concise.\n",
      "\n",
      "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
      "Tool Calls:\n",
      "  UpdateMemory (call_Xy57Fffljo5Rx7ihs8ayjraA)\n",
      " Call ID: call_Xy57Fffljo5Rx7ihs8ayjraA\n",
      "  Args:\n",
      "    update_type: instructions\n",
      "=================================\u001b[1m Tool Message \u001b[0m=================================\n",
      "\n",
      "updated instructions\n",
      "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
      "\n",
      "Got it! I'll start with a short summary before the code and keep explanations clear and concise from now on. If you have any other preferences, just let me know! \n",
      "\n",
      "### Summary\n",
      "To save conversation history in your agent, you can update the `short_term_memory` in the `chat` function after generating a response. Here's how to do it:\n",
      "\n",
      "```python\n",
      "def chat(state: MessagesState, config: RunnableConfig, store: BaseStore):\n",
      "    # Get the user ID from the config\n",
      "    configurable = configuration.Configuration.from_runnable_config(config)\n",
      "    user_id = configurable.user_id\n",
      "\n",
      "    # Retrieve memory from the store\n",
      "    user_details = store.get((\"memory\", user_id), \"user_details\")\n",
      "    user_details_content = user_details.value.get('memory') if user_details else \"No existing details found.\"\n",
      "\n",
      "    # Format the memory in the system prompt\n",
      "    system_msg = f\"\"\"\n",
      "    You are a helpful assistant with memory capabilities.\n",
      "    ...\n",
      "    The user’s memory (which may be empty) is provided as: {user_details_content}\n",
      "    \"\"\"\n",
      "    \n",
      "    response = model.invoke([SystemMessage(content=system_msg)] + state[\"messages\"])\n",
      "\n",
      "    # Save the conversation history\n",
      "    short_term_memory.put(user_id, {\"messages\": state[\"messages\"] + [{\"role\": \"assistant\", \"content\": response.content}]})\n",
      "\n",
      "    return {\"messages\": response}\n",
      "```\n",
      "\n",
      "This code updates the `short_term_memory` with the current conversation after generating a response. Let me know if you need further assistance!\n"
     ]
    }
   ],
   "source": [
    "send_message(\"\"\"\n",
    "That explanation was a bit too verbose and confusing. Please, from now on, \n",
    "start with a short summary before the code, and keep explanations clear and concise.\n",
    "\"\"\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 223,
   "id": "14c56722",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "📋 Preferences (Instructions):\n",
      "🔑 ID: 813f56ea-8464-45ce-b9d4-7529ed10fea7\n",
      "📌 Instruction: Start with a short summary before the code, and keep explanations clear and concise.\n",
      "----------------------------------------\n",
      "========================================\n"
     ]
    }
   ],
   "source": [
    "show_user_instructions()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 224,
   "id": "d1293631",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "================================\u001b[1m Human Message \u001b[0m=================================\n",
      "\n",
      "\n",
      "Can you update the chatbot_long_term_memory.py agent to save conversation history \n",
      "in a SQLite database instead?\n",
      "\n",
      "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
      "\n",
      "Since your current decision is to use in-memory stores for agent memory during the learning phase, I cannot suggest using a SQLite database for saving conversation history unless you explicitly decide that the current decision is obsolete. \n",
      "\n",
      "If you want to proceed with using a SQLite database, please let me know, and I can help you update the decision record accordingly!\n"
     ]
    }
   ],
   "source": [
    "send_message(\"\"\"\n",
    "Can you update the chatbot_long_term_memory.py agent to save conversation history \n",
    "in a SQLite database instead?\n",
    "\"\"\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 225,
   "id": "e57b65bd",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "================================\u001b[1m Human Message \u001b[0m=================================\n",
      "\n",
      "\n",
      "Hey, I don't remember it. When was this decision made?\n",
      "\n",
      "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
      "\n",
      "The decision to use in-memory stores for agent memory instead of databases during the learning phase was made on June 8, 2025. The rationale was that setting up a database is time-consuming and could distract from focusing on agent architecture and LangGraph. \n",
      "\n",
      "If you want to change this decision or need further assistance, just let me know!\n"
     ]
    }
   ],
   "source": [
    "send_message(\"\"\"\n",
    "Hey, I don't remember it. When was this decision made?\n",
    "\"\"\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 226,
   "id": "12d0ded9",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "================================\u001b[1m Human Message \u001b[0m=================================\n",
      "\n",
      "\n",
      "Actually, I think I’m ready to try using a database now. The previous ADR about \n",
      "in-memory stores is obsolete.\n",
      "\n",
      "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
      "Tool Calls:\n",
      "  UpdateMemory (call_sICAkVQhA7Qpivhw8uEtovwj)\n",
      " Call ID: call_sICAkVQhA7Qpivhw8uEtovwj\n",
      "  Args:\n",
      "    update_type: adr\n",
      "=================================\u001b[1m Tool Message \u001b[0m=================================\n",
      "\n",
      "✅ **New ADR**: *Use a SQLite database for conversation history instead of in-memory stores.*\n",
      "**Reason**: Using a database allows for persistent storage and better management of conversation history.\n",
      "**Consequences**: All current projects will use a SQLite database for memory. In-memory stores are not allowed unless explicitly decided otherwise.\n",
      "**Status**: active\n",
      "🕒 **Date**: 2025-06-08T17:35:56.273577\n",
      "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
      "\n",
      "I've updated the decision record to reflect that you are now using a SQLite database for conversation history instead of in-memory stores. \n",
      "\n",
      "### Summary\n",
      "To save conversation history in a SQLite database, you can modify your `chat` function to store the conversation in the database. Here’s how you can do it:\n",
      "\n",
      "1. **Set up the SQLite database**: Ensure you have a SQLite database connection established.\n",
      "2. **Insert conversation history**: After generating a response, insert the conversation into the database.\n",
      "\n",
      "Here’s an example of how to implement this:\n",
      "\n",
      "```python\n",
      "import sqlite3\n",
      "\n",
      "# Connect to the SQLite database (create it if it doesn't exist)\n",
      "conn = sqlite3.connect('conversation_history.db')\n",
      "c = conn.cursor()\n",
      "\n",
      "# Create a table for conversation history if it doesn't exist\n",
      "c.execute('''\n",
      "CREATE TABLE IF NOT EXISTS history (\n",
      "    user_id TEXT,\n",
      "    role TEXT,\n",
      "    content TEXT\n",
      ")\n",
      "''')\n",
      "conn.commit()\n",
      "\n",
      "def chat(state: MessagesState, config: RunnableConfig, store: BaseStore):\n",
      "    # Get the user ID from the config\n",
      "    configurable = configuration.Configuration.from_runnable_config(config)\n",
      "    user_id = configurable.user_id\n",
      "\n",
      "    # Retrieve memory from the store\n",
      "    user_details = store.get((\"memory\", user_id), \"user_details\")\n",
      "    user_details_content = user_details.value.get('memory') if user_details else \"No existing details found.\"\n",
      "\n",
      "    # Format the memory in the system prompt\n",
      "    system_msg = f\"\"\"\n",
      "    You are a helpful assistant with memory capabilities.\n",
      "    ...\n",
      "    The user’s memory (which may be empty) is provided as: {user_details_content}\n",
      "    \"\"\"\n",
      "    \n",
      "    response = model.invoke([SystemMessage(content=system_msg)] + state[\"messages\"])\n",
      "\n",
      "    # Save the conversation history in the SQLite database\n",
      "    for message in state[\"messages\"]:\n",
      "        c.execute(\"INSERT INTO history (user_id, role, content) VALUES (?, ?, ?)\", \n",
      "                  (user_id, message['role'], message['content']))\n",
      "    c.execute(\"INSERT INTO history (user_id, role, content) VALUES (?, ?, ?)\", \n",
      "              (user_id, \"assistant\", response.content))\n",
      "    conn.commit()\n",
      "\n",
      "    return {\"messages\": response}\n",
      "```\n",
      "\n",
      "### Key Changes:\n",
      "- A SQLite database connection is established, and a table for conversation history is created if it doesn't exist.\n",
      "- After generating a response, both the user's messages and the assistant's response are inserted into the database.\n",
      "\n",
      "Let me know if you need further assistance or modifications!\n"
     ]
    }
   ],
   "source": [
    "send_message(\"\"\"\n",
    "Actually, I think I’m ready to try using a database now. The previous ADR about \n",
    "in-memory stores is obsolete.\n",
    "\"\"\")"
   ]
  },
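  {
   "cell_type": "markdown",
   "id": "c7e2d4a8",
   "metadata": {},
   "source": [
    "💡 **Note:** The generated SQLite snippet above has two rough edges: it re-inserts the entire `state[\"messages\"]` list on every call (so earlier turns get duplicated rows), and it shares a single module-level connection and cursor across calls. A safer stdlib sketch (the helper name `save_turn` is hypothetical) inserts only the newest turn and lets a context manager handle the commit:\n",
    "\n",
    "```python\n",
    "import sqlite3\n",
    "\n",
    "DB_PATH = \"conversation_history.db\"\n",
    "\n",
    "def save_turn(user_id: str, role: str, content: str) -> None:\n",
    "    # A short-lived connection per call; the connection's context\n",
    "    # manager commits on success and rolls back on error.\n",
    "    with sqlite3.connect(DB_PATH) as conn:\n",
    "        conn.execute(\n",
    "            \"CREATE TABLE IF NOT EXISTS history (user_id TEXT, role TEXT, content TEXT)\"\n",
    "        )\n",
    "        conn.execute(\n",
    "            \"INSERT INTO history (user_id, role, content) VALUES (?, ?, ?)\",\n",
    "            (user_id, role, content),\n",
    "        )\n",
    "```\n",
    "\n",
    "In the `chat` node you would then call `save_turn` once for the latest user message and once for the assistant response, instead of looping over the whole accumulated history."
   ]
  },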
  {
   "cell_type": "code",
   "execution_count": 227,
   "id": "ecd6ea56",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "📁 Architecture Decision Records:\n",
      "🔑 ID: 219563de-7b64-4671-b748-392e2050a2b4\n",
      "🔖 Decision: Use in-memory stores for agent memory instead of databases during the learning phase.\n",
      "📅 Date: 2025-06-08T17:31:07.816012\n",
      "💭 Rationale: Setting up a database is time-consuming and could distract from focusing on agent architecture and LangGraph.\n",
      "📌 Consequences: All current projects will use in-memory stores for memory. Databases are not allowed unless explicitly decided otherwise.\n",
      "📂 Status: obsolete\n",
      "----------------------------------------\n",
      "🔑 ID: c38aee31-ced0-4124-8d8a-bde6707b364d\n",
      "🔖 Decision: Use a SQLite database for conversation history instead of in-memory stores.\n",
      "📅 Date: 2025-06-08T17:35:56.273577\n",
      "💭 Rationale: Using a database allows for persistent storage and better management of conversation history.\n",
      "📌 Consequences: All current projects will use a SQLite database for memory. In-memory stores are not allowed unless explicitly decided otherwise.\n",
      "📂 Status: active\n",
      "----------------------------------------\n"
     ]
    }
   ],
   "source": [
    "show_decision_records()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 228,
   "id": "bb68b0f4",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "================================\u001b[1m Human Message \u001b[0m=================================\n",
      "\n",
      "\n",
      "Can you show me the list of all active decisions you have so far?\n",
      "\n",
      "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
      "\n",
      "Here are the active decisions recorded so far:\n",
      "\n",
      "1. **Use a SQLite database for conversation history instead of in-memory stores.**\n",
      "   - **Reason**: Using a database allows for persistent storage and better management of conversation history.\n",
      "   - **Consequences**: All current projects will use a SQLite database for memory. In-memory stores are not allowed unless explicitly decided otherwise.\n",
      "   - **Status**: Active\n",
      "   - **Date**: June 8, 2025\n",
      "\n",
      "If you need to make any changes or have further questions, feel free to ask!\n"
     ]
    }
   ],
   "source": [
    "send_message(\"\"\"\n",
    "Can you show me the list of all active decisions you have so far?\n",
    "\"\"\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 229,
   "id": "67bc75ce",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "================================\u001b[1m Human Message \u001b[0m=================================\n",
      "\n",
      "\n",
      "Can you show me the list of all my current preferences?\n",
      "\n",
      "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
      "\n",
      "Here are your current preferences:\n",
      "\n",
      "- You appreciate advice that’s broken down into clear, manageable steps, with practical examples and explanations that help you understand new concepts.\n",
      "- You prefer that I start with a short summary before the code and keep explanations clear and concise.\n",
      "\n",
      "If you have any additional preferences or changes, just let me know!\n"
     ]
    }
   ],
   "source": [
    "send_message(\"\"\"\n",
    "Can you show me the list of all my current preferences?\n",
    "\"\"\")"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "venv",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.13.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
