{
 "cells": [
  {
   "metadata": {
    "collapsed": true
   },
   "cell_type": "markdown",
   "source": [
    "#### Design approaches for the Memory module\n",
    "    Level 1 (most direct): keep a full list of chat messages\n",
    "    Level 2 (a simple refinement): return only the k most recent interactions\n",
    "    Level 3 (slightly more complex): return a concise summary of the past k messages\n",
    "    Level 4 (most complex): extract entities from the stored messages and return only information about the entities referenced in the current run"
   ],
   "id": "6de40cf85480fa5e"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "##### ChatMessageHistory (the foundation)\n",
    "ChatMessageHistory is a basic class for storing and managing conversation messages. It operates directly on message objects (such as HumanMessage and AIMessage) and serves as the underlying storage for the other memory components.\n",
    "In the API docs, ChatMessageHistory also has an alias class, InMemoryChatMessageHistory. To import it, use: from langchain.memory import ChatMessageHistory\n",
    "Characteristics:\n",
    "    A pure container for message objects, independent of any memory strategy (buffering, windowing, summarization, etc.)\n",
    "    Does not format messages (e.g. it never converts them into a plain text string)"
   ],
   "id": "6316a6a323cd5197"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "##### Scenario 1: storing messages",
   "id": "64c9ad85b0391c68"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "from langchain.memory import ChatMessageHistory\n",
    "\n",
    "# Create a history instance\n",
    "message_history = ChatMessageHistory()\n",
    "# Add messages to the store\n",
    "message_history.add_user_message(\"Hello\")\n",
    "message_history.add_ai_message(\"Nice to meet you\")\n",
    "# Print the stored messages\n",
    "print(message_history.messages)"
   ],
   "id": "dc5e7f6213355125",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "##### Scenario 2: connecting to an LLM",
   "id": "bb4d02e4db87dd57"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "from langchain.memory import ChatMessageHistory\n",
    "from langchain_openai import ChatOpenAI\n",
    "import os\n",
    "\n",
    "os.environ['OPENAI_BASE_URL'] = 'https://vip.apiyi.com/v1'\n",
    "os.environ['OPENAI_API_KEY'] = 'sk-xU64G4hXJ4L47ko3764958119dB245D2BdEcE528767dA1Da'\n",
    "\n",
    "chat_model = ChatOpenAI(model=\"gpt-4o-mini\")\n",
    "\n",
    "history = ChatMessageHistory()\n",
    "history.add_user_message(\"Hello\")\n",
    "history.add_ai_message(\"Nice to meet you\")\n",
    "history.add_user_message(\"Please compute 1 + 2 * 3 = ?\")\n",
    "\n",
    "response = chat_model.invoke(history.messages)\n",
    "print(response.content)"
   ],
   "id": "422ca13efc641832",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "##### ConversationBufferMemory\n",
    "ConversationBufferMemory is a basic conversation memory component that stores the complete conversation history in its original order.\n",
    "Suitable for: conversations with few turns that depend on the full context (e.g. a simple chatbot)\n",
    "Characteristics:\n",
    "    Stores the full conversation history\n",
    "    Simple, with no trimming or compression\n",
    "    Integrates seamlessly with Chains/Models\n",
    "    Supports two return formats (controlled by the return_messages parameter)\n",
    "        return_messages=True returns a list of message objects (List[BaseMessage])\n",
    "        return_messages=False (default) returns a concatenated plain text string"
   ],
   "id": "f8103ccbe4a9c795"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "##### Scenario 1: getting started",
   "id": "7e83122825ef19d6"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "from langchain.memory import ConversationBufferMemory\n",
    "\n",
    "# Create an instance\n",
    "buffer_memory = ConversationBufferMemory(return_messages=True)\n",
    "# Store some messages\n",
    "buffer_memory.save_context(inputs={\"input\": \"Hello, my name is Xiaoming\"}, outputs={\"output\": \"Nice to meet you\"})\n",
    "buffer_memory.save_context(inputs={\"input\": \"1+2*3=?\"}, outputs={\"output\": \"7\"})\n",
    "# Retrieve the stored messages\n",
    "print(buffer_memory.load_memory_variables({}))"
   ],
   "id": "950a2b3d68b11fd5",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "##### Scenario 2: connecting to an LLM",
   "id": "4a44e3022d87aa85"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "from langchain.memory import ConversationBufferMemory\n",
    "from langchain_openai import ChatOpenAI\n",
    "from langchain_core.prompts import PromptTemplate\n",
    "from langchain.chains.llm import LLMChain\n",
    "import os\n",
    "\n",
    "os.environ['OPENAI_BASE_URL'] = 'https://vip.apiyi.com/v1'\n",
    "os.environ['OPENAI_API_KEY'] = 'sk-xU64G4hXJ4L47ko3764958119dB245D2BdEcE528767dA1Da'\n",
    "\n",
    "chat_model = ChatOpenAI(model=\"gpt-4o-mini\")\n",
    "\n",
    "# Define a prompt template\n",
    "prompt_template = PromptTemplate.from_template(template=\"\"\"\n",
    "    You can converse with a human.\n",
    "    Current conversation history: {history}\n",
    "    Human question: {question}\n",
    "    Reply:\n",
    "    \"\"\")\n",
    "\n",
    "# Create the memory instance\n",
    "buffer_memory = ConversationBufferMemory(return_messages=True)\n",
    "# Store some messages\n",
    "buffer_memory.save_context(inputs={\"input\": \"Hello, my name is Xiaoming\"}, outputs={\"output\": \"Nice to meet you\"})\n",
    "buffer_memory.save_context(inputs={\"input\": \"1+2*3=?\"}, outputs={\"output\": \"7\"})\n",
    "\n",
    "# Build the chain\n",
    "chain = LLMChain(llm=chat_model, prompt=prompt_template, memory=buffer_memory)\n",
    "response = chain.invoke({\"question\": \"What did I just ask?\"})\n",
    "print(response)"
   ],
   "id": "c8a218e22af4018c",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "#### ConversationChain\n",
    "ConversationChain is essentially a wrapper around ConversationBufferMemory and LLMChain. It ships with a default prompt template (which you can override), removing the need to set up ConversationBufferMemory yourself."
   ],
   "id": "7ef79a1418656aae"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "import os\n",
    "from langchain_openai import ChatOpenAI\n",
    "from langchain_core.prompts import PromptTemplate\n",
    "from langchain.chains.conversation.base import ConversationChain\n",
    "\n",
    "os.environ['OPENAI_BASE_URL'] = 'https://vip.apiyi.com/v1'\n",
    "os.environ['OPENAI_API_KEY'] = 'sk-xU64G4hXJ4L47ko3764958119dB245D2BdEcE528767dA1Da'\n",
    "\n",
    "chat_model = ChatOpenAI(model=\"gpt-4o-mini\")\n",
    "\n",
    "# Define a prompt template; ConversationChain requires the variables {history} and {input}\n",
    "prompt_template = PromptTemplate.from_template(template=\"\"\"\n",
    "    You can converse with a human.\n",
    "    Current conversation history: {history}\n",
    "    Human question: {input}\n",
    "    Reply:\n",
    "    \"\"\")\n",
    "\n",
    "# Create the ConversationChain\n",
    "chain = ConversationChain(llm=chat_model, prompt=prompt_template)\n",
    "response = chain.invoke({\"input\": \"What did I just ask?\"})\n",
    "print(response)"
   ],
   "id": "90cd239523cbac80",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "#### ConversationBufferWindowMemory\n",
    "As we saw above, ConversationBufferMemory keeps filling the history with the entire conversation, giving the model full background context.\n",
    "This makes memory usage and token consumption grow without bound, and every model has a maximum input token limit.\n",
    "Since very old turns rarely contribute useful information to the current answer, LangChain offers the ConversationBufferWindowMemory module.\n",
    "This memory class keeps a list of recent interactions and uses only the last k of them, so the buffer never grows too large.\n",
    "Characteristics:\n",
    "    Well suited to long conversations\n",
    "    Integrates seamlessly with Chains/Models\n",
    "    Supports two return formats (controlled by the return_messages parameter)\n",
    "        return_messages=True returns a list of message objects (List[BaseMessage])\n",
    "        return_messages=False (default) returns a concatenated plain text string"
   ],
   "id": "ef8a2a91994fcef7"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "##### Scenario 1: getting started",
   "id": "d8da1aca7e83fd2c"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "from langchain.memory import ConversationBufferWindowMemory\n",
    "\n",
    "# Create an instance that keeps only the last k=2 interactions\n",
    "memory = ConversationBufferWindowMemory(k=2)\n",
    "# Save some messages\n",
    "memory.save_context({\"input\": \"Hello\"}, {\"output\": \"What's up?\"})\n",
    "memory.save_context({\"input\": \"Who are you?\"}, {\"output\": \"I am an AI assistant\"})\n",
    "memory.save_context({\"input\": \"When is your birthday?\"}, {\"output\": \"I don't know\"})\n",
    "# Read back the stored messages (only the last 2 exchanges remain)\n",
    "print(memory.load_memory_variables({}))"
   ],
   "id": "25eb365cfc95e350",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "##### Scenario 2: returning a list of message objects",
   "id": "a231c624396f4cda"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "from langchain.memory import ConversationBufferWindowMemory\n",
    "\n",
    "# Create an instance that returns message objects\n",
    "memory = ConversationBufferWindowMemory(k=2, return_messages=True)\n",
    "# Save some messages\n",
    "memory.save_context({\"input\": \"Hello\"}, {\"output\": \"What's up?\"})\n",
    "memory.save_context({\"input\": \"Who are you?\"}, {\"output\": \"I am an AI assistant\"})\n",
    "memory.save_context({\"input\": \"When is your birthday?\"}, {\"output\": \"I don't know\"})\n",
    "# Read back the stored messages\n",
    "print(memory.load_memory_variables({}))"
   ],
   "id": "edf9dd7dd7a7ed7d",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "##### Scenario 3: using it with an LLM and a chain",
   "id": "4b60bdc6b0324c0"
  },
  {
   "metadata": {
    "jupyter": {
     "is_executing": true
    }
   },
   "cell_type": "code",
   "source": [
    "import os\n",
    "from langchain_openai import ChatOpenAI\n",
    "from langchain.chains.llm import LLMChain\n",
    "from langchain_core.prompts import PromptTemplate\n",
    "from langchain.memory import ConversationBufferWindowMemory\n",
    "\n",
    "os.environ['OPENAI_BASE_URL'] = 'https://vip.apiyi.com/v1'\n",
    "os.environ['OPENAI_API_KEY'] = 'sk-xU64G4hXJ4L47ko3764958119dB245D2BdEcE528767dA1Da'\n",
    "\n",
    "chat_model = ChatOpenAI(model=\"gpt-4o-mini\")\n",
    "\n",
    "# Define a prompt template\n",
    "prompt_template = PromptTemplate.from_template(template=\"\"\"\n",
    "    You can converse with a human.\n",
    "    Current conversation history: {history}\n",
    "    Human question: {question}\n",
    "    Reply:\n",
    "    \"\"\")\n",
    "\n",
    "# Create the window memory\n",
    "memory = ConversationBufferWindowMemory(k=2, return_messages=True)\n",
    "# Save some messages\n",
    "memory.save_context({\"input\": \"Hello\"}, {\"output\": \"What's up?\"})\n",
    "memory.save_context({\"input\": \"Who are you?\"}, {\"output\": \"I am an AI assistant\"})\n",
    "memory.save_context({\"input\": \"When is your birthday?\"}, {\"output\": \"I don't know\"})\n",
    "\n",
    "llm_chain = LLMChain(llm=chat_model, prompt=prompt_template, memory=memory, verbose=True)\n",
    "response1 = llm_chain.invoke({\"question\": \"Hello, I am Sun Xiaokong\"})\n",
    "response2 = llm_chain.invoke({\"question\": \"What is my name?\"})\n",
    "print(response1)\n",
    "print(response2)"
   ],
   "id": "d8bd0a43459e343d",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "#### ConversationTokenBufferMemory\n",
    "ConversationTokenBufferMemory is a LangChain conversation memory mechanism bounded by token count.\n",
    "If the stored conversation exceeds the token limit, the earliest parts are pruned so that the most recent exchanges fit within the limit\n",
    "Characteristics:\n",
    "    Precise token control\n",
    "    Keeps the original (unsummarized) conversation text"
   ],
   "id": "d77ebe6f8b7424ac"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "##### Example 1",
   "id": "41bbee71ebe6e655"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "import os\n",
    "from langchain_openai import ChatOpenAI\n",
    "from langchain_core.prompts import PromptTemplate\n",
    "from langchain.memory import ConversationTokenBufferMemory\n",
    "\n",
    "os.environ['OPENAI_BASE_URL'] = 'https://vip.apiyi.com/v1'\n",
    "os.environ['OPENAI_API_KEY'] = 'sk-xU64G4hXJ4L47ko3764958119dB245D2BdEcE528767dA1Da'\n",
    "\n",
    "chat_model = ChatOpenAI(model=\"gpt-4o-mini\")\n",
    "\n",
    "# Define a prompt template\n",
    "prompt_template = PromptTemplate.from_template(template=\"\"\"\n",
    "    You can converse with a human.\n",
    "    Current conversation history: {history}\n",
    "    Human question: {question}\n",
    "    Reply:\n",
    "    \"\"\")\n",
    "\n",
    "# Create the memory; the llm is used to count tokens\n",
    "memory = ConversationTokenBufferMemory(llm=chat_model, max_token_limit=50)\n",
    "memory.save_context({\"input\": \"Hello\"}, {\"output\": \"What's up?\"})\n",
    "memory.save_context({\"input\": \"Who are you?\"}, {\"output\": \"I am an AI assistant\"})\n",
    "\n",
    "print(memory.load_memory_variables({}))"
   ],
   "id": "40305e68d691cdf5",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "#### ConversationSummaryMemory\n",
    "ConversationSummaryMemory is a LangChain memory mechanism that intelligently compresses conversation history: instead of storing the raw transcript, it uses a large language model to generate a concise running summary.\n",
    "It is particularly useful for long conversations and scenarios where only the key facts need to be retained\n",
    "Characteristics:\n",
    "    Summary generation\n",
    "    Dynamic updates\n",
    "    Context optimization"
   ],
   "id": "52493e942ce08e98"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "##### Example 1\n",
    "If there are no prior messages when you instantiate ConversationSummaryMemory, use the constructor directly"
   ],
   "id": "f22f5ae512ea4aca"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "import os\n",
    "from langchain_openai import ChatOpenAI\n",
    "from langchain.chains.llm import LLMChain\n",
    "from langchain_core.prompts import PromptTemplate\n",
    "from langchain.memory import ConversationSummaryMemory\n",
    "\n",
    "os.environ['OPENAI_BASE_URL'] = 'https://vip.apiyi.com/v1'\n",
    "os.environ['OPENAI_API_KEY'] = 'sk-xU64G4hXJ4L47ko3764958119dB245D2BdEcE528767dA1Da'\n",
    "\n",
    "chat_model = ChatOpenAI(model=\"gpt-4o-mini\")\n",
    "\n",
    "# Define a prompt template\n",
    "prompt_template = PromptTemplate.from_template(template=\"\"\"\n",
    "    You can converse with a human.\n",
    "    Current conversation history: {history}\n",
    "    Human question: {question}\n",
    "    Reply:\n",
    "    \"\"\")\n",
    "\n",
    "memory = ConversationSummaryMemory(llm=chat_model)\n",
    "memory.save_context({\"input\": \"Hello\"}, {\"output\": \"What's up?\"})\n",
    "memory.save_context({\"input\": \"Who are you?\"}, {\"output\": \"I am an AI assistant\"})\n",
    "memory.save_context({\"input\": \"My name is MAX\"}, {\"output\": \"Hello, MAX\"})\n",
    "# print(memory.load_memory_variables({}))\n",
    "llm_chain = LLMChain(llm=chat_model, prompt=prompt_template, memory=memory)\n",
    "response = llm_chain.invoke({\"question\": \"What is my name?\"})\n",
    "print(response['text'])"
   ],
   "id": "bbfcc90ca68416ea",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "##### Example 2\n",
    "If there are already historical messages when you instantiate ConversationSummaryMemory, use the from_messages() class method instead"
   ],
   "id": "dc4f2431a3639e2"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "import os\n",
    "from langchain_openai import ChatOpenAI\n",
    "from langchain.chains.llm import LLMChain\n",
    "from langchain_core.prompts import PromptTemplate\n",
    "from langchain.memory import ConversationSummaryMemory, ChatMessageHistory\n",
    "\n",
    "os.environ['OPENAI_BASE_URL'] = 'https://vip.apiyi.com/v1'\n",
    "os.environ['OPENAI_API_KEY'] = 'sk-xU64G4hXJ4L47ko3764958119dB245D2BdEcE528767dA1Da'\n",
    "\n",
    "chat_model = ChatOpenAI(model=\"gpt-4o-mini\")\n",
    "\n",
    "# Define a prompt template\n",
    "prompt_template = PromptTemplate.from_template(template=\"\"\"\n",
    "    You can converse with a human.\n",
    "    Current conversation history: {history}\n",
    "    Human question: {question}\n",
    "    Reply:\n",
    "    \"\"\")\n",
    "\n",
    "history = ChatMessageHistory()\n",
    "history.add_user_message(\"Hello, who are you?\")\n",
    "history.add_ai_message(\"I am Xiaozhi, an AI assistant\")\n",
    "\n",
    "# Summarize the existing history into the memory\n",
    "memory = ConversationSummaryMemory.from_messages(llm=chat_model, chat_memory=history)\n",
    "\n",
    "# print(memory.load_memory_variables({}))\n",
    "llm_chain = LLMChain(llm=chat_model, prompt=prompt_template, memory=memory)\n",
    "response = llm_chain.invoke({\"question\": \"What is your name?\"})\n",
    "print(response)"
   ],
   "id": "7ee871d95a0e3937",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "#### ConversationSummaryBufferMemory\n",
    "ConversationSummaryBufferMemory is a hybrid LangChain memory mechanism that combines the strengths of ConversationBufferMemory (full transcripts) and ConversationSummaryMemory (summaries).\n",
    "It keeps the most recent exchanges verbatim while intelligently summarizing the older ones\n",
    "Characteristics:\n",
    "    Keeps the most recent raw exchanges: full context for the latest interactions\n",
    "    Summarizes older history: compresses turns that fall out of the buffer, avoiding information overload\n",
    "    Balances detail and efficiency: key details are preserved, yet long conversations stay manageable"
   ],
   "id": "afb6afe0498bcf16"
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "##### Scenario 1: getting started",
   "id": "dab4a32d6929ab06"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "import os\n",
    "from langchain_openai import ChatOpenAI\n",
    "from langchain.memory import ConversationSummaryBufferMemory\n",
    "\n",
    "os.environ['OPENAI_BASE_URL'] = 'https://vip.apiyi.com/v1'\n",
    "os.environ['OPENAI_API_KEY'] = 'sk-xU64G4hXJ4L47ko3764958119dB245D2BdEcE528767dA1Da'\n",
    "\n",
    "chat_model = ChatOpenAI(model=\"gpt-4o-mini\")\n",
    "\n",
    "# Keep at most 50 tokens of raw history; older turns are summarized by the llm\n",
    "memory = ConversationSummaryBufferMemory(llm=chat_model, max_token_limit=50, return_messages=True)\n",
    "# Store some messages\n",
    "memory.save_context(inputs={\"input\": \"Hello, my name is Xiaoming\"}, outputs={\"output\": \"Nice to meet you\"})\n",
    "memory.save_context(inputs={\"input\": \"What is your name?\"}, outputs={\"output\": \"I am AI Xiaozhi\"})\n",
    "print(memory.load_memory_variables({}))"
   ],
   "id": "3289baef9d69f175",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "#### ConversationEntityMemory (for reference)\n",
    "ConversationEntityMemory is an entity-based conversation memory mechanism. It intelligently identifies, stores, and reuses information about entities that appear in the conversation (people, places, products, etc.), along with their attributes and relationships, keeping it all in structured form so the AI gains\n",
    "stronger context understanding and recall.\n",
    "Benefit: it mitigates information overload\n",
    "    In long conversations, large amounts of redundant content can drown out the key facts\n",
    "    By summarizing per entity, unimportant details can be compressed away (e.g. drop the pleasantries, keep hard facts such as prices and dates)\n",
    "Use case: in high-stakes domains such as healthcare, entity memory helps ensure that critical information (e.g. an allergy history) is recognized and acted on with 100% accuracy\n",
    "For example:\n",
    "    {\"input\": \"I have a headache, my blood pressure is 140/90, and I'm taking aspirin\"},\n",
    "    {\"output\": \"I suggest monitoring your blood pressure; you can keep taking aspirin\"}\n",
    "    {\"input\": \"I am allergic to penicillin\"},\n",
    "    {\"output\": \"Your penicillin allergy has been recorded\"}\n",
    "    {\"input\": \"I've taken aspirin for three days and the headache hasn't eased\"},\n",
    "    {\"output\": \"I suggest stopping the aspirin and trying ibuprofen\"}\n",
    "With ConversationSummaryMemory:\n",
    "    The patient reports a headache and elevated blood pressure (140/90) and is taking aspirin. The patient is allergic to penicillin; after three days the headache has not eased, so a different painkiller is recommended.\n",
    "\n",
    "With ConversationEntityMemory:\n",
    "    {\n",
    "        \"symptom\": \"headache\",\n",
    "        \"blood pressure\": \"140/90\",\n",
    "        \"current medication\": \"aspirin (ineffective)\",\n",
    "        \"allergy\": \"penicillin\"\n",
    "    }"
   ],
   "id": "604775dcf875af28"
  },
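  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "##### Example (sketch)\n",
    "A minimal, untested sketch of how ConversationEntityMemory can be wired up, following the pattern of the earlier examples. The bundled ENTITY_MEMORY_CONVERSATION_TEMPLATE prompt and the entity_store attribute come from the classic langchain API and may differ across versions."
   ],
   "id": "entity-memory-example-md"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "import os\n",
    "from langchain_openai import ChatOpenAI\n",
    "from langchain.memory import ConversationEntityMemory\n",
    "from langchain.memory.prompt import ENTITY_MEMORY_CONVERSATION_TEMPLATE\n",
    "from langchain.chains.conversation.base import ConversationChain\n",
    "\n",
    "os.environ['OPENAI_BASE_URL'] = 'https://vip.apiyi.com/v1'\n",
    "os.environ['OPENAI_API_KEY'] = 'sk-...'  # use your own key\n",
    "\n",
    "chat_model = ChatOpenAI(model=\"gpt-4o-mini\")\n",
    "\n",
    "# The entity memory itself calls the LLM to extract entities from each turn\n",
    "entity_memory = ConversationEntityMemory(llm=chat_model)\n",
    "# The bundled prompt expects the variables {entities}, {history} and {input}\n",
    "chain = ConversationChain(llm=chat_model, prompt=ENTITY_MEMORY_CONVERSATION_TEMPLATE, memory=entity_memory)\n",
    "\n",
    "chain.invoke({\"input\": \"I have a headache and I'm taking aspirin\"})\n",
    "chain.invoke({\"input\": \"I am allergic to penicillin\"})\n",
    "# Inspect the structured per-entity store built from the conversation\n",
    "print(entity_memory.entity_store.store)"
   ],
   "id": "entity-memory-example-code",
   "outputs": [],
   "execution_count": null
  },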
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "#### ConversationKGMemory (for reference)\n",
    "ConversationKGMemory is a conversation memory module based on a knowledge graph. It goes a step further than ConversationEntityMemory: beyond identifying and storing entities,\n",
    "it also captures the complex relationships between entities, building a structured knowledge network\n",
    "Characteristics:\n",
    "    Knowledge-graph structure: conversation content is converted into (head entity, relation, tail entity) triples\n",
    "    Dynamic relationship reasoning"
   ],
   "id": "78006d2f194ade3b"
  },
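  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "##### Example (sketch)\n",
    "A minimal, untested sketch of ConversationKGMemory, assuming the classic langchain API (the kg attribute and get_triples() helper may differ across versions). Note that load_memory_variables needs the current input so it can look up the relevant entities."
   ],
   "id": "kg-memory-example-md"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "import os\n",
    "from langchain_openai import ChatOpenAI\n",
    "from langchain.memory import ConversationKGMemory\n",
    "\n",
    "os.environ['OPENAI_BASE_URL'] = 'https://vip.apiyi.com/v1'\n",
    "os.environ['OPENAI_API_KEY'] = 'sk-...'  # use your own key\n",
    "\n",
    "chat_model = ChatOpenAI(model=\"gpt-4o-mini\")\n",
    "\n",
    "# The memory uses the LLM to extract (head entity, relation, tail entity) triples\n",
    "memory = ConversationKGMemory(llm=chat_model)\n",
    "memory.save_context({\"input\": \"Xiaoming is my friend\"}, {\"output\": \"Got it\"})\n",
    "memory.save_context({\"input\": \"Xiaoming lives in Beijing\"}, {\"output\": \"Okay\"})\n",
    "\n",
    "# Retrieval is entity-driven: the current input determines which facts are returned\n",
    "print(memory.load_memory_variables({\"input\": \"Who is Xiaoming?\"}))\n",
    "# Inspect the raw knowledge-graph triples\n",
    "print(memory.kg.get_triples())"
   ],
   "id": "kg-memory-example-code",
   "outputs": [],
   "execution_count": null
  },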
  {
   "metadata": {},
   "cell_type": "code",
   "outputs": [],
   "execution_count": null,
   "source": "",
   "id": "b492de2cd9de5962"
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
