{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import asyncio\n",
    "import nest_asyncio\n",
    "nest_asyncio.apply()\n",
    "import sys\n",
    "import os\n",
    "\n",
    "# 添加项目根目录到Python路径\n",
    "sys.path.append(os.path.abspath('../..'))  # 修改这行，向上追溯三层目录到项目根目录\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "topic = \"What does the technology development roadmap of multi-modal large models look like?\"\n",
    "with open(r\"D:\\GoodStudy\\FX15\\FX15H\\final_work\\FX15_research_agent\\summary-generation-match\\review.md\", \"r\", encoding=\"utf-8\") as file:\n",
    "    content = file.read()\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "content = '''## 2 Background and Related Work\n",
    "## 2.1 Multimodal Language Models\n",
    "\n",
    "Multimodal Learning (MML) has emerged as a crucial field of study, aiming to build AI models capable of extracting and correlating information from various data modalities. Vision-language pre-training, a key branch of MML, focuses on developing foundation models with enhanced performance in vision and language tasks. Notable milestones in this domain include Vision Transformer (ViT), which introduced an end-to-end solution for image understanding using Transformer encoders, and CLIP, which utilized multimodal pre-training for zero-shot recognition by converting classification into a retrieval task. The recent advancements in LLMs, such as LLaMA, BLOOM, and ChatGPT, have further propelled the integration of auto-regressive language models as decoders in vision-language tasks, facilitating knowledge sharing between language and multimodal domains. These developments highlight the growing significance of MML and its potential to revolutionize various applications by bridging the gap between different modalities.  \n",
    "\n",
    "Multimodal Learning (MML) has emerged as a crucial field of study, aiming to build AI models capable of extracting and correlating information from various data modalities. As more data has become available, a wider selection of datasets containing more than one modality has also enabled growth in the multimodal research sphere. Multimodal data is intrinsic to biomedical research and clinical care. While data belonging to a single modality can be conceptualized as a way in which something is perceived or captured in the world into an abstract digitized representation such as a waveform or image, multimodal data aggregates multiple modalities and thus consists of several intrinsically different representation spaces (and potentially even different data geometries). Computed tomography (CT) and positron emission tomography (PET) are specific examples of single imaging modalities, while magnetic resonance imaging (MRI) is an example itself of multimodal data, as its component sequences T1-weighted, T2-weighted, and fluid-attenuated inversion recovery (FLAIR) can each be considered their own unique modalities, since each of the MR sequences measure some different biophysical or biological property. Laboratory blood tests, patient demographics, electrocardiogram (ECG) and genetic expression values are also common modalities in clinical decision models. This work discusses unique ways that differences between modalities have been addressed and mitigated to improve accuracy of AI models in similar ways to which a human would naturally be able to re-calibrate to these differences.  \n",
    "\n",
    "Vision-language pre-training, a key branch of MML, focuses on developing foundation models with enhanced performance in vision and language tasks.  \n",
    "\n",
    "Notable milestones in this domain include Vision Transformer (ViT), which introduced an end-to-end solution for image understanding using Transformer encoders, and CLIP, which utilized multimodal pre-training for zero-shot recognition by converting classification into a retrieval task.  \n",
    "\n",
    "The recent advancements in LLMs, such as LLaMA, BLOOM, and ChatGPT, have further propelled the integration of auto-regressive language models as decoders in vision-language tasks, facilitating knowledge sharing between language and multimodal domains.  \n",
    "\n",
    "These developments highlight the growing significance of MML and its potential to revolutionize various applications by bridging the gap between different modalities.  \n",
    "## 2.2 Model Editing Techniques\n",
    "\n",
    "Editing multimodal large language models (MLLMs) presents unique challenges compared to editing single-modal LLMs. The inherent complexity and diversity of MLLMs, stemming from their integration of multiple modalities like text, images, and audio, necessitate more sophisticated editing techniques. This subsection explores the existing approaches for editing single-modal LLMs and their potential applicability to MLLMs.\n",
    "\n",
    "**Knowledge Infusion:** This technique involves incrementally updating a language model with new facts or information without significant retraining. Methods like knowledge distillation and fine-tuning allow for the transfer of knowledge from a larger, more knowledgeable model to a smaller, less knowledgeable one. While effective for single-modal LLMs, knowledge infusion for MLLMs requires careful consideration of the interplay between different modalities and the potential for cross-modal knowledge transfer.\n",
    "\n",
    "**Incremental Learning:** Incremental learning involves training a model on new data while retaining its knowledge from previous training sessions. This approach is particularly relevant for MLLMs, as it allows for the continuous updating of the model with new multimodal data without forgetting previously learned information. Techniques like experience replay and model regularization can be employed to mitigate catastrophic forgetting and ensure the stability of the model.\n",
    "\n",
    "**Modality-Specific Editing:** Given the distinct characteristics of each modality, it may be necessary to develop modality-specific editing techniques for MLLMs. For example, image editing techniques like style transfer and image inpainting can be used to modify the visual representations learned by the model, while text editing techniques like grammar correction and sentiment modification can be used to refine the linguistic representations.\n",
    "\n",
    "**Challenges and Limitations:** Editing MLLMs is still an evolving field, and several challenges and limitations remain. These include the difficulty of aligning edits across different modalities, the potential for introducing biases or errors during the editing process, and the lack of standardized evaluation metrics for assessing the effectiveness of editing techniques. Future research should focus on addressing these challenges and developing more robust and efficient editing methods for MLLMs.\n",
    "## 2.3 Challenges and Limitations\n",
    "\n",
    "Editing multimodal large language models (MLLMs) presents unique challenges compared to editing single-modal LLMs. The inherent complexity and diversity of MLLMs, stemming from their integration of multiple modalities like text, images, and audio, necessitate more sophisticated editing techniques. This subsection explores the existing approaches for editing single-modal LLMs and their potential applicability to MLLMs.\n",
    "\n",
    "**Knowledge Infusion:** This technique involves incrementally updating a language model with new facts or information without significant retraining. Methods like knowledge distillation and fine-tuning allow for the transfer of knowledge from a larger, more knowledgeable model to a smaller, less knowledgeable one. While effective for single-modal LLMs, knowledge infusion for MLLMs requires careful consideration of the interplay between different modalities and the potential for cross-modal knowledge transfer.\n",
    "\n",
    "**Incremental Learning:** Incremental learning involves training a model on new data while retaining its knowledge from previous training sessions. This approach is particularly relevant for MLLMs, as it allows for the continuous updating of the model with new multimodal data without forgetting previously learned information. Techniques like experience replay and model regularization can be employed to mitigate catastrophic forgetting and ensure the stability of the model.\n",
    "\n",
    "**Modality-Specific Editing:** Given the distinct characteristics of each modality, it may be necessary to develop modality-specific editing techniques for MLLMs. For example, image editing techniques like style transfer and image inpainting can be used to modify the visual representations learned by the model, while text editing techniques like grammar correction and sentiment modification can be used to refine the linguistic representations.\n",
    "\n",
    "**Challenges and Limitations:** Editing MLLMs is still an evolving field, and several challenges and limitations remain. These include the difficulty of aligning edits across different modalities, the potential for introducing biases or errors during the editing process, and the lack of standardized evaluation metrics for assessing the effectiveness of editing techniques. Future research should focus on addressing these challenges and developing more robust and efficient editing methods for MLLMs.'''"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "topic = \"What does the technology development roadmap of multi-modal large models look like?\"\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2025-02-19 00:58:26,810 - research_agent.core.pipeline_reference - INFO - 开始处理全文，文本长度：8371\n",
      "2025-02-19 00:58:26,811 - research_agent.core.pipeline_reference - INFO - 拆分为 4 个 section\n",
      "2025-02-19 00:58:26,812 - research_agent.core.pipeline_reference - INFO - 同步处理短 section\n",
      "2025-02-19 00:58:26,815 - research_agent.core.pipeline_reference - INFO - 开始处理:section\n",
      "## 2.1 Multimodal L，主题What does the technology development roadmap of multi-modal large models look like?\n",
      "2025-02-19 00:58:26,817 - research_agent.core.pipeline_reference - INFO - 开始处理:section\n",
      "## 2.2 Model Editin，主题What does the technology development roadmap of multi-modal large models look like?\n",
      "2025-02-19 00:58:26,817 - research_agent.core.pipeline_reference - INFO - 开始处理:section\n",
      "## 2.3 Challenges a，主题What does the technology development roadmap of multi-modal large models look like?\n",
      "2025-02-19 00:58:28,832 - research_agent.core.pipeline_reference - DEBUG - 尝试运行 pipeline，重试次数：0\n",
      "2025-02-19 00:58:29,426 - research_agent.core.pipeline_reference - DEBUG - 尝试运行 pipeline，重试次数：0\n",
      "2025-02-19 00:58:40,926 - research_agent.core.reference_checker - DEBUG - find statements: ['Editing multimodal large language models (MLLMs) presents unique challenges compared to editing single-modal LLMs.', 'The inherent complexity and diversity of MLLMs, stemming from their integration of multiple modalities like text, images, and audio, necessitate more sophisticated editing techniques.', 'This subsection explores the existing approaches for editing single-modal LLMs and their potential applicability to MLLMs.', 'Knowledge Infusion: This technique involves incrementally updating a language model with new facts or information without significant retraining.', 'Methods like knowledge distillation and fine-tuning allow for the transfer of knowledge from a larger, more knowledgeable model to a smaller, less knowledgeable one.', 'While effective for single-modal LLMs, knowledge infusion for MLLMs requires careful consideration of the interplay between different modalities and the potential for cross-modal knowledge transfer.', 'Incremental Learning: Incremental learning involves training a model on new data while retaining its knowledge from previous training sessions.', 'This approach is particularly relevant for MLLMs, as it allows for the continuous updating of the model with new multimodal data without forgetting previously learned information.', 'Techniques like experience replay and model regularization can be employed to mitigate catastrophic forgetting and ensure the stability of the model.', 'Modality-Specific Editing: Given the distinct characteristics of each modality, it may be necessary to develop modality-specific editing techniques for MLLMs.', 'For example, image editing techniques like style transfer and image inpainting can be used to modify the visual representations learned by the model, while text editing techniques like grammar correction and sentiment modification can be used to refine the linguistic representations.', 'Challenges and Limitations: Editing MLLMs is still an evolving field, and several challenges and limitations remain.', 'These include the difficulty of aligning edits across different modalities, the potential for introducing biases or errors during the editing process, and the lack of standardized evaluation metrics for assessing the effectiveness of editing techniques.', 'Future research should focus on addressing these challenges and developing more robust and efficient editing methods for MLLMs.']\n",
      "2025-02-19 00:58:44,512 - research_agent.core.reference_checker - DEBUG - find statements: ['Multimodal Learning (MML) has emerged as a crucial field of study, aiming to build AI models capable of extracting and correlating information from various data modalities.', 'Vision-language pre-training, a key branch of MML, focuses on developing foundation models with enhanced performance in vision and language tasks.', 'Notable milestones in this domain include Vision Transformer (ViT), which introduced an end-to-end solution for image understanding using Transformer encoders, and CLIP, which utilized multimodal pre-training for zero-shot recognition by converting classification into a retrieval task.', 'The recent advancements in LLMs, such as LLaMA, BLOOM, and ChatGPT, have further propelled the integration of auto-regressive language models as decoders in vision-language tasks, facilitating knowledge sharing between language and multimodal domains.', 'These developments highlight the growing significance of MML and its potential to revolutionize various applications by bridging the gap between different modalities.', 'Multimodal data is intrinsic to biomedical research and clinical care.', 'Computed tomography (CT) and positron emission tomography (PET) are specific examples of single imaging modalities, while magnetic resonance imaging (MRI) is an example itself of multimodal data, as its component sequences T1-weighted, T2-weighted, and fluid-attenuated inversion recovery (FLAIR) can each be considered their own unique modalities, since each of the MR sequences measure some different biophysical or biological property.', 'Laboratory blood tests, patient demographics, electrocardiogram (ECG) and genetic expression values are also common modalities in clinical decision models.', 'This work discusses unique ways that differences between modalities have been addressed and mitigated to improve accuracy of AI models in similar ways to which a human would naturally be able to re-calibrate to these differences.']\n",
      "2025-02-19 00:59:06,333 - research_agent.core.reference_checker - DEBUG - supplement citations: [{'statement': 'Multimodal Learning (MML) has emerged as a crucial field of study, aiming to build AI models capable of extracting and correlating information from various data modalities. <sup>{\"chunk_id\":\"1\", \"paper_id\":\"65499d88939a5f4082be99ae\"}</sup>'}, {'statement': 'Vision-language pre-training, a key branch of MML, focuses on developing foundation models with enhanced performance in vision and language tasks. <sup>{\"chunk_id\":\"2\", \"paper_id\":\"64b60eaa3fda6d7f06eae95b\"}</sup>'}, {'statement': 'Notable milestones in this domain include Vision Transformer (ViT), which introduced an end-to-end solution for image understanding using Transformer encoders, and CLIP, which utilized multimodal pre-training for zero-shot recognition by converting classification into a retrieval task. <sup>{\"chunk_id\":\"2\", \"paper_id\":\"65fc055d13fb2c6cf6df23e1\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"668c9ec201d2a3fbfc3aa397\"}</sup>'}, {'statement': 'The recent advancements in LLMs, such as LLaMA, BLOOM, and ChatGPT, have further propelled the integration of auto-regressive language models as decoders in vision-language tasks, facilitating knowledge sharing between language and multimodal domains. <sup>{\"chunk_id\":\"1\", \"paper_id\":\"65e7dcc013fb2c6cf6fdddc3\"}</sup><sup>{\"chunk_id\":\"2\", \"paper_id\":\"6552dee2939a5f408239c275\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"646d8642d68f896efa0a2f4d\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"6566b085939a5f40827a9785\"}</sup><sup>{\"chunk_id\":\"2\", \"paper_id\":\"6588e965939a5f408200ef6f\"}</sup>'}, {'statement': 'These developments highlight the growing significance of MML and its potential to revolutionize various applications by bridging the gap between different modalities. <sup>{\"chunk_id\":\"1\", \"paper_id\":\"65499d88939a5f4082be99ae\"}</sup><sup>{\"chunk_id\":\"9\", \"paper_id\":\"66ee050801d2a3fbfc9d543e\"}</sup><sup>{\"chunk_id\":\"0\", \"paper_id\":\"6373035b90e50fcafd09fe89\"}</sup>'}, {'statement': 'Multimodal data is intrinsic to biomedical research and clinical care. <sup>{\"chunk_id\":\"1\", \"paper_id\":\"65499d88939a5f4082be99ae\"}</sup>'}, {'statement': 'Computed tomography (CT) and positron emission tomography (PET) are specific examples of single imaging modalities, while magnetic resonance imaging (MRI) is an example itself of multimodal data, as its component sequences T1-weighted, T2-weighted, and fluid-attenuated inversion recovery (FLAIR) can each be considered their own unique modalities, since each of the MR sequences measure some different biophysical or biological property<sup>{\"chunk_id\":\"1\", \"paper_id\":\"65499d88939a5f4082be99ae\"}</sup>.'}, {'statement': 'Laboratory blood tests, patient demographics, electrocardiogram (ECG) and genetic expression values are also common modalities in clinical decision models. <sup>{\"chunk_id\":\"7\", \"paper_id\":\"64ab82833fda6d7f06f77daa\"}</sup>'}, {'statement': 'This work discusses unique ways that differences between modalities have been addressed and mitigated to improve accuracy of AI models in similar ways to which a human would naturally be able to re-calibrate to these differences. <sup>{\"chunk_id\":\"1\", \"paper_id\":\"655ac423939a5f4082e26049\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"643e0ad00746dc40e3419426\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"64dc49903fda6d7f06389c95\"}</sup><sup>{\"chunk_id\":\"6\", \"paper_id\":\"64c88ca43fda6d7f06268aff\"}</sup>'}]\n",
      "2025-02-19 00:59:08,717 - research_agent.core.reference_checker - DEBUG - supplement citations: [{'statement': 'Editing multimodal large language models (MLLMs) presents unique challenges compared to editing single-modal LLMs. For instance, multimodal model editing demands a higher level of scrutiny and careful consideration in the editing process <sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>. Specifically, incorrect outputs from multimodal models may stem from the synergistic effects of various modalities, such as misreading or misrecognition, which is analogous to human errors like color blindness affecting color identification in images <sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>. Furthermore, the task of editing multimodal LLMs presents considerable challenges due to their inherent diversity and complexity, as incorrect outputs may stem not just from LLMs but also from the interaction between different modalities <sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>. Empirically, current editing approaches are effective for editing the textual model in the multimodal language model but not as effective for editing the vision module, indicating the potential difficulty and opportunities of this task <sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>.'}, {'statement': 'The inherent complexity and diversity of MLLMs, stemming from their integration of multiple modalities like text, images, and audio, necessitate more sophisticated editing techniques. <sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup> <sup>{\"chunk_id\":\"2\", \"paper_id\":\"656fdcf8939a5f4082920de7\"}</sup> <sup>{\"chunk_id\":\"3\", \"paper_id\":\"6571365b939a5f4082f7ccfa\"}</sup> <sup>{\"chunk_id\":\"1\", \"paper_id\":\"6684b06d01d2a3fbfce33e31\"}</sup>'}, {'statement': 'This subsection explores the existing approaches for editing single-modal LLMs and their potential applicability to MLLMs. <sup>{\"chunk_id\":\"1\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>'}, {'statement': 'Knowledge Infusion: This technique involves incrementally updating a language model with new facts or information without significant retraining. <sup>{\"chunk_id\":\"3\", \"paper_id\":\"646465fdd68f896efa1950fb\"}</sup>'}, {'statement': 'Methods like knowledge distillation and fine-tuning allow for the transfer of knowledge from a larger, more knowledgeable model to a smaller, less knowledgeable one. <sup>{\"chunk_id\":\"8\", \"paper_id\":\"6386c9e090e50fcafdfa0a19\"}</sup><sup>{\"chunk_id\":\"5\", \"paper_id\":\"65c97cd4939a5f4082307083\"}</sup><sup>{\"chunk_id\":\"2\", \"paper_id\":\"64af735d3fda6d7f0644baeb\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"65c97cd4939a5f4082307083\"}</sup><sup>{\"chunk_id\":\"2\", \"paper_id\":\"62b595eb5aee126c0f4793f8\"}</sup>'}, {'statement': 'While effective for single-modal LLMs, knowledge infusion for MLLMs requires careful consideration of the interplay between different modalities and the potential for cross-modal knowledge transfer. <sup>{\"chunk_id\":\"2\", \"paper_id\":\"65e68afc13fb2c6cf6f6e33d\"}</sup><sup>{\"chunk_id\":\"3\", \"paper_id\":\"6006d0cb91e0111a1b6a2507\"}</sup><sup>{\"chunk_id\":\"0\", \"paper_id\":\"65a75aa9939a5f408261970a\"}</sup><sup>{\"chunk_id\":\"2\", \"paper_id\":\"655ac423939a5f4082e26049\"}</sup>'}, {'statement': 'Incremental Learning: Incremental learning involves training a model on new data while retaining its knowledge from previous training sessions. <sup>{\"chunk_id\":\"2\", \"paper_id\":\"6694828f01d2a3fbfc8654c0\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"641137fe90e50fcafd17b992\"}</sup><sup>{\"chunk_id\":\"2\", \"paper_id\":\"62d7730e5aee126c0f9009f3\"}</sup>'}, {'statement': 'This approach is particularly relevant for MLLMs, as it allows for the continuous updating of the model with new multimodal data without forgetting previously learned information. <sup>{\"chunk_id\":\"1\", \"paper_id\":\"623004305aee126c0f9b322d\"}</sup>'}, {'statement': 'Techniques like experience replay and model regularization can be employed to mitigate catastrophic forgetting and ensure the stability of the model. <sup>{\"chunk_id\":\"1\", \"paper_id\":\"6464afdfd68f896efa356511\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"64e2e14f3fda6d7f064665d0\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"6413dac290e50fcafd3ce260\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"656fde3c939a5f4082948795\"}</sup>'}, {'statement': 'Modality-Specific Editing: Given the distinct characteristics of each modality, it may be necessary to develop modality-specific editing techniques for MLLMs. <sup>{\"chunk_id\":\"1\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>'}, {'statement': 'For example, image editing techniques like style transfer and image inpainting can be used to modify the visual representations learned by the model, while text editing techniques like grammar correction and sentiment modification can be used to refine the linguistic representations. <sup>{\"chunk_id\":\"1\", \"paper_id\":\"6556d305939a5f4082dc359b\"}</sup><sup>{\"chunk_id\":\"5\", \"paper_id\":\"6392a77190e50fcafd8c4e48\"}</sup><sup>{\"chunk_id\":\"6\", \"paper_id\":\"66ac3e6d01d2a3fbfc896b1b\"}</sup>'}, {'statement': 'Challenges and Limitations: Editing MLLMs is still an evolving field, and several challenges and limitations remain. <sup>{\"chunk_id\":\"0\", \"paper_id\":\"646c3addd68f896efa5d1901\"}</sup><sup>{\"chunk_id\":\"6\", \"paper_id\":\"647eaf35d68f896efad408e7\"}</sup><sup>{\"chunk_id\":\"9\", \"paper_id\":\"66f4cd3401d2a3fbfcbfac37\"}</sup>'}, {'statement': 'These include the difficulty of aligning edits across different modalities, the potential for introducing biases or errors during the editing process, and the lack of standardized evaluation metrics for assessing the effectiveness of editing techniques. <sup>{\"chunk_id\":\"7\", \"paper_id\":\"64741c33d68f896efaa7b664\"}</sup><sup>{\"chunk_id\":\"3\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup><sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>'}, {'statement': 'Future research should focus on addressing these challenges and developing more robust and efficient editing methods for MLLMs. <sup>{\"chunk_id\":\"0\", \"paper_id\":\"646c3addd68f896efa5d1901\"}</sup><sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup><sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>'}]\n",
      "2025-02-19 00:59:46,993 - research_agent.core.reference_checker - DEBUG - verified results: {'supported': [{'statement': 'Multimodal Learning (MML) has emerged as a crucial field of study, aiming to build AI models capable of extracting and correlating information from various data modalities. <sup>{\"chunk_id\":\"1\", \"paper_id\":\"65499d88939a5f4082be99ae\"}</sup>'}, {'statement': 'Vision-language pre-training, a key branch of MML, focuses on developing foundation models with enhanced performance in vision and language tasks. <sup>{\"chunk_id\":\"2\", \"paper_id\":\"64b60eaa3fda6d7f06eae95b\"}</sup>'}, {'statement': 'Notable milestones in this domain include Vision Transformer (ViT), which introduced an end-to-end solution for image understanding using Transformer encoders, and CLIP, which utilized multimodal pre-training for zero-shot recognition by converting classification into a retrieval task. <sup>{\"chunk_id\":\"2\", \"paper_id\":\"65fc055d13fb2c6cf6df23e1\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"668c9ec201d2a3fbfc3aa397\"}</sup>'}, {'statement': 'The recent advancements in LLMs, such as LLaMA, BLOOM, and ChatGPT, have further propelled the integration of auto-regressive language models as decoders in vision-language tasks, facilitating knowledge sharing between language and multimodal domains. <sup>{\"chunk_id\":\"1\", \"paper_id\":\"65e7dcc013fb2c6cf6fdddc3\"}</sup><sup>{\"chunk_id\":\"2\", \"paper_id\":\"6552dee2939a5f408239c275\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"646d8642d68f896efa0a2f4d\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"6566b085939a5f40827a9785\"}</sup><sup>{\"chunk_id\":\"2\", \"paper_id\":\"6588e965939a5f408200ef6f\"}</sup>'}, {'statement': 'These developments highlight the growing significance of MML and its potential to revolutionize various applications by bridging the gap between different modalities. <sup>{\"chunk_id\":\"1\", \"paper_id\":\"65499d88939a5f4082be99ae\"}</sup><sup>{\"chunk_id\":\"9\", \"paper_id\":\"66ee050801d2a3fbfc9d543e\"}</sup><sup>{\"chunk_id\":\"0\", \"paper_id\":\"6373035b90e50fcafd09fe89\"}</sup>'}, {'statement': 'Multimodal data is intrinsic to biomedical research and clinical care. <sup>{\"chunk_id\":\"1\", \"paper_id\":\"65499d88939a5f4082be99ae\"}</sup>'}, {'statement': 'Computed tomography (CT) and positron emission tomography (PET) are specific examples of single imaging modalities, while magnetic resonance imaging (MRI) is an example itself of multimodal data, as its component sequences T1-weighted, T2-weighted, and fluid-attenuated inversion recovery (FLAIR) can each be considered their own unique modalities, since each of the MR sequences measure some different biophysical or biological property<sup>{\"chunk_id\":\"1\", \"paper_id\":\"65499d88939a5f4082be99ae\"}</sup>.'}, {'statement': 'Laboratory blood tests, patient demographics, electrocardiogram (ECG) and genetic expression values are also common modalities in clinical decision models.<sup>{\"chunk_id\":\"7\", \"paper_id\":\"64ab82833fda6d7f06f77daa\"}</sup>'}, {'statement': 'This work discusses unique ways that differences between modalities have been addressed and mitigated to improve accuracy of AI models in similar ways to which a human would naturally be able to re-calibrate to these differences. <sup>{\"chunk_id\":\"1\", \"paper_id\":\"655ac423939a5f4082e26049\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"643e0ad00746dc40e3419426\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"64dc49903fda6d7f06389c95\"}</sup>'}], 'unsupported_count': 0, 'retries_remaining': 2}\n",
      "2025-02-19 00:59:54,183 - research_agent.core.reference_checker - DEBUG - verified results: {'supported': [{'statement': 'Editing multimodal large language models (MLLMs) presents unique challenges compared to editing single-modal LLMs. For instance, multimodal model editing demands a higher level of scrutiny and careful consideration in the editing process <sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>. Specifically, incorrect outputs from multimodal models may stem from the synergistic effects of various modalities, such as misreading or misrecognition, which is analogous to human errors like color blindness affecting color identification in images <sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>. Furthermore, the task of editing multimodal LLMs presents considerable challenges due to their inherent diversity and complexity, as incorrect outputs may stem not just from LLMs but also from the interaction between different modalities <sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>. Empirically, current editing approaches are effective for editing the textual model in the multimodal language model but not as effective for editing the vision module, indicating the potential difficulty and opportunities of this task <sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>.'}, {'statement': 'This subsection explores the existing approaches for editing single-modal LLMs and their potential applicability to MLLMs. <sup>{\"chunk_id\":\"1\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>'}, {'statement': 'Methods like knowledge distillation and fine-tuning allow for the transfer of knowledge from a larger, more knowledgeable model to a smaller, less knowledgeable one. <sup>{\"chunk_id\":\"8\", \"paper_id\":\"6386c9e090e50fcafdfa0a19\"}</sup><sup>{\"chunk_id\":\"5\", \"paper_id\":\"65c97cd4939a5f4082307083\"}</sup><sup>{\"chunk_id\":\"2\", \"paper_id\":\"64af735d3fda6d7f0644baeb\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"65c97cd4939a5f4082307083\"}</sup><sup>{\"chunk_id\":\"2\", \"paper_id\":\"62b595eb5aee126c0f4793f8\"}</sup>'}, {'statement': 'While effective for single-modal LLMs, knowledge infusion for MLLMs requires careful consideration of the interplay between different modalities and the potential for cross-modal knowledge transfer. <sup>{\"chunk_id\":\"2\", \"paper_id\":\"65e68afc13fb2c6cf6f6e33d\"}</sup><sup>{\"chunk_id\":\"3\", \"paper_id\":\"6006d0cb91e0111a1b6a2507\"}</sup><sup>{\"chunk_id\":\"0\", \"paper_id\":\"65a75aa9939a5f408261970a\"}</sup><sup>{\"chunk_id\":\"2\", \"paper_id\":\"655ac423939a5f4082e26049\"}</sup>'}, {'statement': 'Incremental Learning: Incremental learning involves training a model on new data while retaining its knowledge from previous training sessions. <sup>{\"chunk_id\":\"2\", \"paper_id\":\"6694828f01d2a3fbfc8654c0\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"641137fe90e50fcafd17b992\"}</sup><sup>{\"chunk_id\":\"2\", \"paper_id\":\"62d7730e5aee126c0f9009f3\"}</sup>'}, {'statement': 'This approach is particularly relevant for MLLMs, as it allows for the continuous updating of the model with new multimodal data without forgetting previously learned information. <sup>{\"chunk_id\":\"1\", \"paper_id\":\"623004305aee126c0f9b322d\"}</sup>'}, {'statement': 'Techniques like experience replay and model regularization can be employed to mitigate catastrophic forgetting and ensure the stability of the model. <sup>{\"chunk_id\":\"1\", \"paper_id\":\"6464afdfd68f896efa356511\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"64e2e14f3fda6d7f064665d0\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"6413dac290e50fcafd3ce260\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"656fde3c939a5f4082948795\"}</sup>'}, {'statement': 'Modality-Specific Editing: Given the distinct characteristics of each modality, it may be necessary to develop modality-specific editing techniques for MLLMs. <sup>{\"chunk_id\":\"1\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>'}, {'statement': 'For example, image editing techniques like style transfer and image inpainting can be used to modify the visual representations learned by the model, while text editing techniques like grammar correction and sentiment modification can be used to refine the linguistic representations. <sup>{\"chunk_id\":\"1\", \"paper_id\":\"6556d305939a5f4082dc359b\"}</sup><sup>{\"chunk_id\":\"5\", \"paper_id\":\"6392a77190e50fcafd8c4e48\"}</sup><sup>{\"chunk_id\":\"6\", \"paper_id\":\"66ac3e6d01d2a3fbfc896b1b\"}</sup>'}, {'statement': 'These include the difficulty of aligning edits across different modalities, the potential for introducing biases or errors during the editing process, and the lack of standardized evaluation metrics for assessing the effectiveness of editing techniques. <sup>{\"chunk_id\":\"7\", \"paper_id\":\"64741c33d68f896efaa7b664\"}</sup><sup>{\"chunk_id\":\"3\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup><sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>'}, {'statement': 'Future research should focus on addressing these challenges and developing more robust and efficient editing methods for MLLMs. <sup>{\"chunk_id\":\"0\", \"paper_id\":\"646c3addd68f896efa5d1901\"}</sup><sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup><sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>'}, {'statement': 'The inherent complexity and diversity of MLLMs, stemming from their integration of multiple modalities like text, images, and audio, necessitate more sophisticated editing techniques. <sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup><sup>{\"chunk_id\":\"3\", \"paper_id\":\"6571365b939a5f4082f7ccfa\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"6684b06d01d2a3fbfce33e31\"}</sup>'}, {'statement': 'Knowledge Infusion: This technique involves incrementally updating a language model with new facts or information without significant retraining. <sup>{\"chunk_id\":\"1\", \"paper_id\":\"616e37435244ab9dcbd1a6fa\"}</sup>'}, {'statement': 'Challenges and Limitations: Editing MLLMs is still an evolving field, and several challenges and limitations remain. <sup>{\"chunk_id\":\"0\", \"paper_id\":\"646c3addd68f896efa5d1901\"}</sup>'}], 'unsupported_count': 0, 'retries_remaining': 2}\n",
      "2025-02-19 01:00:28,342 - research_agent.core.reference_checker - DEBUG - Successfully updated \n",
      "## 2.1 Multimodal Language Models\n",
      "\n",
      "Multimodal Lea with citations\n",
      "2025-02-19 01:00:28,343 - research_agent.core.pipeline_reference - INFO - section \n",
      "## 2.1 Multimodal L 处理成功\n",
      "2025-02-19 01:00:30,356 - research_agent.core.pipeline_reference - DEBUG - 尝试运行 pipeline，重试次数：0\n",
      "2025-02-19 01:00:40,827 - research_agent.core.reference_checker - DEBUG - Successfully updated \n",
      "## 2.2 Model Editing Techniques\n",
      "\n",
      "Editing multimod with citations\n",
      "2025-02-19 01:00:40,828 - research_agent.core.pipeline_reference - INFO - section \n",
      "## 2.2 Model Editin 处理成功\n",
      "2025-02-19 01:00:42,430 - research_agent.core.reference_checker - DEBUG - find statements: ['Editing multimodal large language models (MLLMs) presents unique challenges compared to editing single-modal LLMs.', 'The inherent complexity and diversity of MLLMs, stemming from their integration of multiple modalities like text, images, and audio, necessitate more sophisticated editing techniques.', 'This subsection explores the existing approaches for editing single-modal LLMs and their potential applicability to MLLMs.', 'Knowledge Infusion: This technique involves incrementally updating a language model with new facts or information without significant retraining.', 'Methods like knowledge distillation and fine-tuning allow for the transfer of knowledge from a larger, more knowledgeable model to a smaller, less knowledgeable one.', 'While effective for single-modal LLMs, knowledge infusion for MLLMs requires careful consideration of the interplay between different modalities and the potential for cross-modal knowledge transfer.', 'Incremental Learning: Incremental learning involves training a model on new data while retaining its knowledge from previous training sessions.', 'This approach is particularly relevant for MLLMs, as it allows for the continuous updating of the model with new multimodal data without forgetting previously learned information.', 'Techniques like experience replay and model regularization can be employed to mitigate catastrophic forgetting and ensure the stability of the model.', 'Modality-Specific Editing: Given the distinct characteristics of each modality, it may be necessary to develop modality-specific editing techniques for MLLMs.', 'For example, image editing techniques like style transfer and image inpainting can be used to modify the visual representations learned by the model, while text editing techniques like grammar correction and sentiment modification can be used to refine the linguistic representations.', 'Editing MLLMs is still an evolving field, and several challenges and limitations remain.', 'These include the difficulty of aligning edits across different modalities, the potential for introducing biases or errors during the editing process, and the lack of standardized evaluation metrics for assessing the effectiveness of editing techniques.', 'Future research should focus on addressing these challenges and developing more robust and efficient editing methods for MLLMs.']\n",
      "2025-02-19 01:01:23,157 - research_agent.core.reference_checker - DEBUG - supplement citations: [{'statement': 'Editing multimodal large language models (MLLMs) presents unique challenges compared to editing single-modal LLMs. For instance, multimodal model editing demands a higher level of scrutiny and careful consideration in the editing process <sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>. Specifically, incorrect outputs from multimodal models may stem from the synergistic effects of various modalities, such as misreading or misrecognition, which is analogous to human errors like color blindness affecting color identification in images <sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>. Furthermore, the task of editing multimodal LLMs presents considerable challenges due to their inherent diversity and complexity, as incorrect outputs may stem not just from LLMs but also from the interaction between different modalities <sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>. Empirically, current editing approaches are effective for editing the textual model in the multimodal language model but not as effective for editing the vision module, indicating the potential difficulty and opportunities of this task <sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>.'}, {'statement': 'The inherent complexity and diversity of MLLMs, stemming from their integration of multiple modalities like text, images, and audio, necessitate more sophisticated editing techniques. <sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup> <sup>{\"chunk_id\":\"2\", \"paper_id\":\"656fdcf8939a5f4082920de7\"}</sup> <sup>{\"chunk_id\":\"3\", \"paper_id\":\"6571365b939a5f4082f7ccfa\"}</sup> <sup>{\"chunk_id\":\"1\", \"paper_id\":\"6684b06d01d2a3fbfce33e31\"}</sup>'}, {'statement': 'This subsection explores the existing approaches for editing single-modal LLMs and their potential applicability to MLLMs. <sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup><sup>{\"chunk_id\":\"2\", \"paper_id\":\"656fdcf8939a5f4082920de7\"}</sup><sup>{\"chunk_id\":\"0\", \"paper_id\":\"646c3addd68f896efa5d1901\"}</sup>'}, {'statement': 'Knowledge Infusion: This technique involves incrementally updating a language model with new facts or information without significant retraining. <sup>{\"chunk_id\":\"3\", \"paper_id\":\"646465fdd68f896efa1950fb\"}</sup>'}, {'statement': 'Methods like knowledge distillation and fine-tuning allow for the transfer of knowledge from a larger, more knowledgeable model to a smaller, less knowledgeable one. <sup>{\"chunk_id\":\"8\", \"paper_id\":\"6386c9e090e50fcafdfa0a19\"}</sup><sup>{\"chunk_id\":\"5\", \"paper_id\":\"65c97cd4939a5f4082307083\"}</sup><sup>{\"chunk_id\":\"2\", \"paper_id\":\"64af735d3fda6d7f0644baeb\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"65c97cd4939a5f4082307083\"}</sup><sup>{\"chunk_id\":\"2\", \"paper_id\":\"62b595eb5aee126c0f4793f8\"}</sup>'}, {'statement': 'While effective for single-modal LLMs, knowledge infusion for MLLMs requires careful consideration of the interplay between different modalities and the potential for cross-modal knowledge transfer. <sup>{\"chunk_id\":\"2\", \"paper_id\":\"65e68afc13fb2c6cf6f6e33d\"}</sup><sup>{\"chunk_id\":\"3\", \"paper_id\":\"6006d0cb91e0111a1b6a2507\"}</sup><sup>{\"chunk_id\":\"0\", \"paper_id\":\"65a75aa9939a5f408261970a\"}</sup><sup>{\"chunk_id\":\"2\", \"paper_id\":\"655ac423939a5f4082e26049\"}</sup>'}, {'statement': 'Incremental Learning: Incremental learning involves training a model on new data while retaining its knowledge from previous training sessions. <sup>{\"chunk_id\":\"2\", \"paper_id\":\"6694828f01d2a3fbfc8654c0\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"641137fe90e50fcafd17b992\"}</sup><sup>{\"chunk_id\":\"2\", \"paper_id\":\"62d7730e5aee126c0f9009f3\"}</sup>'}, {'statement': 'This approach is particularly relevant for MLLMs, as it allows for the continuous updating of the model with new multimodal data without forgetting previously learned information. <sup>{\"chunk_id\":\"1\", \"paper_id\":\"623004305aee126c0f9b322d\"}</sup>'}, {'statement': 'Techniques like experience replay and model regularization can be employed to mitigate catastrophic forgetting and ensure the stability of the model. <sup>{\"chunk_id\":\"1\", \"paper_id\":\"6464afdfd68f896efa356511\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"64e2e14f3fda6d7f064665d0\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"6413dac290e50fcafd3ce260\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"656fde3c939a5f4082948795\"}</sup>'}, {'statement': 'Modality-Specific Editing: Given the distinct characteristics of each modality, it may be necessary to develop modality-specific editing techniques for MLLMs. <sup>{\"chunk_id\":\"1\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>'}, {'statement': 'For example, image editing techniques like style transfer and image inpainting can be used to modify the visual representations learned by the model, while text editing techniques like grammar correction and sentiment modification can be used to refine the linguistic representations. <sup>{\"chunk_id\":\"1\", \"paper_id\":\"6556d305939a5f4082dc359b\"}</sup><sup>{\"chunk_id\":\"5\", \"paper_id\":\"6392a77190e50fcafd8c4e48\"}</sup><sup>{\"chunk_id\":\"6\", \"paper_id\":\"66ac3e6d01d2a3fbfc896b1b\"}</sup>'}, {'statement': 'Editing MLLMs is still an evolving field, and several challenges and limitations remain. For instance, the task of editing multimodal LLMs presents considerable challenges, given their inherent diversity and complexity. Specifically, incorrect outputs from multimodal models may stem from the synergistic effects of various modalities. Incorrect outputs may stem not just from LLMs, analogous to human errors like misreading or misrecognition (e.g., color blindness affecting color identification in images). As shown in Figure 1, before the editing, the model misidentified the object as a “ladder” instead of the correct “barrier”, resulting in an erroneous prediction. After the editing, the model accurately recognized the “barrier”. Note that the utility of multimodal LLMs (Yin et al., 2023) is increasing, yet there is a lack of corresponding dataset resources and benchmarks for editing multimodal large language models. Additionally, current editing approaches are effective for editing the textual model in the multimodal language model but not as effective for editing the vision module. For example, in editing the language module of the BLIP-2 model, the reliability of MEND can reach 99.4%, but only attain 65.2% if editing the vision module, indicating the potential difficulty and opportunities of this task. Furthermore, the primary constraint pertains to the scale of the LLMs utilized. Current evaluations mainly employ 7B LLMs as the base model, and despite the impressive results garnered, the potential benefits of larger model sizes, such as 65B or 130B (Kaplan et al., 2020), are worth future exploration. The second challenge relates to the quality and quantity of training data (Jia et al., 2021). As the model size and capabilities scale up, a corresponding increase in data is crucial. However, the procurement and refinement of high-quality training data present substantial logistical and financial hurdles. For instance, the open-source interleaved dataset MMC4 contains a significant amount of noise in the form of text and images, like commercial advertisements. This noise could adversely affect the model’s output language and image style. The sensitivity of LLMs to human prompts is a known issue (Wei et al., 2022b; Wang et al., 2023b; Zhou et al., 2023), a challenge that extends to MLLMs. For instance, MLLMs’ propensity for detailed responses necessitates tailored prompting to elicit concise and short answers, which is particularly useful when addressing Visual Question Answering (VQA) tasks. <sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup><sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup><sup>{\"chunk_id\":\"11\", \"paper_id\":\"650ba7c03fda6d7f06e613ee\"}</sup>'}, {'statement': 'These include the difficulty of aligning edits across different modalities, the potential for introducing biases or errors during the editing process, and the lack of standardized evaluation metrics for assessing the effectiveness of editing techniques. <sup>{\"chunk_id\":\"7\", \"paper_id\":\"64741c33d68f896efaa7b664\"}</sup><sup>{\"chunk_id\":\"3\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup><sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup><sup>{\"chunk_id\":\"6\", \"paper_id\":\"65f7a01c13fb2c6cf668ebd0\"}</sup>'}, {'statement': 'Future research should focus on addressing these challenges and developing more robust and efficient editing methods for MLLMs. <sup>{\"chunk_id\":\"0\", \"paper_id\":\"646c3addd68f896efa5d1901\"}</sup><sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>'}]\n",
      "2025-02-19 01:02:31,147 - research_agent.core.reference_checker - DEBUG - verified results: {'supported': [{'statement': 'Editing multimodal large language models (MLLMs) presents unique challenges compared to editing single-modal LLMs. For instance, multimodal model editing demands a higher level of scrutiny and careful consideration in the editing process <sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>. Specifically, incorrect outputs from multimodal models may stem from the synergistic effects of various modalities, such as misreading or misrecognition, which is analogous to human errors like color blindness affecting color identification in images <sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>. Furthermore, the task of editing multimodal LLMs presents considerable challenges due to their inherent diversity and complexity, as incorrect outputs may stem not just from LLMs but also from the interaction between different modalities <sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>. Empirically, current editing approaches are effective for editing the textual model in the multimodal language model but not as effective for editing the vision module, indicating the potential difficulty and opportunities of this task <sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>.'}, {'statement': 'Methods like knowledge distillation and fine-tuning allow for the transfer of knowledge from a larger, more knowledgeable model to a smaller, less knowledgeable one. <sup>{\"chunk_id\":\"8\", \"paper_id\":\"6386c9e090e50fcafdfa0a19\"}</sup><sup>{\"chunk_id\":\"5\", \"paper_id\":\"65c97cd4939a5f4082307083\"}</sup><sup>{\"chunk_id\":\"2\", \"paper_id\":\"64af735d3fda6d7f0644baeb\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"65c97cd4939a5f4082307083\"}</sup><sup>{\"chunk_id\":\"2\", \"paper_id\":\"62b595eb5aee126c0f4793f8\"}</sup>'}, {'statement': 'While effective for single-modal LLMs, knowledge infusion for MLLMs requires careful consideration of the interplay between different modalities and the potential for cross-modal knowledge transfer. <sup>{\"chunk_id\":\"2\", \"paper_id\":\"65e68afc13fb2c6cf6f6e33d\"}</sup><sup>{\"chunk_id\":\"3\", \"paper_id\":\"6006d0cb91e0111a1b6a2507\"}</sup><sup>{\"chunk_id\":\"0\", \"paper_id\":\"65a75aa9939a5f408261970a\"}</sup><sup>{\"chunk_id\":\"2\", \"paper_id\":\"655ac423939a5f4082e26049\"}</sup>'}, {'statement': 'Incremental Learning: Incremental learning involves training a model on new data while retaining its knowledge from previous training sessions. <sup>{\"chunk_id\":\"2\", \"paper_id\":\"6694828f01d2a3fbfc8654c0\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"641137fe90e50fcafd17b992\"}</sup><sup>{\"chunk_id\":\"2\", \"paper_id\":\"62d7730e5aee126c0f9009f3\"}</sup>'}, {'statement': 'This approach is particularly relevant for MLLMs, as it allows for the continuous updating of the model with new multimodal data without forgetting previously learned information. <sup>{\"chunk_id\":\"1\", \"paper_id\":\"623004305aee126c0f9b322d\"}</sup>'}, {'statement': 'Techniques like experience replay and model regularization can be employed to mitigate catastrophic forgetting and ensure the stability of the model. <sup>{\"chunk_id\":\"1\", \"paper_id\":\"6464afdfd68f896efa356511\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"64e2e14f3fda6d7f064665d0\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"6413dac290e50fcafd3ce260\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"656fde3c939a5f4082948795\"}</sup>'}, {'statement': 'Modality-Specific Editing: Given the distinct characteristics of each modality, it may be necessary to develop modality-specific editing techniques for MLLMs. <sup>{\"chunk_id\":\"1\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>'}, {'statement': 'Editing MLLMs is still an evolving field, and several challenges and limitations remain. For instance, the task of editing multimodal LLMs presents considerable challenges, given their inherent diversity and complexity. Specifically, incorrect outputs from multimodal models may stem from the synergistic effects of various modalities. Incorrect outputs may stem not just from LLMs, analogous to human errors like misreading or misrecognition (e.g., color blindness affecting color identification in images). As shown in Figure 1, before the editing, the model misidentified the object as a “ladder” instead of the correct “barrier”, resulting in an erroneous prediction. After the editing, the model accurately recognized the “barrier”. Note that the utility of multimodal LLMs (Yin et al., 2023) is increasing, yet there is a lack of corresponding dataset resources and benchmarks for editing multimodal large language models. Additionally, current editing approaches are effective for editing the textual model in the multimodal language model but not as effective for editing the vision module. For example, in editing the language module of the BLIP-2 model, the reliability of MEND can reach 99.4%, but only attain 65.2% if editing the vision module, indicating the potential difficulty and opportunities of this task. Furthermore, the primary constraint pertains to the scale of the LLMs utilized. Current evaluations mainly employ 7B LLMs as the base model, and despite the impressive results garnered, the potential benefits of larger model sizes, such as 65B or 130B (Kaplan et al., 2020), are worth future exploration. The second challenge relates to the quality and quantity of training data (Jia et al., 2021). As the model size and capabilities scale up, a corresponding increase in data is crucial. However, the procurement and refinement of high-quality training data present substantial logistical and financial hurdles. For instance, the open-source interleaved dataset MMC4 contains a significant amount of noise in the form of text and images, like commercial advertisements. This noise could adversely affect the model’s output language and image style. The sensitivity of LLMs to human prompts is a known issue (Wei et al., 2022b; Wang et al., 2023b; Zhou et al., 2023), a challenge that extends to MLLMs. For instance, MLLMs’ propensity for detailed responses necessitates tailored prompting to elicit concise and short answers, which is particularly useful when addressing Visual Question Answering (VQA) tasks. <sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup><sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup><sup>{\"chunk_id\":\"11\", \"paper_id\":\"650ba7c03fda6d7f06e613ee\"}</sup>'}, {'statement': 'These include the difficulty of aligning edits across different modalities, the potential for introducing biases or errors during the editing process, and the lack of standardized evaluation metrics for assessing the effectiveness of editing techniques. <sup>{\"chunk_id\":\"7\", \"paper_id\":\"64741c33d68f896efaa7b664\"}</sup><sup>{\"chunk_id\":\"3\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup><sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup><sup>{\"chunk_id\":\"6\", \"paper_id\":\"65f7a01c13fb2c6cf668ebd0\"}</sup>'}, {'statement': 'Future research should focus on addressing these challenges and developing more robust and efficient editing methods for MLLMs. <sup>{\"chunk_id\":\"0\", \"paper_id\":\"646c3addd68f896efa5d1901\"}</sup><sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>'}, {'statement': 'The inherent complexity and diversity of MLLMs, stemming from their integration of multiple modalities like text, images, and audio, necessitate more sophisticated editing techniques. <sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup><sup>{\"chunk_id\":\"3\", \"paper_id\":\"6571365b939a5f4082f7ccfa\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"6684b06d01d2a3fbfce33e31\"}</sup>'}, {'statement': 'This subsection explores the existing approaches for editing single-modal LLMs and their potential applicability to MLLMs. <sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup><sup>{\"chunk_id\":\"0\", \"paper_id\":\"646c3addd68f896efa5d1901\"}</sup>'}, {'statement': 'Knowledge Infusion: This technique involves incrementally updating a language model with new facts or information without significant retraining. <sup>{\"chunk_id\":\"1\", \"paper_id\":\"65c437c0939a5f4082d8c312\"}</sup>'}, {'statement': 'For example, image editing techniques like style transfer and image inpainting can be used to modify the visual representations learned by the model, while text editing techniques like grammar correction and sentiment modification can be used to refine the linguistic representations.<sup>{\"chunk_id\":\"1\", \"paper_id\":\"6556d305939a5f4082dc359b\"}</sup><sup>{\"chunk_id\":\"5\", \"paper_id\":\"6392a77190e50fcafd8c4e48\"}</sup>'}], 'unsupported_count': 0, 'retries_remaining': 2}\n",
      "2025-02-19 01:03:32,383 - research_agent.core.reference_checker - DEBUG - Successfully updated \n",
      "## 2.3 Challenges and Limitations\n",
      "\n",
      "Editing multim with citations\n",
      "2025-02-19 01:03:32,384 - research_agent.core.pipeline_reference - INFO - section \n",
      "## 2.3 Challenges a 处理成功\n",
      "2025-02-19 01:03:32,385 - research_agent.core.pipeline_reference - INFO - 所有 section 处理完毕\n",
      "2025-02-19 01:03:32,385 - research_agent.core.pipeline_reference - INFO - 开始处理引用部分\n",
      "2025-02-19 01:03:32,386 - research_agent.core.pipeline_reference - DEBUG - 提取到 46 个引用\n",
      "2025-02-19 01:03:32,387 - research_agent.core.pipeline_reference - INFO - 开始替换引用为数字，处理 46 个引用\n",
      "2025-02-19 01:03:44,886 - research_agent.core.pipeline_reference - INFO - 引用替换完成，生成 46 条参考文献\n",
      "2025-02-19 01:03:44,887 - research_agent.core.pipeline_reference - INFO - 生成最终文档\n"
     ]
    }
   ],
   "source": [
    "from research_agent.core.pipeline_reference import CitationProcessor\n",
    "pipeliner = CitationProcessor()\n",
    "draft = await pipeliner.process_sections(content,topic)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "## 2 Background and Related Work\n",
      "## 2.1 Multimodal Language Models\n",
      "\n",
      "Multimodal Learning (MML) has emerged as a crucial field of study, aiming to build AI models capable of extracting and correlating information from various data modalities.<sup>22</sup> Vision-language pre-training, a key branch of MML, focuses on developing foundation models with enhanced performance in vision and language tasks.<sup>16</sup> Notable milestones in this domain include Vision Transformer (ViT), which introduced an end-to-end solution for image understanding using Transformer encoders, and CLIP, which utilized multimodal pre-training for zero-shot recognition by converting classification into a retrieval task.<sup>25</sup><sup>1</sup> The recent advancements in LLMs, such as LLaMA, BLOOM, and ChatGPT, have further propelled the integration of auto-regressive language models as decoders in vision-language tasks, facilitating knowledge sharing between language and multimodal domains.<sup>45</sup><sup>36</sup><sup>2</sup><sup>14</sup><sup>29</sup> These developments highlight the growing significance of MML and its potential to revolutionize various applications by bridging the gap between different modalities.<sup>22</sup><sup>3</sup><sup>8</sup>  \n",
      "\n",
      "Multimodal Learning (MML) has emerged as a crucial field of study, aiming to build AI models capable of extracting and correlating information from various data modalities.<sup>22</sup> As more data has become available, a wider selection of datasets containing more than one modality has also enabled growth in the multimodal research sphere. Multimodal data is intrinsic to biomedical research and clinical care.<sup>22</sup> While data belonging to a single modality can be conceptualized as a way in which something is perceived or captured in the world into an abstract digitized representation such as a waveform or image, multimodal data aggregates multiple modalities and thus consists of several intrinsically different representation spaces (and potentially even different data geometries). Computed tomography (CT) and positron emission tomography (PET) are specific examples of single imaging modalities, while magnetic resonance imaging (MRI) is an example itself of multimodal data, as its component sequences T1-weighted, T2-weighted, and fluid-attenuated inversion recovery (FLAIR) can each be considered their own unique modalities, since each of the MR sequences measure some different biophysical or biological property<sup>22</sup>. Laboratory blood tests, patient demographics, electrocardiogram (ECG) and genetic expression values are also common modalities in clinical decision models.<sup>17</sup> This work discusses unique ways that differences between modalities have been addressed and mitigated to improve accuracy of AI models in similar ways to which a human would naturally be able to re-calibrate to these differences.<sup>27</sup><sup>21</sup><sup>12</sup>  \n",
      "\n",
      "Vision-language pre-training, a key branch of MML, focuses on developing foundation models with enhanced performance in vision and language tasks.<sup>16</sup>  \n",
      "\n",
      "Notable milestones in this domain include Vision Transformer (ViT), which introduced an end-to-end solution for image understanding using Transformer encoders, and CLIP, which utilized multimodal pre-training for zero-shot recognition by converting classification into a retrieval task.<sup>25</sup><sup>1</sup>  \n",
      "\n",
      "The recent advancements in LLMs, such as LLaMA, BLOOM, and ChatGPT, have further propelled the integration of auto-regressive language models as decoders in vision-language tasks, facilitating knowledge sharing between language and multimodal domains.<sup>45</sup><sup>36</sup><sup>2</sup><sup>14</sup><sup>29</sup>  \n",
      "\n",
      "These developments highlight the growing significance of MML and its potential to revolutionize various applications by bridging the gap between different modalities.<sup>22</sup><sup>3</sup><sup>8</sup>  \n",
      "## 2.2 Model Editing Techniques\n",
      "\n",
      "Editing multimodal large language models (MLLMs) presents unique challenges compared to editing single-modal LLMs. For instance, multimodal model editing demands a higher level of scrutiny and careful consideration in the editing process<sup>41</sup>. Specifically, incorrect outputs from multimodal models may stem from the synergistic effects of various modalities, such as misreading or misrecognition, which is analogous to human errors like color blindness affecting color identification in images<sup>41</sup>. Furthermore, the task of editing multimodal LLMs presents considerable challenges due to their inherent diversity and complexity, as incorrect outputs may stem not just from LLMs but also from the interaction between different modalities<sup>41</sup>. Empirically, current editing approaches are effective for editing the textual model in the multimodal language model but not as effective for editing the vision module, indicating the potential difficulty and opportunities of this task<sup>24</sup>. The inherent complexity and diversity of MLLMs, stemming from their integration of multiple modalities like text, images, and audio, necessitate more sophisticated editing techniques<sup>24</sup><sup>13</sup><sup>31</sup>. This subsection explores the existing approaches for editing single-modal LLMs and their potential applicability to MLLMs<sup>11</sup>.\n",
      "\n",
      "**Knowledge Infusion:** This technique involves incrementally updating a language model with new facts or information without significant retraining<sup>33</sup>. Methods like knowledge distillation and fine-tuning allow for the transfer of knowledge from a larger, more knowledgeable model to a smaller, less knowledgeable one<sup>15</sup><sup>18</sup><sup>42</sup><sup>23</sup><sup>10</sup>. While effective for single-modal LLMs, knowledge infusion for MLLMs requires careful consideration of the interplay between different modalities and the potential for cross-modal knowledge transfer<sup>37</sup><sup>19</sup><sup>40</sup><sup>5</sup>.\n",
      "\n",
      "**Incremental Learning:** Incremental learning involves training a model on new data while retaining its knowledge from previous training sessions<sup>39</sup><sup>30</sup><sup>44</sup>. This approach is particularly relevant for MLLMs, as it allows for the continuous updating of the model with new multimodal data without forgetting previously learned information<sup>9</sup>. Techniques like experience replay and model regularization can be employed to mitigate catastrophic forgetting and ensure the stability of the model<sup>6</sup><sup>7</sup><sup>4</sup><sup>43</sup>.\n",
      "\n",
      "**Modality-Specific Editing:** Given the distinct characteristics of each modality, it may be necessary to develop modality-specific editing techniques for MLLMs<sup>11</sup>. For example, image editing techniques like style transfer and image inpainting can be used to modify the visual representations learned by the model, while text editing techniques like grammar correction and sentiment modification can be used to refine the linguistic representations<sup>46</sup><sup>34</sup><sup>32</sup>.\n",
      "\n",
      "**Challenges and Limitations:** Editing MLLMs is still an evolving field, and several challenges and limitations remain<sup>20</sup>. These include the difficulty of aligning edits across different modalities, the potential for introducing biases or errors during the editing process, and the lack of standardized evaluation metrics for assessing the effectiveness of editing techniques<sup>38</sup><sup>28</sup><sup>24</sup>. Future research should focus on addressing these challenges and developing more robust and efficient editing methods for MLLMs<sup>20</sup><sup>41</sup><sup>24</sup>.\n",
      "## 2.3 Challenges and Limitations\n",
      "\n",
      "Editing multimodal large language models (MLLMs) presents unique challenges compared to editing single-modal LLMs. For instance, multimodal model editing demands a higher level of scrutiny and careful consideration in the editing process<sup>41</sup>. Specifically, incorrect outputs from multimodal models may stem from the synergistic effects of various modalities, such as misreading or misrecognition, which is analogous to human errors like color blindness affecting color identification in images<sup>41</sup>. Furthermore, the task of editing multimodal LLMs presents considerable challenges due to their inherent diversity and complexity, as incorrect outputs may stem not just from LLMs but also from the interaction between different modalities<sup>41</sup>. Empirically, current editing approaches are effective for editing the textual model in the multimodal language model but not as effective for editing the vision module, indicating the potential difficulty and opportunities of this task<sup>24</sup>. The inherent complexity and diversity of MLLMs, stemming from their integration of multiple modalities like text, images, and audio, necessitate more sophisticated editing techniques. This subsection explores the existing approaches for editing single-modal LLMs and their potential applicability to MLLMs.\n",
      "\n",
      "**Knowledge Infusion:** This technique involves incrementally updating a language model with new facts or information without significant retraining. Methods like knowledge distillation and fine-tuning allow for the transfer of knowledge from a larger, more knowledgeable model to a smaller, less knowledgeable one<sup>15</sup><sup>18</sup><sup>42</sup><sup>23</sup><sup>10</sup>. While effective for single-modal LLMs, knowledge infusion for MLLMs requires careful consideration of the interplay between different modalities and the potential for cross-modal knowledge transfer<sup>37</sup><sup>19</sup><sup>40</sup><sup>5</sup>.\n",
      "\n",
      "**Incremental Learning:** Incremental learning involves training a model on new data while retaining its knowledge from previous training sessions<sup>39</sup><sup>30</sup><sup>44</sup>. This approach is particularly relevant for MLLMs, as it allows for the continuous updating of the model with new multimodal data without forgetting previously learned information<sup>9</sup>. Techniques like experience replay and model regularization can be employed to mitigate catastrophic forgetting and ensure the stability of the model<sup>6</sup><sup>7</sup><sup>4</sup><sup>43</sup>.\n",
      "\n",
      "**Modality-Specific Editing:** Given the distinct characteristics of each modality, it may be necessary to develop modality-specific editing techniques for MLLMs<sup>11</sup>. For example, image editing techniques like style transfer and image inpainting can be used to modify the visual representations learned by the model, while text editing techniques like grammar correction and sentiment modification can be used to refine the linguistic representations<sup>46</sup><sup>34</sup>.\n",
      "\n",
      "**Challenges and Limitations:** Editing MLLMs is still an evolving field, and several challenges and limitations remain. For instance, the task of editing multimodal LLMs presents considerable challenges, given their inherent diversity and complexity. Specifically, incorrect outputs from multimodal models may stem from the synergistic effects of various modalities. Incorrect outputs may stem not just from LLMs, analogous to human errors like misreading or misrecognition (e.g., color blindness affecting color identification in images). As shown in Figure 1, before the editing, the model misidentified the object as a “ladder” instead of the correct “barrier”, resulting in an erroneous prediction. After the editing, the model accurately recognized the “barrier”. Note that the utility of multimodal LLMs (Yin et al., 2023) is increasing, yet there is a lack of corresponding dataset resources and benchmarks for editing multimodal large language models. Additionally, current editing approaches are effective for editing the textual model in the multimodal language model but not as effective for editing the vision module. For example, in editing the language module of the BLIP-2 model, the reliability of MEND can reach 99.4%, but only attain 65.2% if editing the vision module, indicating the potential difficulty and opportunities of this task. Furthermore, the primary constraint pertains to the scale of the LLMs utilized. Current evaluations mainly employ 7B LLMs as the base model, and despite the impressive results garnered, the potential benefits of larger model sizes, such as 65B or 130B (Kaplan et al., 2020), are worth future exploration. The second challenge relates to the quality and quantity of training data (Jia et al., 2021). As the model size and capabilities scale up, a corresponding increase in data is crucial. However, the procurement and refinement of high-quality training data present substantial logistical and financial hurdles. For instance, the open-source interleaved dataset MMC4 contains a significant amount of noise in the form of text and images, like commercial advertisements. This noise could adversely affect the model’s output language and image style. The sensitivity of LLMs to human prompts is a known issue (Wei et al., 2022b; Wang et al., 2023b; Zhou et al., 2023), a challenge that extends to MLLMs. For instance, MLLMs’ propensity for detailed responses necessitates tailored prompting to elicit concise and short answers, which is particularly useful when addressing Visual Question Answering (VQA) tasks<sup>41</sup><sup>24</sup><sup>26</sup>. These include the difficulty of aligning edits across different modalities, the potential for introducing biases or errors during the editing process, and the lack of standardized evaluation metrics for assessing the effectiveness of editing techniques<sup>38</sup><sup>28</sup><sup>24</sup><sup>35</sup>. Future research should focus on addressing these challenges and developing more robust and efficient editing methods for MLLMs<sup>20</sup><sup>24</sup>. The inherent complexity and diversity of MLLMs, stemming from their integration of multiple modalities like text, images, and audio, necessitate more sophisticated editing techniques<sup>24</sup><sup>13</sup><sup>31</sup>. This subsection explores the existing approaches for editing single-modal LLMs and their potential applicability to MLLMs<sup>41</sup><sup>11</sup><sup>20</sup>.\n",
      "\n",
      "# References\n",
      "\n",
      "[1] FALIP: Visual Prompt As Foveal Attention Boosts CLIP Zero-Shot Performance.ECCV2024 chunk 1\n",
      "\n",
      "[2] DetGPT: Detect What You Need Via Reasoning.EMNLP_2023 chunk 1\n",
      "\n",
      "[3] Multi-sensor Learning Enables Information Transfer Across Different Sensory Data and Augments Multi-modality Imaging.IEEE_Transactions_on_Pattern_Analysis_and_Machine_Intelligence chunk 9\n",
      "\n",
      "[4] Achieving a Better Stability-Plasticity Trade-off Via Auxiliary Networks in Continual Learning.CVPR_2023 chunk 1\n",
      "\n",
      "[5] Multimodal Representation Learning by Alternating Unimodal Adaptation.CVPR2024 chunk 2\n",
      "\n",
      "[6] Batch Model Consolidation: A Multi-Task Model Consolidation Framework.CVPR_2023 chunk 1\n",
      "\n",
      "[7] NAPA-VQ: Neighborhood Aware Prototype Augmentation with Vector Quantization for Continual Learning..ICCV_2023 chunk 1\n",
      "\n",
      "[8] PMR: Prototypical Modal Rebalance for Multimodal Learning.CVPR_2023 chunk 0\n",
      "\n",
      "[9] ELLE: Efficient Lifelong Pre-training for Emerging Data.ACL_2022_Annual_Meeting_of_the_Association_for_Computational_Linguistics chunk 1\n",
      "\n",
      "[10] Learning to Explore Distillability and Sparsability: A Joint Framework for Model Compression.IEEE_Transactions_on_Pattern_Analysis_and_Machine_Intelligence chunk 2\n",
      "\n",
      "[11] Can We Edit Multimodal Large Language Models?.EMNLP_2023 chunk 1\n",
      "\n",
      "[12] Boosting Multi-modal Model Performance with Adaptive Gradient Modulation.ICCV_2023 chunk 1\n",
      "\n",
      "[13] OneLLM: One Framework to Align All Modalities with Language.CVPR2024 chunk 3\n",
      "\n",
      "[14] LLaMA-VID: an Image is Worth 2 Tokens in Large Language Models.ECCV2024 chunk 1\n",
      "\n",
      "[15] Decentralized Learning with Multi-Headed Distillation.CVPR_2023 chunk 8\n",
      "\n",
      "[16] SINC: Self-Supervised In-Context Learning for Vision-Language Tasks..ICCV_2023 chunk 2\n",
      "\n",
      "[17] Assisting Clinical Decisions for Scarcely Available Treatment Via Disentangled Latent Representation.KDD2023 chunk 7\n",
      "\n",
      "[18] Cooperative Knowledge Distillation: A Learner Agnostic Approach.AAAI2024 chunk 5\n",
      "\n",
      "[19] Cross-modal Learning for Domain Adaptation in 3D Semantic Segmentation.IEEE_Transactions_on_Pattern_Analysis_and_Machine_Intelligence chunk 3\n",
      "\n",
      "[20] Editing Large Language Models: Problems, Methods, and Opportunities..EMNLP_2023 chunk 0\n",
      "\n",
      "[21] MMANet: Margin-aware Distillation and Modality-aware Regularization for Incomplete Multimodal Learning.CVPR_2023 chunk 1\n",
      "\n",
      "[22] Multimodal Machine Learning in Image-Based and Clinical Biomedicine: Survey and Prospects.International_Journal_of_Computer_Vision chunk 1\n",
      "\n",
      "[23] Cooperative Knowledge Distillation: A Learner Agnostic Approach.AAAI2024 chunk 1\n",
      "\n",
      "[24] Can We Edit Multimodal Large Language Models?.EMNLP_2023 chunk 4\n",
      "\n",
      "[25] ViTamin: Designing Scalable Vision Models in the Vision-Language Era.CVPR2024 chunk 2\n",
      "\n",
      "[26] DreamLLM: Synergistic Multimodal Comprehension and Creation.ICLR2024 chunk 11\n",
      "\n",
      "[27] Multimodal Representation Learning by Alternating Unimodal Adaptation.CVPR2024 chunk 1\n",
      "\n",
      "[28] Can We Edit Multimodal Large Language Models?.EMNLP_2023 chunk 3\n",
      "\n",
      "[29] InternVL: Scaling Up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks.CVPR2024 chunk 2\n",
      "\n",
      "[30] ICICLE: Interpretable Class Incremental Continual Learning.ICCV_2023 chunk 1\n",
      "\n",
      "[31] Meerkat: Audio-Visual Large Language Model for Grounding in Space and Time.ECCV2024 chunk 1\n",
      "\n",
      "[32] WAS: Dataset and Methods for Artistic Text Segmentation.ECCV2024 chunk 6\n",
      "\n",
      "[33] Generated Knowledge Prompting for Commonsense Reasoning.ACL_2022_Annual_Meeting_of_the_Association_for_Computational_Linguistics chunk 1\n",
      "\n",
      "[34] SINE: SINgle Image Editing with Text-to-Image Diffusion Models.CVPR_2023 chunk 5\n",
      "\n",
      "[35] Improving Medical Multi-modal Contrastive Learning with Expert Annotations.ECCV2024 chunk 6\n",
      "\n",
      "[36] PerceptionGPT: Effectively Fusing Visual Perception into LLM.CVPR2024 chunk 2\n",
      "\n",
      "[37] Pseudo-Label Calibration Semi-supervised Multi-Modal Entity Alignment.AAAI2024 chunk 2\n",
      "\n",
      "[38] To Revise or Not to Revise: Learning to Detect Improvable Claims for Argumentative Writing Support.ACL_2023 chunk 7\n",
      "\n",
      "[39] Cs2K: Class-specific and Class-shared Knowledge Guidance for Incremental Semantic Segmentation.ECCV2024 chunk 2\n",
      "\n",
      "[40] Generative Multi-Modal Knowledge Retrieval with Large Language Models.AAAI2024 chunk 0\n",
      "\n",
      "[41] Can We Edit Multimodal Large Language Models?.EMNLP_2023 chunk 0\n",
      "\n",
      "[42] MNGNAS: Distilling Adaptive Combination of Multiple Searched Networks for One-Shot Neural Architecture Search..IEEE_Transactions_on_Pattern_Analysis_and_Machine_Intelligence chunk 2\n",
      "\n",
      "[43] MIND: Multi-Task Incremental Network Distillation.AAAI2024 chunk 1\n",
      "\n",
      "[44] Incremental Task Learning with Incremental Rank Updates.ECCV_2022_European_Conference_on_Computer_Vision chunk 2\n",
      "\n",
      "[45] RegionGPT: Towards Region Understanding Vision Language Model.CVPR2024 chunk 1\n",
      "\n",
      "[46] Emu Edit: Precise Image Editing Via Recognition and Generation Tasks.CVPR2024 chunk 1\n",
      "\n",
      "\n"
     ]
    }
   ],
   "source": [
    "print(draft)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "from research_agent.core.reference_checker import MultiModalCitationPipeline\n",
    "reference_checker = MultiModalCitationPipeline(sections[2],topic)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['\\n## 2.1 Multimodal Language Models\\n\\nMultimodal Learning (MML) has emerged as a crucial field of study, aiming to build AI models capable of extracting and correlating information from various data modalities. Vision-language pre-training, a key branch of MML, focuses on developing foundation models with enhanced performance in vision and language tasks. Notable milestones in this domain include Vision Transformer (ViT), which introduced an end-to-end solution for image understanding using Transformer encoders, and CLIP, which utilized multimodal pre-training for zero-shot recognition by converting classification into a retrieval task. The recent advancements in LLMs, such as LLaMA, BLOOM, and ChatGPT, have further propelled the integration of auto-regressive language models as decoders in vision-language tasks, facilitating knowledge sharing between language and multimodal domains. These developments highlight the growing significance of MML and its potential to revolutionize various applications by bridging the gap between different modalities.  \\n\\nMultimodal Learning (MML) has emerged as a crucial field of study, aiming to build AI models capable of extracting and correlating information from various data modalities. As more data has become available, a wider selection of datasets containing more than one modality has also enabled growth in the multimodal research sphere. Multimodal data is intrinsic to biomedical research and clinical care. While data belonging to a single modality can be conceptualized as a way in which something is perceived or captured in the world into an abstract digitized representation such as a waveform or image, multimodal data aggregates multiple modalities and thus consists of several intrinsically different representation spaces (and potentially even different data geometries). Computed tomography (CT) and positron emission tomography (PET) are specific examples of single imaging modalities, while magnetic resonance imaging (MRI) is an example itself of multimodal data, as its component sequences T1-weighted, T2-weighted, and fluid-attenuated inversion recovery (FLAIR) can each be considered their own unique modalities, since each of the MR sequences measure some different biophysical or biological property. Laboratory blood tests, patient demographics, electrocardiogram (ECG) and genetic expression values are also common modalities in clinical decision models. This work discusses unique ways that differences between modalities have been addressed and mitigated to improve accuracy of AI models in similar ways to which a human would naturally be able to re-calibrate to these differences.  \\n\\nVision-language pre-training, a key branch of MML, focuses on developing foundation models with enhanced performance in vision and language tasks.  \\n\\nNotable milestones in this domain include Vision Transformer (ViT), which introduced an end-to-end solution for image understanding using Transformer encoders, and CLIP, which utilized multimodal pre-training for zero-shot recognition by converting classification into a retrieval task.  \\n\\nThe recent advancements in LLMs, such as LLaMA, BLOOM, and ChatGPT, have further propelled the integration of auto-regressive language models as decoders in vision-language tasks, facilitating knowledge sharing between language and multimodal domains.  \\n\\nThese developments highlight the growing significance of MML and its potential to revolutionize various applications by bridging the gap between different modalities.',\n",
       " '\\n## 2.2 Model Editing Techniques\\n\\nEditing multimodal large language models (MLLMs) presents unique challenges compared to editing single-modal LLMs. The inherent complexity and diversity of MLLMs, stemming from their integration of multiple modalities like text, images, and audio, necessitate more sophisticated editing techniques. This subsection explores the existing approaches for editing single-modal LLMs and their potential applicability to MLLMs.\\n\\n**Knowledge Infusion:** This technique involves incrementally updating a language model with new facts or information without significant retraining. Methods like knowledge distillation and fine-tuning allow for the transfer of knowledge from a larger, more knowledgeable model to a smaller, less knowledgeable one. While effective for single-modal LLMs, knowledge infusion for MLLMs requires careful consideration of the interplay between different modalities and the potential for cross-modal knowledge transfer.\\n\\n**Incremental Learning:** Incremental learning involves training a model on new data while retaining its knowledge from previous training sessions. This approach is particularly relevant for MLLMs, as it allows for the continuous updating of the model with new multimodal data without forgetting previously learned information. Techniques like experience replay and model regularization can be employed to mitigate catastrophic forgetting and ensure the stability of the model.\\n\\n**Modality-Specific Editing:** Given the distinct characteristics of each modality, it may be necessary to develop modality-specific editing techniques for MLLMs. For example, image editing techniques like style transfer and image inpainting can be used to modify the visual representations learned by the model, while text editing techniques like grammar correction and sentiment modification can be used to refine the linguistic representations.\\n\\n**Challenges and Limitations:** Editing MLLMs is still an evolving field, and several challenges and limitations remain. These include the difficulty of aligning edits across different modalities, the potential for introducing biases or errors during the editing process, and the lack of standardized evaluation metrics for assessing the effectiveness of editing techniques. Future research should focus on addressing these challenges and developing more robust and efficient editing methods for MLLMs.',\n",
       " '\\n## 2.3 Challenges and Limitations\\n\\nEditing multimodal large language models (MLLMs) presents unique challenges compared to editing single-modal LLMs. The inherent complexity and diversity of MLLMs, stemming from their integration of multiple modalities like text, images, and audio, necessitate more sophisticated editing techniques. This subsection explores the existing approaches for editing single-modal LLMs and their potential applicability to MLLMs.\\n\\n**Knowledge Infusion:** This technique involves incrementally updating a language model with new facts or information without significant retraining. Methods like knowledge distillation and fine-tuning allow for the transfer of knowledge from a larger, more knowledgeable model to a smaller, less knowledgeable one. While effective for single-modal LLMs, knowledge infusion for MLLMs requires careful consideration of the interplay between different modalities and the potential for cross-modal knowledge transfer.\\n\\n**Incremental Learning:** Incremental learning involves training a model on new data while retaining its knowledge from previous training sessions. This approach is particularly relevant for MLLMs, as it allows for the continuous updating of the model with new multimodal data without forgetting previously learned information. Techniques like experience replay and model regularization can be employed to mitigate catastrophic forgetting and ensure the stability of the model.\\n\\n**Modality-Specific Editing:** Given the distinct characteristics of each modality, it may be necessary to develop modality-specific editing techniques for MLLMs. For example, image editing techniques like style transfer and image inpainting can be used to modify the visual representations learned by the model, while text editing techniques like grammar correction and sentiment modification can be used to refine the linguistic representations.\\n\\n**Challenges and Limitations:** Editing MLLMs is still an evolving field, and several challenges and limitations remain. These include the difficulty of aligning edits across different modalities, the potential for introducing biases or errors during the editing process, and the lack of standardized evaluation metrics for assessing the effectiveness of editing techniques. Future research should focus on addressing these challenges and developing more robust and efficient editing methods for MLLMs.']"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "sections = sections[1:4]\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2025-02-19 00:50:24,635 - research_agent.core.reference_checker - DEBUG - find statements: ['Editing multimodal large language models (MLLMs) presents unique challenges compared to editing single-modal LLMs.', 'The inherent complexity and diversity of MLLMs, stemming from their integration of multiple modalities like text, images, and audio, necessitate more sophisticated editing techniques.', 'This subsection explores the existing approaches for editing single-modal LLMs and their potential applicability to MLLMs.', 'Knowledge Infusion: This technique involves incrementally updating a language model with new facts or information without significant retraining.', 'Methods like knowledge distillation and fine-tuning allow for the transfer of knowledge from a larger, more knowledgeable model to a smaller, less knowledgeable one.', 'While effective for single-modal LLMs, knowledge infusion for MLLMs requires careful consideration of the interplay between different modalities and the potential for cross-modal knowledge transfer.', 'Incremental Learning: Incremental learning involves training a model on new data while retaining its knowledge from previous training sessions.', 'This approach is particularly relevant for MLLMs, as it allows for the continuous updating of the model with new multimodal data without forgetting previously learned information.', 'Techniques like experience replay and model regularization can be employed to mitigate catastrophic forgetting and ensure the stability of the model.', 'Modality-Specific Editing: Given the distinct characteristics of each modality, it may be necessary to develop modality-specific editing techniques for MLLMs.', 'For example, image editing techniques like style transfer and image inpainting can be used to modify the visual representations learned by the model, while text editing techniques like grammar correction and sentiment modification can be used to refine the linguistic representations.', 'Challenges and Limitations: Editing MLLMs is still an evolving field, and several challenges and limitations remain.', 'These include the difficulty of aligning edits across different modalities, the potential for introducing biases or errors during the editing process, and the lack of standardized evaluation metrics for assessing the effectiveness of editing techniques.', 'Future research should focus on addressing these challenges and developing more robust and efficient editing methods for MLLMs.']\n",
      "2025-02-19 00:50:50,442 - research_agent.core.reference_checker - DEBUG - supplement citations: [{'statement': 'Editing multimodal large language models (MLLMs) presents unique challenges compared to editing single-modal LLMs. For instance, multimodal model editing demands a higher level of scrutiny and careful consideration in the editing process <sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>. Specifically, incorrect outputs from multimodal models may stem from the synergistic effects of various modalities, such as misreading or misrecognition, which is analogous to human errors like color blindness affecting color identification in images <sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>. Furthermore, the task of editing multimodal LLMs presents considerable challenges due to their inherent diversity and complexity, as incorrect outputs may stem not just from LLMs but also from the interaction between different modalities <sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>. Empirically, current editing approaches are effective for editing the textual model in the multimodal language model but not as effective for editing the vision module, indicating the potential difficulty and opportunities of this task <sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>.'}, {'statement': 'The inherent complexity and diversity of MLLMs, stemming from their integration of multiple modalities like text, images, and audio, necessitate more sophisticated editing techniques. <sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup> <sup>{\"chunk_id\":\"2\", \"paper_id\":\"656fdcf8939a5f4082920de7\"}</sup> <sup>{\"chunk_id\":\"3\", \"paper_id\":\"6571365b939a5f4082f7ccfa\"}</sup> <sup>{\"chunk_id\":\"1\", \"paper_id\":\"6684b06d01d2a3fbfce33e31\"}</sup>'}, {'statement': 'This subsection explores the existing approaches for editing single-modal LLMs and their potential applicability to MLLMs. <sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup><sup>{\"chunk_id\":\"2\", \"paper_id\":\"656fdcf8939a5f4082920de7\"}</sup><sup>{\"chunk_id\":\"0\", \"paper_id\":\"646c3addd68f896efa5d1901\"}</sup>'}, {'statement': 'Knowledge Infusion: This technique involves incrementally updating a language model with new facts or information without significant retraining. <sup>{\"chunk_id\":\"3\", \"paper_id\":\"646465fdd68f896efa1950fb\"}</sup>'}, {'statement': 'Methods like knowledge distillation and fine-tuning allow for the transfer of knowledge from a larger, more knowledgeable model to a smaller, less knowledgeable one. <sup>{\"chunk_id\":\"8\", \"paper_id\":\"6386c9e090e50fcafdfa0a19\"}</sup><sup>{\"chunk_id\":\"5\", \"paper_id\":\"65c97cd4939a5f4082307083\"}</sup><sup>{\"chunk_id\":\"2\", \"paper_id\":\"64af735d3fda6d7f0644baeb\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"65c97cd4939a5f4082307083\"}</sup><sup>{\"chunk_id\":\"2\", \"paper_id\":\"62b595eb5aee126c0f4793f8\"}</sup>'}, {'statement': 'While effective for single-modal LLMs, knowledge infusion for MLLMs requires careful consideration of the interplay between different modalities and the potential for cross-modal knowledge transfer. <sup>{\"chunk_id\":\"2\", \"paper_id\":\"65e68afc13fb2c6cf6f6e33d\"}</sup><sup>{\"chunk_id\":\"3\", \"paper_id\":\"6006d0cb91e0111a1b6a2507\"}</sup><sup>{\"chunk_id\":\"0\", \"paper_id\":\"65a75aa9939a5f408261970a\"}</sup><sup>{\"chunk_id\":\"2\", \"paper_id\":\"655ac423939a5f4082e26049\"}</sup>'}, {'statement': 'Incremental Learning: Incremental learning involves training a model on new data while retaining its knowledge from previous training sessions. <sup>{\"chunk_id\":\"2\", \"paper_id\":\"6694828f01d2a3fbfc8654c0\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"641137fe90e50fcafd17b992\"}</sup><sup>{\"chunk_id\":\"2\", \"paper_id\":\"62d7730e5aee126c0f9009f3\"}</sup>'}, {'statement': 'This approach is particularly relevant for MLLMs, as it allows for the continuous updating of the model with new multimodal data without forgetting previously learned information. <sup>{\"chunk_id\":\"1\", \"paper_id\":\"623004305aee126c0f9b322d\"}</sup>'}, {'statement': 'Techniques like experience replay and model regularization can be employed to mitigate catastrophic forgetting and ensure the stability of the model. <sup>{\"chunk_id\":\"1\", \"paper_id\":\"6464afdfd68f896efa356511\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"64e2e14f3fda6d7f064665d0\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"6413dac290e50fcafd3ce260\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"656fde3c939a5f4082948795\"}</sup>'}, {'statement': 'Modality-Specific Editing: Given the distinct characteristics of each modality, it may be necessary to develop modality-specific editing techniques for MLLMs. <sup>{\"chunk_id\":\"1\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>'}, {'statement': 'For example, image editing techniques like style transfer and image inpainting can be used to modify the visual representations learned by the model, while text editing techniques like grammar correction and sentiment modification can be used to refine the linguistic representations. <sup>{\"chunk_id\":\"1\", \"paper_id\":\"6556d305939a5f4082dc359b\"}</sup><sup>{\"chunk_id\":\"5\", \"paper_id\":\"6392a77190e50fcafd8c4e48\"}</sup><sup>{\"chunk_id\":\"6\", \"paper_id\":\"66ac3e6d01d2a3fbfc896b1b\"}</sup>'}, {'statement': 'Challenges and Limitations: Editing MLLMs is still an evolving field, and several challenges and limitations remain.<sup>{\"chunk_id\":\"0\",\"paper_id\":\"646c3addd68f896efa5d1901\"}</sup><sup>{\"chunk_id\":\"6\",\"paper_id\":\"647eaf35d68f896efad408e7\"}</sup><sup>{\"chunk_id\":\"9\",\"paper_id\":\"66f4cd3401d2a3fbfcbfac37\"}</sup>'}, {'statement': 'These include the difficulty of aligning edits across different modalities, the potential for introducing biases or errors during the editing process, and the lack of standardized evaluation metrics for assessing the effectiveness of editing techniques. <sup>{\"chunk_id\":\"7\", \"paper_id\":\"64741c33d68f896efaa7b664\"}</sup><sup>{\"chunk_id\":\"3\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup><sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup><sup>{\"chunk_id\":\"6\", \"paper_id\":\"65f7a01c13fb2c6cf668ebd0\"}</sup>'}, {'statement': 'Future research should focus on addressing these challenges and developing more robust and efficient editing methods for MLLMs. <sup>{\"chunk_id\":\"0\", \"paper_id\":\"646c3addd68f896efa5d1901\"}</sup><sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>'}]\n",
      "2025-02-19 00:51:38,260 - research_agent.core.reference_checker - DEBUG - verified results: {'supported': [{'statement': 'Editing multimodal large language models (MLLMs) presents unique challenges compared to editing single-modal LLMs. For instance, multimodal model editing demands a higher level of scrutiny and careful consideration in the editing process <sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>. Specifically, incorrect outputs from multimodal models may stem from the synergistic effects of various modalities, such as misreading or misrecognition, which is analogous to human errors like color blindness affecting color identification in images <sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>. Furthermore, the task of editing multimodal LLMs presents considerable challenges due to their inherent diversity and complexity, as incorrect outputs may stem not just from LLMs but also from the interaction between different modalities <sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>. Empirically, current editing approaches are effective for editing the textual model in the multimodal language model but not as effective for editing the vision module, indicating the potential difficulty and opportunities of this task <sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>.'}, {'statement': 'Methods like knowledge distillation and fine-tuning allow for the transfer of knowledge from a larger, more knowledgeable model to a smaller, less knowledgeable one. <sup>{\"chunk_id\":\"8\", \"paper_id\":\"6386c9e090e50fcafdfa0a19\"}</sup><sup>{\"chunk_id\":\"5\", \"paper_id\":\"65c97cd4939a5f4082307083\"}</sup><sup>{\"chunk_id\":\"2\", \"paper_id\":\"64af735d3fda6d7f0644baeb\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"65c97cd4939a5f4082307083\"}</sup><sup>{\"chunk_id\":\"2\", \"paper_id\":\"62b595eb5aee126c0f4793f8\"}</sup>'}, {'statement': 'While effective for single-modal LLMs, knowledge infusion for MLLMs requires careful consideration of the interplay between different modalities and the potential for cross-modal knowledge transfer. <sup>{\"chunk_id\":\"2\", \"paper_id\":\"65e68afc13fb2c6cf6f6e33d\"}</sup><sup>{\"chunk_id\":\"3\", \"paper_id\":\"6006d0cb91e0111a1b6a2507\"}</sup><sup>{\"chunk_id\":\"0\", \"paper_id\":\"65a75aa9939a5f408261970a\"}</sup><sup>{\"chunk_id\":\"2\", \"paper_id\":\"655ac423939a5f4082e26049\"}</sup>'}, {'statement': 'Incremental Learning: Incremental learning involves training a model on new data while retaining its knowledge from previous training sessions. <sup>{\"chunk_id\":\"2\", \"paper_id\":\"6694828f01d2a3fbfc8654c0\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"641137fe90e50fcafd17b992\"}</sup><sup>{\"chunk_id\":\"2\", \"paper_id\":\"62d7730e5aee126c0f9009f3\"}</sup>'}, {'statement': 'This approach is particularly relevant for MLLMs, as it allows for the continuous updating of the model with new multimodal data without forgetting previously learned information. <sup>{\"chunk_id\":\"1\", \"paper_id\":\"623004305aee126c0f9b322d\"}</sup>'}, {'statement': 'Techniques like experience replay and model regularization can be employed to mitigate catastrophic forgetting and ensure the stability of the model. <sup>{\"chunk_id\":\"1\", \"paper_id\":\"6464afdfd68f896efa356511\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"64e2e14f3fda6d7f064665d0\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"6413dac290e50fcafd3ce260\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"656fde3c939a5f4082948795\"}</sup>'}, {'statement': 'Modality-Specific Editing: Given the distinct characteristics of each modality, it may be necessary to develop modality-specific editing techniques for MLLMs. <sup>{\"chunk_id\":\"1\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>'}, {'statement': 'These include the difficulty of aligning edits across different modalities, the potential for introducing biases or errors during the editing process, and the lack of standardized evaluation metrics for assessing the effectiveness of editing techniques. <sup>{\"chunk_id\":\"7\", \"paper_id\":\"64741c33d68f896efaa7b664\"}</sup><sup>{\"chunk_id\":\"3\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup><sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup><sup>{\"chunk_id\":\"6\", \"paper_id\":\"65f7a01c13fb2c6cf668ebd0\"}</sup>'}, {'statement': 'Future research should focus on addressing these challenges and developing more robust and efficient editing methods for MLLMs. <sup>{\"chunk_id\":\"0\", \"paper_id\":\"646c3addd68f896efa5d1901\"}</sup><sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>'}, {'statement': 'The inherent complexity and diversity of MLLMs, stemming from their integration of multiple modalities like text, images, and audio, necessitate more sophisticated editing techniques. <sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup><sup>{\"chunk_id\":\"3\", \"paper_id\":\"6571365b939a5f4082f7ccfa\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"6684b06d01d2a3fbfce33e31\"}</sup>'}, {'statement': 'This subsection explores the existing approaches for editing single-modal LLMs and their potential applicability to MLLMs. <sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup><sup>{\"chunk_id\":\"0\", \"paper_id\":\"646c3addd68f896efa5d1901\"}</sup>'}, {'statement': 'Knowledge Infusion: This technique involves incrementally updating a language model with new facts or information without significant retraining. <sup>{\"chunk_id\":\"1\", \"paper_id\":\"616e37435244ab9dcbd1a6fa\"}</sup>'}, {'statement': 'For example, image editing techniques like style transfer and image inpainting can be used to modify the visual representations learned by the model, while text editing techniques like grammar correction and sentiment modification can be used to refine the linguistic representations.<sup>{\"chunk_id\":\"1\", \"paper_id\":\"6556d305939a5f4082dc359b\"}</sup><sup>{\"chunk_id\":\"5\", \"paper_id\":\"6392a77190e50fcafd8c4e48\"}</sup>'}, {'statement': 'Challenges and Limitations: Editing MLLMs is still an evolving field, and several challenges and limitations remain. <sup>{\"chunk_id\":\"0\",\"paper_id\":\"646c3addd68f896efa5d1901\"}</sup>'}], 'unsupported_count': 0, 'retries_remaining': 2}\n",
      "2025-02-19 00:52:16,374 - research_agent.core.reference_checker - DEBUG - Successfully updated \n",
      "## 2.2 Model Editing Techniques\n",
      "\n",
      "Editing multimod with citations\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "## 2.2 Model Editing Techniques\n",
      "\n",
      "Editing multimodal large language models (MLLMs) presents unique challenges compared to editing single-modal LLMs.<sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup> The inherent complexity and diversity of MLLMs, stemming from their integration of multiple modalities like text, images, and audio, necessitate more sophisticated editing techniques.<sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup><sup>{\"chunk_id\":\"3\", \"paper_id\":\"6571365b939a5f4082f7ccfa\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"6684b06d01d2a3fbfce33e31\"}</sup> This subsection explores the existing approaches for editing single-modal LLMs and their potential applicability to MLLMs.<sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup><sup>{\"chunk_id\":\"0\", \"paper_id\":\"646c3addd68f896efa5d1901\"}</sup>\n",
      "\n",
      "**Knowledge Infusion:** This technique involves incrementally updating a language model with new facts or information without significant retraining.<sup>{\"chunk_id\":\"1\", \"paper_id\":\"616e37435244ab9dcbd1a6fa\"}</sup> Methods like knowledge distillation and fine-tuning allow for the transfer of knowledge from a larger, more knowledgeable model to a smaller, less knowledgeable one.<sup>{\"chunk_id\":\"8\", \"paper_id\":\"6386c9e090e50fcafdfa0a19\"}</sup><sup>{\"chunk_id\":\"5\", \"paper_id\":\"65c97cd4939a5f4082307083\"}</sup><sup>{\"chunk_id\":\"2\", \"paper_id\":\"64af735d3fda6d7f0644baeb\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"65c97cd4939a5f4082307083\"}</sup><sup>{\"chunk_id\":\"2\", \"paper_id\":\"62b595eb5aee126c0f4793f8\"}</sup> While effective for single-modal LLMs, knowledge infusion for MLLMs requires careful consideration of the interplay between different modalities and the potential for cross-modal knowledge transfer.<sup>{\"chunk_id\":\"2\", \"paper_id\":\"65e68afc13fb2c6cf6f6e33d\"}</sup><sup>{\"chunk_id\":\"3\", \"paper_id\":\"6006d0cb91e0111a1b6a2507\"}</sup><sup>{\"chunk_id\":\"0\", \"paper_id\":\"65a75aa9939a5f408261970a\"}</sup><sup>{\"chunk_id\":\"2\", \"paper_id\":\"655ac423939a5f4082e26049\"}</sup>\n",
      "\n",
      "**Incremental Learning:** Incremental learning involves training a model on new data while retaining its knowledge from previous training sessions.<sup>{\"chunk_id\":\"2\", \"paper_id\":\"6694828f01d2a3fbfc8654c0\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"641137fe90e50fcafd17b992\"}</sup><sup>{\"chunk_id\":\"2\", \"paper_id\":\"62d7730e5aee126c0f9009f3\"}</sup> This approach is particularly relevant for MLLMs, as it allows for the continuous updating of the model with new multimodal data without forgetting previously learned information.<sup>{\"chunk_id\":\"1\", \"paper_id\":\"623004305aee126c0f9b322d\"}</sup> Techniques like experience replay and model regularization can be employed to mitigate catastrophic forgetting and ensure the stability of the model.<sup>{\"chunk_id\":\"1\", \"paper_id\":\"6464afdfd68f896efa356511\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"64e2e14f3fda6d7f064665d0\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"6413dac290e50fcafd3ce260\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"656fde3c939a5f4082948795\"}</sup>\n",
      "\n",
      "**Modality-Specific Editing:** Given the distinct characteristics of each modality, it may be necessary to develop modality-specific editing techniques for MLLMs.<sup>{\"chunk_id\":\"1\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup> For example, image editing techniques like style transfer and image inpainting can be used to modify the visual representations learned by the model, while text editing techniques like grammar correction and sentiment modification can be used to refine the linguistic representations.<sup>{\"chunk_id\":\"1\", \"paper_id\":\"6556d305939a5f4082dc359b\"}</sup><sup>{\"chunk_id\":\"5\", \"paper_id\":\"6392a77190e50fcafd8c4e48\"}</sup>\n",
      "\n",
      "**Challenges and Limitations:** Editing MLLMs is still an evolving field, and several challenges and limitations remain.<sup>{\"chunk_id\":\"0\",\"paper_id\":\"646c3addd68f896efa5d1901\"}</sup> These include the difficulty of aligning edits across different modalities, the potential for introducing biases or errors during the editing process, and the lack of standardized evaluation metrics for assessing the effectiveness of editing techniques.<sup>{\"chunk_id\":\"7\", \"paper_id\":\"64741c33d68f896efaa7b664\"}</sup><sup>{\"chunk_id\":\"3\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup><sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup><sup>{\"chunk_id\":\"6\", \"paper_id\":\"65f7a01c13fb2c6cf668ebd0\"}</sup> Future research should focus on addressing these challenges and developing more robust and efficient editing methods for MLLMs.<sup>{\"chunk_id\":\"0\", \"paper_id\":\"646c3addd68f896efa5d1901\"}</sup><sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "C:\\Users\\17981\\AppData\\Local\\Temp\\ipykernel_6008\\2919960040.py:1: RuntimeWarning: coroutine 'MultiModalCitationPipeline.run_pipeline' was never awaited\n",
      "  final_section = await reference_checker.run_pipeline()\n",
      "RuntimeWarning: Enable tracemalloc to get the object allocation traceback\n"
     ]
    }
   ],
   "source": [
    "final_section = await reference_checker.run_pipeline()\n",
    "print(final_section)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "1. 先开始"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "from functools import wraps\n",
    "import logging\n",
    "\n",
    "\n",
    "def async_retry(retries=3, delay=1):\n",
    "    \"\"\"\n",
    "    异步重试装饰器\n",
    "    Args:\n",
    "        retries (int): 最大重试次数\n",
    "        delay (int): 重试间隔时间(秒)\n",
    "    \"\"\"\n",
    "    def decorator(func):\n",
    "        @wraps(func)\n",
    "        async def wrapper(*args, **kwargs):\n",
    "            last_exception = None\n",
    "            for attempt in range(retries):\n",
    "                try:\n",
    "                    result = await func(*args, **kwargs)\n",
    "                    return result\n",
    "                except Exception as e:\n",
    "                    last_exception = e\n",
    "                    if attempt < retries - 1:\n",
    "                        await asyncio.sleep(delay)\n",
    "                    logging.warning(f\"第 {attempt + 1} 次尝试失败: {str(e)}\")\n",
    "            raise last_exception\n",
    "        return wrapper\n",
    "    return decorator"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2025-02-18 22:45:09,275 - research_agent.core.reference_checker - DEBUG - find statements: ['Editing multimodal large language models (MLLMs) presents unique challenges compared to editing single-modal LLMs.', 'The inherent complexity and diversity of MLLMs, stemming from their integration of multiple modalities like text, images, and audio, necessitate more sophisticated editing techniques.', 'This subsection explores the existing approaches for editing single-modal LLMs and their potential applicability to MLLMs.', 'Knowledge Infusion: This technique involves incrementally updating a language model with new facts or information without significant retraining.', 'Methods like knowledge distillation and fine-tuning allow for the transfer of knowledge from a larger, more knowledgeable model to a smaller, less knowledgeable one.', 'While effective for single-modal LLMs, knowledge infusion for MLLMs requires careful consideration of the interplay between different modalities and the potential for cross-modal knowledge transfer.', 'Incremental Learning: Incremental learning involves training a model on new data while retaining its knowledge from previous training sessions.', 'This approach is particularly relevant for MLLMs, as it allows for the continuous updating of the model with new multimodal data without forgetting previously learned information.', 'Techniques like experience replay and model regularization can be employed to mitigate catastrophic forgetting and ensure the stability of the model.', 'Modality-Specific Editing: Given the distinct characteristics of each modality, it may be necessary to develop modality-specific editing techniques for MLLMs.', 'For example, image editing techniques like style transfer and image inpainting can be used to modify the visual representations learned by the model, while text editing techniques like grammar correction and sentiment modification can be used to refine the linguistic representations.', 'Challenges and Limitations: Editing MLLMs is still an evolving field, and several challenges and limitations remain.', 'These include the difficulty of aligning edits across different modalities, the potential for introducing biases or errors during the editing process, and the lack of standardized evaluation metrics for assessing the effectiveness of editing techniques.', 'Future research should focus on addressing these challenges and developing more robust and efficient editing methods for MLLMs.']\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "['Editing multimodal large language models (MLLMs) presents unique challenges compared to editing single-modal LLMs.',\n",
       " 'The inherent complexity and diversity of MLLMs, stemming from their integration of multiple modalities like text, images, and audio, necessitate more sophisticated editing techniques.',\n",
       " 'This subsection explores the existing approaches for editing single-modal LLMs and their potential applicability to MLLMs.',\n",
       " 'Knowledge Infusion: This technique involves incrementally updating a language model with new facts or information without significant retraining.',\n",
       " 'Methods like knowledge distillation and fine-tuning allow for the transfer of knowledge from a larger, more knowledgeable model to a smaller, less knowledgeable one.',\n",
       " 'While effective for single-modal LLMs, knowledge infusion for MLLMs requires careful consideration of the interplay between different modalities and the potential for cross-modal knowledge transfer.',\n",
       " 'Incremental Learning: Incremental learning involves training a model on new data while retaining its knowledge from previous training sessions.',\n",
       " 'This approach is particularly relevant for MLLMs, as it allows for the continuous updating of the model with new multimodal data without forgetting previously learned information.',\n",
       " 'Techniques like experience replay and model regularization can be employed to mitigate catastrophic forgetting and ensure the stability of the model.',\n",
       " 'Modality-Specific Editing: Given the distinct characteristics of each modality, it may be necessary to develop modality-specific editing techniques for MLLMs.',\n",
       " 'For example, image editing techniques like style transfer and image inpainting can be used to modify the visual representations learned by the model, while text editing techniques like grammar correction and sentiment modification can be used to refine the linguistic representations.',\n",
       " 'Challenges and Limitations: Editing MLLMs is still an evolving field, and several challenges and limitations remain.',\n",
       " 'These include the difficulty of aligning edits across different modalities, the potential for introducing biases or errors during the editing process, and the lack of standardized evaluation metrics for assessing the effectiveness of editing techniques.',\n",
       " 'Future research should focus on addressing these challenges and developing more robust and efficient editing methods for MLLMs.']"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "@async_retry(retries=3, delay=1)\n",
    "async def find_statements():\n",
    "    return await reference_checker._find_statements()\n",
    "find_statements_results = await find_statements()\n",
    "reference_checker.logger.debug(f\"find statements: {find_statements_results}\")\n",
    "find_statements_results"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "find_statements_results = ['Editing multimodal large language models (MLLMs) presents unique challenges compared to editing single-modal LLMs.',\n",
    " 'The inherent complexity and diversity of MLLMs, stemming from their integration of multiple modalities like text, images, and audio, necessitate more sophisticated editing techniques.',\n",
    " 'This subsection explores the existing approaches for editing single-modal LLMs and their potential applicability to MLLMs.',\n",
    " 'Knowledge Infusion: This technique involves incrementally updating a language model with new facts or information without significant retraining.',\n",
    " 'Methods like knowledge distillation and fine-tuning allow for the transfer of knowledge from a larger, more knowledgeable model to a smaller, less knowledgeable one.',\n",
    " 'While effective for single-modal LLMs, knowledge infusion for MLLMs requires careful consideration of the interplay between different modalities and the potential for cross-modal knowledge transfer.',\n",
    " 'Incremental Learning: Incremental learning involves training a model on new data while retaining its knowledge from previous training sessions.',\n",
    " 'This approach is particularly relevant for MLLMs, as it allows for the continuous updating of the model with new multimodal data without forgetting previously learned information.',\n",
    " 'Techniques like experience replay and model regularization can be employed to mitigate catastrophic forgetting and ensure the stability of the model.',\n",
    " 'Modality-Specific Editing: Given the distinct characteristics of each modality, it may be necessary to develop modality-specific editing techniques for MLLMs.',\n",
    " 'For example, image editing techniques like style transfer and image inpainting can be used to modify the visual representations learned by the model, while text editing techniques like grammar correction and sentiment modification can be used to refine the linguistic representations.',\n",
    " 'Challenges and Limitations: Editing MLLMs is still an evolving field, and several challenges and limitations remain.',\n",
    " 'These include the difficulty of aligning edits across different modalities, the potential for introducing biases or errors during the editing process, and the lack of standardized evaluation metrics for assessing the effectiveness of editing techniques.',\n",
    " 'Future research should focus on addressing these challenges and developing more robust and efficient editing methods for MLLMs.']\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2025-02-19 00:40:55,701 - research_agent.core.reference_checker - DEBUG - supplement citations: [{'statement': 'Editing multimodal large language models (MLLMs) presents unique challenges compared to editing single-modal LLMs. For instance, multimodal model editing demands a higher level of scrutiny and careful consideration in the editing process <sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>. Specifically, incorrect outputs from multimodal models may stem from the synergistic effects of various modalities, such as misreading or misrecognition, which is analogous to human errors like color blindness affecting color identification in images <sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>. Furthermore, the task of editing multimodal LLMs presents considerable challenges due to their inherent diversity and complexity, as incorrect outputs may stem not just from LLMs but also from the interaction between different modalities <sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>. Empirically, current editing approaches are effective for editing the textual model in the multimodal language model but not as effective for editing the vision module, indicating the potential difficulty and opportunities of this task <sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>.'}, {'statement': 'The inherent complexity and diversity of MLLMs, stemming from their integration of multiple modalities like text, images, and audio, necessitate more sophisticated editing techniques. <sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup> <sup>{\"chunk_id\":\"2\", \"paper_id\":\"656fdcf8939a5f4082920de7\"}</sup> <sup>{\"chunk_id\":\"3\", \"paper_id\":\"6571365b939a5f4082f7ccfa\"}</sup> <sup>{\"chunk_id\":\"1\", \"paper_id\":\"6684b06d01d2a3fbfce33e31\"}</sup>'}]\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "[{'statement': 'Editing multimodal large language models (MLLMs) presents unique challenges compared to editing single-modal LLMs. For instance, multimodal model editing demands a higher level of scrutiny and careful consideration in the editing process <sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>. Specifically, incorrect outputs from multimodal models may stem from the synergistic effects of various modalities, such as misreading or misrecognition, which is analogous to human errors like color blindness affecting color identification in images <sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>. Furthermore, the task of editing multimodal LLMs presents considerable challenges due to their inherent diversity and complexity, as incorrect outputs may stem not just from LLMs but also from the interaction between different modalities <sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>. Empirically, current editing approaches are effective for editing the textual model in the multimodal language model but not as effective for editing the vision module, indicating the potential difficulty and opportunities of this task <sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>.'},\n",
       " {'statement': 'The inherent complexity and diversity of MLLMs, stemming from their integration of multiple modalities like text, images, and audio, necessitate more sophisticated editing techniques. <sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup> <sup>{\"chunk_id\":\"2\", \"paper_id\":\"656fdcf8939a5f4082920de7\"}</sup> <sup>{\"chunk_id\":\"3\", \"paper_id\":\"6571365b939a5f4082f7ccfa\"}</sup> <sup>{\"chunk_id\":\"1\", \"paper_id\":\"6684b06d01d2a3fbfce33e31\"}</sup>'}]"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 步骤2: 初次补充引文\n",
    "@async_retry(retries=3, delay=1)\n",
    "async def supplement_citations(statements):\n",
    "    return await reference_checker._supplement_citations(statements)\n",
    "supp_results = await supplement_citations(find_statements_results)\n",
    "reference_checker.logger.debug(f\"supplement citations: {supp_results}\")\n",
    "supp_results"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2025-02-19 00:41:56,967 - research_agent.core.reference_checker - DEBUG - verified results: {'supported': [{'statement': 'Editing multimodal large language models (MLLMs) presents unique challenges compared to editing single-modal LLMs. For instance, multimodal model editing demands a higher level of scrutiny and careful consideration in the editing process <sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>. Specifically, incorrect outputs from multimodal models may stem from the synergistic effects of various modalities, such as misreading or misrecognition, which is analogous to human errors like color blindness affecting color identification in images <sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>. Furthermore, the task of editing multimodal LLMs presents considerable challenges due to their inherent diversity and complexity, as incorrect outputs may stem not just from LLMs but also from the interaction between different modalities <sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>. Empirically, current editing approaches are effective for editing the textual model in the multimodal language model but not as effective for editing the vision module, indicating the potential difficulty and opportunities of this task <sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>.'}, {'statement': 'The inherent complexity and diversity of MLLMs, stemming from their integration of multiple modalities like text, images, and audio, necessitate more sophisticated editing techniques. <sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup><sup>{\"chunk_id\":\"3\", \"paper_id\":\"6571365b939a5f4082f7ccfa\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"6684b06d01d2a3fbfce33e31\"}</sup>'}], 'unsupported_count': 0, 'retries_remaining': 2}\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "{'supported': [{'statement': 'Editing multimodal large language models (MLLMs) presents unique challenges compared to editing single-modal LLMs. For instance, multimodal model editing demands a higher level of scrutiny and careful consideration in the editing process <sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>. Specifically, incorrect outputs from multimodal models may stem from the synergistic effects of various modalities, such as misreading or misrecognition, which is analogous to human errors like color blindness affecting color identification in images <sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>. Furthermore, the task of editing multimodal LLMs presents considerable challenges due to their inherent diversity and complexity, as incorrect outputs may stem not just from LLMs but also from the interaction between different modalities <sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>. Empirically, current editing approaches are effective for editing the textual model in the multimodal language model but not as effective for editing the vision module, indicating the potential difficulty and opportunities of this task <sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>.'},\n",
       "  {'statement': 'The inherent complexity and diversity of MLLMs, stemming from their integration of multiple modalities like text, images, and audio, necessitate more sophisticated editing techniques. <sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup><sup>{\"chunk_id\":\"3\", \"paper_id\":\"6571365b939a5f4082f7ccfa\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"6684b06d01d2a3fbfce33e31\"}</sup>'}],\n",
       " 'unsupported_count': 0,\n",
       " 'retries_remaining': 2}"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 步骤3: 引文验证循环\n",
    "@async_retry(retries=3, delay=1)\n",
    "async def verify_citations(supplemented_statements):\n",
    "    return await reference_checker._verify_citations(\n",
    "        supplemented_statements,\n",
    "        remaining_retries=reference_checker.max_retries\n",
    "    )\n",
    "verified_results = await verify_citations(supp_results)\n",
    "reference_checker.logger.debug(f\"verified results: {verified_results}\")\n",
    "verified_results"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'supported': [{'statement': 'Editing multimodal large language models (MLLMs) presents unique challenges compared to editing single-modal LLMs. For instance, multimodal model editing demands a higher level of scrutiny and careful consideration in the editing process <sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>. Specifically, incorrect outputs from multimodal models may stem from the synergistic effects of various modalities, such as misreading or misrecognition, which is analogous to human errors like color blindness affecting color identification in images <sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>. Furthermore, the task of editing multimodal LLMs presents considerable challenges due to their inherent diversity and complexity, as incorrect outputs may stem not just from LLMs but also from the interaction between different modalities <sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>. Empirically, current editing approaches are effective for editing the textual model in the multimodal language model but not as effective for editing the vision module, indicating the potential difficulty and opportunities of this task <sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>.'},\n",
       "  {'statement': 'The inherent complexity and diversity of MLLMs, stemming from their integration of multiple modalities like text, images, and audio, necessitate more sophisticated editing techniques. <sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup><sup>{\"chunk_id\":\"3\", \"paper_id\":\"6571365b939a5f4082f7ccfa\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"6684b06d01d2a3fbfce33e31\"}</sup>'}],\n",
       " 'unsupported_count': 0,\n",
       " 'retries_remaining': 2}"
      ]
     },
     "execution_count": 18,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "verified_results"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[{'statement': 'Editing multimodal large language models (MLLMs) presents unique challenges compared to editing single-modal LLMs. For instance, multimodal model editing demands a higher level of scrutiny and careful consideration in the editing process <sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>. Specifically, incorrect outputs from multimodal models may stem from the synergistic effects of various modalities, such as misreading or misrecognition, which is analogous to human errors like color blindness affecting color identification in images <sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>. Furthermore, the task of editing multimodal LLMs presents considerable challenges due to their inherent diversity and complexity, as incorrect outputs may stem not just from LLMs but also from the interaction between different modalities <sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>. Empirically, current editing approaches are effective for editing the textual model in the multimodal language model but not as effective for editing the vision module, indicating the potential difficulty and opportunities of this task <sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>.'}, {'statement': 'The inherent complexity and diversity of MLLMs, stemming from their integration of multiple modalities like text, images, and audio, necessitate more sophisticated editing techniques. <sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup><sup>{\"chunk_id\":\"3\", \"paper_id\":\"6571365b939a5f4082f7ccfa\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"6684b06d01d2a3fbfce33e31\"}</sup>'}]\n"
     ]
    }
   ],
   "source": [
    "import re\n",
    "filter_results = []\n",
    "for v in verified_results[\"supported\"]:\n",
    "    a_re = re.findall('<sup>',v[\"statement\"])\n",
    "    if not a_re:\n",
    "        continue\n",
    "    else:\n",
    "        filter_results.append(v)\n",
    "print(filter_results)\n",
    "# verified_results的输出有的没有按照<sup>paper_id,chunk_id</sup>，而是按照[paper_id,chunk_id]\n",
    "# 把[paper_id,chunk_id]的部分删掉后，模型输出正常"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2025-02-19 00:44:43,367 - research_agent.core.reference_checker - DEBUG - Successfully updated \n",
      "## 2.2 Model Editing Techniques\n",
      "\n",
      "Editing multimod with citations\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "## 2.2 Model Editing Techniques\n",
      "\n",
      "Editing multimodal large language models (MLLMs) presents unique challenges compared to editing single-modal LLMs. For instance, multimodal model editing demands a higher level of scrutiny and careful consideration in the editing process<sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>. Specifically, incorrect outputs from multimodal models may stem from the synergistic effects of various modalities, such as misreading or misrecognition, which is analogous to human errors like color blindness affecting color identification in images<sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>. Furthermore, the task of editing multimodal LLMs presents considerable challenges due to their inherent diversity and complexity, as incorrect outputs may stem not just from LLMs but also from the interaction between different modalities<sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>. Empirically, current editing approaches are effective for editing the textual model in the multimodal language model but not as effective for editing the vision module, indicating the potential difficulty and opportunities of this task<sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>. The inherent complexity and diversity of MLLMs, stemming from their integration of multiple modalities like text, images, and audio, necessitate more sophisticated editing techniques<sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup><sup>{\"chunk_id\":\"3\", \"paper_id\":\"6571365b939a5f4082f7ccfa\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"6684b06d01d2a3fbfce33e31\"}</sup>. This subsection explores the existing approaches for editing single-modal LLMs and their potential applicability to MLLMs.\n",
      "\n",
      "**Knowledge Infusion:** This technique involves incrementally updating a language model with new facts or information without significant retraining. Methods like knowledge distillation and fine-tuning allow for the transfer of knowledge from a larger, more knowledgeable model to a smaller, less knowledgeable one. While effective for single-modal LLMs, knowledge infusion for MLLMs requires careful consideration of the interplay between different modalities and the potential for cross-modal knowledge transfer.\n",
      "\n",
      "**Incremental Learning:** Incremental learning involves training a model on new data while retaining its knowledge from previous training sessions. This approach is particularly relevant for MLLMs, as it allows for the continuous updating of the model with new multimodal data without forgetting previously learned information. Techniques like experience replay and model regularization can be employed to mitigate catastrophic forgetting and ensure the stability of the model.\n",
      "\n",
      "**Modality-Specific Editing:** Given the distinct characteristics of each modality, it may be necessary to develop modality-specific editing techniques for MLLMs. For example, image editing techniques like style transfer and image inpainting can be used to modify the visual representations learned by the model, while text editing techniques like grammar correction and sentiment modification can be used to refine the linguistic representations.\n",
      "\n",
      "**Challenges and Limitations:** Editing MLLMs is still an evolving field, and several challenges and limitations remain. These include the difficulty of aligning edits across different modalities, the potential for introducing biases or errors during the editing process, and the lack of standardized evaluation metrics for assessing the effectiveness of editing techniques. Future research should focus on addressing these challenges and developing more robust and efficient editing methods for MLLMs.\n"
     ]
    }
   ],
   "source": [
    "# 步骤4: 最终更新论文草稿\n",
    "@async_retry(retries=3, delay=1)\n",
    "async def update_draft(supported):\n",
    "    return await reference_checker._update_draft(supported)\n",
    "final_draft = await update_draft(filter_results)\n",
    "reference_checker.logger.debug(f\"Successfully updated {reference_checker.paper_draft[:50]} with citations\")\n",
    "print(final_draft)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "1.2--1.3删除\n",
    "链路预测——技术预测\n",
    "社团检测删掉"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "from typing import List\n",
    "from pathlib import Path\n",
    "\n",
    "import json_repair\n",
    "from jinja2 import Environment\n",
    "from research_agent.core.query import Query\n",
    "from research_agent.core.general_llm import LLM\n",
    "from pyaml_env import parse_config\n",
    "from research_agent.core.config import Config\n",
    "from jinja2 import Environment, FileSystemLoader\n",
    "from research_agent.core.paths import UPDATE_REFERENCE_PROMPT\n",
    "\n",
    "\n",
    "class UpdateReference:\n",
    "    def __init__(self):\n",
    "        self.query = Query()\n",
    "        configs = parse_config(Config.YAML_CONFIG)\n",
    "        self.llm = LLM(config=configs[Config.DEFAULT_MODEL])\n",
    "\n",
    "        self.prompt_env = Environment(\n",
    "        loader=FileSystemLoader(UPDATE_REFERENCE_PROMPT))\n",
    "        with open(UPDATE_REFERENCE_PROMPT, \"r\") as f:\n",
    "            self.update_reference_prompt_template = self.prompt_env.from_string(\n",
    "                f.read())\n",
    "\n",
    "    async def update_reference(\n",
    "        self,\n",
    "        support_statement_citation_result: List[dict],\n",
    "        paper_draft: str\n",
    "    ) -> str:\n",
    "        \"\"\"\n",
    "        参数说明：\n",
    "        - support_statement_citation_result: 支持陈述和引用的字典列表\n",
    "        - paper_draft: 需要更新的论文草稿\n",
    "        返回更新后的论文草稿\n",
    "        \"\"\"\n",
    "        # 生成提示信息\n",
    "        prompt_messages = self._prepare_update_reference_prompt(\n",
    "            support_statement_citation_result,\n",
    "            paper_draft\n",
    "        )\n",
    "        # 调用 LLM\n",
    "\n",
    "        response = await self.llm.completion(prompt_messages)\n",
    "        # 返回更新后的草稿\n",
    "        return response\n",
    "\n",
    "    def _prepare_update_reference_prompt(self, support_data: List[dict], draft: str):\n",
    "        \"\"\"准备系统提示和用户提示的prompt组合\"\"\"\n",
    "        system_prompt = self.update_reference_prompt_template.render(\n",
    "            role=\"system\")\n",
    "\n",
    "        user_prompt = self.update_reference_prompt_template.render(\n",
    "            role=\"user\",\n",
    "            support_statement_citation_result=support_data,\n",
    "            paper_draft=draft\n",
    "        )\n",
    "\n",
    "        return [\n",
    "            {\"role\": \"system\", \"content\": system_prompt},\n",
    "            {\"role\": \"user\", \"content\": user_prompt},\n",
    "        ]\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "updatereferencer = UpdateReference()\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "update_draft = await updatereferencer.update_reference(verified_results,reference_checker.paper_draft)\n",
    "update_draft"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [],
   "source": [
    "verified_results= [{'statement': 'Editing multimodal large language models (MLLMs) presents unique challenges compared to editing single-modal LLMs. For instance, multimodal model editing demands a higher level of scrutiny and careful consideration in the editing process <sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>. Specifically, incorrect outputs from multimodal models may stem from the synergistic effects of various modalities, such as misreading or misrecognition, which is analogous to human errors like color blindness affecting color identification in images <sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>. Furthermore, the task of editing multimodal LLMs is more challenging due to their inherent diversity and complexity <sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>.'},\n",
    " {'statement': \"The inherent complexity and diversity of MLLMs, stemming from their integration of multiple modalities like text, images, and audio, necessitate more sophisticated editing techniques. For instance, the work by [chunk_id: '4', paper_id: '6528a864939a5f408257a0cf'] introduces multimodal model editing with a new benchmark MMEdit, analyzing the effectiveness of various model editing baselines and exploring their impact on different components (e.g., visual and text). Furthermore, [chunk_id: '2', paper_id: '656fdcf8939a5f4082920de7'] discusses the fusion of LLMs and sequential recommendation systems, drawing inspiration from MLLMs that amalgamate the domain of text with other modalities. Additionally, [chunk_id: '3', paper_id: '6571365b939a5f4082f7ccfa'] proposes a progressive multimodal alignment approach, training an image-to-text model as initialization and progressively grounding other modalities into LLM. Moreover, [chunk_id: '1', paper_id: '6684b06d01d2a3fbfce33e31'] focuses on equipping LLMs with strong audio-visual comprehension abilities, addressing the research gap in fine-grained audio-visual understanding. Lastly, [chunk_id: '2', paper_id: '65e144ed13fb2c6cf60f500c'] discusses the vulnerability of LMMs to typographic attacks, highlighting the need for more sophisticated editing techniques to ensure robustness.\"},\n",
    " {'statement': 'This subsection explores the existing approaches for editing single-modal LLMs and their potential applicability to MLLMs. Current editing approaches are effective for editing the textual model in the multimodal language model but not as effective for editing the vision module <sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>. Specifically, in editing the language module of the BLIP-2 model, the reliability of MEND can reach $99.4\\\\%$ , but only attain $65.2\\\\%$ if editing the vision module, indicating the potential difficulty and opportunities of this task <sup>{\"chunk_id\":\"1\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>.'},\n",
    " {'statement': 'Methods like knowledge distillation and fine-tuning allow for the transfer of knowledge from a larger, more knowledgeable model to a smaller, less knowledgeable one. Knowledge distillation techniques have evolved to enable multi-directional knowledge transfer between models, such as using a hierarchy of multiple auxiliary heads to distill knowledge from each other and across the ensemble <sup>{\"chunk_id\":\"8\", \"paper_id\":\"6386c9e090e50fcafdfa0a19\"}</sup>. This approach has been shown to be more effective than naive distillation and allows models to achieve close to supervised accuracy on large datasets like ImageNet <sup>{\"chunk_id\":\"8\", \"paper_id\":\"6386c9e090e50fcafdfa0a19\"}</sup>. Additionally, cooperative knowledge distillation methods have been developed that can distill knowledge between two or more models regardless of architecture, algorithm, feature overlap, and under small or large data settings <sup>{\"chunk_id\":\"5\", \"paper_id\":\"65c97cd4939a5f4082307083\"}</sup>. These methods target specific weaknesses of each model, allowing any combination of high and/or low-performance models to distill knowledge, compared to traditional techniques that typically only transfer knowledge from a single high-performance model to a low-performance model <sup>{\"chunk_id\":\"5\", \"paper_id\":\"65c97cd4939a5f4082307083\"}</sup>.'},\n",
    " {'statement': 'While effective for single-modal LLMs, knowledge infusion for MLLMs requires careful consideration of the interplay between different modalities and the potential for cross-modal knowledge transfer. For example, in the context of multi-modal entity alignment, aligning uni-modal embedding with the joint embedding can transfer the knowledge from the joint embedding back to uni-modal ones, resulting in better uni-modal representation <sup>{\"chunk_id\":\"2\", \"paper_id\":\"65e68afc13fb2c6cf6f6e33d\"}</sup>. Additionally, the use of mutual information estimator MINE can enhance mutual information, which can be utilized to mine the modal-invariant information between different modalities and filter out modality-specific random noise <sup>{\"chunk_id\":\"2\", \"paper_id\":\"65e68afc13fb2c6cf6f6e33d\"}</sup>. Furthermore, in the domain of 3D semantic segmentation, cross-modal learning can transfer knowledge from one modality to the other on the target-domain dataset, and design an auxiliary objective on source and target domains, where the task is to estimate the other modality’s prediction <sup>{\"chunk_id\":\"3\", \"paper_id\":\"6006d0cb91e0111a1b6a2507\"}</sup>. Finally, a generative framework for multi-modal knowledge retrieval, such as GeMKR, leverages LLMs as virtual knowledge bases and retrieves knowledge via a two-step process: generating knowledge clues related to the queries, and obtaining the relevant document by searching databases using the knowledge clue <sup>{\"chunk_id\":\"0\", \"paper_id\":\"65a75aa9939a5f408261970a\"}</sup>.'},\n",
    " {'statement': 'Incremental Learning: Incremental learning involves training a model on new data while retaining its knowledge from previous training sessions. Incremental learning [33], a pivotal area in machine learning, endeavors to enable models to adapt to new classes while avoiding catastrophic forgetting [26] of previously acquired knowledge. Various strategies have been proposed in this domain, encompassing structural-based methods [24, 38] that dynamically expand the model architecture to accommodate new classes, regularization-based methods [1,11,20,21,34,40] employing constraints like knowledge distillation to maintain consistency of old classes, and rehearsal-based methods [14,22,33,44] storing or generating old samples to participate in training alongside new samples. These diverse approaches collectively aim to empower the model to incrementally acquire new knowledge while preserving previous knowledge. In this paper, we focus on challenging ISS.\\nIncremental task learning (ITL) [10,34] aims to train a single model on a sequence of different tasks and perform well on all the trained tasks once the training is finished. While training on new tasks, the old data from previous tasks will not be provided to the model. This scenario mimics the human learning process where they have the ability to acquire new knowledge and skills throughout their lifespan. However, this setting is still challenging to neural network models as a common phenomenon called ”catastrophic forgetting [21]” is observed during this learning process. Catastrophic forgetting occurs when the data from the new tasks interfere with the data seen in the previous tasks and thus deteriorating model performance on preceding tasks. To overcome this issue, different approaches have been proposed so far which can be divided into three main categories: regularization-based approaches, memory and replay-based approaches, and dynamic network architecture-based approaches. Some of these approaches are especially designed for ITL whereas others are designed for more general continual learning setup.\\nIncremental learning. Various methods have been proposed for incremental learning in the past few years [ 2 ,5 ]. Recent works can be coarsely grouped into three categories: replay-based, regularization-based, and parameter-isolation methods. Replay-based methods mitigate the task-recency bias by replaying training samples from previous tasks. In addition to replaying samples, BiC [ 36 ], PODNet [ 8 ], and iCaRL [ 29 ] apply a distillation loss to prevent forgetting and enhance model stability. GEM [ 21 ], AGEM [ 3 ], and MER [ 30 ] exploit past-task exemplars by modifying gradients on current training samples to match old samples. Rehearsal-based methods may cause models to overfit stored samples.\\nClass incremental learning is a framework that progressively increases the scope of a problem while combating the inherent catastrophic forgetting issue. Among many existing approaches [ 2 ,17 ,20 ,33 ,43 ,48 ], the techniques based on knowledge distillation with exemplars [ 7 ,13 ,20 ], allowing new models to mimic previous ones, have demonstrated promising performance in alleviating the feature drift issue. Yet, these methods still have inherent drawbacks induced by data deficiency for old tasks and data imbalance between tasks as only a small number of training examples are available for the previous tasks. To alleviate the limitations, some existing approaches generate either data samples [ 31 ,38 ] or feature representations [ 22 ] to complement the shortage of training data for the previous tasks. However, they require additional generative models, which hampers the stability of convergence and increases the complexity of models.'},\n",
    " {'statement': 'This approach is particularly relevant for MLLMs, as it allows for the continuous updating of the model with new multimodal data without forgetting previously learned information. <sup>{\"chunk_id\":\"1\", \"paper_id\":\"623004305aee126c0f9b322d\"}</sup>'},\n",
    " {'statement': 'Techniques like experience replay and model regularization can be employed to mitigate catastrophic forgetting and ensure the stability of the model. Experience replay methods identify a limited number of exemplars to store in an auxiliary dataset, buffer, that is used to retain performance on previously seen tasks through rehearsal (ER [14], GEM [15], A-GEM [16], GSS [17]). An auxiliary loss can be applied as a regularization term to the main training task, such as with Knowledge Distillation (DER++ [18], iCaRL [19], FDR [20], DMC [21], ExModel [22]) or by restricting the gradient magnitude (GEM [15], AGEM [16]). Regularization methods such as EWC [8] and similarly (MAS [30], SI [11]) use an auxiliary loss term to constrain optimization w.r.t. to a metric of importance for each parameter for a given task. LwF [31] distills knowledge from the previous model using current task data, and LFL [9] freezes portion of the network while penalizing intermediate representations using the Euclidean distance. These approaches are orthogonal to our method and are candidates for the stability loss. We find that they underperform compared to our method in Sec. 5.2 and additional experiments in the supplementary. <sup>{\"chunk_id\":\"1\", \"paper_id\":\"6464afdfd68f896efa356511\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"64e2e14f3fda6d7f064665d0\"}</sup><sup>{\"chunk_id\":\"1\", \"paper_id\":\"6413dac290e50fcafd3ce260\"}</sup>'},\n",
    " {'statement': 'Modality-Specific Editing: Given the distinct characteristics of each modality, it may be necessary to develop modality-specific editing techniques for MLLMs. For example, in editing the language module of the BLIP-2 model, the reliability of MEND can reach 99.4%, but only attain 65.2% if editing the vision module, indicating the potential difficulty and opportunities of this task <sup>{\"chunk_id\":\"1\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup><sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>.'},\n",
    " {'statement': 'Challenges and Limitations: Editing MLLMs is still an evolving field, and several challenges and limitations remain. <sup>{\"chunk_id\":\"0\", \"paper_id\":\"646c3addd68f896efa5d1901\"}</sup><sup>{\"chunk_id\":\"6\", \"paper_id\":\"647eaf35d68f896efad408e7\"}</sup><sup>{\"chunk_id\":\"9\", \"paper_id\":\"66f4cd3401d2a3fbfcbfac37\"}</sup>'},\n",
    " {'statement': 'These include the difficulty of aligning edits across different modalities, the potential for introducing biases or errors during the editing process, and the lack of standardized evaluation metrics for assessing the effectiveness of editing techniques. <sup>{\"chunk_id\":\"7\", \"paper_id\":\"64741c33d68f896efaa7b664\"}</sup><sup>{\"chunk_id\":\"3\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup><sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup><sup>{\"chunk_id\":\"6\", \"paper_id\":\"65f7a01c13fb2c6cf668ebd0\"}</sup>'},\n",
    " {'statement': 'Future research should focus on addressing these challenges and developing more robust and efficient editing methods for MLLMs. <sup>{\"chunk_id\":\"0\", \"paper_id\":\"646c3addd68f896efa5d1901\"}</sup><sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup><sup>{\"chunk_id\":\"4\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>'},\n",
    " {'statement': 'Knowledge Infusion: This technique involves incrementally updating a language model with new facts or information without significant retraining. <sup>{\"chunk_id\":\"1\", \"paper_id\":\"616e37435244ab9dcbd1a6fa\"}</sup>'},\n",
    " {'statement': 'For example, image editing techniques like style transfer and image inpainting can be used to modify the visual representations learned by the model, while text editing techniques like grammar correction and sentiment modification can be used to refine the linguistic representations. Image editing techniques such as style transfer and image inpainting can modify the visual representations learned by the model <sup>{\"chunk_id\":\"1\", \"paper_id\":\"6556d305939a5f4082dc359b\"}</sup>, and text editing techniques like grammar correction and sentiment modification can refine the linguistic representations <sup>{\"chunk_id\":\"5\", \"paper_id\":\"6392a77190e50fcafd8c4e48\"}</sup>.'}]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    " 找到的statement： 'The inherent complexity and diversity of MLLMs, stemming from their integration of multiple modalities like text, images, and audio, necessitate more sophisticated editing techniques.',\n",
    "\n",
    " 添加引文后的statement：{'statement': \"The inherent complexity and diversity of MLLMs, stemming from their integration of multiple modalities like text, images, and audio, necessitate more sophisticated editing techniques. For instance, the work by [chunk_id: '4', paper_id: '6528a864939a5f408257a0cf'] introduces multimodal model editing with a new benchmark MMEdit, analyzing the effectiveness of various model editing baselines and exploring their impact on different components (e.g., visual and text). Furthermore, [chunk_id: '2', paper_id: '656fdcf8939a5f4082920de7'] discusses the fusion of LLMs and sequential recommendation systems, drawing inspiration from MLLMs that amalgamate the domain of text with other modalities. Additionally, [chunk_id: '3', paper_id: '6571365b939a5f4082f7ccfa'] proposes a progressive multimodal alignment approach, training an image-to-text model as initialization and progressively grounding other modalities into LLM. Moreover, [chunk_id: '1', paper_id: '6684b06d01d2a3fbfce33e31'] focuses on equipping LLMs with strong audio-visual comprehension abilities, addressing the research gap in fine-grained audio-visual understanding. Lastly, [chunk_id: '2', paper_id: '65e144ed13fb2c6cf60f500c'] discusses the vulnerability of LMMs to typographic attacks, highlighting the need for more sophisticated editing techniques to ensure robustness.\"},\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [],
   "source": [
    "a = updatereferencer._prepare_update_reference_prompt(verified_results,reference_checker.paper_draft)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "You are an AI expert tasked with updating a paper draft by integrating citations into specific statements. Each citation should be properly added to the relevant statements, ensuring correct citation format and context.  \n",
      "\n",
      "---  \n",
      "\n",
      "### **Task Instructions**  \n",
      "1. **Locate the Exact Statement**:\n",
      "   - For each statement in `support_statement_citation_result`, **ensure the statement is used exactly as provided**. If there is any discrepancy between the statement in `support_statement_citation_result` and the one in `paper_draft`, **do not modify the statement in any way**. Always use the exact wording provided in `support_statement_citation_result` to prevent any changes.\n",
      "   - Carefully match the statement with the one in `paper_draft`, considering spacing, punctuation, and wording.\n",
      "\n",
      "2. **Update the Statement with Citation**:  \n",
      "   - Replace the statement with a version that includes the citation in the **exact format**:\n",
      "     ```  \n",
      "     <sup>{\"chunk_id\":\"<chunk_id>\", \"paper_id\":\"<paper_id>\"}</sup>  \n",
      "     ```  \n",
      "   - Ensure the citation is directly attached to the statement without any additional spacing or characters.  \n",
      "3. **Ensure All Statements are Updated**:  \n",
      "   - Every statement in `support_statement_citation_result` must be replaced in `paper_draft`.none should be skipped.\n",
      "   - Do not add or modify any other text outside the specified statement.\n",
      "4. **Preserve Text Structure**:  \n",
      "   - Maintain the original structure, formatting, and text outside the specified statements. Only the statements that need citations should be altered.  \n",
      "5. **Verify Accuracy**:  \n",
      "   - Double-check that each statement is correctly matched with its citation. Ensure there are no missed statements, duplicate citations, or unrelated content changes.\n",
      "6. **Return the Full Updated Text**:  \n",
      "   - Provide the entire paper draft with the relevant citations updated.\n",
      "   - Ensure the output is clean and directly usable, without any additional explanations or information.  \n",
      "\n",
      "### **Output Format**  \n",
      "- **Only** return the updated text of the `paper_draft` with the citations correctly applied.\n",
      "- **Do not** include explanations, JSON formatting, or any other type of meta-information.\n",
      "### **Example Input**: \n",
      "`support_statement_citation_result`: \n",
      "[\n",
      "   {'statement': \"Editing multimodal large language models (MLLMs) presents unique challenges compared to editing single-modal LLMs.<sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup>\"},\n",
      "]\n",
      "`paper_draft`: \"## 2.2 Model Editing Techniques\n",
      "\n",
      "Editing multimodal large language models (MLLMs) presents unique challenges compared to editing single-modal LLMs. The inherent complexity...\"\n",
      "\n",
      "**Expected Output**:\n",
      "\"## 2.2 Model Editing Techniques\n",
      "\n",
      "Editing multimodal large language models (MLLMs) presents unique challenges compared to editing single-modal LLMs.<sup>{\"chunk_id\":\"0\", \"paper_id\":\"6528a864939a5f408257a0cf\"}</sup> The inherent complexity...\"\n",
      "  \n"
     ]
    }
   ],
   "source": [
    "\n",
    "print(a[0][\"content\"])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "## 2.2 Model Editing Techniques\n",
      "\n",
      "Editing multimodal large language models (MLLMs) presents unique challenges compared to editing single-modal LLMs. The inherent complexity and diversity of MLLMs, stemming from their integration of multiple modalities like text, images, and audio, necessitate more sophisticated editing techniques. This subsection explores the existing approaches for editing single-modal LLMs and their potential applicability to MLLMs.\n",
      "\n",
      "**Knowledge Infusion:** This technique involves incrementally updating a language model with new facts or information without significant retraining. Methods like knowledge distillation and fine-tuning allow for the transfer of knowledge from a larger, more knowledgeable model to a smaller, less knowledgeable one. While effective for single-modal LLMs, knowledge infusion for MLLMs requires careful consideration of the interplay between different modalities and the potential for cross-modal knowledge transfer.\n",
      "\n",
      "**Incremental Learning:** Incremental learning involves training a model on new data while retaining its knowledge from previous training sessions. This approach is particularly relevant for MLLMs, as it allows for the continuous updating of the model with new multimodal data without forgetting previously learned information. Techniques like experience replay and model regularization can be employed to mitigate catastrophic forgetting and ensure the stability of the model.\n",
      "\n",
      "**Modality-Specific Editing:** Given the distinct characteristics of each modality, it may be necessary to develop modality-specific editing techniques for MLLMs. For example, image editing techniques like style transfer and image inpainting can be used to modify the visual representations learned by the model, while text editing techniques like grammar correction and sentiment modification can be used to refine the linguistic representations.\n",
      "\n",
      "**Challenges and Limitations:** Editing MLLMs is still an evolving field, and several challenges and limitations remain. These include the difficulty of aligning edits across different modalities, the potential for introducing biases or errors during the editing process, and the lack of standardized evaluation metrics for assessing the effectiveness of editing techniques. Future research should focus on addressing these challenges and developing more robust and efficient editing methods for MLLMs.\n"
     ]
    }
   ],
   "source": [
    "a_answer = await updatereferencer.llm.completion(a)\n",
    "print(a_answer)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [],
   "source": [
    "text = '''This research survey delves into the advancements and challenges in knowledge graph construction and applications<sup>39</sup>. It encompasses a comprehensive overview of knowledge graphs, highlighting their significance in various domains such as natural language processing, computer vision, and information retrieval. The survey also focuses on the enhancement of textual information in multilingual knowledge graphs, exploring methods to bridge the gap between English and non-English languages<sup>82</sup><sup>26</sup><sup>84</sup>. Additionally, it investigates the construction and application of multi-modal knowledge graphs, which integrate symbolic knowledge with images, sounds, and videos. Multi-modal knowledge graphs have been surveyed systematically, revealing their importance in enhancing machine understanding of the real world and enabling more informative and precise downstream applications. The construction and application of such graphs are explored in various domains, including recommender systems, natural language understanding, and question answering. The survey covers the development of knowledge graph completion techniques, aiming to infer missing facts in incomplete knowledge graphs<sup>65</sup>. It also explores the application of graph neural networks for knowledge graph completion, as well as the utilization of explicit knowledge from knowledge graphs to enhance pre-trained language models for tasks like passage re-ranking. Lastly, the survey discusses the construction of large-scale financial datasets for graph anomaly detection and the application of multi-task reinforcement learning for robust knowledge graph embedding. This survey aims to provide a holistic view of the current state-of-the-art and future directions in knowledge graph research.\n",
    "In this survey, we analyze the specific areas of knowledge graph research mentioned above and their implications for various fields and applications. Our analysis draws upon key papers in each area to identify emerging trends and research gaps. The survey is structured to provide a structured overview of the current state of research in knowledge graphs, with a focus on the following key themes: advancements in knowledge graph construction, enhancement of textual information in multilingual knowledge graphs<sup>82</sup>, construction and application of multi-modal knowledge graphs<sup>39</sup>, development of knowledge graph completion techniques, application of graph neural networks for knowledge graph completion, utilization of explicit knowledge from knowledge graphs to enhance pre-trained language models, construction of large-scale financial datasets for graph anomaly detection, and application of multi-task reinforcement learning for robust knowledge graph embedding<sup>78</sup><sup>62</sup><sup>18</sup>. By synthesizing findings from key papers in each area, this survey aims to provide a comprehensive introduction to the field of knowledge graph research and its applications. Knowledge graphs have been used for the versatility of their relational information, but recent work has also integrated textual information into downstream applications, showcasing the importance of both aspects. For instance, SpherE has been introduced as an expressive and interpretable model for knowledge graph set retrieval<sup>92</sup>, while efforts have been made to increase the coverage and precision of textual information in multilingual knowledge graphs<sup>82</sup>. Additionally, the effectiveness of graph classification datasets in benchmarks for assessing GNNs has been reconsidered<sup>88</sup>, and new approaches in knowledge graph completion using GCNs have been explored<sup>18</sup>. Furthermore, the proposal of GraphCSPN introduces a geometry'''\n",
    "topic = \"What does the technology development roadmap for multi-modal large models look like?\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "from research_agent.core.find_statement_citation import FindStatementCitation\n",
    "find_statement_citation = FindStatementCitation()\n",
    "statements = await find_statement_citation.find_statement_citation(topic,text)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [],
   "source": [
    "statements +=statements"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [],
   "source": [
    "import asyncio\n",
    "import json\n",
    "from typing import List, Dict, Optional\n",
    "from jinja2 import Environment, FileSystemLoader\n",
    "import json_repair\n",
    "from research_agent.core.query import Query\n",
    "from research_agent.core.config import Config\n",
    "from research_agent.core.general_llm import LLM\n",
    "import re\n",
    "from typing import Union, Optional\n",
    "from pyaml_env import parse_config\n",
    "from pathlib import Path\n",
    "\n",
    "class AddCitation:\n",
    "    \"\"\"\n",
    "    用于为研究陈述添加引用的核心类。\n",
    "    \n",
    "    该类负责：\n",
    "    1. 从研究陈述中提取关键信息\n",
    "    2. 查询相关文献片段\n",
    "    3. 使用LLM生成适当的引用\n",
    "    4. 处理批量陈述的引用添加\n",
    "    \"\"\"\n",
    "    def __init__(self):\n",
    "        \"\"\"\n",
    "        初始化AddCitation实例。\n",
    "        \n",
    "        加载配置文件，初始化查询和LLM实例，\n",
    "        并加载引用提示模板。\n",
    "        \"\"\"\n",
    "        configs = parse_config(Config.YAML_CONFIG)\n",
    "        self.query = Query()\n",
    "        self.llm = LLM(config=configs[Config.DEFAULT_MODEL])\n",
    "        self.top_k = 5\n",
    "        \n",
    "        # 修改文件路径的获取方式\n",
    "        add_citations_prompt_file = r\"D:\\GoodStudy\\FX15\\FX15H\\final_work\\FX15_research_agent\\summary-generation-match\\research_agent\\core\\prompts\\add_citations.jinja\"\n",
    "        with open(add_citations_prompt_file, \"r\",encoding=\"utf-8\") as f:\n",
    "            self.add_citations_prompt_template = Environment().from_string(f.read())\n",
    "\n",
    "\n",
    "    async def _retrieve_context(self, statement: Union[str, dict]) -> Optional[str]:\n",
    "        \"\"\"\n",
    "        检索与陈述相关的上下文片段。\n",
    "        \n",
    "        参数:\n",
    "            statement: 需要引用的陈述，可以是字符串或包含未支持论文信息的字典\n",
    "            \n",
    "        返回:\n",
    "            包含相关文献片段的字符串，如果没有找到相关片段则返回None\n",
    "        \"\"\"\n",
    "        context = \"\"\n",
    "        if isinstance(statement, dict) and \"unsupported_papers\" in statement:\n",
    "            unsupported_papers = [json_repair.loads(re.findall(\n",
    "                r\"<sup>(.*?)</sup>\", s)[0]) for s in statement[\"unsupported_papers\"]]\n",
    "            chunks = await self.query.query_by_content(statement[\"statement_explanation\"], top_k=self.top_k+len(statement[\"unsupported_papers\"]))\n",
    "            chunk_text = []\n",
    "            for chunk in chunks:\n",
    "                if (chunk[\"entity\"][\"paper_id\"] in [p[\"paper_id\"] for p in unsupported_papers]) and (str(chunk[\"entity\"][\"chunk_id\"]) in [str(p[\"chunk_id\"]) for p in unsupported_papers]):\n",
    "                    continue\n",
    "                else:\n",
    "                    formatted_text = f\"paper_title:{chunk['entity']['paper_title']}\\n\" \\\n",
    "                        f\"paper_id:{chunk['entity']['paper_id']}\\n\" \\\n",
    "                        f\"chunk_id:{chunk['entity']['chunk_id']}\\n\" \\\n",
    "                        f\"chunk_text:{chunk['entity']['chunk_text']}\"\n",
    "                    chunk_text.append(formatted_text)\n",
    "            context = \"\\n\".join(chunk_text)\n",
    "        else:\n",
    "            chunks = await self.query.query_by_content(statement, top_k=self.top_k)\n",
    "            context = \"\\n\".join(\n",
    "                f\"\"\"paper_title:{chunk['entity']['paper_title']}\n",
    "                                paper_id:{chunk['entity']['paper_id']}\n",
    "                                chunk_id:{chunk['entity']['chunk_id']}\n",
    "                                chunk_text:{chunk['entity']['chunk_text']}\"\"\"\n",
    "                for chunk in chunks\n",
    "            )\n",
    "        if not chunks:\n",
    "            return None\n",
    "        return chunks\n",
    "\n",
    "    def _prepare_messages(self, statement: str, context: str) -> List[Dict]:\n",
    "        \"\"\"\n",
    "        准备LLM的输入消息。\n",
    "        \n",
    "        参数:\n",
    "            statement: 需要引用的陈述\n",
    "            context: 检索到的相关文献片段\n",
    "            \n",
    "        返回:\n",
    "            包含系统提示和用户提示的消息列表\n",
    "        \"\"\"\n",
    "        system_prompt = self.add_citations_prompt_template.render(\n",
    "            role=\"system\")\n",
    "        user_prompt = self.add_citations_prompt_template.render(\n",
    "            role=\"user\", statement=statement, retrieved_chunk=context\n",
    "        )\n",
    "        return [\n",
    "            {\"role\": \"system\", \"content\": system_prompt},\n",
    "            {\"role\": \"user\", \"content\": user_prompt},\n",
    "        ]\n",
    "\n",
    "    async def _process_statement(self, statement: Union[str, dict]) -> Dict:\n",
    "        \"\"\"\n",
    "        处理单个陈述并生成引用。\n",
    "        \n",
    "        参数:\n",
    "            statement: 需要处理的陈述，可以是字符串或字典\n",
    "            \n",
    "        返回:\n",
    "            包含处理结果的字典，格式为：\n",
    "            {\n",
    "                \"statement\": 原始陈述,\n",
    "                \"answer\": LLM生成的回答,\n",
    "                \"related_papers\": 相关论文列表\n",
    "            }\n",
    "            如果处理失败，则返回包含错误信息的字典\n",
    "        \"\"\"\n",
    "        try:\n",
    "            if isinstance(statement, dict):\n",
    "                unsupported_papers = [json_repair.loads(re.findall(\n",
    "                    r\"<sup>(.*?)</sup>\", s)[0]) for s in statement[\"unsupported_papers\"]]\n",
    "                chunks = await self.query.query_by_content(statement[\"statement\"], top_k=self.top_k+len(statement[\"unsupported_papers\"]))\n",
    "                chunk_text = []\n",
    "                for chunk in chunks:\n",
    "                    if (chunk[\"entity\"][\"paper_id\"] in [p[\"paper_id\"] for p in unsupported_papers]) and (str(chunk[\"entity\"][\"chunk_id\"]) in [str(p[\"chunk_id\"]) for p in unsupported_papers]):\n",
    "                        continue\n",
    "                    else:\n",
    "                        formatted_text = f\"paper_title:{chunk['entity']['paper_title']}\\n\" \\\n",
    "                            f\"paper_id:{chunk['entity']['paper_id']}\\n\" \\\n",
    "                            f\"chunk_id:{chunk['entity']['chunk_id']}\\n\" \\\n",
    "                            f\"chunk_text:{chunk['entity']['chunk_text']}\"\n",
    "                        chunk_text.append(formatted_text)\n",
    "                context = \"\\n\".join(chunk_text)\n",
    "            if isinstance(statement, str):\n",
    "                chunks = await self.query.query_by_content(statement, self.top_k)\n",
    "                context = \"\\n\".join(\n",
    "                    f\"\"\"paper_title:{chunk['entity']['paper_title']}\n",
    "                                paper_id:{chunk['entity']['paper_id']}\n",
    "                                chunk_id:{chunk['entity']['chunk_id']}\n",
    "                                chunk_text:{chunk['entity']['chunk_text']}\"\"\"\n",
    "                    for chunk in chunks\n",
    "                )\n",
    "            if not context:\n",
    "                return {\"statement\": statement, \"answer\": \"No relevant information found.\", \"related_papers\": []}\n",
    "\n",
    "            messages = self._prepare_messages(statement, context)\n",
    "            response = await self.llm.completion(messages)\n",
    "            return json_repair.loads(response)\n",
    "        except Exception as e:\n",
    "            return {\n",
    "                \"statement\": statement,\n",
    "                \"error\": str(e),\n",
    "                \"related_papers\": []\n",
    "            }\n",
    "\n",
    "    async def add_citations(self, statements: List[str], batch_size: int = 10) -> List[Dict]:\n",
    "        if not isinstance(statements, list) or not statements:\n",
    "            raise ValueError(\"Input must be a non-empty list of statements\")\n",
    "\n",
    "        results = []\n",
    "        for i in range(0, len(statements), batch_size):\n",
    "            batch = statements[i:i + batch_size]  # 切分 batch\n",
    "            tasks = [self._process_statement(stmt) for stmt in batch]\n",
    "            batch_results = await asyncio.gather(*tasks, return_exceptions=True)\n",
    "            results.extend([res for res in batch_results if not isinstance(res, Exception)])\n",
    "\n",
    "        return results\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[{'statement': 'This research survey delves into the advancements and challenges in knowledge graph construction and applications. <sup>{\"paper_title\":\"Multi-Modal Knowledge Graph Construction and Application: A Survey\", \"chunk_id\":\"0\", \"paper_id\":\"6209c8295aee126c0f1e86c0\"}</sup> <sup>{\"paper_title\":\"Increasing Coverage and Precision of Textual Information in Multilingual Knowledge Graphs\", \"chunk_id\":\"1\", \"paper_id\":\"65655a25939a5f4082bae77e\"}</sup> <sup>{\"paper_title\":\"Rethinking Graph Convolutional Networks in Knowledge Graph Completion\", \"chunk_id\":\"5\", \"paper_id\":\"6209c8265aee126c0f1e81ff\"}</sup>'},\n",
       " {'statement': 'It encompasses a comprehensive overview of knowledge graphs, highlighting their significance in various domains such as natural language processing, computer vision, and information retrieval. <sup>{\"paper_title\":\"Increasing Coverage and Precision of Textual Information in Multilingual Knowledge Graphs\", \"chunk_id\":\"1\", \"paper_id\":\"65655a25939a5f4082bae77e\"}</sup>'},\n",
       " {'statement': 'The survey also focuses on the enhancement of textual information in multilingual knowledge graphs, exploring methods to bridge the gap between English and non-English languages. <sup>{\"paper_title\":\"Increasing Coverage and Precision of Textual Information in Multilingual Knowledge Graphs\", \"chunk_id\":\"0\", \"paper_id\":\"65655a25939a5f4082bae77e\"}</sup>'},\n",
       " {'statement': 'Additionally, it investigates the construction and application of multi-modal knowledge graphs, which integrate symbolic knowledge with images, sounds, and videos <sup>{\"paper_title\":\"Multi-Modal Knowledge Graph Construction and Application: A Survey\", \"chunk_id\":\"0\", \"paper_id\":\"6209c8295aee126c0f1e86c0\"}</sup>.'},\n",
       " {'statement': 'Multi-modal knowledge graphs have been surveyed systematically, revealing their importance in enhancing machine understanding of the real world and enabling more informative and precise downstream applications. <sup>{\"paper_title\":\"Multi-Modal Knowledge Graph Construction and Application: A Survey\", \"chunk_id\":\"0\", \"paper_id\":\"6209c8295aee126c0f1e86c0\"}</sup>'},\n",
       " {'statement': 'The construction and application of such graphs are explored in various domains, including recommender systems, natural language understanding, and question answering. For instance, DriveLM: Driving with Graph Visual Question Answering <sup>{\"paper_title\":\"DriveLM: Driving with Graph Visual Question Answering\", \"chunk_id\":\"15\", \"paper_id\":\"6584fc33939a5f408238634e\"}</sup> employs scene graphs for explainable and explicit reasoning with structured knowledge, while LLMRG: Improving Recommendations Through Large Language Model Reasoning Graphs <sup>{\"paper_title\":\"LLMRG: Improving Recommendations Through Large Language Model Reasoning Graphs\", \"chunk_id\":\"1\", \"paper_id\":\"6602492813fb2c6cf676d713\"}</sup> utilizes large language models to improve recommendation system performance by constructing reasoning graphs. Additionally, Graph-based Extractive Explainer for Recommendations <sup>{\"paper_title\":\"Graph-based Extractive Explainer for Recommendations\", \"chunk_id\":\"0\", \"paper_id\":\"621454535aee126c0f200edd\"}</sup> develops a graph attentive neural network model for extraction-based explanation in recommender systems. These approaches demonstrate the versatility and effectiveness of graph-based methods across different AI applications.'},\n",
       " {'statement': 'The survey covers the development of knowledge graph completion techniques, aiming to infer missing facts in incomplete knowledge graphs.<sup>{\"paper_title\":\"Increasing Coverage and Precision of Textual Information in Multilingual Knowledge Graphs\", \"chunk_id\":\"1\", \"paper_id\":\"65655a25939a5f4082bae77e\"}</sup><sup>{\"paper_title\":\"Neural-Symbolic Models for Logical Queries on Knowledge Graphs\", \"chunk_id\":\"1\", \"paper_id\":\"628afb4c5aee126c0f04e4a6\"}</sup>'},\n",
       " {'statement': 'It also explores the application of graph neural networks for knowledge graph completion, as well as the utilization of explicit knowledge from knowledge graphs to enhance pre-trained language models for tasks like passage re-ranking. <sup>{\"paper_title\":\"Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking\", \"chunk_id\":\"0\", \"paper_id\":\"626754c85aee126c0fbcdd50\"}</sup><sup>{\"paper_title\":\"Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking\", \"chunk_id\":\"3\", \"paper_id\":\"626754c85aee126c0fbcdd50\"}</sup>'},\n",
       " {'statement': 'Lastly, the survey discusses the construction of large-scale financial datasets for graph anomaly detection and the application of multi-task reinforcement learning for robust knowledge graph embedding. <sup>{\"paper_title\":\"A Comprehensive Survey on Graph Anomaly Detection with Deep Learning\", \"chunk_id\":\"0\", \"paper_id\":\"656da6e8939a5f4082de8ec7\"}</sup><sup>{\"paper_title\":\"A Comprehensive Survey on Graph Anomaly Detection with Deep Learning\", \"chunk_id\":\"4\", \"paper_id\":\"656da6e8939a5f4082de8ec7\"}</sup><sup>{\"paper_title\":\"A Comprehensive Survey on Graph Anomaly Detection with Deep Learning\", \"chunk_id\":\"14\", \"paper_id\":\"656da6e8939a5f4082de8ec7\"}</sup>'},\n",
       " {'statement': 'This survey aims to provide a holistic view of the current state-of-the-art and future directions in knowledge graph research<sup>{\"paper_title\":\"Increasing Coverage and Precision of Textual Information in Multilingual Knowledge Graphs\", \"chunk_id\":\"1\", \"paper_id\":\"65655a25939a5f4082bae77e\"}</sup><sup>{\"paper_title\":\"SpherE: Expressive and Interpretable Knowledge Graph Embedding for Set Retrieval\", \"chunk_id\":\"3\", \"paper_id\":\"6631a2d501d2a3fbfc8c4a96\"}</sup>.'},\n",
       " {'statement': 'This research survey delves into the advancements and challenges in knowledge graph construction and applications. <sup>{\"paper_title\":\"Multi-Modal Knowledge Graph Construction and Application: A Survey\", \"chunk_id\":\"0\", \"paper_id\":\"6209c8295aee126c0f1e86c0\"}</sup> <sup>{\"paper_title\":\"Increasing Coverage and Precision of Textual Information in Multilingual Knowledge Graphs\", \"chunk_id\":\"1\", \"paper_id\":\"65655a25939a5f4082bae77e\"}</sup> <sup>{\"paper_title\":\"Rethinking Graph Convolutional Networks in Knowledge Graph Completion\", \"chunk_id\":\"5\", \"paper_id\":\"6209c8265aee126c0f1e81ff\"}</sup>'},\n",
       " {'statement': 'It encompasses a comprehensive overview of knowledge graphs, highlighting their significance in various domains such as natural language processing, computer vision, and information retrieval. <sup>{\"paper_title\":\"Increasing Coverage and Precision of Textual Information in Multilingual Knowledge Graphs\", \"chunk_id\":\"1\", \"paper_id\":\"65655a25939a5f4082bae77e\"}</sup>'},\n",
       " {'statement': 'The survey also focuses on the enhancement of textual information in multilingual knowledge graphs, exploring methods to bridge the gap between English and non-English languages. <sup>{\"paper_title\":\"Increasing Coverage and Precision of Textual Information in Multilingual Knowledge Graphs\", \"chunk_id\":\"0\", \"paper_id\":\"65655a25939a5f4082bae77e\"}</sup>'},\n",
       " {'statement': 'Additionally, it investigates the construction and application of multi-modal knowledge graphs, which integrate symbolic knowledge with images, sounds, and videos <sup>{\"paper_title\":\"Multi-Modal Knowledge Graph Construction and Application: A Survey\", \"chunk_id\":\"0\", \"paper_id\":\"6209c8295aee126c0f1e86c0\"}</sup>.'},\n",
       " {'statement': 'Multi-modal knowledge graphs have been surveyed systematically, revealing their importance in enhancing machine understanding of the real world and enabling more informative and precise downstream applications. <sup>{\"paper_title\":\"Multi-Modal Knowledge Graph Construction and Application: A Survey\", \"chunk_id\":\"0\", \"paper_id\":\"6209c8295aee126c0f1e86c0\"}</sup>'},\n",
       " {'statement': 'The construction and application of such graphs are explored in various domains, including recommender systems, natural language understanding, and question answering. For instance, DriveLM: Driving with Graph Visual Question Answering <sup>{\"paper_title\":\"DriveLM: Driving with Graph Visual Question Answering\", \"chunk_id\":\"15\", \"paper_id\":\"6584fc33939a5f408238634e\"}</sup> employs scene graphs for explainable and explicit reasoning with structured knowledge, while LLMRG: Improving Recommendations Through Large Language Model Reasoning Graphs <sup>{\"paper_title\":\"LLMRG: Improving Recommendations Through Large Language Model Reasoning Graphs\", \"chunk_id\":\"1\", \"paper_id\":\"6602492813fb2c6cf676d713\"}</sup> utilizes large language models to improve recommendation system performance by constructing reasoning graphs. Additionally, Graph-based Extractive Explainer for Recommendations <sup>{\"paper_title\":\"Graph-based Extractive Explainer for Recommendations\", \"chunk_id\":\"0\", \"paper_id\":\"621454535aee126c0f200edd\"}</sup> develops a graph attentive neural network model for extraction-based explanation in recommender systems. These approaches demonstrate the versatility and effectiveness of graph-based methods across different AI applications.'},\n",
       " {'statement': 'The survey covers the development of knowledge graph completion techniques, aiming to infer missing facts in incomplete knowledge graphs.<sup>{\"paper_title\":\"Increasing Coverage and Precision of Textual Information in Multilingual Knowledge Graphs\", \"chunk_id\":\"1\", \"paper_id\":\"65655a25939a5f4082bae77e\"}</sup><sup>{\"paper_title\":\"Neural-Symbolic Models for Logical Queries on Knowledge Graphs\", \"chunk_id\":\"1\", \"paper_id\":\"628afb4c5aee126c0f04e4a6\"}</sup>'},\n",
       " {'statement': 'It also explores the application of graph neural networks for knowledge graph completion, as well as the utilization of explicit knowledge from knowledge graphs to enhance pre-trained language models for tasks like passage re-ranking. <sup>{\"paper_title\":\"Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking\", \"chunk_id\":\"0\", \"paper_id\":\"626754c85aee126c0fbcdd50\"}</sup><sup>{\"paper_title\":\"Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking\", \"chunk_id\":\"3\", \"paper_id\":\"626754c85aee126c0fbcdd50\"}</sup>'},\n",
       " {'statement': 'Lastly, the survey discusses the construction of large-scale financial datasets for graph anomaly detection and the application of multi-task reinforcement learning for robust knowledge graph embedding. <sup>{\"paper_title\":\"A Comprehensive Survey on Graph Anomaly Detection with Deep Learning\", \"chunk_id\":\"0\", \"paper_id\":\"656da6e8939a5f4082de8ec7\"}</sup><sup>{\"paper_title\":\"A Comprehensive Survey on Graph Anomaly Detection with Deep Learning\", \"chunk_id\":\"4\", \"paper_id\":\"656da6e8939a5f4082de8ec7\"}</sup><sup>{\"paper_title\":\"A Comprehensive Survey on Graph Anomaly Detection with Deep Learning\", \"chunk_id\":\"14\", \"paper_id\":\"656da6e8939a5f4082de8ec7\"}</sup>'},\n",
       " {'statement': 'This survey aims to provide a holistic view of the current state-of-the-art and future directions in knowledge graph research<sup>{\"paper_title\":\"Increasing Coverage and Precision of Textual Information in Multilingual Knowledge Graphs\", \"chunk_id\":\"1\", \"paper_id\":\"65655a25939a5f4082bae77e\"}</sup><sup>{\"paper_title\":\"SpherE: Expressive and Interpretable Knowledge Graph Embedding for Set Retrieval\", \"chunk_id\":\"3\", \"paper_id\":\"6631a2d501d2a3fbfc8c4a96\"}</sup>.'}]"
      ]
     },
     "execution_count": 22,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "add_citation = AddCitation()\n",
    "await add_citation.add_citations(statements)\n",
    "# batch_size=20;time:26\n",
    "# batch_size=10;time:40\n",
    "\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "pytorch",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.18"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
