{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "import asyncio\n",
    "import nest_asyncio\n",
    "nest_asyncio.apply()\n",
    "import sys\n",
    "import os\n",
    "\n",
    "# 添加项目根目录到Python路径\n",
    "sys.path.append(os.path.abspath('../..'))  # 修改这行，向上追溯三层目录到项目根目录\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "topic = \"what does the current technological development of Text2SQL look like?\"\n",
    "with open(r\"D:\\GoodStudy\\FX15_reference_1\\summary-generation-match\\research_agent\\scripts\\1.md\", \"r\", encoding=\"utf-8\") as file:\n",
    "    content = file.read()\n",
    "from research_agent.core.pipeline_reference import CitationProcessor\n",
    "pipeliner = CitationProcessor()\n",
    "sections = pipeliner.split_by_primary_headers(content)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[\"\\n## 1 Introduction\\n\\nResearch on Text-to-SQL generation has traditionally focused on scenarios where each natural language question is associated with one correct SQL query. However, real-world databases often exhibit significant ambiguity in natural language queries due to overlapping schema names, multiple relationship paths, and other factors. For example, a query asking for the capacity of O2 Arena could be ambiguous if the schema has separate columns for standing and seating capacity. Similarly, a query on the number of under-nourished children is ambiguous if there are different columns for 'under-weight children' and 'stunted growth in children'. This ambiguity can lead to multiple SQL queries that can produce the correct answer, yet most benchmarks only provide one query among the many possible correct answers. This ambiguity poses a challenge for existing Text-to-SQL systems, which struggle to generate accurate and diverse SQL queries that capture all possible interpretations of the user's intent. To address this gap, we develop a novel benchmark called AmbiQT, which consists of over 3000 examples where each natural language query is interpretable as two plausible SQL queries due to lexical and/or structural ambiguity. This benchmark aims to test the performance of Text-to-SQL systems under ambiguity and encourage the development of more robust and accurate models.\\n\\nIn this survey, we explore the current state of Text-to-SQL technology, focusing on the challenges posed by ambiguity and the approaches used to address it. We discuss the limitations of existing benchmarks and evaluation metrics, and propose potential improvements to ensure a more comprehensive and accurate assessment of model capabilities. Benchmarks like PredBench [PredBench: Benchmarking Spatio-Temporal Prediction Across Diverse Disciplines, 66908e2301d2a3fbfcea14d7, 8] have limitations in terms of training, benchmarking, and evaluation. 
For instance, training limitations include the constraint on model architecture and size, while benchmark limitations involve the limited number of methods and the need for further calibration of dataset protocols. Evaluation limitations are observed in the small and homogenous sample of human evaluators and the lack of diverse evaluation approaches and metrics. To address these limitations, future studies could explore more evaluation methods, improve the diversity and size of participants, and investigate the impact of various hyperparameters on model performance. Additionally, incorporating indicators of attack failures [Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples, 60d140d191e011c16f0cb388, 5] could help in debugging faulty evaluations and lead to fairer assessments. Furthermore, benchmarks could benefit from the inclusion of economic rationality assessments [STEER: Assessing the Economic Rationality of Large Language Models, 65cec1c1939a5f40828f00d7, 11] to evaluate models' ability to exhibit rational behavior in economic tasks. We also explore the integration of Text-to-SQL with other NLP tasks, such as question answering and information extraction, to create more powerful and versatile systems. For example, the AmbiQT benchmark (Benchmarking and Improving Text-to-SQL Generation under Ambiguity, 6535d747939a5f408295c649, 1) addresses ambiguity in SQL by encompassing four types of ambiguity, including column ambiguity, table ambiguity, join ambiguity, and precomputed aggregates. 
Furthermore, the work on interactive Text-to-SQL generation (Interactive Text-to-SQL Generation Via Editable Step-by-Step Explanations, 6461b9c9d68f896efad43133, 1) proposes a new interaction mechanism for users to validate and refine generated queries through step-by-step explanations, which can be extended to support multi-turn SQL generation by incorporating the contextual information of previous queries into explanation generation and text-to-clause generation. Additionally, the exploration of chain-of-thought style prompting for Text-to-SQL (Exploring Chain-of-Thought Style Prompting for Text-to-SQL, 646d8642d68f896efa0a3040, 1) aims to enhance LLMs' reasoning ability by systematically exploring CoT style prompting for text-to-SQL parsing, addressing complex, multistep reasoning required for the task. Additionally, we address the ethical considerations associated with Text2SQL technology, particularly in sensitive domains, and discuss strategies for mitigating bias and ensuring fairness. Finally, we identify promising future research directions for advancing Text2SQL technology and unlocking its full potential for empowering users to access and analyze data more effectively.\",\n",
       " '\\n## 2.1 Early Developments\\n\\nThe early approaches to Text2SQL can be categorized into rule-based systems and grammar-based methods. Rule-based systems, such as the one proposed by Hendrix et al. (1978), relied on handcrafted rules to map natural language questions to SQL queries. For example, PGTune [1] makes configuration recommendations by asking users for basic information about the Postgres database they are using and the details about their hardware environment. Note that the information of the Postgres’ version and the number of CPUs affects the setting of the knobs because a new version will introduce new knobs. For versions below 9.5, max_worker_processes is not available. Similarly, max_parallel_workers_per_gather supports versions higher than 9.5, and max_parallel_workers supports v10 and higher versions. The setting of the knob values also follows the rules. These rules include that the values of max_worker_processes and max_parallel_workers are equal to the number of CPUs and the value of max_parallel_workers_per_gather is half the number of CPUs. These systems were limited in their ability to handle complex queries and required extensive manual effort to create and maintain the rules. Grammar-based methods, like the one developed by Giordani and Moschitti (2012), used generative parsers to translate questions into SQL queries. While these methods offered some flexibility, they still struggled with the inherent complexity and ambiguity of natural language. For instance, the approach necessitates meticulous prompt design to generate exercises, which inevitably entails human intervention, introducing potential bias or variability and may not scale efficiently. Additionally, the quality and correctness of the generated problems are not explicitly addressed, and the current framework relies on a source problem for exercise generation, limiting flexibility and robustness. 
Furthermore, the handling of ambiguity in natural language is a significant challenge, as models often fail to capture the distribution of possible meanings without deliberate instruction.',\n",
       " '\\n## 2.2 Deep Learning Era\\n\\nThe advent of deep learning brought about a paradigm shift in the field of Text2SQL, enabling the construction of several large text-to-SQL datasets, such as WikiSQL (Zhong et al., 2017) and Spider (Yu et al., 2018), and achieving unprecedented performance in recent years (Rubin and Berant, 2021; Wang et al., 2020a; Scholak et al., 2021; Yu et al., 2020; Hwang et al., 2019). Neural network-based models, particularly sequence-to-sequence models, demonstrated remarkable improvements in translation accuracy and generalization capabilities. Notable examples include Seq2SQL (Zhong et al., 2017), which employed reinforcement learning to generate SQL queries, and RATSQL (Wang et al., 2020a), which introduced a relation-aware self-attention mechanism to better encode the relationships between columns and tables. These models leveraged the power of deep learning to capture the complexities of natural language and database schemas, leading to more accurate and robust Text2SQL systems.\\n\\nFurthermore, the integration of large language models (LLMs) like BERT (Devlin et al., 2019) and GPT (Radford et al., 2018) into Text2SQL further pushed the boundaries of performance. These pre-trained models, fine-tuned on Text2SQL tasks, demonstrated superior understanding of language semantics and context, resulting in more accurate query generation. For instance, Grappa (Yu et al., 2020) combined grammar-augmented pre-training with table semantic parsing, showcasing the potential of LLMs in Text2SQL.\\n\\nThe deep learning era also witnessed the emergence of interactive Text2SQL systems, which aimed to address the ambiguity inherent in natural language queries. These systems, such as the one proposed by Li et al. (2020), employed parser-independent interactive approaches to enhance query understanding and disambiguation. 
By engaging users in a step-by-step dialogue, these systems could clarify ambiguities and generate more accurate SQL queries.\\n\\nIn summary, the deep learning era marked a significant leap forward in Text2SQL technology. The integration of neural networks, LLMs, and interactive systems revolutionized the field, leading to more accurate, robust, and user-friendly Text2SQL systems.',\n",
       " '\\n## 2.3 Large Language Models\\n\\nThe integration of large language models (LLMs) into Text2SQL has significantly advanced the field. LLMs, such as BERT (Devlin et al., 2019) and GPT (Radford et al., 2018), have demonstrated remarkable capabilities in understanding language semantics and context, leading to more accurate and robust query generation. Fine-tuning these pre-trained models on Text2SQL tasks has proven to be highly effective, as evidenced by the success of models like Grappa (Yu et al., 2020), which combines grammar-augmented pre-training with table semantic parsing. The use of LLMs has also enabled the development of more user-friendly and interactive Text2SQL systems, which can better handle the ambiguities inherent in natural language queries. For example, the system proposed by Li et al. (2020) employs a parser-independent interactive approach to enhance query understanding and disambiguation through step-by-step dialogue with the user. Overall, the integration of LLMs into Text2SQL has opened up new avenues for research and development, paving the way for more sophisticated and powerful natural language interfaces to databases.']"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "sections = [section for section in sections if len(section) > 100]\n",
    "sections = sections[:4]\n",
    "sections"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "def write_result(topic,result):\n",
    "    import json\n",
    "    a = {}\n",
    "    a[topic] = result\n",
    "    with open(\"result.json\",\"a\",encoding=\"utf-8\") as f:\n",
    "        json.dump(a,f,ensure_ascii=False)\n",
    "        f.write(\"\\n\")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# FIND_STATEMENT"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "from nltk.tokenize  import sent_tokenize\n",
    "def sentence_tokenize(text):\n",
    "    sentences = sent_tokenize(text)\n",
    "    new_sections = \"\"\n",
    "    for sen_id,sentence in enumerate(sentences):\n",
    "        sentence = sentence.replace(\"\\n\",\".\")\n",
    "        new_sections += f\"sen_id:{sen_id}\\nsentence_text:{sentence}\\n\"\n",
    "    return new_sections"
   ]
  },
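  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Quick sanity check of sentence_tokenize (a sketch; assumes NLTK's punkt\n",
    "# sentence model is available — run nltk.download('punkt') once if it is not).\n",
    "print(sentence_tokenize(\"Text2SQL maps questions to SQL. Ambiguity makes this hard.\"))"
   ]
  },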
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "  0%|          | 0/1 [00:00<?, ?it/s]"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|██████████| 1/1 [01:26<00:00, 86.65s/it]\n"
     ]
    }
   ],
   "source": [
    "from research_agent.core.general_llm import LLM\n",
    "from research_agent.core.config import Config\n",
    "from pyaml_env import parse_config\n",
    "configs = parse_config(Config.YAML_CONFIG)\n",
    "llm = LLM(config=configs[Config.DEFAULT_MODEL])\n",
    "\n",
    "import json\n",
    "from typing import List\n",
    "from pathlib import Path\n",
    "\n",
    "import json_repair\n",
    "from jinja2 import Environment\n",
    "from research_agent.core.query import Query\n",
    "from research_agent.core.general_llm import LLM\n",
    "from research_agent.core.config import Config\n",
    "from pyaml_env import parse_config\n",
    "\n",
    "class FindStatementCitation:\n",
    "    def __init__(self):\n",
    "        # 解析配置文件，加载默认模型配置\n",
    "        configs = parse_config(Config.YAML_CONFIG)\n",
    "        self.llm = LLM(config=configs[Config.DEFAULT_MODEL])\n",
    "        self.query = Query()\n",
    "\n",
    "        # 获取当前文件所在目录的路径并找到 prompts 文件夹\n",
    "        base_path = r\"D:\\GoodStudy\\FX15\\FX15H\\final_work\\FX15_research_agent\\summary-generation-match\\research_agent\\core\\prompts\"\n",
    "\n",
    "        # 修改文件路径的获取方式，加载用于查找语句引用的 Jinja 模板\n",
    "        find_statement_citation_prompt_file = base_path + r\"\\find_statements.jinja\"\n",
    "        with open(find_statement_citation_prompt_file, \"r\", encoding=\"utf-8\") as f:\n",
    "            # 使用 Jinja2 环境加载模板\n",
    "            self.find_statement_citation_prompt_template = Environment().from_string(f.read())\n",
    "\n",
    "    async def find_statement_citation(\n",
    "        self, topic: str, section: str\n",
    "    ):\n",
    "        \"\"\"\n",
    "        调用模型生成对给定主题和调查草稿的回答，查找相关的语句引用\n",
    "\n",
    "        Args:\n",
    "            topic: 研究的主题\n",
    "            section: 调查草稿内容\n",
    "\n",
    "        Returns:\n",
    "            response[\"statements\"]: 模型返回的引用语句\n",
    "        \"\"\"\n",
    "        # 准备输入给模型的提示信息\n",
    "        prompt_messages = self._prepare_find_statement_citation_prompt(\n",
    "            topic, section\n",
    "        )\n",
    "        response = await self.llm.completion(prompt_messages)\n",
    "        response = json_repair.loads(response)\n",
    "        return response[\"statements\"]\n",
    "\n",
    "    def _prepare_find_statement_citation_prompt(\n",
    "        self, topic: str, section: str\n",
    "    ):\n",
    "        \"\"\"\n",
    "        准备生成查找语句引用的模型提示消息\n",
    "\n",
    "        Args:\n",
    "            topic: 研究的主题\n",
    "            section: 调查草稿内容\n",
    "\n",
    "        Returns:\n",
    "            一个列表，包含系统和用户的提示信息\n",
    "        \"\"\"\n",
    "        system_prompt = self.find_statement_citation_prompt_template.render(\n",
    "            role=\"system\",\n",
    "            new_sections=section,\n",
    "            topic=topic\n",
    "        )\n",
    "        user_prompt = self.find_statement_citation_prompt_template.render(\n",
    "            role=\"user\",\n",
    "        )\n",
    "        return [\n",
    "            {\"role\": \"system\", \"content\": system_prompt},\n",
    "            {\"role\": \"user\", \"content\": user_prompt},\n",
    "        ]\n",
    "\n",
    "find_statementer = FindStatementCitation()\n",
    "# 添加并发控制\n",
    "from asyncio import Semaphore\n",
    "import asyncio\n",
    "\n",
    "\n",
    "# 设置最大并发数\n",
    "MAX_CONCURRENT = 5\n",
    "semaphore = Semaphore(MAX_CONCURRENT)\n",
    "\n",
    "async def process_section(section, find_statementer, topic):\n",
    "    \"\"\"处理单个section的异步函数\"\"\"\n",
    "    async with semaphore:  # 使用信号量控制并发\n",
    "        try:\n",
    "            new_section = sentence_tokenize(section)\n",
    "            find_statements_result = await find_statementer.find_statement_citation(topic, new_section)\n",
    "            return find_statements_result\n",
    "        except Exception as e:\n",
    "            print(f\"处理section时出错: {str(e)}\")\n",
    "            return None\n",
    "\n",
    "from tqdm import tqdm\n",
    "\n",
    "async def process_all_sections(sections, find_statementer, topic):\n",
    "    \"\"\"带进度条的并行处理\"\"\"\n",
    "    tasks = [process_section(section, find_statementer, topic) for section in sections]\n",
    "    \n",
    "    # 使用tqdm显示进度\n",
    "    results = []\n",
    "    for f in tqdm(asyncio.as_completed(tasks), total=len(tasks)):\n",
    "        result = await f\n",
    "        results.append(result)\n",
    "    \n",
    "    return [r for r in results if r is not None]\n",
    "# 使用方式\n",
    "results = await process_all_sections(sections[:1], find_statementer, topic)"
   ]
  },
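  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optionally persist the extracted statements as one JSON line per topic,\n",
    "# using the write_result helper defined earlier (appends to result.json).\n",
    "write_result(topic, results)"
   ]
  },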
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[[{'statement': 'Research on Text-to-SQL generation has traditionally focused on scenarios where each natural language question is associated with one correct SQL query.',\n",
       "   'related_sen_id': [0],\n",
       "   'statement_hyde': 'Traditional research in the domain of Text-to-SQL generation has predominantly concentrated on scenarios wherein a single natural language query corresponds to a unique, correct SQL query, as documented in [Reference].'},\n",
       "  {'statement': 'However, real-world databases often exhibit significant ambiguity in natural language queries due to overlapping schema names, multiple relationship paths, and other factors.',\n",
       "   'related_sen_id': [1],\n",
       "   'statement_hyde': 'Contrary to idealized scenarios, real-world databases frequently present substantial ambiguity in natural language queries, stemming from overlapping schema names, multiple relationship paths, and various other contributing factors, as highlighted in [Reference].'},\n",
       "  {'statement': 'This ambiguity can lead to multiple SQL queries that can produce the correct answer, yet most benchmarks only provide one query among the many possible correct answers.',\n",
       "   'related_sen_id': [4],\n",
       "   'statement_hyde': 'Such ambiguity can result in the existence of multiple SQL queries that yield correct answers, although the majority of existing benchmarks typically furnish only a single query from the numerous plausible correct alternatives, as noted in [Reference].'},\n",
       "  {'statement': \"This ambiguity poses a challenge for existing Text-to-SQL systems, which struggle to generate accurate and diverse SQL queries that capture all possible interpretations of the user's intent.\",\n",
       "   'related_sen_id': [5],\n",
       "   'statement_hyde': \"The aforementioned ambiguity presents a significant challenge to current Text-to-SQL systems, which often encounter difficulties in generating both accurate and diverse SQL queries that encompass all potential interpretations of the user's intent, as discussed in [Reference].\"},\n",
       "  {'statement': 'To address this gap, we develop a novel benchmark called AmbiQT, which consists of over 3000 examples where each natural language query is interpretable as two plausible SQL queries due to lexical and/or structural ambiguity.',\n",
       "   'related_sen_id': [6],\n",
       "   'statement_hyde': 'To bridge this gap, we introduce a novel benchmark, termed AmbiQT, comprising over 3000 examples wherein each natural language query can be interpreted as two viable SQL queries owing to lexical and/or structural ambiguity, as detailed in [Reference].'},\n",
       "  {'statement': 'Benchmarks like PredBench have limitations in terms of training, benchmarking, and evaluation.',\n",
       "   'related_sen_id': [10],\n",
       "   'statement_hyde': 'Benchmarks such as PredBench exhibit notable limitations in the realms of training, benchmarking, and evaluation processes, as evidenced in [Reference].'},\n",
       "  {'statement': 'For instance, training limitations include the constraint on model architecture and size, while benchmark limitations involve the limited number of methods and the need for further calibration of dataset protocols.',\n",
       "   'related_sen_id': [11],\n",
       "   'statement_hyde': 'For example, training limitations are characterized by constraints on model architecture and size, whereas benchmark limitations are associated with a restricted number of methods and the necessity for additional calibration of dataset protocols, as outlined in [Reference].'},\n",
       "  {'statement': 'Evaluation limitations are observed in the small and homogenous sample of human evaluators and the lack of diverse evaluation approaches and metrics.',\n",
       "   'related_sen_id': [12],\n",
       "   'statement_hyde': 'Evaluation limitations are evident in the utilization of a small and homogenous sample of human evaluators, coupled with the absence of diverse evaluation methodologies and metrics, as identified in [Reference].'},\n",
       "  {'statement': 'To address these limitations, future studies could explore more evaluation methods, improve the diversity and size of participants, and investigate the impact of various hyperparameters on model performance.',\n",
       "   'related_sen_id': [13],\n",
       "   'statement_hyde': 'To mitigate these limitations, future research endeavors could investigate additional evaluation methods, enhance the diversity and size of participant pools, and examine the influence of diverse hyperparameters on model performance, as suggested in [Reference].'},\n",
       "  {'statement': 'Additionally, incorporating indicators of attack failures could help in debugging faulty evaluations and lead to fairer assessments.',\n",
       "   'related_sen_id': [14],\n",
       "   'statement_hyde': 'Furthermore, the integration of indicators of attack failures may facilitate the debugging of erroneous evaluations, thereby contributing to more equitable assessments, as proposed in [Reference].'},\n",
       "  {'statement': \"Furthermore, benchmarks could benefit from the inclusion of economic rationality assessments to evaluate models' ability to exhibit rational behavior in economic tasks.\",\n",
       "   'related_sen_id': [15],\n",
       "   'statement_hyde': \"Moreover, the incorporation of economic rationality assessments into benchmarks could prove beneficial for evaluating models' capacity to demonstrate rational behavior in economic tasks, as argued in [Reference].\"},\n",
       "  {'statement': 'We also explore the integration of Text-to-SQL with other NLP tasks, such as question answering and information extraction, to create more powerful and versatile systems.',\n",
       "   'related_sen_id': [16],\n",
       "   'statement_hyde': 'Additionally, we investigate the integration of Text-to-SQL with other natural language processing tasks, including question answering and information extraction, with the aim of developing more robust and versatile systems, as explored in [Reference].'},\n",
       "  {'statement': 'For example, the AmbiQT benchmark addresses ambiguity in SQL by encompassing four types of ambiguity, including column ambiguity, table ambiguity, join ambiguity, and precomputed aggregates.',\n",
       "   'related_sen_id': [17],\n",
       "   'statement_hyde': 'For instance, the AmbiQT benchmark tackles SQL ambiguity by incorporating four distinct types of ambiguity: column ambiguity, table ambiguity, join ambiguity, and precomputed aggregates, as described in [Reference].'},\n",
       "  {'statement': 'Furthermore, the work on interactive Text-to-SQL generation proposes a new interaction mechanism for users to validate and refine generated queries through step-by-step explanations, which can be extended to support multi-turn SQL generation by incorporating the contextual information of previous queries into explanation generation and text-to-clause generation.',\n",
       "   'related_sen_id': [18],\n",
       "   'statement_hyde': 'Moreover, research on interactive Text-to-SQL generation introduces a novel interaction mechanism enabling users to validate and refine generated queries via step-by-step explanations, a method that can be extended to support multi-turn SQL generation by integrating the contextual information from prior queries into both explanation generation and text-to-clause generation processes, as detailed in [Reference].'},\n",
       "  {'statement': \"Additionally, the exploration of chain-of-thought style prompting for Text-to-SQL aims to enhance LLMs' reasoning ability by systematically exploring CoT style prompting for text-to-SQL parsing, addressing complex, multistep reasoning required for the task.\",\n",
       "   'related_sen_id': [19],\n",
       "   'statement_hyde': \"Furthermore, the examination of chain-of-thought style prompting in the context of Text-to-SQL seeks to augment large language models' reasoning capabilities through a systematic exploration of CoT style prompting for text-to-SQL parsing, thereby addressing the intricate, multistep reasoning demands of the task, as discussed in [Reference].\"},\n",
       "  {'statement': 'Additionally, we address the ethical considerations associated with Text2SQL technology, particularly in sensitive domains, and discuss strategies for mitigating bias and ensuring fairness.',\n",
       "   'related_sen_id': [20],\n",
       "   'statement_hyde': 'Moreover, we delve into the ethical considerations pertinent to Text2SQL technology, especially within sensitive domains, and articulate strategies for bias mitigation and the assurance of fairness, as examined in [Reference].'},\n",
       "  {'statement': 'Finally, we identify promising future research directions for advancing Text2SQL technology and unlocking its full potential for empowering users to access and analyze data more effectively.',\n",
       "   'related_sen_id': [21],\n",
       "   'statement_hyde': 'In conclusion, we pinpoint promising avenues for future research aimed at advancing Text2SQL technology, thereby unlocking its comprehensive potential to empower users in accessing and analyzing data more effectively, as proposed in [Reference].'}]]"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "results"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Research on Text-to-SQL generation has traditionally focused on scenarios where each natural language question is associated with one correct SQL query.\n",
      "Traditional research in the domain of Text-to-SQL generation has predominantly concentrated on scenarios wherein a single natural language query corresponds to a unique, correct SQL query, as documented in [Reference].\n",
      "[0]\n",
      "However, real-world databases often exhibit significant ambiguity in natural language queries due to overlapping schema names, multiple relationship paths, and other factors.\n",
      "Contrary to idealized scenarios, real-world databases frequently present substantial ambiguity in natural language queries, stemming from overlapping schema names, multiple relationship paths, and various other contributing factors, as highlighted in [Reference].\n",
      "[1]\n",
      "This ambiguity can lead to multiple SQL queries that can produce the correct answer, yet most benchmarks only provide one query among the many possible correct answers.\n",
      "Such ambiguity can result in the existence of multiple SQL queries that yield correct answers, although the majority of existing benchmarks typically furnish only a single query from the numerous plausible correct alternatives, as noted in [Reference].\n",
      "[4]\n",
      "This ambiguity poses a challenge for existing Text-to-SQL systems, which struggle to generate accurate and diverse SQL queries that capture all possible interpretations of the user's intent.\n",
      "The aforementioned ambiguity presents a significant challenge to current Text-to-SQL systems, which often encounter difficulties in generating both accurate and diverse SQL queries that encompass all potential interpretations of the user's intent, as discussed in [Reference].\n",
      "[5]\n",
      "To address this gap, we develop a novel benchmark called AmbiQT, which consists of over 3000 examples where each natural language query is interpretable as two plausible SQL queries due to lexical and/or structural ambiguity.\n",
      "To bridge this gap, we introduce a novel benchmark, termed AmbiQT, comprising over 3000 examples wherein each natural language query can be interpreted as two viable SQL queries owing to lexical and/or structural ambiguity, as detailed in [Reference].\n",
      "[6]\n",
      "Benchmarks like PredBench have limitations in terms of training, benchmarking, and evaluation.\n",
      "Benchmarks such as PredBench exhibit notable limitations in the realms of training, benchmarking, and evaluation processes, as evidenced in [Reference].\n",
      "[10]\n",
      "For instance, training limitations include the constraint on model architecture and size, while benchmark limitations involve the limited number of methods and the need for further calibration of dataset protocols.\n",
      "For example, training limitations are characterized by constraints on model architecture and size, whereas benchmark limitations are associated with a restricted number of methods and the necessity for additional calibration of dataset protocols, as outlined in [Reference].\n",
      "[11]\n",
      "Evaluation limitations are observed in the small and homogenous sample of human evaluators and the lack of diverse evaluation approaches and metrics.\n",
      "Evaluation limitations are evident in the utilization of a small and homogenous sample of human evaluators, coupled with the absence of diverse evaluation methodologies and metrics, as identified in [Reference].\n",
      "[12]\n",
      "To address these limitations, future studies could explore more evaluation methods, improve the diversity and size of participants, and investigate the impact of various hyperparameters on model performance.\n",
      "To mitigate these limitations, future research endeavors could investigate additional evaluation methods, enhance the diversity and size of participant pools, and examine the influence of diverse hyperparameters on model performance, as suggested in [Reference].\n",
      "[13]\n",
      "Additionally, incorporating indicators of attack failures could help in debugging faulty evaluations and lead to fairer assessments.\n",
      "Furthermore, the integration of indicators of attack failures may facilitate the debugging of erroneous evaluations, thereby contributing to more equitable assessments, as proposed in [Reference].\n",
      "[14]\n",
      "Furthermore, benchmarks could benefit from the inclusion of economic rationality assessments to evaluate models' ability to exhibit rational behavior in economic tasks.\n",
      "Moreover, the incorporation of economic rationality assessments into benchmarks could prove beneficial for evaluating models' capacity to demonstrate rational behavior in economic tasks, as argued in [Reference].\n",
      "[15]\n",
      "We also explore the integration of Text-to-SQL with other NLP tasks, such as question answering and information extraction, to create more powerful and versatile systems.\n",
      "Additionally, we investigate the integration of Text-to-SQL with other natural language processing tasks, including question answering and information extraction, with the aim of developing more robust and versatile systems, as explored in [Reference].\n",
      "[16]\n",
      "For example, the AmbiQT benchmark addresses ambiguity in SQL by encompassing four types of ambiguity, including column ambiguity, table ambiguity, join ambiguity, and precomputed aggregates.\n",
      "For instance, the AmbiQT benchmark tackles SQL ambiguity by incorporating four distinct types of ambiguity: column ambiguity, table ambiguity, join ambiguity, and precomputed aggregates, as described in [Reference].\n",
      "[17]\n",
      "Furthermore, the work on interactive Text-to-SQL generation proposes a new interaction mechanism for users to validate and refine generated queries through step-by-step explanations, which can be extended to support multi-turn SQL generation by incorporating the contextual information of previous queries into explanation generation and text-to-clause generation.\n",
      "Moreover, research on interactive Text-to-SQL generation introduces a novel interaction mechanism enabling users to validate and refine generated queries via step-by-step explanations, a method that can be extended to support multi-turn SQL generation by integrating the contextual information from prior queries into both explanation generation and text-to-clause generation processes, as detailed in [Reference].\n",
      "[18]\n",
      "Additionally, the exploration of chain-of-thought style prompting for Text-to-SQL aims to enhance LLMs' reasoning ability by systematically exploring CoT style prompting for text-to-SQL parsing, addressing complex, multistep reasoning required for the task.\n",
      "Furthermore, the examination of chain-of-thought style prompting in the context of Text-to-SQL seeks to augment large language models' reasoning capabilities through a systematic exploration of CoT style prompting for text-to-SQL parsing, thereby addressing the intricate, multistep reasoning demands of the task, as discussed in [Reference].\n",
      "[19]\n",
      "Additionally, we address the ethical considerations associated with Text2SQL technology, particularly in sensitive domains, and discuss strategies for mitigating bias and ensuring fairness.\n",
      "Moreover, we delve into the ethical considerations pertinent to Text2SQL technology, especially within sensitive domains, and articulate strategies for bias mitigation and the assurance of fairness, as examined in [Reference].\n",
      "[20]\n",
      "Finally, we identify promising future research directions for advancing Text2SQL technology and unlocking its full potential for empowering users to access and analyze data more effectively.\n",
      "In conclusion, we pinpoint promising avenues for future research aimed at advancing Text2SQL technology, thereby unlocking its comprehensive potential to empower users in accessing and analyzing data more effectively, as proposed in [Reference].\n",
      "[21]\n"
     ]
    }
   ],
   "source": [
    "for r in results[0]:\n",
    "    print(r[\"statement\"])\n",
    "    print(r[\"statement_hyde\"])\n",
    "    print(r[\"related_sen_id\"])\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 加载先前找到的statement（每一章节找一个statement）\n",
    "import json\n",
    "with open(\"statements.json\",\"r\",encoding=\"utf-8\") as f:\n",
    "    statements = json.load(f)\n",
    "new_statement = []\n",
    "for i in statements:\n",
    "    len_statement = [len(ii[\"statement\"]) for ii in i]\n",
    "    # 按照长度找到最长的statement\n",
    "    max_len_statement = max(len_statement)\n",
    "    # 按照长度找到最长的statement的index    \n",
    "    max_len_statement_index = len_statement.index(max_len_statement)\n",
    "    new_statement.append(i[max_len_statement_index])\n",
    "\n",
    "new_statement"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "statement = \"It has been posited that addressing the identified challenges and limitations will significantly advance the field of MLLM editing, thereby unlocking the full potential of these models across diverse applications.\"\n",
    "# Creating a list of the provided paper descriptions\n",
    "\n",
    "papers_info = [\n",
    "    {\n",
    "        \"title\": \"Dynamic Parameter Editing for Multimodal Large Language Models\",\n",
    "        \"authors\": \"Zhang et al. (2023)\",\n",
    "        \"relevance\": 0.95,\n",
    "        \"abstract\": \"This paper proposes an incremental parameter modification framework specifically designed for MLLMs, achieving 89% editing accuracy while preserving 92% of original model capabilities. Our method introduces a multimodal alignment validator that dynamically adjusts editing operations across visual-textual modalities. Experimental results on 12 downstream tasks demonstrate significant improvements in model adaptability for cross-domain applications. The proposed technique effectively addresses catastrophic forgetting issues in existing editing approaches...\"\n",
    "    },\n",
    "    {\n",
    "        \"title\": \"Cross-Modal Editing Consistency in Vision-Language Models\",\n",
    "        \"authors\": \"Gupta & Schmidt (2024)\",\n",
    "        \"relevance\": 0.90,\n",
    "        \"abstract\": \"Developing a novel evaluation metric (CMEC-Score) to quantify editing consistency across 7 multimodal dimensions. The study reveals current methods have 37% inconsistency rate in cross-modal outputs after factual editing. Our contrastive alignment module reduces this to 9% through dynamic attention recalibration...\"\n",
    "    },\n",
    "    {\n",
    "        \"title\": \"Temporal Knowledge Editing in MLLMs for Robotics Applications\",\n",
    "        \"authors\": \"Robotics AI Team, MIT (2023)\",\n",
    "        \"relevance\": 0.88,\n",
    "        \"abstract\": \"Presenting a temporal editing framework enabling continuous knowledge updates for robotic control systems. Implemented through differentiable memory banks and temporal attention gates, achieving 5x faster knowledge refresh rates compared to baseline methods. Field tests on industrial robots show 78% improvement in task adaptation efficiency...\"\n",
    "    },\n",
    "    {\n",
    "        \"title\": \"Memory-Augmented Editing for Multimodal Reasoning\",\n",
    "        \"authors\": \"Chen et al. (ICLR 2024)\",\n",
    "        \"relevance\": 0.85,\n",
    "        \"abstract\": \"Introducing a memory-augmented architecture with 3 specialized memory banks: factual (500K slots), procedural (200K slots), and contextual (100K slots). The editing process activates relevant memory units through cross-attention gates, achieving 83% factual accuracy improvement on medical diagnosis benchmarks. Particularly effective in handling conflicting information scenarios, reducing error propagation by 67% compared to standard fine-tuning approaches. Evaluation on 5 clinical decision support tasks shows average F1-score improvement from 0.72 to 0.89.\"\n",
    "    },\n",
    "    {\n",
    "        \"title\": \"Diffusion-Based Model Editing for Visual Concepts\",\n",
    "        \"authors\": \"NVIDIA Research Team (2023)\",\n",
    "        \"relevance\": 0.82,\n",
    "        \"abstract\": \"Leveraging diffusion models to edit visual semantics in MLLMs while maintaining text consistency. Our method achieves 41% CLIP score improvement on counterfactual image generation tasks. The two-stage process involves: 1) Latent space purification (denoising 85% irrelevant features) 2) Cross-modal alignment fine-tuning. Testing with 1,200 novel visual concepts shows 78% editing success rate versus 52% in baseline methods.\"\n",
    "    },\n",
    "    {\n",
    "        \"title\": \"Federated Model Editing Across Multimodal Devices\",\n",
    "        \"authors\": \"Samsung AI Center (2024)\",\n",
    "        \"relevance\": 0.79,\n",
    "        \"abstract\": \"Implementing decentralized editing across 15+ device types (mobile/AR/robotics) using dynamic knowledge distillation. The federated consensus algorithm achieves 94% parameter synchronization accuracy with 200ms latency. Real-world deployment in smart factory environments demonstrates 63% reduction in model update cycles while maintaining 99.2% system uptime.\"\n",
    "    },\n",
    "    {\n",
    "        \"title\": \"Causal Intervention for Multimodal Hallucination Control\",\n",
    "        \"authors\": \"Tsinghua University (NeurIPS 2023)\",\n",
    "        \"relevance\": 0.76,\n",
    "        \"abstract\": \"Developing causal graphs with 12 intervention nodes to identify and correct hallucination sources. Our counterfactual testing framework reduces multimodal hallucination rate from 18.3% to 4.7% on GPT-4V. Key innovation includes intervention confidence scores (ICS) that quantify editing impact, achieving 0.89 Spearman correlation with human evaluation.\"\n",
    "    },\n",
    "    {\n",
    "        \"title\": \"In-Context Editing with Multimodal Prompts\",\n",
    "        \"authors\": \"Google DeepMind (2024)\",\n",
    "        \"relevance\": 0.73,\n",
    "        \"abstract\": \"Enabling real-time editing through hybrid prompting (text+image+audio). The context-aware parser processes 8 modality combinations with 150ms latency. Large-scale A/B testing (N=12K users) shows 39% task completion speed improvement and 22% error rate reduction in virtual assistant applications.\"\n",
    "    },\n",
    "    {\n",
    "        \"title\": \"Lifelong Editing via Dynamic Network Expansion\",\n",
    "        \"authors\": \"Stanford HAI (2023)\",\n",
    "        \"relevance\": 0.70,\n",
    "        \"abstract\": \"Proposing Mixture-of-Experts architecture with 32 parallel editing pathways. The gating network dynamically allocates editing operations, achieving 98.7% backward compatibility. Continual learning evaluation on 5-year medical literature shows only 3.2% knowledge degradation per year compared to 12.8% in conventional methods.\"\n",
    "    },\n",
    "    {\n",
    "        \"title\": \"Adversarial Robustness in Model Editing Systems\",\n",
    "        \"authors\": \"ETH Zurich (CVPR 2024)\",\n",
    "        \"relevance\": 0.67,\n",
    "        \"abstract\": \"Establishing robustness benchmarks with 15 attack vectors specific to MLLM editing. Our defense framework combines gradient masking (85% attack detection rate) and semantic consistency checks (92% recovery accuracy). Stress testing reveals current systems have 43% vulnerability rate against sophisticated multimodal attacks.\"\n",
    "    },\n",
    "    {\n",
    "        \"title\": \"Quantum-Inspired Optimization for Editing Efficiency\",\n",
    "        \"authors\": \"Microsoft Research Asia (2024)\",\n",
    "        \"relevance\": 0.64,\n",
    "        \"abstract\": \"Applying quantum annealing algorithms to solve NP-hard editing path optimization. The Q-Bit encoding scheme reduces computational complexity from O(n²) to O(n log n), enabling real-time editing on 50B+ parameter models. Benchmarking shows 7.9x speedup on industrial-scale knowledge graphs with 10M+ nodes.\"\n",
    "    },\n",
    "    {\n",
    "        \"title\": \"Cross-Lingual Editing in Low-Resource Languages\",\n",
    "        \"authors\": \"Meta AI (ACL 2024)\",\n",
    "        \"relevance\": 0.61,\n",
    "        \"abstract\": \"Developing a phoneme-aware editing framework supporting 87 languages. The universal semantic encoder achieves 0.78 BLEU score for rare language (speaker <10K) editing. Field trials in Southeast Asian dialects demonstrate 63% accuracy improvement over translation-based approaches with 200x less training data.\"\n",
    "    },\n",
    "    {\n",
    "        \"title\": \"Neurosymbolic Editing Rule Extraction\",\n",
    "        \"authors\": \"IBM Research (AAAI 2024)\",\n",
    "        \"relevance\": 0.58,\n",
    "        \"abstract\": \"Combining neural networks with symbolic reasoners to generate human-readable editing rules (average 5.2 logical clauses per edit). The hybrid system achieves 89% interpretability score from domain experts while maintaining 93% model performance. Demonstrated on regulatory compliance editing tasks in finance sector.\"\n",
    "    },\n",
    "    {\n",
    "        \"title\": \"Personalized Editing via User Embedding\",\n",
    "        \"authors\": \"Amazon Science (2023)\",\n",
    "        \"relevance\": 0.55,\n",
    "        \"abstract\": \"Learning 512-D user preference vectors that condition editing operations. Multi-task training on 1.2M user interactions achieves 38% personalization accuracy. Deployed in recommendation systems, increasing user engagement time by 25% through contextualized knowledge updates.\"\n",
    "    },\n",
    "    {\n",
    "        \"title\": \"Energy-Efficient Editing on Edge Devices\",\n",
    "        \"authors\": \"Qualcomm Technologies (2024)\",\n",
    "        \"relevance\": 0.52,\n",
    "        \"abstract\": \"Implementing sparse editing operations with 8-bit quantization. Our chip-level optimization reduces energy consumption by 76% (from 3.2W to 0.75W) during on-device editing. Testing on Snapdragon 8 Gen 3 demonstrates real-time operation (60 FPS) for AR applications with <1ms latency.\"\n",
    "    },\n",
    "    {\n",
    "        \"title\": \"Cognitive Load Evaluation in Model Editing\",\n",
    "        \"authors\": \"MIT Cognitive Science (2024)\",\n",
    "        \"relevance\": 0.50,\n",
    "        \"abstract\": \"Developing CLAM metric (Cognitive Load Assessment Metric) through fMRI studies (N=120). Results show complex editing interfaces increase prefrontal cortex activation by 143%. Proposed simplification guidelines reduce user error rate from 29% to 11% in non-expert editing scenarios.\"\n",
    "    },\n",
    "    {\n",
    "        \"title\": \"Single-Modal Language Model Editing via Gradient Constraint\",\n",
    "        \"authors\": \"Tanaka et al. (2022)\",\n",
    "        \"relevance\": 0.45,\n",
    "        \"abstract\": \"Proposing gradient-based editing constraints for pure text-based LLMs, demonstrating 62% editing success rate on Wikipedia fact correction tasks. While not directly applicable to multimodal scenarios, this work establishes fundamental theoretical boundaries for parameter modification operations...\"\n",
    "    },\n",
    "    {\n",
    "        \"title\": \"Cognitive Science Perspectives on AI Model Updating\",\n",
    "        \"authors\": \"Cognitive Computing Lab (2021)\",\n",
    "        \"relevance\": 0.40,\n",
    "        \"abstract\": \"Analyzing human knowledge update mechanisms through cognitive experiments, proposing 7 principles for sustainable AI learning systems. Although focused on biological intelligence, these insights provide metaphorical frameworks for machine learning model editing paradigms...\"\n",
    "    },\n",
    "    {\n",
    "        \"title\": \"Ethical Considerations in Neural Network Editing\",\n",
    "        \"authors\": \"AI Ethics Consortium (2023)\",\n",
    "        \"relevance\": 0.35,\n",
    "        \"abstract\": \"Developing an ethical assessment matrix for model editing operations, identifying 12 potential risk factors including truthfulness degradation and hidden bias propagation. While not addressing technical implementations, this work establishes crucial ethical guidelines for editing research...\"\n",
    "    }\n",
    "]\n",
    "\n",
    "papers = [p[\"abstract\"] for p in papers_info]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "statement = \"有人提出，解决已识别的挑战和局限性将显著推动多模态大型语言模型（MLLM）编辑领域的发展，从而释放这些模型在各类应用中的全部潜力。\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [],
   "source": [
    "\n",
    "papers_info = [\n",
    "    {\n",
    "        \"relevance\": 2,\n",
    "        \"papers_abstract\": \"大型语言模型（LLMs）已经彻底改变了自然语言处理，但它们在需要精确、上下文感知修改的直接文本编辑任务上仍然存在困难。尽管像 ChatGPT 这样的模型在文本生成和分析方面表现出色，但其编辑能力通常较弱，往往只能解决表层问题，而无法处理更深层次的结构或逻辑不一致问题。本研究提出了一种双重方法来提升 LLM 的编辑性能。首先，我们提出了 InstrEditBench，一个高质量的基准数据集，包含 20,000 多个结构化编辑任务，涵盖维基百科文章、LaTeX 文档、代码和数据库领域特定语言（DSL）。InstrEditBench 采用创新的自动化流程生成，确保编辑严格符合指定的指令，同时不改变无关内容。其次，我们提出了 FineEdit，一个基于该数据集训练的专用模型。实验结果表明，与 Gemini 相比，FineEdit 在直接编辑任务上提升了约 10% 的性能，有力地验证了其有效性。\"\n",
    "    },\n",
    "    {\n",
    "        \"relevance\": 8,\n",
    "        \"papers_abstract\": \"当前的指令驱动图像编辑方法（如 InstructPix2Pix）依赖于扩散模型中的 CLIP 文本编码器，在处理复杂场景时往往难以获得理想效果。为此，本文提出 SmartEdit，一种基于指令的创新图像编辑方法，利用多模态大型语言模型（MLLMs）增强其理解和推理能力。然而，直接集成 MLLM 仍然在复杂推理场景下面临挑战。为缓解这一问题，我们提出 双向交互模块（BIM），使输入图像与 MLLM 输出之间能够进行全面的双向信息交互。在训练过程中，我们首先引入感知数据以提升扩散模型的感知和理解能力，然后展示了少量复杂指令编辑数据如何有效激发 SmartEdit 处理复杂指令的能力。此外，我们构建了 Reason-Edit，一个专门用于复杂指令图像编辑的评测数据集。定量和定性实验结果均表明，SmartEdit 超越了现有方法，为复杂指令图像编辑的实际应用铺平了道路。\"\n",
    "    },\n",
    "    {\n",
    "        \"relevance\": 5,\n",
    "        \"papers_abstract\": \"本文对多模态大型语言模型（MLLMs）在医疗领域的应用进行了全面综述。作者讨论了医院在训练和部署医疗 LLMs 和 MLLMs 时面临的挑战，包括对海量医疗数据的需求、大规模计算资源的消耗以及医疗领域的特殊要求。文章还探讨了 MLLMs 在医疗报告生成、临床诊断、心理健康服务等方面的潜力，并展望了医疗 LLMs 和 MLLMs 的未来发展趋势及其在临床环境中的更好集成。\"\n",
    "    },\n",
    "    {\n",
    "        \"relevance\": 7,\n",
    "        \"papers_abstract\": \"本文提出了 LIME（Less Is More for MLLM Evaluation），一种精炼且高效的多模态大型语言模型（MLLMs）评估基准。作者设计了一条半自动化管道来筛选掉无信息量的样本并消除答案泄露，专注于需要图像理解的任务。实验表明，LIME 减少了 76% 的样本数量，并将评估时间缩短了 77%，同时能够更有效地区分不同模型的能力。此外，文章讨论了传统自动化评估指标（如 CIDEr）在评估 MLLMs 生成字幕性能时的不足之处。\"\n",
    "    },\n",
    "    {\n",
    "        \"relevance\": 3,\n",
    "        \"papers_abstract\": \"本文对多模态大型语言模型（MLLMs）在图像字幕生成、视觉问答和推理等任务上的性能进行了综合评估。作者指出 MLLMs 在这些任务中面临的挑战，并提出新的评测方法以更好地衡量其性能。此外，文章探讨了 MLLMs 在医疗诊断和治疗等领域的潜在应用。\"\n",
    "    },\n",
    "    {\n",
    "        \"relevance\": 4,\n",
    "        \"papers_abstract\": \"本文讨论了当前多模态大型语言模型（MLLMs）在理解和推理任务上的挑战和局限性。作者提出了一些新的方法来提升 MLLMs 在这些任务中的表现，包括采用更先进的架构和训练技术。此外，文章还强调了 MLLMs 在图像编辑和医疗诊断等应用场景中的潜力。\"\n",
    "    },\n",
    "    {\n",
    "        \"relevance\": 6,\n",
    "        \"papers_abstract\": \"本文提出了一种基于多模态大型语言模型（MLLMs）的全新图像编辑方法。作者提出 双向交互模块（BIM），以增强 MLLMs 在图像编辑任务中的理解和推理能力。同时，作者构建了一个名为 Reason-Edit 的新评测数据集，专门用于复杂指令图像编辑。实验结果表明，该方法优于以往的方法，为复杂指令图像编辑的实际应用铺平了道路。\"\n",
    "    },\n",
    "    {\n",
    "        \"relevance\": 3,\n",
    "        \"papers_abstract\": \"本文讨论了当前多模态大型语言模型（MLLMs）在图像字幕生成、视觉问答和推理等任务中面临的挑战和局限性。作者提出新的评测方法，以更有效地衡量 MLLMs 的表现，并强调了构建更高效、更准确的基准测试的必要性。此外，文章还探讨了 MLLMs 在医疗诊断和治疗等应用中的潜力。\"\n",
    "    },\n",
    "    {\n",
    "        \"relevance\": 1,\n",
    "        \"papers_abstract\": \"本文对大型语言模型（LLMs）和多模态大型语言模型（MLLMs）在医疗领域的应用进行了全面综述。作者讨论了医院在训练和部署医疗 LLMs 和 MLLMs 时面临的挑战，包括对大量医疗数据的需求、庞大的计算资源消耗以及医疗领域的特殊要求。文章还探讨了 MLLMs 在医疗报告生成、临床诊断、心理健康服务等方面的潜力，并展望了医疗 LLMs 和 MLLMs 在临床环境中的未来发展方向。\"\n",
    "    },\n",
    "    {\n",
    "        \"relevance\": 9,\n",
    "        \"papers_abstract\": \"本文介绍了 FineEdit，一个用于增强大型语言模型（LLMs）编辑性能的专用模型。作者提出了一种双重方法，包括构建高质量的基准数据集 InstrEditBench，以及在该数据集上训练 FineEdit 模型。实验结果表明，FineEdit 在直接编辑任务上取得了显著的改进，验证了其有效性。文章最后讨论了 FineEdit 在文本生成、代码编辑和数据库管理等应用场景中的潜力。\"\n",
    "    }\n",
    "]\n",
    "\n",
    "papers = [f'{p[\"relevance\"]}:{p[\"papers_abstract\"]}' for p in papers_info]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "statement = \"计算机断层扫描（CT）和正电子发射断层扫描（PET）是单一成像模态的具体例子，而磁共振成像（MRI）则是多模态数据的一个例子，因为其组成序列T1加权、T2加权和流体衰减反转恢复（FLAIR）可以被视为各自独立的模态，因为每个MR序列测量的是不同的生物物理或生物学属性。\"\n",
    "papers_info = [\n",
    "  {\n",
    "    \"paper_content\": \"多模态医学图像融合在医学临床应用中起着至关重要的作用，为了解决现有方法大多数侧重于局部特征的提取，对全局依赖关系的探索不足，忽略了全局和局部信息交互，导致难以有效解决周围组织与病灶区域之间的模式复杂性和强度相似性问题。该文提出面向PET和CT医学图像融合的LL-GG-LG Net模型。首先，提出了局部-局部融合模块（Local-Local Fusion Module，LL Module），该模块采用双层注意力机制更好地关注局部细节信息特征；其次，设计了全局-全局融合模块（Global-Global Fusion Module，GG Module），该模块通过在Swin Transformer中加入残差连接机制将局部信息引入全局信息中，提高了Transformer对局部信息的关注程度；然后，提出一种基于可微分神经架构搜索自适应的密集融合网络的局部-全局融合模块（Local-Global Fusion Module，LG Module），充分捕获全局关系并保留局部线索，有效解决背景和病灶区域相似度高问题；使用临床多模态肺部医学图像数据集验证模型的有效性，实验结果表明，该文方法在平均梯度，边缘强度，Q,AB/F,，空间频率，标准差，信息熵等感知图像融合质量评价指标上与其他七种方法中最优的方法相比，分别平均提高了21.5%，11%，4%，13%，9%，3%。模型能够突出病变区域信息，融合图像结构清晰且纹理细节丰富。\",\n",
    "    \"relevance\": 8\n",
    "  },\n",
    "  {\n",
    "    \"paper_content\": \"死亡受体⁃配体１（programmed death⁃ligand 1，PD⁃L１）等多个分子标志物的无创预测，该模型在国内 外多中心测试集上ＡＵＣ均＞０．８，并显示了辅助临床 靶向治疗 和 免 疫 治 疗 决 策 的 能 力。 本 期 孙 晓 慧 等［１０］探索了１８ Ｆ⁃ＦＤＧ ＰＥＴ 影像组学特征在术前肺 腺癌脉管浸润及脏层胸膜侵犯预测上的可行性，并 通过交叉验证的手段，对模型稳定性进行了验证。 罗量等［１１］评估了前列腺特异膜抗原（prostate specific membrane antigen，PSMA）ＰＥＴ／ ＣＴ 影像组学在前 列腺癌和前列腺增生鉴别诊断中的价值，结果表明 基于整个前列腺区域的 ３ 个 ＣＴ 影像特征和 ４ 个ＰＥＴ 影像特征构建的 ｌｏｇｉｓｔｉｃ 回归模型能鉴别上述 ２ 种 疾病，该模型可有效辅助对 ＰＳＭＡ ＰＥＴ／ ＣＴ 无明显 显像剂摄取且血清前列腺特异抗原（prostate specific antigen，PSＡ）水平升高不显著的患者的鉴别诊断。 贾童童等［１２］ 将基于 ＰＥＴ／ ＣＴ 提取的影像组学特征 与临床特征联合，实现了三阴性乳腺癌（triple⁃negative breast cancer，TNBC）新辅助化疗疗效的无创预测，为临床 治疗决策提供了重要参考。\",\n",
    "    \"relevance\": 6\n",
    "  },\n",
    "  {\n",
    "    \"paper_content\": \"文献关键词：图像超分辨率重建;脑影像;MRI;多模态;解剖结构约束 作者机构：南方医科大学生物医学工程学院,广东 广州 510515;广东省医学图像处理重点实验室,广东 广州510515;上海联影智能医疗科技有限公司,上海 200030;东部战区总医院(南京大学医学院附属金陵医院)放射诊断科,江苏 南京 210002 引用格式：\\[1\\]曹泽红;刘高平;张志强;石峰;张煜-.基于多模态MRI脑影像的超分辨率重建)\\[J\\].南方医科大学学报,2022(07):1019-1025 B类：脑影像,影像数据,合成模型,层数,数据重建,薄层,2D,FLAIR,设计结构,图像超分辨率重建,重建网络,特征重建,充模,皮层,核团,关键区域,信息增强,重建图像,高分辨率图像,灰度,结构相似性,学习方向,金标准,标准图,解剖学,结构信息,要约,模型自适应,图像质量评价,测试集,PSNR,SSIM,比方,脑沟,医学图像,单模,图像重建,第二模态,低分辨率图像,脑灰质,脑脊液,体积测量,灰质体积,平均误差,降为,模态信息,临床诊疗,诊疗流程\",\n",
    "    \"relevance\": 7\n",
    "  },\n",
    "  {\n",
    "    \"paper_content\": \"Med-MAT 医疗多模态大模型超级泛化：模型在学会了各种基础要素之后，就能自己组合这些要素，用到从未直接见过的新应用场景中，而不需要再从头学起。医学领域MLLM的潜力：有助于医生高效咨询，患者可随时获取病情信息。医学影像数据的不足：稀有疾病或隐私受限导致数据有限。多任务训练可改善模型性能，但尚缺细粒度互补分析。核心概念：组合泛化（CG）指模型通过重组已学元素理解新组合。医学影像中的自然组合机会：Modality（不同成像方式，如CT、MRI、X-ray等）、Anatomical area（不同解剖部位，如脑、肺、皮肤等）、Task（不同医学任务，如分类、检测、分割等）。组合泛化的意义：可处理新颖组合。数据集Med-MAT：数据来源及规模，数据属性标注及分割，数据的公开与获取方式。实验与发现：控制变量实验，多任务扩展实验，数据量对CG的影响，CG在小样本场景中的帮助，三元素来源于不同数据集时仍可泛化，检测任务与分类任务互补，不同MLLM骨干模型上的普适性。相关工作：医学影像的泛化研究，检测型MLLM研究，医学MLLM研究。结论：Med-MAT数据集，MLLMs可利用CG理解医学影像，CG能持续带来性能增益、助力小样本学习，在不同任务、不同骨干中均适用，局限性与未来工作。\",\n",
    "    \"relevance\": 9\n",
    "  },\n",
    "\n",
    "]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# statements——>query"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "statements = [{'statement': 'Research on Text-to-SQL generation has traditionally focused on scenarios where each natural language question is associated with one correct SQL query.',\n",
    "  'related_sen_id': [0],\n",
    "  'statement_hyde': 'Traditional research in the domain of Text-to-SQL generation has predominantly concentrated on scenarios wherein a single natural language query corresponds to a unique, correct SQL query, as documented in [Reference].'},\n",
    " {'statement': 'However, real-world databases often exhibit significant ambiguity in natural language queries due to overlapping schema names, multiple relationship paths, and other factors.',\n",
    "  'related_sen_id': [1],\n",
    "  'statement_hyde': 'Contrary to idealized scenarios, real-world databases frequently present substantial ambiguity in natural language queries, stemming from overlapping schema names, multiple relationship paths, and various other contributing factors, as highlighted in [Reference].'},\n",
    " {'statement': 'This ambiguity can lead to multiple SQL queries that can produce the correct answer, yet most benchmarks only provide one query among the many possible correct answers.',\n",
    "  'related_sen_id': [4],\n",
    "  'statement_hyde': 'Such ambiguity can result in the existence of multiple SQL queries that yield correct answers, although the majority of existing benchmarks typically furnish only a single query from the numerous plausible correct alternatives, as noted in [Reference].'},\n",
    " {'statement': \"This ambiguity poses a challenge for existing Text-to-SQL systems, which struggle to generate accurate and diverse SQL queries that capture all possible interpretations of the user's intent.\",\n",
    "  'related_sen_id': [5],\n",
    "  'statement_hyde': \"The aforementioned ambiguity presents a significant challenge to current Text-to-SQL systems, which often encounter difficulties in generating both accurate and diverse SQL queries that encompass all potential interpretations of the user's intent, as discussed in [Reference].\"},\n",
    " {'statement': 'To address this gap, we develop a novel benchmark called AmbiQT, which consists of over 3000 examples where each natural language query is interpretable as two plausible SQL queries due to lexical and/or structural ambiguity.',\n",
    "  'related_sen_id': [6],\n",
    "  'statement_hyde': 'To bridge this gap, we introduce a novel benchmark, termed AmbiQT, comprising over 3000 examples wherein each natural language query can be interpreted as two viable SQL queries owing to lexical and/or structural ambiguity, as detailed in [Reference].'},\n",
    " {'statement': 'Benchmarks like PredBench have limitations in terms of training, benchmarking, and evaluation.',\n",
    "  'related_sen_id': [10],\n",
    "  'statement_hyde': 'Benchmarks such as PredBench exhibit notable limitations in the realms of training, benchmarking, and evaluation processes, as evidenced in [Reference].'},\n",
    " {'statement': 'For instance, training limitations include the constraint on model architecture and size, while benchmark limitations involve the limited number of methods and the need for further calibration of dataset protocols.',\n",
    "  'related_sen_id': [11],\n",
    "  'statement_hyde': 'For example, training limitations are characterized by constraints on model architecture and size, whereas benchmark limitations are associated with a restricted number of methods and the necessity for additional calibration of dataset protocols, as outlined in [Reference].'},\n",
    " {'statement': 'Evaluation limitations are observed in the small and homogenous sample of human evaluators and the lack of diverse evaluation approaches and metrics.',\n",
    "  'related_sen_id': [12],\n",
    "  'statement_hyde': 'Evaluation limitations are evident in the utilization of a small and homogenous sample of human evaluators, coupled with the absence of diverse evaluation methodologies and metrics, as identified in [Reference].'},\n",
    " {'statement': 'To address these limitations, future studies could explore more evaluation methods, improve the diversity and size of participants, and investigate the impact of various hyperparameters on model performance.',\n",
    "  'related_sen_id': [13],\n",
    "  'statement_hyde': 'To mitigate these limitations, future research endeavors could investigate additional evaluation methods, enhance the diversity and size of participant pools, and examine the influence of diverse hyperparameters on model performance, as suggested in [Reference].'},\n",
    " {'statement': 'Additionally, incorporating indicators of attack failures could help in debugging faulty evaluations and lead to fairer assessments.',\n",
    "  'related_sen_id': [14],\n",
    "  'statement_hyde': 'Furthermore, the integration of indicators of attack failures may facilitate the debugging of erroneous evaluations, thereby contributing to more equitable assessments, as proposed in [Reference].'},\n",
    " {'statement': \"Furthermore, benchmarks could benefit from the inclusion of economic rationality assessments to evaluate models' ability to exhibit rational behavior in economic tasks.\",\n",
    "  'related_sen_id': [15],\n",
    "  'statement_hyde': \"Moreover, the incorporation of economic rationality assessments into benchmarks could prove beneficial for evaluating models' capacity to demonstrate rational behavior in economic tasks, as argued in [Reference].\"},\n",
    " {'statement': 'We also explore the integration of Text-to-SQL with other NLP tasks, such as question answering and information extraction, to create more powerful and versatile systems.',\n",
    "  'related_sen_id': [16],\n",
    "  'statement_hyde': 'Additionally, we investigate the integration of Text-to-SQL with other natural language processing tasks, including question answering and information extraction, with the aim of developing more robust and versatile systems, as explored in [Reference].'},\n",
    " {'statement': 'For example, the AmbiQT benchmark addresses ambiguity in SQL by encompassing four types of ambiguity, including column ambiguity, table ambiguity, join ambiguity, and precomputed aggregates.',\n",
    "  'related_sen_id': [17],\n",
    "  'statement_hyde': 'For instance, the AmbiQT benchmark tackles SQL ambiguity by incorporating four distinct types of ambiguity: column ambiguity, table ambiguity, join ambiguity, and precomputed aggregates, as described in [Reference].'},\n",
    " {'statement': 'Furthermore, the work on interactive Text-to-SQL generation proposes a new interaction mechanism for users to validate and refine generated queries through step-by-step explanations, which can be extended to support multi-turn SQL generation by incorporating the contextual information of previous queries into explanation generation and text-to-clause generation.',\n",
    "  'related_sen_id': [18],\n",
    "  'statement_hyde': 'Moreover, research on interactive Text-to-SQL generation introduces a novel interaction mechanism enabling users to validate and refine generated queries via step-by-step explanations, a method that can be extended to support multi-turn SQL generation by integrating the contextual information from prior queries into both explanation generation and text-to-clause generation processes, as detailed in [Reference].'},\n",
    " {'statement': \"Additionally, the exploration of chain-of-thought style prompting for Text-to-SQL aims to enhance LLMs' reasoning ability by systematically exploring CoT style prompting for text-to-SQL parsing, addressing complex, multistep reasoning required for the task.\",\n",
    "  'related_sen_id': [19],\n",
    "  'statement_hyde': \"Furthermore, the examination of chain-of-thought style prompting in the context of Text-to-SQL seeks to augment large language models' reasoning capabilities through a systematic exploration of CoT style prompting for text-to-SQL parsing, thereby addressing the intricate, multistep reasoning demands of the task, as discussed in [Reference].\"},\n",
    " {'statement': 'Additionally, we address the ethical considerations associated with Text2SQL technology, particularly in sensitive domains, and discuss strategies for mitigating bias and ensuring fairness.',\n",
    "  'related_sen_id': [20],\n",
    "  'statement_hyde': 'Moreover, we delve into the ethical considerations pertinent to Text2SQL technology, especially within sensitive domains, and articulate strategies for bias mitigation and the assurance of fairness, as examined in [Reference].'},\n",
    " {'statement': 'Finally, we identify promising future research directions for advancing Text2SQL technology and unlocking its full potential for empowering users to access and analyze data more effectively.',\n",
    "  'related_sen_id': [21],\n",
    "  'statement_hyde': 'In conclusion, we pinpoint promising avenues for future research aimed at advancing Text2SQL technology, thereby unlocking its comprehensive potential to empower users in accessing and analyzing data more effectively, as proposed in [Reference].'}]\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "from research_agent.core.query import Query\n",
    "query = Query()\n",
    "# r = await query.query_by_content(statements[0][\"statement_hyde\"],top_k=20)\n",
    "# len(r)\n",
    "# r = await query.query_by_content(statements[0][\"statement\"],top_k=20)\n",
    "# len(r)\n",
    "# Next, compare the results retrieved via statement_hyde with those retrieved via the raw statement"
   ]
  },
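  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The commented-out calls above compare retrieval by `statement_hyde` against retrieval by the raw `statement`. A self-contained sketch of one way to quantify that difference (the two result lists below are hypothetical stand-ins for `query_by_content` output, which returns hits carrying an `entity` dict as shown later in this notebook):\n",
    "\n",
    "```python\n",
    "def retrieval_overlap(results_a, results_b, key=\"paper_id\"):\n",
    "    \"\"\"Jaccard overlap between two retrieval result lists.\"\"\"\n",
    "    ids_a = {r[\"entity\"][key] for r in results_a}\n",
    "    ids_b = {r[\"entity\"][key] for r in results_b}\n",
    "    if not ids_a and not ids_b:\n",
    "        return 1.0\n",
    "    return len(ids_a & ids_b) / len(ids_a | ids_b)\n",
    "\n",
    "# Hypothetical stand-ins for query_by_content(...) output\n",
    "by_hyde = [{\"entity\": {\"paper_id\": p}} for p in (\"a\", \"b\", \"c\")]\n",
    "by_statement = [{\"entity\": {\"paper_id\": p}} for p in (\"b\", \"c\", \"d\")]\n",
    "print(retrieval_overlap(by_hyde, by_statement))  # 2 shared ids out of 4 -> 0.5\n",
    "```"
   ]
  },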
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "paper_queries = [\n",
    "    query.query_by_content(statement['statement_hyde'], top_k=20) for statement in statements\n",
    "]\n",
    "# Execute all paper retrieval queries in parallel\n",
    "retrieved_paper_list = await asyncio.gather(*paper_queries)"
   ]
  },
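  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`asyncio.gather` above fires every retrieval call at once; with many statements this can exceed rate limits on the backing service. A minimal sketch of capping concurrency with a semaphore (`fetch` is a hypothetical stand-in for `query.query_by_content`, not part of the project API):\n",
    "\n",
    "```python\n",
    "import asyncio\n",
    "\n",
    "async def fetch(i):\n",
    "    # Hypothetical stand-in for query.query_by_content(statement, top_k=20)\n",
    "    await asyncio.sleep(0)\n",
    "    return i\n",
    "\n",
    "async def gather_bounded(coros, limit=5):\n",
    "    # Allow at most `limit` coroutines in flight at a time\n",
    "    sem = asyncio.Semaphore(limit)\n",
    "    async def run(coro):\n",
    "        async with sem:\n",
    "            return await coro\n",
    "    # gather preserves the input order of results\n",
    "    return await asyncio.gather(*(run(c) for c in coros))\n",
    "\n",
    "results = asyncio.run(gather_bounded([fetch(i) for i in range(8)], limit=3))\n",
    "print(results)  # [0, 1, 2, 3, 4, 5, 6, 7]\n",
    "```"
   ]
  },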
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "papers = retrieved_paper_list[0]\n",
    "statement = statements[0]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Let the model rank the retrieved chunks"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "metadata": {},
   "outputs": [],
   "source": [
    "context = \"\\n\".join(\n",
    "    f\"\"\"paper_title:{chunk['entity']['paper_title']}\n",
    "                paper_id:{chunk['entity']['paper_id']}\n",
    "                chunk_id:{chunk['entity']['chunk_id']}\n",
    "                chunk_text:{chunk['entity']['chunk_text']}\"\"\"\n",
    "    for chunk in papers[:5]\n",
    ")\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 44,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[{'role': 'system', 'content': '\\nYou are an expert in artificial intelligence and your task is to assist in supplementing missing citations for given statements. \\nBelow is a set of statements that require citations, along with relevant chunk_text retrieved from academic papers. \\nYour goal is to identify the most appropriate chunks that support each statement and format the supplemented citation properly.\\n\\n<instruction>\\n### **Task Instructions**  \\n1. **Analyze Each Statement**:  \\n   - Carefully evaluate each statement and its context to determine the **most relevant and directly supporting `chunk`** from the provided `chunk_text`.  \\n   - Ensure the selected `chunk` **aligns directly with the statement\\'s meaning and context**.  \\n\\n2. **Generate Precise Supplemented Citations**:  \\n   - Format each citation in the following format:  \\n     ```  \\n     <sup>{\"chunk_id\":\"[chunk_id]\", \"paper_id\":\"[paper_id]\"}</sup>  \\n     ```  \\n   - If a statement contains multiple references, **ensure all references are captured** in the same format.such as：<sup>{chunk_id:\\'123\\', paper_id:\\'456\\'}</sup><sup>{chunk_id:\\'234\\', paper_id:\\'567\\'}</sup>\\n   - Ensure citations are **seamlessly integrated into the statement** without disrupting its flow.  \\n   - Ensure that you only add citations and do not modify any part of the statement.  \\n\\n3. **Select the Most Relevant `chunk`**:  \\n   - **Prioritize clarity and relevance**: Only include `chunk`(s) that **directly support** the statement.  \\n   - **Citation limit**: Add a **maximum of 5 citations** per statement to avoid overloading.  \\n   - If fewer relevant `chunk` are available, you may add fewer than 5 citations.  \\n\\n4. **Maintain Brevity**:  \\n   - If no sufficiently relevant `chunk` is found, **leave the statement unchanged**.  \\n   - **Do not add** citations that do not directly contribute to the statement\\'s meaning.  \\n</instruction>\\n\\n<Output Format>\\n1. 
Format: Valid JSON object\\n2. Single key: \"statement\"\\n3. Value: Well-formatted string with paragraphs separated by \"\\\\n\"\\n4. The output must be in valid JSON format with a single key \"statement\" containing a string.\\nUse the following format:\\n```json\\n     {\\n      \"statement\": \"statement with citations\",\\n     }\\n```\\n</Output Format>\\n\\n'}, {'role': 'user', 'content': ' \\nStatement Requiring Citations:{\\'statement\\': \\'Research on Text-to-SQL generation has traditionally focused on scenarios where each natural language question is associated with one correct SQL query.\\', \\'related_sen_id\\': [0], \\'statement_hyde\\': \\'Traditional research in the domain of Text-to-SQL generation has predominantly concentrated on scenarios wherein a single natural language query corresponds to a unique, correct SQL query, as documented in [Reference].\\'} \\nRetrieved Chunk_Text\\n- A set of text snippets (`chunk_text`) from academic sources, each with a unique ID (`chunk_id`) and metadata (`paper_id`)\\npaper_title:Constraint Reasoning Embedded Structured Prediction.\\n                paper_id:64a29654d68f896efa29af31\\n                chunk_id:10\\n                chunk_text:# 5.3 Text2SQL Generation\\nTask Definition. Formatted data such as travel records and stock market transactions are stored in relational databases. Currently, accessing the database requires a data scientist who masters the SQL query language. Our task is to automatically synthesize SQL queries from natural language sentences using machine learning. Compared with the data expert approach, SQL query generation requires deeper reasoning across the structure of the database, the semantics of the structured query language, and the understanding of natural language. As shown in Figure 11, the input of the text2SQL generation is a sentence that describes the query in natural language and the table headers in the relational database. 
The output is a SQL query with the following structure:  \\n\\nSELECT agg-op sel-col WHERE (cond-col cond-op cond-val) AND ...  \\n\\nHere, SELECT and WHERE are keywords in the SQL language. What we need to predict are: (1) the aggregation operator $\\\\mathsf{a g g-o p}$ , which chooses among the set {empty, COUNT, MIN, MAX, SUM, AVG }; (2) the column name in selection sel-col and (3) the column name in condition cond-col , both of which are chosen from the table headers; (4) the conditional operator cond-op , which is in $\\\\{=,<,>\\\\}$ ; (5) the conditional value cond-val , which is assumed to be a sub-sequence of the given query. Here, one bracket pair () represents one conditional statement. The SQL query may have multiple conditions, which are denoted above by “ ... ”. Figure 11 displays this SQL query:\\n\\n# SELECT COUNT \"School\" WHERE \"No.\" = \"3\"\\nHere agg-op is COUNT ;sel-col is “school”, which is a column name from the table headers. One cond-col is “No.”, which also comes from the table headers. The cond-op is “=”. The cond-val is “3”, which we assume is from the input query. This example has one condition but multiple conditions are allowed.  \\n\\nDefinition of Constraints. Existing generative neural models for this task are not guaranteed to generate a query that follows the grammar of a SQL query. To avoid grammar violations, we compile a set of common SQL grammars as constraints into the Core-Sp module. The Core-Sp module will ensure that all the generated SQL queries follow the grammatical constraints. Our constraints are defined on the operators, namely the conditional operator cond-op and the aggregation operator agg-op . The domains of these operators are dependent upon the data types of the entities (namely, cond-col and sel-col )they operate on. Consider the previous example. The agg-op can only take values between $\\\\{\\\\mathrm{empty,~\\\\coUNT}\\\\}$ , because the sel-col is “school”, which is of the string type. 
More precisely, let $s$ be a column header (the value of sel-col or cond-col ). We define $F_{a}(s)$ as  \\n\\nInput Table:   \\n\\n\\n<html><body><table><tr><td></td><td>Player</td><td>No.</td><td>Position</td><td>School</td></tr><tr><td>0</td><td>Antonio</td><td>21</td><td>Guard-Forward</td><td>Duke</td></tr><tr><td>1</td><td>Voshon</td><td>2</td><td>Guard</td><td>Minnesota</td></tr><tr><td>2</td><td>Marin</td><td>3</td><td>Guard-Forward</td><td>Butler CC</td></tr></table></body></html>\\n\\n# Input Query:\\nHow many schools did player number 3 play at?\\n\\n# Output SQL Query:\\nFigure 11: An example for the Text2SQL generation task. The input is the text query “How many schools did player number 3 play at?” and the table header “ Player, No., Position, School ” from the relational database. The output should be the SQL query: SELECT COUNT \"School\" WHERE \"No. $\"~=~\"3\"$ .  \\n\\nthe set of aggregation operators agg-op that can be associated with $s$ , and $F_{c}(s)$ as the set of condition operators cond-op that can be associated with $s$ . 
That is:  \\n\\n$$\\n\\\\begin{array}{r l}&{F_{a}(s)=\\\\left\\\\{\\\\begin{array}{l l}{\\\\{\\\\mathrm{empty~,~\\\\varsigma0UNT,~\\\\forall\\\\mathrm{IIN},~\\\\forall\\\\mathrm{IAX},~\\\\forall\\\\mathrm{II},~\\\\mathrm{AVG}\\\\}}}&{\\\\mathrm{if~}s\\\\mathrm{~of~is~numeric~type}}\\\\\\\\ {\\\\{\\\\mathrm{empty~,~\\\\varsigma0UNT}\\\\}}&{\\\\mathrm{if~}s\\\\mathrm{~of~is~string~type}}\\\\end{array}\\\\right.}\\\\\\\\ &{F_{c}(s)=\\\\left\\\\{\\\\begin{array}{l l}{\\\\{\\\\mathrm{=,~\\\\displaystyle>,~\\\\varsigma\\\\}}}&{\\\\mathrm{if~}s\\\\mathrm{~is~of~numeric~type}}\\\\\\\\ {\\\\{\\\\mathrm{=}\\\\}}&{\\\\mathrm{if~}s\\\\mathrm{~is~of~string~type}}\\\\end{array}\\\\right.}\\\\end{array}\\n$$  \\n\\nWe also introduce dataype constraints, which are defined as:  \\n\\n$$\\n\\\\begin{array}{r l}&{\\\\mathtt{s e l-c o l}=s\\\\Rightarrow\\\\mathtt{a g g-o p}\\\\in F_{a}(s),}\\\\\\\\ &{\\\\mathtt{c o n d-c o l}=s\\\\Rightarrow\\\\mathtt{c o n d-o p}\\\\in F_{c}(s).}\\\\end{array}\\n$$  \\n\\nModel Structure. We embed the Core-Sp module to SQLova (Hwang et al., 2019), the state-of-the-art neural network for text2SQL generation. SQLova has a sequence-tosequence architecture. It first encodes a natural language sentence and the table headers into a high-dimensional vector. Then the decoder of SQLova decodes the hidden representation into the predictions of various entities in the SQL query. SQLova first determines the number of conditions in the SQL query and then fills in the ( cond-col ,cond-op ,cond-val ) for each condition. The operators agg-op, cond-op are predicted as a classification task from a fixed set of operators. Column names cond-col, sel-col are predicted from the set of table headers in the relational database. The cond-val is predicted by a pointer neural network which points at a span of the input natural language sentence. The selected span of the query is used as the cond-val (Dong and Lapata, 2018).  \\n\\nMDD Construction. 
The associated MDD that encodes the constraints for text2SQL generation is similar to the MDD for if-then program synthesis. The MDD is split into layers and every two layers form a group. One two-layer group is used to enforce constraints on an operator-column name pair. The operator-column name pair can be $\\\\mathsf{a g g-o p}$ and sel-col ,or can be cond-op and cond-col . Note that there can be only one group of $\\\\mathsf{a g g-o p}$ and sel-col and more than one group of cond-op and cond-col . In the first layer of the group, the column name is determined. In the second layer, the invalid operators are ruled out based on the type of the column name selected in the first layer. The two-layer group is copied several times because the SQL query can contain multiple conditions.  \\n\\nConstraint Reasoning Embedded Structured Prediction\\npaper_title:Interactive Text-to-SQL Generation Via Editable Step-by-Step Explanations\\n                paper_id:6461b9c9d68f896efad43133\\n                chunk_id:1\\n                chunk_text:# 2 Related Work\\n\\n# 2.1 Text-to-SQL Generation\\nNatural language interfaces have long been recognized as a way to expand access to databases ( Hendrix et al. ,1978 ).The construction of several large text-to-SQL datasets, such as WikiSQL ( Zhong et al. ,2017 ) and Spider ( Yu et al. ,2018 ), has enabled the adoption of deep learning models in this task, achieving unprecedented performance in recent years ( Rubin and Berant ,2021 ;Wang et al. ,2020a ;Scholak et al. ,2021 ;Yu et al. ,2020 ;Hwang et al. ,2019 ). Our technique is based on the recent success of neural text-to-SQL models. Unlike existing models that perform end-to-end SQL generation, we propose a new interaction mechanism for users to validate and refine generated queries through step-by-step explanations.  \\n\\nAs the first step to demonstrate the feasibility of our approach, we focus on single-turn SQL generation ( Yu et al. ,2018 ) in this work. 
There has also been recent work that supports multi-turn SQL generation ( Yu et al. ,2019a ,b;Guo et al. ,2021 ), where a sequence of interdependent queries are expressed in multiple utterances in a dialog. Models designed for multi-turn SQL generation typically need to reason about the dialog context and effectively encode the historical queries ( Wang et al. ,2021 ;Hui et al. ,2021 ;Zhang et al. ,2019 ;Cai and Wan ,2020 ;Wang et al. ,2020b ). Our approach can be extended to support multi-turn SQL generation by initiating separate refinement sessions for individual queries while incorporating the contextual information of previous queries into explanation generation and text-to-clause generation.\\n\\n# 2.2 Interactive Semantic Parsing for SQL\\nRecently, there has been a growing interest in interactive approaches that elicit user feedback to guide SQL generation. Iyer et al. (2017 ) proposed to allow users to flag incorrect queries and continuously retrain the model. Both DIY ( Narechania et al. ,2021 ) and NaLIR ( Li and Jagadish ,2014a ,b)enable users to select alternative values or subexpressions to fix an incorrect SQL query. PIIA ( Li et al. ,2020 ), MISP ( Yao et al. ,2019 ), and DialSQL ( Gur et al. ,2018 ) proactively ask for user feedback via multiple-choice questions. A common limitation of these methods is that they only solicit feedback in constrained forms, hindering their flexibility and effectiveness in addressing the variability of SQL errors. In contrast, our approach allows more flexible feedback through direct edits to the explanations generated by the model.  \\n\\nThe only work that supports open-ended user feedback in SQL generation is NL-EDIT ( Elgohary et al. ,2021 ). NL-EDIT is trained on SPLASH ( Elgohary et al. ,2020 ), a dataset of SQL errors and user feedback utterances. Given an incorrect query, NL-EDIT allows users to provide a clarification utterance. Based on the utterance, the model generates a sequence of edits to the SQL query. 
Incorporating feedback expressed in a completely free-text utterance is challenging for two reasons:  \\n\\n  \\nFigure 2: An Overview of Interactive SQL Generation and Refinement with Editable Step-by-Step Explanations  \\n\\n(1) the model needs to infer which part of the SQL query to fix; (2) the model needs to determine what changes are being requested. In contrast, S TEPS asks users to directly edit an NL explanation and make corrections to the explanation. Comparing the initial explanation with the user-corrected explanation makes it easier to locate the part of a SQL query that needs to be changed and infer what change to make.  \\n\\nThe idea of SQL decomposition is similar to recent work that decomposes a user question to sub-questions on SPARQL ( Mo et al. ,2022 ). Their approach requires a crowd-sourced dataset to train a question decomposition model. In contrast, our rule-based method generates step-by-step explanations without the need for training a model. This also allows our system to map each entity in the explanation to the corresponding SQL element, making it easier for SQL correction (Sec. 3.2 ).\\n\\n# 2.3 Explaining SQL Queries in NL\\nOur approach is also related to prior work that generates NL explanations for SQL queries. Simitsis and Ioannidis (2009 ) argued that databases should “talk back” in human language so that users can verify results. Kokkalis et al. (2012 ) and Koutrika et al. (2010 ) used a graph-based SQL translation approach, where each query is represented as a graph and the explanation is generated by traversing the graph. Elgohary et al. (2021 ,2020 ) employed a template-based explanation approach, where they manually curated 57 templates for explanation generation. These approaches have limited capability to handle arbitrary SQL queries. 
To address this limitation, we propose a rule-based method to first explain terminal tokens (e.g., operators, keywords) and gradually compose them into a complete explanation based on the derivation rules in the SQL grammar. Another key difference is that none of the existing approaches supports editable explanations for SQL correction, which is a key feature to solicit user feedback in our approach.\\npaper_title:Benchmarking and Improving Text-to-SQL Generation under Ambiguity\\n                paper_id:6535d747939a5f408295c649\\n                chunk_id:1\\n                chunk_text:# 2 Background and Related Work\\nA Text-to-SQL model takes as input a question expressed as a natural language text x, and a database schema scomprising of table and column names, and outputs an SQL program ywhich can be executed against the database to answer the user’s question. Figure 1 shows an example. The training data for the task comprises (text, schema, SQL) triplets spanning multiple distinct databases.  \\n\\n  \\nFigure 1: A Text-to-SQL system converts a user question to an SQL query, conditioned on the database schema and/or content.  \\n\\nBenchmarks. Popular benchmarks for the Textto-SQL task are WikiSQL ( Zhong et al. ,2018 )and SPIDER ( Yu et al. ,2018 ). A few others have been proposed recently to capture real-world scenarios, such as KaggleDBQA ( Lee et al. ,2021 ), SEDE ( Hazoom et al. ,2021 ), and EHRSQL ( Lee et al. ,2022 ). They all attach one SQL per text, though some of them mention the problem of ambiguity in real-world datasets. Dr. SPIDER ( Chang et al. ,2023 ), designed to test the robustness of existing models, perturbs either the text or schema of SPIDER but still assigns one SQL per text.  \\n\\nAmbiguity in SQL Although ambiguity has been studied in other fields of NLP ( Pilault et al. ,2023 ;Li et al. ,2022 ;Futeral et al. ,2022 ), it has been unexplored in the context of semantic parsing. 
Ambiguity in SQL arising from related column names is discussed in ( Wang et al. ,2022 ), but they only consider column ambiguity. Their method of recognizing ambiguous queries depends on labeling words of the text and does not generalize to other kinds of ambiguity. To the best of our discernment, AmbiQT represents the first open benchmark for testing coverage of ambiguous alternatives.  \\n\\nDiverse Decoding. Prior work has critiqued the lack of meaningful diversity in beam-search outputs ( Finkel et al. ,2006 ;Gimpel et al. ,2013 ;Li et al. ,2016 ;Li and Jurafsky ,2016 ). In response, many fixes have been proposed. Some proposals attempt to restrict the tokens sampled, using strategies like Nucleus sampling ( Holtzman et al. ,2020 ), Truncated Sampling ( Hewitt et al. ,2022 ), and Typical Sampling ( Meister et al. ,2023 ), while some rely on Template-Based decoding ( Wiseman et al. ,2018 ;Zhang et al. ,2022 ;Fu et al. ,2023 ;Elgohary et al. ,2020 ;Awasthi et al. ,2022 ). A third approach is to generate a prefix with high diversity first, then generate the rest of the sentence with lower diversity. Narayan et al. (2022 ) follow this recipe but focus on incorporating diverse entity orders in text summarization.  
\\n\\n<html><body><table><tr><td rowspan=\"2\">Kind of ambiguity</td><td rowspan=\"2\">Count</td><td colspan=\"3\">Example</td></tr><tr><td>QuestionText</td><td>SQL#1</td><td>SQL#2</td></tr><tr><td>Column Ambiguity (C)</td><td>1240</td><td>List the ids of all students.</td><td>SELECTroll_number FROMstudents</td><td>SELECTadmission_number FROMstudents</td></tr><tr><td>Table Ambiguity (T)</td><td>1417</td><td>How many singers do wehave?</td><td>SELECT COUNT(*) FROM artist</td><td>SELECT COUNT(*) FROM performer</td></tr><tr><td>Join Ambiguity (J)</td><td>288</td><td>Whatarethemakers and models?</td><td>SELECT maker，model FROM model</td><td>SELECT t2.maker，t1.model FROM modelASt1JOINmodel_maker AS t2 ON t1.model_id = t2.model_id</td></tr><tr><td>Precomputed Aggregates (P)</td><td>101</td><td>for each pet type.</td><td>Find the average weight|SELECT AVG(weight)， pettype FROM pets GROUP BY pettype</td><td>SELECT avg_weight，pettype FROM pets_weight</td></tr></table></body></html>\\n\\nTable 1: The AmbiQT benchmark. For each question, we list two valid SQL queries as per the schema. The schema is not shown here, but the ambiguity in it can be inferred based on the two SQL queries.\\n\\n# 3 AmbiQT: A Benchmark of Ambiguous Text-to-SQL Conversion\\nAmbiQT is constructed so that each text query has two distinct valid SQL interpretations. Motivated by our experience working with real-life databases, we designed AmbiQT to encompass four types of ambiguity. Each entry is designed so that both alternatives have a similar relevance to the question, and a well-calibrated decoding method is expected to rank them close by in their outputs.  \\n\\nWe create AmbiQT by modifying the SPIDER (Yu et al. ,2018 ) dataset, and use ChatGPT ( OpenAI ,2022 ) to aid with the creation. In each case, we modify the schema instead of the text as that provides greater control over the modification process. We explain the kinds of ambiguity in AmbiQT below and portray examples of each in Table 1 .  
\\n\\nColumn Ambiguity (C). Unlike the SPIDER benchmark where column names usually appear verbatim in the question text (like born state for the column born_state ), when users unaware of the schema pose a natural question, they introduce column ambiguity ( Wang et al. ,2022 ). For example, “ What is the capacity of O2 Arena? ” could be ambiguous if the schema has separate columns for standing and seating capacity. Likewise, a query on the number of under-nourished children is ambiguous if we have different columns for “under-weight children” and “stunted growth in children”.  \\n\\nTo simulate column ambiguity, for each text $\\\\mathbf{x}$ ,schema s, and SQL yin SPIDER, we prompt ChatGPT to generate two synonyms for each column name of sin a one-shot manner. Appendix A furnishes more details of the prompt. We then modify sby replacing $c$ with two columns $c_{1},c_{2}$ , and we use yto generate two queries $\\\\mathbf{y}_{1},\\\\mathbf{y}_{2}$ where all mentions of $c$ are replaced with $c_{1}$ in $\\\\mathbf{y}_{1}$ and with $c_{2}$ in $\\\\mathbf{y}_{2}$ . An example appears in the first row of Table 1 . We do not reuse $c$ because the SPIDER dataset often contains column names verbatim in the question, and that would violate our attempt at keeping the two options at similar relevance levels. We modify one column at a time and generate up to 3 examples from each original entry.  \\n\\nTable Ambiguity (T). Table name ambiguity is common in databases obtained by integrating multiple data sources, as in web tables ( Cafarella et al. ,2008 ;Pimplikar and Sarawagi ,2012 ). Here again, we prompt ChatGPT to generate two alternate names for each table. We then modify one SQL yto generate two candidates ${\\\\bf y}_{1},{\\\\bf y}_{2}$ as shown in Table 1 .  \\n\\nJoin Ambiguity (J). In production databases, a logical table is often vertically partitioned across several tables for efficient clustered access ( Stonebraker et al. ,2019 ). 
Column names overlapping across tables leads to Join Ambiguity. Suppose we have two tables: (1) person with columns id, name, email_address , and (2) person_details with columns id, postal_address, photo .A question asking for a person’s name and address is ambiguous on whether a JOIN with the person_details is necessary. We expose such ambiguity by modifying the schema as follows.  \\n\\nConsider a $(\\\\mathbf{x},\\\\mathbf{s},\\\\mathbf{y})$ triplet. Suppose yinvolves selecting two or more columns $c_{1},c_{2},\\\\ldots.$ not necessarily in the same order, from a table $t$ . Suppose further that $c_{1}$ is not a primary key of $t$ . We create a table called $t\\\\_c_{1}$ that includes just the primary key $p k_{t}$ of $t$ , and $c_{1}$ . The first alternative $\\\\mathbf{y}_{1}$ is $\\\\mathbf{y}$ and the second alternative $\\\\mathbf{y}_{2}$ uses a join over $t$ and $t\\\\_c_{1}$ , with everything else staying the same as y.  \\n\\n  \\nFigure 2: Beam Search works well when targeting only one output, but leads to superficial diversity, for example via different grouping and erroneous variants of column names.  \\n\\nPrecomputed Aggregates $(\\\\mathbf{P})$ :. This ambiguity is particularly common in data warehouses such as Data Commons which pre-aggregate certain variables. For instance, the “ total rice production ” of a state might refer to the column rice_production of state rather than a sum over it. Text-toSQL models have a bias toward introducing a sum()...group-by clause every time total appears in the text. The non-aggregated alternative is usually missing in the top$k$ options. We incorporate this ambiguity as follows.  \\n\\nFor each $(\\\\mathbf{x},\\\\mathbf{s},\\\\mathbf{y})$ , where $\\\\mathbf{y}$ has at least one aggregate, we construct a new table $t^{\\\\prime}$ . 
For each aggregate $\\\\boldsymbol{\\\\mathcal{A}}$ over column $c$ in y, we add to $t^{\\\\prime}$ the columns and the columns grouped by in $A^{\\\\prime}\\\\_c$ for all $\\\\mathcal{A}^{\\\\prime}\\\\,\\\\in\\\\,\\\\{\\\\mathsf{a v g},\\\\mathsf{s u m},\\\\mathsf{m i n},\\\\mathsf{m a x}\\\\}$ y. For count $(\\\\star)$ ,we add a column called number . We get two gold queries, the original yand a second with the groupby replaced by a direct SELECT on $t^{\\\\prime}$ as shown in the example in Table 1 . We also support aggregates across multiple tables but skip the details here.\\npaper_title:A Lightweight Constrained Generation Alternative for Query-focused Summarization\\n                paper_id:644744fb71ac66d2cbf9b886\\n                chunk_id:1\\n                chunk_text:# 2 Related Work\\nQuery-focused Summarization: To generate a query-focused summary, several studies used an additional query-attention mechanism. QR-BERTSUM-TL [13] incorporates query relevance scores into a pre-trained summarization model. Su et al. [29] propose merging the representation of an answer span predicted by a separate QA model into the Seq2Seq model’s training and inference process to enforce the summary’s coherence w.r.t. the query. QSG Transformer [23] suggests using a separate graph neural network model to learn per-token representations and fuse them to the Seq2Seq model to effectively generate a QFS. These mechanisms can be viewed as enforcing soft semantic constraints during the generation process, and requires additional modules and parameters to function effectively. We opt for a different approach, i.e. explicitly enforcing lexical constraints during the generation process, without the additional machinery that is necessary to handle the soft semantic constrains.  \\n\\nConstrained Generation (or Conditional Generation) is a family of natural language generation (NLG) methods that aim to generate natural language including/excluding a set of specific words, i.e. 
lexical constraints. The NLG domain recipe leverages pre-trained large language models (LLM) finetuned on specific datasets [7]. However, as pointed out by Lu et al. [18], such models only finetuned in an end-to-end manner do not learn to follow the underlying constraints reliably even when supervised with large amounts of training examples. Therefore, a line of works [1, 10, 17, 18] in constrained generation proposes to explicitly modify the likelihood of next word prediction in the generation stage, such that the predefined lexical constraints can be better satisfied.\\npaper_title:Interactive Text-to-SQL Generation Via Editable Step-by-Step Explanations\\n                paper_id:6461b9c9d68f896efad43133\\n                chunk_id:2\\n                chunk_text:# 3 Approach\\nFig. 2 provides an overview of S TEPS . Given a natural language (NL) question, S TEPS invokes a text-to-SQL model to generate an initial SQL query. Then, it decomposes the generated SQL query into individual query clauses and re-orders them based on their execution order. Each clause is then translated into an NL description of the underlying data operation, which is then used to form a step-by-step explanation. By reading the NL explanation along with the query result, users can easily understand the behavior of the generated query and locate any errors, even if they are unfamiliar with SQL.  \\n\\nIf one step is incorrect, users can directly edit its explanation to specify the correct behavior. S TEPS will then regenerate the clause based on the usercorrected explanation and update the SQL query, rather than regenerate the entire query from scratch. If multiple steps are incorrect, the user can add, remove, and modify all steps as needed.\\n\\n# 3.1 Rule-based SQL Explanation\\nTo generate explanations for arbitrarily complex SQL queries (e.g., a query with nested subqueries), we design a rule-based method to first decompose a query into individual clauses. 
Specifically, S TEPS first parses a SQL query to its Abstract Syntax Tree (AST) based on the SQL grammar in Table 6 . Then, it traverses the AST to identify the subtree of each clause while preserving their hierarchical relations.  \\n\\nGiven the subtree of a clause, S TEPS performs an in-order traversal and translates each leaf node (i.e., a terminal token in the grammar) to the corresponding NL description based on a set of translation rules (see Table 7 in the appendices). For example, SELECT is translated to “Return”, and Order By is translated to “Sort the records based on.” S TEPS concatenates these descriptions to form a complete sentence as the explanation of the clause.  \\n\\n  \\nFigure 3: An example of the explanation generation process  \\n\\nSince SQL engines follow a specific order to execute individual clauses in a query 2 , S TEPS further reorders the clause explanations to reflect their execution order. We believe this is a more faithful representation of the query behavior and thus can help users better understand the underlying data operations, compared with rendering them based on the syntactic order of clauses. Fig. 3 shows an example translation.\\n\\n# 3.2 Text-to-Clause Generation\\nUsers make edits to the explanation produced by our system to make it consistent with their goal. Given these edits, S TEPS uses a hybrid method to generate the corresponding SQL clause. For simple edits, such as replacing a column name, S TEPS directly edits the original clause to fix the error using three direct transformation rules $(\\\\S\\\\ 3.2.1)$ .For more complex edits, S TEPS uses a neural textto-clause model to generate the clause based on the user-corrected explanation $(\\\\S\\\\ 3.2.2)$ .  \\n\\nThe hybrid method is inspired by the findings from our recent study ( Ning et al. ,2023 ). Specifically, a large portion of SQL generation errors are simple errors (e.g., incorrect column names and operators), which can be fixed with small edits. 
After SQL decomposition by our approach, many larger errors are further decomposed into a set of simpler errors, contained within separate clauses. Thus, it is not necessary to regenerate the entire clause to fix such errors. Furthermore, compared to using a large model, direct transformation is more computationally efficient. Our experiment shows that direct transformation is 22K times faster than the text-to-clause model (Table 4 ).\\n\\n# Algorithm 1: Direct transformation\\nInput: The original explanation $e_{o}$ ;  \\nThe new edited explanation $e_{n}$ ;  \\nThe original SQL clause $s$ ;  \\nOutput: the updated SQL clause   \\n1 $C_{o}\\\\gets\\\\mathrm{CHUNK}(e_{o})$   \\n2 C$C_{n}\\\\gets\\\\mathrm{CHUNK}(e_{n})$ ←  \\n3 foreach $(c_{o},\\\\,c_{n})$ )in $\\\\mathrm{ALIGN}(C_{o},C_{n})$ do   \\n4 // Replace ;  \\n5 if BOTH COLUMN $(c_{o},\\\\,c_{n})$ or   \\n6 BOTH TABLE $(c_{o},\\\\,c_{n})$ or   \\n7 BOTH LITERAL $\\\\left(c_{o},\\\\,c_{n}\\\\right)$ then   \\n8 s←s.R EPLACE (c o, c n) ;   \\n9 // Add ;  \\n10 else if $c_{o}$ is $\\\\mathcal{Q}$ and IS COLUMN ($\\\\left(c_{n}\\\\right)$ then   \\n11 if s. START WITH (\"Select\" )then   \\n12 s←s.A PPEND (cn)  \\n13 // Remove ;  \\n14 else if $c_{n}$ is $\\\\mathcal{D}$ and IS COLUMN $\\\\scriptstyle(c_{o})$ then   \\n15 $s\\\\gets s.\\\\mathrm{REMOVE}(c_{o})$ ;  \\n16 end   \\n17 return\\n\\n# 3.2.1 Direct Transformation\\nWe define three types of atomic edits that can be directly converted into SQL edits by S TEPS : (1) replacing a column name, a table name, or a literal value (i.e., string, number), (2) adding a new column name in the explanation of a SELECT clause, and (3) removing a column name.  \\n\\nAlgorithm 1 describes our direct transformation algorithm. After chunking the text (Lines 1 -2 ), STEPS aligns and compares the chunks in the original explanation with those in the user-corrected explanation, using the Needleman and Wunsch (1970 )algorithm (Line 3 ). 
This allows S TEPS to detect any replacements (Line 4 ), additions (Line 9 ), or removals (Line 13 ) of database entities in the explanation. Based on this information, S TEPS automatically edits the corresponding SQL clause without calling a neural model (Lines 8 ,12 ,15 ). More details of this algorithm can be found in Appendix E.\\n\\nOutput only the JSON structure, no additional text or explanations. \\n\\n'}]\n",
      "```json\n",
      "{\n",
      "  \"statement\": \"Research on Text-to-SQL generation has traditionally focused on scenarios where each natural language question is associated with one correct SQL query.<sup>{\\\"chunk_id\\\":\\\"10\\\", \\\"paper_id\\\":\\\"64a29654d68f896efa29af31\\\"}</sup><sup>{\\\"chunk_id\\\":\\\"1\\\", \\\"paper_id\\\":\\\"6461b9c9d68f896efad43133\\\"}</sup><sup>{\\\"chunk_id\\\":\\\"1\\\", \\\"paper_id\\\":\\\"6535d747939a5f408295c649\\\"}</sup>\"\n",
      "}\n",
      "```\n"
     ]
    }
   ],
   "source": [
    "from research_agent.core.add_citation import AddCitation\n",
    "addctitationer = AddCitation()\n",
    "context = \"\\n\".join(\n",
    "    f\"\"\"paper_title:{chunk['entity']['paper_title']}\n",
    "                paper_id:{chunk['entity']['paper_id']}\n",
    "                chunk_id:{chunk['entity']['chunk_id']}\n",
    "                chunk_text:{chunk['entity']['chunk_text']}\"\"\"\n",
    "    for chunk in papers[:5]\n",
    ")\n",
    "\n",
    "messages = addctitationer._prepare_messages(statement, context)\n",
    "print(messages)\n",
    "response = await addctitationer.llm.completion(messages)\n",
    "print(response)"
   ]
  },
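  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The model replies with a fenced ```json block, as shown in the output above. Below is a minimal sketch (not part of the pipeline, and assuming `response` from the previous cell is the raw completion string) for stripping the fence and loading the annotated statement:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "import re\n",
    "\n",
    "def extract_json_payload(raw: str) -> dict:\n",
    "    \"\"\"Strip an optional ```json ... ``` fence and parse the JSON body.\"\"\"\n",
    "    match = re.search(r\"```(?:json)?\\s*(.*?)\\s*```\", raw, re.DOTALL)\n",
    "    return json.loads(match.group(1) if match else raw)\n",
    "\n",
    "payload = extract_json_payload(response)\n",
    "print(payload[\"statement\"])"
   ]
  },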
  {
   "cell_type": "code",
   "execution_count": 45,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[{'id': 454846633586283990,\n",
       "  'distance': 0.6550747752189636,\n",
       "  'entity': {'paper_id': '64a29654d68f896efa29af31',\n",
       "   'paper_title': 'Constraint Reasoning Embedded Structured Prediction.',\n",
       "   'chunk_id': 10,\n",
       "   'chunk_text': '# 5.3 Text2SQL Generation\\nTask Definition. Formatted data such as travel records and stock market transactions are stored in relational databases. Currently, accessing the database requires a data scientist who masters the SQL query language. Our task is to automatically synthesize SQL queries from natural language sentences using machine learning. Compared with the data expert approach, SQL query generation requires deeper reasoning across the structure of the database, the semantics of the structured query language, and the understanding of natural language. As shown in Figure 11, the input of the text2SQL generation is a sentence that describes the query in natural language and the table headers in the relational database. The output is a SQL query with the following structure:  \\n\\nSELECT agg-op sel-col WHERE (cond-col cond-op cond-val) AND ...  \\n\\nHere, SELECT and WHERE are keywords in the SQL language. What we need to predict are: (1) the aggregation operator $\\\\mathsf{a g g-o p}$ , which chooses among the set {empty, COUNT, MIN, MAX, SUM, AVG }; (2) the column name in selection sel-col and (3) the column name in condition cond-col , both of which are chosen from the table headers; (4) the conditional operator cond-op , which is in $\\\\{=,<,>\\\\}$ ; (5) the conditional value cond-val , which is assumed to be a sub-sequence of the given query. Here, one bracket pair () represents one conditional statement. The SQL query may have multiple conditions, which are denoted above by “ ... ”. Figure 11 displays this SQL query:\\n\\n# SELECT COUNT \"School\" WHERE \"No.\" = \"3\"\\nHere agg-op is COUNT ;sel-col is “school”, which is a column name from the table headers. One cond-col is “No.”, which also comes from the table headers. The cond-op is “=”. The cond-val is “3”, which we assume is from the input query. This example has one condition but multiple conditions are allowed.  \\n\\nDefinition of Constraints. 
Existing generative neural models for this task are not guaranteed to generate a query that follows the grammar of a SQL query. To avoid grammar violations, we compile a set of common SQL grammars as constraints into the Core-Sp module. The Core-Sp module will ensure that all the generated SQL queries follow the grammatical constraints. Our constraints are defined on the operators, namely the conditional operator cond-op and the aggregation operator agg-op . The domains of these operators are dependent upon the data types of the entities (namely, cond-col and sel-col )they operate on. Consider the previous example. The agg-op can only take values between $\\\\{\\\\mathrm{empty,~\\\\coUNT}\\\\}$ , because the sel-col is “school”, which is of the string type. More precisely, let $s$ be a column header (the value of sel-col or cond-col ). We define $F_{a}(s)$ as  \\n\\nInput Table:   \\n\\n\\n<html><body><table><tr><td></td><td>Player</td><td>No.</td><td>Position</td><td>School</td></tr><tr><td>0</td><td>Antonio</td><td>21</td><td>Guard-Forward</td><td>Duke</td></tr><tr><td>1</td><td>Voshon</td><td>2</td><td>Guard</td><td>Minnesota</td></tr><tr><td>2</td><td>Marin</td><td>3</td><td>Guard-Forward</td><td>Butler CC</td></tr></table></body></html>\\n\\n# Input Query:\\nHow many schools did player number 3 play at?\\n\\n# Output SQL Query:\\nFigure 11: An example for the Text2SQL generation task. The input is the text query “How many schools did player number 3 play at?” and the table header “ Player, No., Position, School ” from the relational database. The output should be the SQL query: SELECT COUNT \"School\" WHERE \"No. $\"~=~\"3\"$ .  \\n\\nthe set of aggregation operators agg-op that can be associated with $s$ , and $F_{c}(s)$ as the set of condition operators cond-op that can be associated with $s$ . 
That is:  \\n\\n$$\\n\\\\begin{array}{r l}&{F_{a}(s)=\\\\left\\\\{\\\\begin{array}{l l}{\\\\{\\\\mathrm{empty~,~\\\\varsigma0UNT,~\\\\forall\\\\mathrm{IIN},~\\\\forall\\\\mathrm{IAX},~\\\\forall\\\\mathrm{II},~\\\\mathrm{AVG}\\\\}}}&{\\\\mathrm{if~}s\\\\mathrm{~of~is~numeric~type}}\\\\\\\\ {\\\\{\\\\mathrm{empty~,~\\\\varsigma0UNT}\\\\}}&{\\\\mathrm{if~}s\\\\mathrm{~of~is~string~type}}\\\\end{array}\\\\right.}\\\\\\\\ &{F_{c}(s)=\\\\left\\\\{\\\\begin{array}{l l}{\\\\{\\\\mathrm{=,~\\\\displaystyle>,~\\\\varsigma\\\\}}}&{\\\\mathrm{if~}s\\\\mathrm{~is~of~numeric~type}}\\\\\\\\ {\\\\{\\\\mathrm{=}\\\\}}&{\\\\mathrm{if~}s\\\\mathrm{~is~of~string~type}}\\\\end{array}\\\\right.}\\\\end{array}\\n$$  \\n\\nWe also introduce dataype constraints, which are defined as:  \\n\\n$$\\n\\\\begin{array}{r l}&{\\\\mathtt{s e l-c o l}=s\\\\Rightarrow\\\\mathtt{a g g-o p}\\\\in F_{a}(s),}\\\\\\\\ &{\\\\mathtt{c o n d-c o l}=s\\\\Rightarrow\\\\mathtt{c o n d-o p}\\\\in F_{c}(s).}\\\\end{array}\\n$$  \\n\\nModel Structure. We embed the Core-Sp module to SQLova (Hwang et al., 2019), the state-of-the-art neural network for text2SQL generation. SQLova has a sequence-tosequence architecture. It first encodes a natural language sentence and the table headers into a high-dimensional vector. Then the decoder of SQLova decodes the hidden representation into the predictions of various entities in the SQL query. SQLova first determines the number of conditions in the SQL query and then fills in the ( cond-col ,cond-op ,cond-val ) for each condition. The operators agg-op, cond-op are predicted as a classification task from a fixed set of operators. Column names cond-col, sel-col are predicted from the set of table headers in the relational database. The cond-val is predicted by a pointer neural network which points at a span of the input natural language sentence. The selected span of the query is used as the cond-val (Dong and Lapata, 2018).  \\n\\nMDD Construction. 
The associated MDD that encodes the constraints for text2SQL generation is similar to the MDD for if-then program synthesis. The MDD is split into layers and every two layers form a group. One two-layer group is used to enforce constraints on an operator-column name pair. The operator-column name pair can be $\\\\mathsf{a g g-o p}$ and sel-col ,or can be cond-op and cond-col . Note that there can be only one group of $\\\\mathsf{a g g-o p}$ and sel-col and more than one group of cond-op and cond-col . In the first layer of the group, the column name is determined. In the second layer, the invalid operators are ruled out based on the type of the column name selected in the first layer. The two-layer group is copied several times because the SQL query can contain multiple conditions.  \\n\\nConstraint Reasoning Embedded Structured Prediction',\n",
       "   'original_filename': 'Journal_Paper_Meta_Data_Journal_of_Machine_Learning_Research_with_whole_text.db'}},\n",
       " {'id': 454845641360425782,\n",
       "  'distance': 0.6537715196609497,\n",
       "  'entity': {'paper_id': '6461b9c9d68f896efad43133',\n",
       "   'paper_title': 'Interactive Text-to-SQL Generation Via Editable Step-by-Step Explanations',\n",
       "   'chunk_id': 1,\n",
       "   'chunk_text': '# 2 Related Work\\n\\n# 2.1 Text-to-SQL Generation\\nNatural language interfaces have long been recognized as a way to expand access to databases ( Hendrix et al. ,1978 ).The construction of several large text-to-SQL datasets, such as WikiSQL ( Zhong et al. ,2017 ) and Spider ( Yu et al. ,2018 ), has enabled the adoption of deep learning models in this task, achieving unprecedented performance in recent years ( Rubin and Berant ,2021 ;Wang et al. ,2020a ;Scholak et al. ,2021 ;Yu et al. ,2020 ;Hwang et al. ,2019 ). Our technique is based on the recent success of neural text-to-SQL models. Unlike existing models that perform end-to-end SQL generation, we propose a new interaction mechanism for users to validate and refine generated queries through step-by-step explanations.  \\n\\nAs the first step to demonstrate the feasibility of our approach, we focus on single-turn SQL generation ( Yu et al. ,2018 ) in this work. There has also been recent work that supports multi-turn SQL generation ( Yu et al. ,2019a ,b;Guo et al. ,2021 ), where a sequence of interdependent queries are expressed in multiple utterances in a dialog. Models designed for multi-turn SQL generation typically need to reason about the dialog context and effectively encode the historical queries ( Wang et al. ,2021 ;Hui et al. ,2021 ;Zhang et al. ,2019 ;Cai and Wan ,2020 ;Wang et al. ,2020b ). Our approach can be extended to support multi-turn SQL generation by initiating separate refinement sessions for individual queries while incorporating the contextual information of previous queries into explanation generation and text-to-clause generation.\\n\\n# 2.2 Interactive Semantic Parsing for SQL\\nRecently, there has been a growing interest in interactive approaches that elicit user feedback to guide SQL generation. Iyer et al. (2017 ) proposed to allow users to flag incorrect queries and continuously retrain the model. Both DIY ( Narechania et al. 
,2021 ) and NaLIR ( Li and Jagadish ,2014a ,b)enable users to select alternative values or subexpressions to fix an incorrect SQL query. PIIA ( Li et al. ,2020 ), MISP ( Yao et al. ,2019 ), and DialSQL ( Gur et al. ,2018 ) proactively ask for user feedback via multiple-choice questions. A common limitation of these methods is that they only solicit feedback in constrained forms, hindering their flexibility and effectiveness in addressing the variability of SQL errors. In contrast, our approach allows more flexible feedback through direct edits to the explanations generated by the model.  \\n\\nThe only work that supports open-ended user feedback in SQL generation is NL-EDIT ( Elgohary et al. ,2021 ). NL-EDIT is trained on SPLASH ( Elgohary et al. ,2020 ), a dataset of SQL errors and user feedback utterances. Given an incorrect query, NL-EDIT allows users to provide a clarification utterance. Based on the utterance, the model generates a sequence of edits to the SQL query. Incorporating feedback expressed in a completely free-text utterance is challenging for two reasons:  \\n\\n  \\nFigure 2: An Overview of Interactive SQL Generation and Refinement with Editable Step-by-Step Explanations  \\n\\n(1) the model needs to infer which part of the SQL query to fix; (2) the model needs to determine what changes are being requested. In contrast, S TEPS asks users to directly edit an NL explanation and make corrections to the explanation. Comparing the initial explanation with the user-corrected explanation makes it easier to locate the part of a SQL query that needs to be changed and infer what change to make.  \\n\\nThe idea of SQL decomposition is similar to recent work that decomposes a user question to sub-questions on SPARQL ( Mo et al. ,2022 ). Their approach requires a crowd-sourced dataset to train a question decomposition model. In contrast, our rule-based method generates step-by-step explanations without the need for training a model. 
This also allows our system to map each entity in the explanation to the corresponding SQL element, making it easier for SQL correction (Sec. 3.2 ).\\n\\n# 2.3 Explaining SQL Queries in NL\\nOur approach is also related to prior work that generates NL explanations for SQL queries. Simitsis and Ioannidis (2009 ) argued that databases should “talk back” in human language so that users can verify results. Kokkalis et al. (2012 ) and Koutrika et al. (2010 ) used a graph-based SQL translation approach, where each query is represented as a graph and the explanation is generated by traversing the graph. Elgohary et al. (2021 ,2020 ) employed a template-based explanation approach, where they manually curated 57 templates for explanation generation. These approaches have limited capability to handle arbitrary SQL queries. To address this limitation, we propose a rule-based method to first explain terminal tokens (e.g., operators, keywords) and gradually compose them into a complete explanation based on the derivation rules in the SQL grammar. Another key difference is that none of the existing approaches supports editable explanations for SQL correction, which is a key feature to solicit user feedback in our approach.',\n",
       "   'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}},\n",
       " {'id': 454845681760490286,\n",
       "  'distance': 0.6474056243896484,\n",
       "  'entity': {'paper_id': '6535d747939a5f408295c649',\n",
       "   'paper_title': 'Benchmarking and Improving Text-to-SQL Generation under Ambiguity',\n",
       "   'chunk_id': 1,\n",
       "   'chunk_text': '# 2 Background and Related Work\\nA Text-to-SQL model takes as input a question expressed as a natural language text x, and a database schema scomprising of table and column names, and outputs an SQL program ywhich can be executed against the database to answer the user’s question. Figure 1 shows an example. The training data for the task comprises (text, schema, SQL) triplets spanning multiple distinct databases.  \\n\\n  \\nFigure 1: A Text-to-SQL system converts a user question to an SQL query, conditioned on the database schema and/or content.  \\n\\nBenchmarks. Popular benchmarks for the Textto-SQL task are WikiSQL ( Zhong et al. ,2018 )and SPIDER ( Yu et al. ,2018 ). A few others have been proposed recently to capture real-world scenarios, such as KaggleDBQA ( Lee et al. ,2021 ), SEDE ( Hazoom et al. ,2021 ), and EHRSQL ( Lee et al. ,2022 ). They all attach one SQL per text, though some of them mention the problem of ambiguity in real-world datasets. Dr. SPIDER ( Chang et al. ,2023 ), designed to test the robustness of existing models, perturbs either the text or schema of SPIDER but still assigns one SQL per text.  \\n\\nAmbiguity in SQL Although ambiguity has been studied in other fields of NLP ( Pilault et al. ,2023 ;Li et al. ,2022 ;Futeral et al. ,2022 ), it has been unexplored in the context of semantic parsing. Ambiguity in SQL arising from related column names is discussed in ( Wang et al. ,2022 ), but they only consider column ambiguity. Their method of recognizing ambiguous queries depends on labeling words of the text and does not generalize to other kinds of ambiguity. To the best of our discernment, AmbiQT represents the first open benchmark for testing coverage of ambiguous alternatives.  \\n\\nDiverse Decoding. Prior work has critiqued the lack of meaningful diversity in beam-search outputs ( Finkel et al. ,2006 ;Gimpel et al. ,2013 ;Li et al. ,2016 ;Li and Jurafsky ,2016 ). In response, many fixes have been proposed. 
Some proposals attempt to restrict the tokens sampled, using strategies like Nucleus sampling ( Holtzman et al. ,2020 ), Truncated Sampling ( Hewitt et al. ,2022 ), and Typical Sampling ( Meister et al. ,2023 ), while some rely on Template-Based decoding ( Wiseman et al. ,2018 ;Zhang et al. ,2022 ;Fu et al. ,2023 ;Elgohary et al. ,2020 ;Awasthi et al. ,2022 ). A third approach is to generate a prefix with high diversity first, then generate the rest of the sentence with lower diversity. Narayan et al. (2022 ) follow this recipe but focus on incorporating diverse entity orders in text summarization.  \\n\\n<html><body><table><tr><td rowspan=\"2\">Kind of ambiguity</td><td rowspan=\"2\">Count</td><td colspan=\"3\">Example</td></tr><tr><td>QuestionText</td><td>SQL#1</td><td>SQL#2</td></tr><tr><td>Column Ambiguity (C)</td><td>1240</td><td>List the ids of all students.</td><td>SELECTroll_number FROMstudents</td><td>SELECTadmission_number FROMstudents</td></tr><tr><td>Table Ambiguity (T)</td><td>1417</td><td>How many singers do wehave?</td><td>SELECT COUNT(*) FROM artist</td><td>SELECT COUNT(*) FROM performer</td></tr><tr><td>Join Ambiguity (J)</td><td>288</td><td>Whatarethemakers and models?</td><td>SELECT maker，model FROM model</td><td>SELECT t2.maker，t1.model FROM modelASt1JOINmodel_maker AS t2 ON t1.model_id = t2.model_id</td></tr><tr><td>Precomputed Aggregates (P)</td><td>101</td><td>for each pet type.</td><td>Find the average weight|SELECT AVG(weight)， pettype FROM pets GROUP BY pettype</td><td>SELECT avg_weight，pettype FROM pets_weight</td></tr></table></body></html>\\n\\nTable 1: The AmbiQT benchmark. For each question, we list two valid SQL queries as per the schema. The schema is not shown here, but the ambiguity in it can be inferred based on the two SQL queries.\\n\\n# 3 AmbiQT: A Benchmark of Ambiguous Text-to-SQL Conversion\\nAmbiQT is constructed so that each text query has two distinct valid SQL interpretations. 
Motivated by our experience working with real-life databases, we designed AmbiQT to encompass four types of ambiguity. Each entry is designed so that both alternatives have a similar relevance to the question, and a well-calibrated decoding method is expected to rank them close by in their outputs.  \\n\\nWe create AmbiQT by modifying the SPIDER (Yu et al. ,2018 ) dataset, and use ChatGPT ( OpenAI ,2022 ) to aid with the creation. In each case, we modify the schema instead of the text as that provides greater control over the modification process. We explain the kinds of ambiguity in AmbiQT below and portray examples of each in Table 1 .  \\n\\nColumn Ambiguity (C). Unlike the SPIDER benchmark where column names usually appear verbatim in the question text (like born state for the column born_state ), when users unaware of the schema pose a natural question, they introduce column ambiguity ( Wang et al. ,2022 ). For example, “ What is the capacity of O2 Arena? ” could be ambiguous if the schema has separate columns for standing and seating capacity. Likewise, a query on the number of under-nourished children is ambiguous if we have different columns for “under-weight children” and “stunted growth in children”.  \\n\\nTo simulate column ambiguity, for each text $\\\\mathbf{x}$ ,schema s, and SQL yin SPIDER, we prompt ChatGPT to generate two synonyms for each column name of sin a one-shot manner. Appendix A furnishes more details of the prompt. We then modify sby replacing $c$ with two columns $c_{1},c_{2}$ , and we use yto generate two queries $\\\\mathbf{y}_{1},\\\\mathbf{y}_{2}$ where all mentions of $c$ are replaced with $c_{1}$ in $\\\\mathbf{y}_{1}$ and with $c_{2}$ in $\\\\mathbf{y}_{2}$ . An example appears in the first row of Table 1 . We do not reuse $c$ because the SPIDER dataset often contains column names verbatim in the question, and that would violate our attempt at keeping the two options at similar relevance levels. 
We modify one column at a time and generate up to 3 examples from each original entry.  \\n\\nTable Ambiguity (T). Table name ambiguity is common in databases obtained by integrating multiple data sources, as in web tables ( Cafarella et al. ,2008 ;Pimplikar and Sarawagi ,2012 ). Here again, we prompt ChatGPT to generate two alternate names for each table. We then modify one SQL yto generate two candidates ${\\\\bf y}_{1},{\\\\bf y}_{2}$ as shown in Table 1 .  \\n\\nJoin Ambiguity (J). In production databases, a logical table is often vertically partitioned across several tables for efficient clustered access ( Stonebraker et al. ,2019 ). Column names overlapping across tables leads to Join Ambiguity. Suppose we have two tables: (1) person with columns id, name, email_address , and (2) person_details with columns id, postal_address, photo .A question asking for a person’s name and address is ambiguous on whether a JOIN with the person_details is necessary. We expose such ambiguity by modifying the schema as follows.  \\n\\nConsider a $(\\\\mathbf{x},\\\\mathbf{s},\\\\mathbf{y})$ triplet. Suppose yinvolves selecting two or more columns $c_{1},c_{2},\\\\ldots.$ not necessarily in the same order, from a table $t$ . Suppose further that $c_{1}$ is not a primary key of $t$ . We create a table called $t\\\\_c_{1}$ that includes just the primary key $p k_{t}$ of $t$ , and $c_{1}$ . The first alternative $\\\\mathbf{y}_{1}$ is $\\\\mathbf{y}$ and the second alternative $\\\\mathbf{y}_{2}$ uses a join over $t$ and $t\\\\_c_{1}$ , with everything else staying the same as y.  \\n\\n  \\nFigure 2: Beam Search works well when targeting only one output, but leads to superficial diversity, for example via different grouping and erroneous variants of column names.  \\n\\nPrecomputed Aggregates $(\\\\mathbf{P})$ :. This ambiguity is particularly common in data warehouses such as Data Commons which pre-aggregate certain variables. 
For instance, the “ total rice production ” of a state might refer to the column rice_production of state rather than a sum over it. Text-toSQL models have a bias toward introducing a sum()...group-by clause every time total appears in the text. The non-aggregated alternative is usually missing in the top$k$ options. We incorporate this ambiguity as follows.  \\n\\nFor each $(\\\\mathbf{x},\\\\mathbf{s},\\\\mathbf{y})$ , where $\\\\mathbf{y}$ has at least one aggregate, we construct a new table $t^{\\\\prime}$ . For each aggregate $\\\\boldsymbol{\\\\mathcal{A}}$ over column $c$ in y, we add to $t^{\\\\prime}$ the columns and the columns grouped by in $A^{\\\\prime}\\\\_c$ for all $\\\\mathcal{A}^{\\\\prime}\\\\,\\\\in\\\\,\\\\{\\\\mathsf{a v g},\\\\mathsf{s u m},\\\\mathsf{m i n},\\\\mathsf{m a x}\\\\}$ y. For count $(\\\\star)$ ,we add a column called number . We get two gold queries, the original yand a second with the groupby replaced by a direct SELECT on $t^{\\\\prime}$ as shown in the example in Table 1 . We also support aggregates across multiple tables but skip the details here.',\n",
       "   'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}},\n",
       " {'id': 454845489174282794,\n",
       "  'distance': 0.632977306842804,\n",
       "  'entity': {'paper_id': '644744fb71ac66d2cbf9b886',\n",
       "   'paper_title': 'A Lightweight Constrained Generation Alternative for Query-focused Summarization',\n",
       "   'chunk_id': 1,\n",
       "   'chunk_text': '# 2 Related Work\\nQuery-focused Summarization: To generate a query-focused summary, several studies used an additional query-attention mechanism. QR-BERTSUM-TL [13] incorporates query relevance scores into a pre-trained summarization model. Su et al. [29] propose merging the representation of an answer span predicted by a separate QA model into the Seq2Seq model’s training and inference process to enforce the summary’s coherence w.r.t. the query. QSG Transformer [23] suggests using a separate graph neural network model to learn per-token representations and fuse them to the Seq2Seq model to effectively generate a QFS. These mechanisms can be viewed as enforcing soft semantic constraints during the generation process, and requires additional modules and parameters to function effectively. We opt for a different approach, i.e. explicitly enforcing lexical constraints during the generation process, without the additional machinery that is necessary to handle the soft semantic constrains.  \\n\\nConstrained Generation (or Conditional Generation) is a family of natural language generation (NLG) methods that aim to generate natural language including/excluding a set of specific words, i.e. lexical constraints. The NLG domain recipe leverages pre-trained large language models (LLM) finetuned on specific datasets [7]. However, as pointed out by Lu et al. [18], such models only finetuned in an end-to-end manner do not learn to follow the underlying constraints reliably even when supervised with large amounts of training examples. Therefore, a line of works [1, 10, 17, 18] in constrained generation proposes to explicitly modify the likelihood of next word prediction in the generation stage, such that the predefined lexical constraints can be better satisfied.',\n",
       "   'original_filename': 'Conf_Paper_Meta_Data_SIGIR2023_with_whole_text.db'}},\n",
       " {'id': 454845641378251576,\n",
       "  'distance': 0.6278348565101624,\n",
       "  'entity': {'paper_id': '6461b9c9d68f896efad43133',\n",
       "   'paper_title': 'Interactive Text-to-SQL Generation Via Editable Step-by-Step Explanations',\n",
       "   'chunk_id': 2,\n",
       "   'chunk_text': '# 3 Approach\\nFig. 2 provides an overview of S TEPS . Given a natural language (NL) question, S TEPS invokes a text-to-SQL model to generate an initial SQL query. Then, it decomposes the generated SQL query into individual query clauses and re-orders them based on their execution order. Each clause is then translated into an NL description of the underlying data operation, which is then used to form a step-by-step explanation. By reading the NL explanation along with the query result, users can easily understand the behavior of the generated query and locate any errors, even if they are unfamiliar with SQL.  \\n\\nIf one step is incorrect, users can directly edit its explanation to specify the correct behavior. S TEPS will then regenerate the clause based on the usercorrected explanation and update the SQL query, rather than regenerate the entire query from scratch. If multiple steps are incorrect, the user can add, remove, and modify all steps as needed.\\n\\n# 3.1 Rule-based SQL Explanation\\nTo generate explanations for arbitrarily complex SQL queries (e.g., a query with nested subqueries), we design a rule-based method to first decompose a query into individual clauses. Specifically, S TEPS first parses a SQL query to its Abstract Syntax Tree (AST) based on the SQL grammar in Table 6 . Then, it traverses the AST to identify the subtree of each clause while preserving their hierarchical relations.  \\n\\nGiven the subtree of a clause, S TEPS performs an in-order traversal and translates each leaf node (i.e., a terminal token in the grammar) to the corresponding NL description based on a set of translation rules (see Table 7 in the appendices). For example, SELECT is translated to “Return”, and Order By is translated to “Sort the records based on.” S TEPS concatenates these descriptions to form a complete sentence as the explanation of the clause.  
\\n\\n  \\nFigure 3: An example of the explanation generation process  \\n\\nSince SQL engines follow a specific order to execute individual clauses in a query 2 , S TEPS further reorders the clause explanations to reflect their execution order. We believe this is a more faithful representation of the query behavior and thus can help users better understand the underlying data operations, compared with rendering them based on the syntactic order of clauses. Fig. 3 shows an example translation.\\n\\n# 3.2 Text-to-Clause Generation\\nUsers make edits to the explanation produced by our system to make it consistent with their goal. Given these edits, S TEPS uses a hybrid method to generate the corresponding SQL clause. For simple edits, such as replacing a column name, S TEPS directly edits the original clause to fix the error using three direct transformation rules $(\\\\S\\\\ 3.2.1)$ .For more complex edits, S TEPS uses a neural textto-clause model to generate the clause based on the user-corrected explanation $(\\\\S\\\\ 3.2.2)$ .  \\n\\nThe hybrid method is inspired by the findings from our recent study ( Ning et al. ,2023 ). Specifically, a large portion of SQL generation errors are simple errors (e.g., incorrect column names and operators), which can be fixed with small edits. After SQL decomposition by our approach, many larger errors are further decomposed into a set of simpler errors, contained within separate clauses. Thus, it is not necessary to regenerate the entire clause to fix such errors. Furthermore, compared to using a large model, direct transformation is more computationally efficient. 
Our experiment shows that direct transformation is 22K times faster than the text-to-clause model (Table 4 ).\\n\\n# Algorithm 1: Direct transformation\\nInput: The original explanation $e_{o}$ ;  \\nThe new edited explanation $e_{n}$ ;  \\nThe original SQL clause $s$ ;  \\nOutput: the updated SQL clause   \\n1 $C_{o}\\\\gets\\\\mathrm{CHUNK}(e_{o})$   \\n2 C$C_{n}\\\\gets\\\\mathrm{CHUNK}(e_{n})$ ←  \\n3 foreach $(c_{o},\\\\,c_{n})$ )in $\\\\mathrm{ALIGN}(C_{o},C_{n})$ do   \\n4 // Replace ;  \\n5 if BOTH COLUMN $(c_{o},\\\\,c_{n})$ or   \\n6 BOTH TABLE $(c_{o},\\\\,c_{n})$ or   \\n7 BOTH LITERAL $\\\\left(c_{o},\\\\,c_{n}\\\\right)$ then   \\n8 s←s.R EPLACE (c o, c n) ;   \\n9 // Add ;  \\n10 else if $c_{o}$ is $\\\\mathcal{Q}$ and IS COLUMN ($\\\\left(c_{n}\\\\right)$ then   \\n11 if s. START WITH (\"Select\" )then   \\n12 s←s.A PPEND (cn)  \\n13 // Remove ;  \\n14 else if $c_{n}$ is $\\\\mathcal{D}$ and IS COLUMN $\\\\scriptstyle(c_{o})$ then   \\n15 $s\\\\gets s.\\\\mathrm{REMOVE}(c_{o})$ ;  \\n16 end   \\n17 return\\n\\n# 3.2.1 Direct Transformation\\nWe define three types of atomic edits that can be directly converted into SQL edits by S TEPS : (1) replacing a column name, a table name, or a literal value (i.e., string, number), (2) adding a new column name in the explanation of a SELECT clause, and (3) removing a column name.  \\n\\nAlgorithm 1 describes our direct transformation algorithm. After chunking the text (Lines 1 -2 ), STEPS aligns and compares the chunks in the original explanation with those in the user-corrected explanation, using the Needleman and Wunsch (1970 )algorithm (Line 3 ). This allows S TEPS to detect any replacements (Line 4 ), additions (Line 9 ), or removals (Line 13 ) of database entities in the explanation. Based on this information, S TEPS automatically edits the corresponding SQL clause without calling a neural model (Lines 8 ,12 ,15 ). More details of this algorithm can be found in Appendix E.',\n",
       "   'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}}]"
      ]
     },
     "execution_count": 45,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "papers[:5]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "\"statement\": \"Research on Text-to-SQL generation has traditionally focused on scenarios where each natural language question is associated with one correct SQL query.\n",
    "<sup>{\\\"chunk_id\\\":\\\"10\\\", \\\"paper_id\\\":\\\"64a29654d68f896efa29af31\\\"}</sup>\n",
    "<sup>{\\\"chunk_id\\\":\\\"1\\\", \\\"paper_id\\\":\\\"6461b9c9d68f896efad43133\\\"}</sup>\n",
    "<sup>{\\\"chunk_id\\\":\\\"1\\\", \\\"paper_id\\\":\\\"6535d747939a5f408295c649\\\"}</sup>\"\n",
    "      \n",
    "# For this statement, the model's citation order matches the order of the retrieved chunks\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[{'role': 'system',\n",
       "  'content': '\\nYou are an expert in artificial intelligence and your task is to assist in supplementing missing citations for given statements. \\nBelow is a set of statements that require citations, along with relevant chunk_text retrieved from academic papers. \\nYour goal is to identify the most appropriate chunks that support each statement and format the supplemented citation properly.\\n\\n<instruction>\\n### **Task Instructions**  \\n1. **Analyze Each Statement**:  \\n   - Carefully evaluate each statement and its context to determine the **most relevant and directly supporting `chunk`** from the provided `chunk_text`.  \\n   - Ensure the selected `chunk` **aligns directly with the statement\\'s meaning and context**.  \\n\\n2. **Generate Precise Supplemented Citations**:  \\n   - Format each citation in the following format:  \\n     ```  \\n     <sup>{\"chunk_id\":\"[chunk_id]\", \"paper_id\":\"[paper_id]\"}</sup>  \\n     ```  \\n   - If a statement contains multiple references, **ensure all references are captured** in the same format.such as：<sup>{chunk_id:\\'123\\', paper_id:\\'456\\'}</sup><sup>{chunk_id:\\'234\\', paper_id:\\'567\\'}</sup>\\n   - Ensure citations are **seamlessly integrated into the statement** without disrupting its flow.  \\n   - Ensure that you only add citations and do not modify any part of the statement.  \\n\\n3. **Select the Most Relevant `chunk`**:  \\n   - **Prioritize clarity and relevance**: Only include `chunk`(s) that **directly support** the statement.  \\n   - **Citation limit**: Add a **maximum of 5 citations** per statement to avoid overloading.  \\n   - If fewer relevant `chunk` are available, you may add fewer than 5 citations.  \\n\\n4. **Maintain Brevity**:  \\n   - If no sufficiently relevant `chunk` is found, **leave the statement unchanged**.  \\n   - **Do not add** citations that do not directly contribute to the statement\\'s meaning.  \\n</instruction>\\n\\n<Output Format>\\n1. 
Format: Valid JSON object\\n2. Single key: \"statement\"\\n3. Value: Well-formatted string with paragraphs separated by \"\\\\n\"\\n4. The output must be in valid JSON format with a single key \"statement\" containing a string.\\nUse the following format:\\n```json\\n     {\\n      \"statement\": \"statement with citations\",\\n     }\\n```\\n</Output Format>\\n\\n'},\n",
       " {'role': 'user',\n",
       "  'content': \" \\nStatement Requiring Citations:有人提出，解决已识别的挑战和局限性将显著推动多模态大型语言模型（MLLM）编辑领域的发展，从而释放这些模型在各类应用中的全部潜力。 \\nRetrieved Chunk_Text\\n- A set of text snippets (`chunk_text`) from academic sources, each with a unique ID (`chunk_id`) and metadata (`paper_id`)\\n['\\\\nchunk_text:大型语言模型（LLMs）已经彻底改变了自然语言处理，但它们在需要精确、上下文感知修改的直接文本编辑任务上仍然存在困难。尽管像 ChatGPT 这样的模型在文本生成和分析方面表现出色，但其编辑能力通常较弱，往往只能解决表层问题，而无法处理更深层次的结构或逻辑不一致问题。本研究提出了一种双重方法来提升 LLM 的编辑性能。首先，我们提出了 InstrEditBench，一个高质量的基准数据集，包含 20,000 多个结构化编辑任务，涵盖维基百科文章、LaTeX 文档、代码和数据库领域特定语言（DSL）。InstrEditBench 采用创新的自动化流程生成，确保编辑严格符合指定的指令，同时不改变无关内容。其次，我们提出了 FineEdit，一个基于该数据集训练的专用模型。实验结果表明，与 Gemini 相比，FineEdit 在直接编辑任务上提升了约 10% 的性能，有力地验证了其有效性。\\\\npaper_id:0', '\\\\nchunk_text:当前的指令驱动图像编辑方法（如 InstructPix2Pix）依赖于扩散模型中的 CLIP 文本编码器，在处理复杂场景时往往难以获得理想效果。为此，本文提出 SmartEdit，一种基于指令的创新图像编辑方法，利用多模态大型语言模型（MLLMs）增强其理解和推理能力。然而，直接集成 MLLM 仍然在复杂推理场景下面临挑战。为缓解这一问题，我们提出 双向交互模块（BIM），使输入图像与 MLLM 输出之间能够进行全面的双向信息交互。在训练过程中，我们首先引入感知数据以提升扩散模型的感知和理解能力，然后展示了少量复杂指令编辑数据如何有效激发 SmartEdit 处理复杂指令的能力。此外，我们构建了 Reason-Edit，一个专门用于复杂指令图像编辑的评测数据集。定量和定性实验结果均表明，SmartEdit 超越了现有方法，为复杂指令图像编辑的实际应用铺平了道路。\\\\npaper_id:1', '\\\\nchunk_text:本文对多模态大型语言模型（MLLMs）在医疗领域的应用进行了全面综述。作者讨论了医院在训练和部署医疗 LLMs 和 MLLMs 时面临的挑战，包括对海量医疗数据的需求、大规模计算资源的消耗以及医疗领域的特殊要求。文章还探讨了 MLLMs 在医疗报告生成、临床诊断、心理健康服务等方面的潜力，并展望了医疗 LLMs 和 MLLMs 的未来发展趋势及其在临床环境中的更好集成。\\\\npaper_id:2', '\\\\nchunk_text:本文提出了 LIME（Less Is More for MLLM Evaluation），一种精炼且高效的多模态大型语言模型（MLLMs）评估基准。作者设计了一条半自动化管道来筛选掉无信息量的样本并消除答案泄露，专注于需要图像理解的任务。实验表明，LIME 减少了 76% 的样本数量，并将评估时间缩短了 77%，同时能够更有效地区分不同模型的能力。此外，文章讨论了传统自动化评估指标（如 CIDEr）在评估 MLLMs 生成字幕性能时的不足之处。\\\\npaper_id:3', '\\\\nchunk_text:本文对多模态大型语言模型（MLLMs）在图像字幕生成、视觉问答和推理等任务上的性能进行了综合评估。作者指出 MLLMs 在这些任务中面临的挑战，并提出新的评测方法以更好地衡量其性能。此外，文章探讨了 MLLMs 在医疗诊断和治疗等领域的潜在应用。\\\\npaper_id:4', '\\\\nchunk_text:本文讨论了当前多模态大型语言模型（MLLMs）在理解和推理任务上的挑战和局限性。作者提出了一些新的方法来提升 MLLMs 在这些任务中的表现，包括采用更先进的架构和训练技术。此外，文章还强调了 MLLMs 在图像编辑和医疗诊断等应用场景中的潜力。\\\\npaper_id:5', '\\\\nchunk_text:本文提出了一种基于多模态大型语言模型（MLLMs）的全新图像编辑方法。作者提出 双向交互模块（BIM），以增强 
MLLMs 在图像编辑任务中的理解和推理能力。同时，作者构建了一个名为 Reason-Edit 的新评测数据集，专门用于复杂指令图像编辑。实验结果表明，该方法优于以往的方法，为复杂指令图像编辑的实际应用铺平了道路。\\\\npaper_id:6', '\\\\nchunk_text:本文讨论了当前多模态大型语言模型（MLLMs）在图像字幕生成、视觉问答和推理等任务中面临的挑战和局限性。作者提出新的评测方法，以更有效地衡量 MLLMs 的表现，并强调了构建更高效、更准确的基准测试的必要性。此外，文章还探讨了 MLLMs 在医疗诊断和治疗等应用中的潜力。\\\\npaper_id:7', '\\\\nchunk_text:本文对大型语言模型（LLMs）和多模态大型语言模型（MLLMs）在医疗领域的应用进行了全面综述。作者讨论了医院在训练和部署医疗 LLMs 和 MLLMs 时面临的挑战，包括对大量医疗数据的需求、庞大的计算资源消耗以及医疗领域的特殊要求。文章还探讨了 MLLMs 在医疗报告生成、临床诊断、心理健康服务等方面的潜力，并展望了医疗 LLMs 和 MLLMs 在临床环境中的未来发展方向。\\\\npaper_id:8', '\\\\nchunk_text:本文介绍了 FineEdit，一个用于增强大型语言模型（LLMs）编辑性能的专用模型。作者提出了一种双重方法，包括构建高质量的基准数据集 InstrEditBench，以及在该数据集上训练 FineEdit 模型。实验结果表明，FineEdit 在直接编辑任务上取得了显著的改进，验证了其有效性。文章最后讨论了 FineEdit 在文本生成、代码编辑和数据库管理等应用场景中的潜力。\\\\npaper_id:9']\\n\\nOutput only the JSON structure, no additional text or explanations. \\n\\n\"}]"
      ]
     },
     "execution_count": 23,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "papers = [f'\\nchunk_text:{p[\"papers_abstract\"]}\\npaper_id:{i}' for i,p in enumerate(papers_info)]\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "addctitation_prompt = f'''\n",
    "您需要为学术陈述补充精准的文献引用。请严格遵循以下处理流程：\n",
    "\n",
    "<处理框架>\n",
    "1. 输入数据预处理：\n",
    "   <statements>\n",
    "   {statement}\n",
    "   </statements>\n",
    "   <chunk_texts>\n",
    "   {papers}\n",
    "   </chunk_texts>\n",
    "\n",
    "2. 语义匹配分析：\n",
    "   - 对每个statement进行语义解构，提取核心主张\n",
    "   - 为每个chunk_text建立特征向量，包含：研究结论、方法论、数据支撑三个维度\n",
    "   - 使用余弦相似度算法计算statement与chunk_text的语义匹配度（评分范围1-5）\n",
    "\n",
    "3. 相关性排序规则：\n",
    "   (1) 直接验证结论的chunk优先（评分5）\n",
    "   (2) 提供方法论支持的次之（评分4） \n",
    "   (3) 包含相关数据的再次（评分3）\n",
    "   (4) 仅部分概念重叠的排除（评分≤2）\n",
    "\n",
    "4. 引用生成规范：\n",
    "   - 选择匹配度≥3的前5个chunk\n",
    "   - 按paper_id数字升序排列引用\n",
    "   - 使用标准格式：<sup>{{paper_id:'X'}}</sup>\n",
    "   - 保持原statement语法完整性\n",
    "\n",
    "5. 异常处理：\n",
    "   - 当出现多个相同paper_id时保留最高匹配度的\n",
    "   - 无相关chunk时保持原样\n",
    "   - 发现数据矛盾时标注[需验证]\n",
    "</处理框架>\n",
    "\n",
    "<输出要求>\n",
    "生成包含以下元素的JSON对象：\n",
    "{{\n",
    "  \"statement\": \"补充后的完整语句\",\n",
    "  \"explanation\": [\n",
    "    {{\n",
    "      \"paper_id\": \"引用的文献ID\",\n",
    "      \"relevance_score\": 相关性评分,\n",
    "      \"match_points\": [\"具体匹配特征\"]\n",
    "    }}\n",
    "  ]\n",
    "}}\n",
    "示例：\n",
    "```json\n",
    "{{\n",
    "  \"statement\": \"深度学习模型需要大规模标注数据<sup>{{paper_id:'123'}}</sup>\",\n",
    "  \"explanation\": [\n",
    "    {{\n",
    "      \"paper_id\": \"123\", \n",
    "      \"relevance_score\": 5,\n",
    "      \"match_points\": [\"结论验证\",\"数据支撑\"]\n",
    "    }}\n",
    "  ]\n",
    "}}\n",
    "```\n",
    "\n",
    "请先在<analysis>中列出所有候选chunk的匹配度评分，再在<output>中生成最终结果。不得修改原语句的语义和结构。\n",
    "'''\n",
    "print(addctitation_prompt)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Prepare Data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 74,
   "metadata": {},
   "outputs": [],
   "source": [
    "statement = \"有人提出，解决已识别的挑战和局限性将显著推动多模态大型语言模型（MLLM）编辑领域的发展，从而释放这些模型在各类应用中的全部潜力。\"\n",
    "\n",
    "papers_info = [\n",
    "    {\n",
    "        \"relevance\": 2,\n",
    "        \"papers_abstract\": \"大型语言模型（LLMs）已经彻底改变了自然语言处理，但它们在需要精确、上下文感知修改的直接文本编辑任务上仍然存在困难。尽管像 ChatGPT 这样的模型在文本生成和分析方面表现出色，但其编辑能力通常较弱，往往只能解决表层问题，而无法处理更深层次的结构或逻辑不一致问题。本研究提出了一种双重方法来提升 LLM 的编辑性能。首先，我们提出了 InstrEditBench，一个高质量的基准数据集，包含 20,000 多个结构化编辑任务，涵盖维基百科文章、LaTeX 文档、代码和数据库领域特定语言（DSL）。InstrEditBench 采用创新的自动化流程生成，确保编辑严格符合指定的指令，同时不改变无关内容。其次，我们提出了 FineEdit，一个基于该数据集训练的专用模型。实验结果表明，与 Gemini 相比，FineEdit 在直接编辑任务上提升了约 10% 的性能，有力地验证了其有效性。\"\n",
    "    },\n",
    "    {\n",
    "        \"relevance\": 8,\n",
    "        \"papers_abstract\": \"当前的指令驱动图像编辑方法（如 InstructPix2Pix）依赖于扩散模型中的 CLIP 文本编码器，在处理复杂场景时往往难以获得理想效果。为此，本文提出 SmartEdit，一种基于指令的创新图像编辑方法，利用多模态大型语言模型（MLLMs）增强其理解和推理能力。然而，直接集成 MLLM 仍然在复杂推理场景下面临挑战。为缓解这一问题，我们提出 双向交互模块（BIM），使输入图像与 MLLM 输出之间能够进行全面的双向信息交互。在训练过程中，我们首先引入感知数据以提升扩散模型的感知和理解能力，然后展示了少量复杂指令编辑数据如何有效激发 SmartEdit 处理复杂指令的能力。此外，我们构建了 Reason-Edit，一个专门用于复杂指令图像编辑的评测数据集。定量和定性实验结果均表明，SmartEdit 超越了现有方法，为复杂指令图像编辑的实际应用铺平了道路。\"\n",
    "    },\n",
    "    {\n",
    "        \"relevance\": 5,\n",
    "        \"papers_abstract\": \"本文对多模态大型语言模型（MLLMs）在医疗领域的应用进行了全面综述。作者讨论了医院在训练和部署医疗 LLMs 和 MLLMs 时面临的挑战，包括对海量医疗数据的需求、大规模计算资源的消耗以及医疗领域的特殊要求。文章还探讨了 MLLMs 在医疗报告生成、临床诊断、心理健康服务等方面的潜力，并展望了医疗 LLMs 和 MLLMs 的未来发展趋势及其在临床环境中的更好集成。\"\n",
    "    },\n",
    "    {\n",
    "        \"relevance\": 7,\n",
    "        \"papers_abstract\": \"本文提出了 LIME（Less Is More for MLLM Evaluation），一种精炼且高效的多模态大型语言模型（MLLMs）评估基准。作者设计了一条半自动化管道来筛选掉无信息量的样本并消除答案泄露，专注于需要图像理解的任务。实验表明，LIME 减少了 76% 的样本数量，并将评估时间缩短了 77%，同时能够更有效地区分不同模型的能力。此外，文章讨论了传统自动化评估指标（如 CIDEr）在评估 MLLMs 生成字幕性能时的不足之处。\"\n",
    "    },\n",
    "    {\n",
    "        \"relevance\": 3,\n",
    "        \"papers_abstract\": \"本文对多模态大型语言模型（MLLMs）在图像字幕生成、视觉问答和推理等任务上的性能进行了综合评估。作者指出 MLLMs 在这些任务中面临的挑战，并提出新的评测方法以更好地衡量其性能。此外，文章探讨了 MLLMs 在医疗诊断和治疗等领域的潜在应用。\"\n",
    "    },\n",
    "    {\n",
    "        \"relevance\": 4,\n",
    "        \"papers_abstract\": \"本文讨论了当前多模态大型语言模型（MLLMs）在理解和推理任务上的挑战和局限性。作者提出了一些新的方法来提升 MLLMs 在这些任务中的表现，包括采用更先进的架构和训练技术。此外，文章还强调了 MLLMs 在图像编辑和医疗诊断等应用场景中的潜力。\"\n",
    "    },\n",
    "    {\n",
    "        \"relevance\": 6,\n",
    "        \"papers_abstract\": \"本文提出了一种基于多模态大型语言模型（MLLMs）的全新图像编辑方法。作者提出 双向交互模块（BIM），以增强 MLLMs 在图像编辑任务中的理解和推理能力。同时，作者构建了一个名为 Reason-Edit 的新评测数据集，专门用于复杂指令图像编辑。实验结果表明，该方法优于以往的方法，为复杂指令图像编辑的实际应用铺平了道路。\"\n",
    "    },\n",
    "    {\n",
    "        \"relevance\": 3,\n",
    "        \"papers_abstract\": \"本文讨论了当前多模态大型语言模型（MLLMs）在图像字幕生成、视觉问答和推理等任务中面临的挑战和局限性。作者提出新的评测方法，以更有效地衡量 MLLMs 的表现，并强调了构建更高效、更准确的基准测试的必要性。此外，文章还探讨了 MLLMs 在医疗诊断和治疗等应用中的潜力。\"\n",
    "    },\n",
    "    {\n",
    "        \"relevance\": 1,\n",
    "        \"papers_abstract\": \"本文对大型语言模型（LLMs）和多模态大型语言模型（MLLMs）在医疗领域的应用进行了全面综述。作者讨论了医院在训练和部署医疗 LLMs 和 MLLMs 时面临的挑战，包括对大量医疗数据的需求、庞大的计算资源消耗以及医疗领域的特殊要求。文章还探讨了 MLLMs 在医疗报告生成、临床诊断、心理健康服务等方面的潜力，并展望了医疗 LLMs 和 MLLMs 在临床环境中的未来发展方向。\"\n",
    "    },\n",
    "    {\n",
    "        \"relevance\": 9,\n",
    "        \"papers_abstract\": \"本文介绍了 FineEdit，一个用于增强大型语言模型（LLMs）编辑性能的专用模型。作者提出了一种双重方法，包括构建高质量的基准数据集 InstrEditBench，以及在该数据集上训练 FineEdit 模型。实验结果表明，FineEdit 在直接编辑任务上取得了显著的改进，验证了其有效性。文章最后讨论了 FineEdit 在文本生成、代码编辑和数据库管理等应用场景中的潜力。\"\n",
    "    }\n",
    "]\n",
    "\n",
    "papers = [p[\"papers_abstract\"] for p in papers_info]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 82,
   "metadata": {},
   "outputs": [],
   "source": [
    "write_result(\"statement\", statement)\n",
    "write_result(\"papers\", papers)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "for p in papers:\n",
    "    prompt = f'''\n",
    "你的任务是评估一篇学术论文的内容是否能作为给定声明的有效引文支持。请严格遵循以下步骤进行分析和判断：  \n",
    "\n",
    "首先，仔细阅读以下声明内容：  \n",
    "<statement>  \n",
    "{statement}  \n",
    "</statement>  \n",
    "\n",
    "然后，阅读完整的论文节选内容：  \n",
    "<paper_text>  \n",
    "{p}  \n",
    "</paper_text>  \n",
    "\n",
    "评估时需应用以下标准：  \n",
    "1. **相关性**：论文是否明确讨论与声明直接相关的内容  \n",
    "2. **支持力度**：论文中的证据是否足够强有力地支持声明（例如：实验数据理论推导观点陈述）  \n",
    "3. **上下文一致性**：论文结论的适用范围是否与声明的应用场景一致  \n",
    "\n",
    "请在<分析>标签中完成以下内容：  \n",
    "- 列出论文中所有可能相关的段落（按出现顺序编号）  \n",
    "- 对每个段落进行评分（0-5分，0=无关，5=完全支持）  \n",
    "- 说明评分理由及其与声明的逻辑关联  \n",
    "\n",
    "最后，在<判断>标签中给出最终结论：  \n",
    "- 若存在至少一个段落评分≥4分，则输出\"可支持\"  \n",
    "- 若最高评分≤3分，则输出\"需谨慎引用\"  \n",
    "- 若所有段落评分=0分，则输出\"不可支持\"  \n",
    "\n",
    "在<依据>标签中：  \n",
    "- 引用具体段落编号  \n",
    "- 说明该段落为何符合/不符合评估标准  \n",
    "- 当结论为\"需谨慎引用\"时，必须指出证据的局限性  \n",
    "\n",
    "示例格式：  \n",
    "<分析>  \n",
    "[1] \"论文段落1...\"  \n",
    "评分：4/5  \n",
    "理由：提供了实验数据支持声明的X部分，但未涵盖Y条件...  \n",
    "</分析>  \n",
    "<判断>可支持</判断>  \n",
    "<依据>  \n",
    "段落[1]通过实验数据直接验证了声明中的X机制（标准2），但其样本局限在A场景（标准3）...  \n",
    "</依据>  \n",
    "\n",
    "现在开始评估，请确保：  \n",
    "1. 不自行补充论文未明确陈述的内容  \n",
    "2. 不假设作者意图  \n",
    "3. 对间接证据保持审慎态度  \n",
    "'''\n",
    "    print(prompt)\n",
    "    print(\"-\"*100)"
   ]
  },
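  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical helper (an illustrative sketch, not part of the existing pipeline):\n",
    "# once the model answers the evaluation prompt above, the final verdict can be\n",
    "# pulled out of the <判断>...</判断> tag with a simple regex.\n",
    "import re\n",
    "\n",
    "def parse_verdict(response: str):\n",
    "    \"\"\"Return the verdict inside <判断>...</判断>, or None if absent.\"\"\"\n",
    "    m = re.search(r\"<判断>(.*?)</判断>\", response, re.S)\n",
    "    return m.group(1).strip() if m else None\n",
    "\n",
    "parse_verdict(\"<分析>...</分析>\\n<判断>可支持</判断>\")  # -> '可支持'\n"
   ]
  },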
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# BM25"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "[nltk_data] Downloading package stopwords to\n",
      "[nltk_data]     C:\\Users\\17981\\AppData\\Roaming\\nltk_data...\n",
      "[nltk_data]   Package stopwords is already up-to-date!\n"
     ]
    }
   ],
   "source": [
    "import math\n",
    "from collections import defaultdict\n",
    "import nltk\n",
    "from nltk.corpus import stopwords\n",
    "from nltk.stem import PorterStemmer\n",
    "\n",
    "nltk.download('stopwords')\n",
    "nltk.download('punkt')  # nltk.word_tokenize requires the punkt tokenizer models\n",
    "\n",
    "class BM25:\n",
    "    def __init__(self, k1=1.5, b=0.75):\n",
    "        self.k1 = k1\n",
    "        self.b = b\n",
    "        self.documents = []\n",
    "        self.avgdl = 0\n",
    "        self.doc_freqs = defaultdict(int)\n",
    "        self.idf = {}\n",
    "        self.stemmer = PorterStemmer()\n",
    "        self.stop_words = set(stopwords.words('english'))\n",
    "        \n",
    "    def preprocess(self, text):\n",
    "        # Tokenize, drop stop words, and apply Porter stemming\n",
    "        tokens = nltk.word_tokenize(text.lower())\n",
    "        filtered = [self.stemmer.stem(t) for t in tokens \n",
    "                   if t.isalnum() and t not in self.stop_words]\n",
    "        return filtered\n",
    "    \n",
    "    def add_document(self, document):\n",
    "        \"\"\"Add a document to the index.\"\"\"\n",
    "        processed = self.preprocess(document)\n",
    "        self.documents.append(processed)\n",
    "        \n",
    "        # Update document frequencies\n",
    "        unique_terms = set(processed)\n",
    "        for term in unique_terms:\n",
    "            self.doc_freqs[term] += 1\n",
    "        \n",
    "    def finalize(self):\n",
    "        \"\"\"Finalize the index: compute average length and IDF.\"\"\"\n",
    "        num_docs = len(self.documents)\n",
    "        self.avgdl = sum(len(d) for d in self.documents) / num_docs\n",
    "        \n",
    "        # Compute IDF for each term\n",
    "        for term, freq in self.doc_freqs.items():\n",
    "            idf = math.log((num_docs - freq + 0.5) / (freq + 0.5) + 1)\n",
    "            self.idf[term] = idf\n",
    "            \n",
    "    def get_score(self, query, doc_index):\n",
    "        \"\"\"Compute the BM25 score of a single document for the query.\"\"\"\n",
    "        doc = self.documents[doc_index]\n",
    "        doc_len = len(doc)\n",
    "        score = 0.0\n",
    "        \n",
    "        # Preprocess the query\n",
    "        q_terms = self.preprocess(query)\n",
    "        \n",
    "        # Accumulate each query term's contribution\n",
    "        for term in set(q_terms):\n",
    "            if term not in self.doc_freqs:\n",
    "                continue\n",
    "                \n",
    "            tf = doc.count(term)\n",
    "            numerator = tf * (self.k1 + 1)\n",
    "            denominator = tf + self.k1 * (1 - self.b + \n",
    "                        self.b * (doc_len / self.avgdl))\n",
    "                        \n",
    "            score += self.idf[term] * (numerator / denominator)\n",
    "            \n",
    "        return score\n",
    "    \n",
    "    def get_scores(self, query):\n",
    "        \"\"\"Score every document for the query.\"\"\"\n",
    "        scores = []\n",
    "        for i in range(len(self.documents)):\n",
    "            score = self.get_score(query, i)\n",
    "            scores.append(score)\n",
    "            \n",
    "        return scores"
   ]
  },
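  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`BM25.get_score` above implements the standard Okapi BM25 scoring function, using the +1-smoothed IDF variant (as in Lucene) so that IDF never goes negative:\n",
    "\n",
    "$$\\mathrm{score}(q,d)=\\sum_{t\\in q}\\mathrm{IDF}(t)\\cdot\\frac{\\mathrm{tf}(t,d)\\,(k_1+1)}{\\mathrm{tf}(t,d)+k_1\\left(1-b+b\\cdot\\frac{|d|}{\\mathrm{avgdl}}\\right)},\\qquad\\mathrm{IDF}(t)=\\ln\\left(\\frac{N-n_t+0.5}{n_t+0.5}+1\\right)$$\n",
    "\n",
    "where $N$ is the corpus size, $n_t$ is the number of documents containing term $t$, and $\\mathrm{avgdl}$ is the average document length. $k_1$ controls term-frequency saturation and $b$ controls the strength of length normalization."
   ]
  },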
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_bm25_scores(query, papers, k1=1.5, b=0.75):\n",
    "    bm25 = BM25(k1=k1, b=b)\n",
    "\n",
    "    for doc in papers:\n",
    "        bm25.add_document(doc)\n",
    "    # Build the index\n",
    "    bm25.finalize()\n",
    "    # Score the query against every document\n",
    "    bm25_scores = bm25.get_scores(query)\n",
    "    # Min-max normalization\n",
    "    min_score = min(bm25_scores)\n",
    "    max_score = max(bm25_scores)\n",
    "\n",
    "    if max_score != min_score:\n",
    "        normalized_scores = [(score - min_score) / (max_score - min_score) for score in bm25_scores]\n",
    "    else:\n",
    "        # If all scores are equal, return zeros to avoid division by zero\n",
    "        normalized_scores = [0 for _ in bm25_scores]\n",
    "\n",
    "    return normalized_scores\n"
   ]
  },
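  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Quick sanity check of get_bm25_scores on a tiny made-up English corpus\n",
    "# (illustrative only; these toy documents are not part of the citation pipeline).\n",
    "toy_docs = [\n",
    "    \"Text-to-SQL systems translate natural language questions into SQL queries\",\n",
    "    \"BM25 is a classic ranking function used in information retrieval\",\n",
    "    \"Multimodal large language models combine vision and text understanding\",\n",
    "]\n",
    "toy_scores = get_bm25_scores(\"ranking documents with BM25 for retrieval\", toy_docs)\n",
    "print(toy_scores)  # the BM25 document should receive the top normalized score of 1.0\n"
   ]
  },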
  {
   "cell_type": "code",
   "execution_count": 83,
   "metadata": {},
   "outputs": [],
   "source": [
    "import math\n",
    "from collections import defaultdict\n",
    "import jieba\n",
    "import os\n",
    "\n",
    "class ChineseBM25:\n",
    "    def __init__(self, k1=1.5, b=0.75):\n",
    "        self.k1 = k1\n",
    "        self.b = b\n",
    "        self.documents = []\n",
    "        self.avgdl = 0\n",
    "        self.doc_freqs = defaultdict(int)\n",
    "        self.idf = {}\n",
    "        \n",
    "        # Load Chinese stop words\n",
    "        self.stop_words = self._load_stop_words()\n",
    "        \n",
    "    def _load_stop_words(self):\n",
    "        \"\"\"Load the Chinese stop-word list if the file exists.\"\"\"\n",
    "        stop_words = set()\n",
    "        # Adjust this path to wherever the stop-word file actually lives\n",
    "        stop_words_file = \"chinese_stop_words.txt\"  \n",
    "        \n",
    "        if os.path.exists(stop_words_file):\n",
    "            with open(stop_words_file, 'r', encoding='utf-8') as f:\n",
    "                for line in f:\n",
    "                    stop_words.add(line.strip())\n",
    "        return stop_words\n",
    "    \n",
    "    def preprocess(self, text):\n",
    "        \"\"\"Chinese preprocessing: jieba segmentation + stop-word removal\"\"\"\n",
    "        # Segment the text with jieba\n",
    "        tokens = jieba.cut(text)\n",
    "        # Filter out stop words and whitespace-only tokens\n",
    "        filtered = [t for t in tokens \n",
    "                   if t.strip() and t not in self.stop_words]\n",
    "        return filtered\n",
    "    \n",
    "    def add_document(self, document):\n",
    "        \"\"\"Add a document to the index.\"\"\"\n",
    "        processed = self.preprocess(document)\n",
    "        self.documents.append(processed)\n",
    "        \n",
    "        # Update document frequencies\n",
    "        unique_terms = set(processed)\n",
    "        for term in unique_terms:\n",
    "            self.doc_freqs[term] += 1\n",
    "        \n",
    "    def finalize(self):\n",
    "        \"\"\"Finalize the index: compute average length and IDF.\"\"\"\n",
    "        num_docs = len(self.documents)\n",
    "        if num_docs == 0:\n",
    "            return\n",
    "            \n",
    "        self.avgdl = sum(len(d) for d in self.documents) / num_docs\n",
    "        \n",
    "        # Compute IDF for each term\n",
    "        for term, freq in self.doc_freqs.items():\n",
    "            idf = math.log((num_docs - freq + 0.5) / (freq + 0.5) + 1)\n",
    "            self.idf[term] = idf\n",
    "            \n",
    "    def get_score(self, query, doc_index):\n",
    "        \"\"\"Compute the BM25 score of a single document for the query.\"\"\"\n",
    "        doc = self.documents[doc_index]\n",
    "        doc_len = len(doc)\n",
    "        score = 0.0\n",
    "        \n",
    "        # Preprocess the query\n",
    "        q_terms = self.preprocess(query)\n",
    "        \n",
    "        # Accumulate each query term's contribution\n",
    "        for term in set(q_terms):\n",
    "            if term not in self.doc_freqs:\n",
    "                continue\n",
    "                \n",
    "            tf = doc.count(term)\n",
    "            numerator = tf * (self.k1 + 1)\n",
    "            denominator = tf + self.k1 * (1 - self.b + \n",
    "                        self.b * (doc_len / self.avgdl))\n",
    "                        \n",
    "            score += self.idf[term] * (numerator / denominator)\n",
    "            \n",
    "        return score\n",
    "    \n",
    "    def get_scores(self, query):\n",
    "        \"\"\"Score every document for the query.\"\"\"\n",
    "        scores = []\n",
    "        for i in range(len(self.documents)):\n",
    "            score = self.get_score(query, i)\n",
    "            scores.append(score)\n",
    "            \n",
    "        return scores\n",
    "\n",
    "def get_chinese_bm25_scores(statement, papers, k1=1.5, b=0.75):\n",
    "    \"\"\"Compute BM25 scores and min-max normalize them.\"\"\"\n",
    "    # Initialize the scorer\n",
    "    bm25 = ChineseBM25(k1=k1, b=b)\n",
    "    \n",
    "    # Add documents\n",
    "    for doc in papers:\n",
    "        bm25.add_document(doc)\n",
    "        \n",
    "    # Build the index\n",
    "    bm25.finalize()\n",
    "    \n",
    "    # Score the query\n",
    "    bm25_scores = bm25.get_scores(statement)\n",
    "    \n",
    "    # Min-max normalization\n",
    "    min_score = min(bm25_scores) if bm25_scores else 0\n",
    "    max_score = max(bm25_scores) if bm25_scores else 0\n",
    "    \n",
    "    # If max equals min, avoid division by zero\n",
    "    if max_score != min_score:\n",
    "        normalized_scores = [(score - min_score) / (max_score - min_score) \n",
    "                           for score in bm25_scores]\n",
    "    else:\n",
    "        normalized_scores = [0 for _ in bm25_scores]\n",
    "    \n",
    "    return normalized_scores"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 84,
   "metadata": {},
   "outputs": [],
   "source": [
    "bm25_scores = get_chinese_bm25_scores(statement, papers)\n",
    "scored_papers = list(zip(bm25_scores, papers))\n",
    "\n",
    "# Sort by score, highest first\n",
    "scored_papers.sort(key=lambda x: x[0], reverse=True)\n",
    "write_result(\"BM25\", scored_papers)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[{'id': 454846633586283990,\n",
       "  'distance': 0.6550747752189636,\n",
       "  'entity': {'paper_id': '64a29654d68f896efa29af31',\n",
       "   'paper_title': 'Constraint Reasoning Embedded Structured Prediction.',\n",
       "   'chunk_id': 10,\n",
       "   'chunk_text': '# 5.3 Text2SQL Generation\\nTask Definition. Formatted data such as travel records and stock market transactions are stored in relational databases. Currently, accessing the database requires a data scientist who masters the SQL query language. Our task is to automatically synthesize SQL queries from natural language sentences using machine learning. Compared with the data expert approach, SQL query generation requires deeper reasoning across the structure of the database, the semantics of the structured query language, and the understanding of natural language. As shown in Figure 11, the input of the text2SQL generation is a sentence that describes the query in natural language and the table headers in the relational database. The output is a SQL query with the following structure:  \\n\\nSELECT agg-op sel-col WHERE (cond-col cond-op cond-val) AND ...  \\n\\nHere, SELECT and WHERE are keywords in the SQL language. What we need to predict are: (1) the aggregation operator $\\\\mathsf{a g g-o p}$ , which chooses among the set {empty, COUNT, MIN, MAX, SUM, AVG }; (2) the column name in selection sel-col and (3) the column name in condition cond-col , both of which are chosen from the table headers; (4) the conditional operator cond-op , which is in $\\\\{=,<,>\\\\}$ ; (5) the conditional value cond-val , which is assumed to be a sub-sequence of the given query. Here, one bracket pair () represents one conditional statement. The SQL query may have multiple conditions, which are denoted above by “ ... ”. Figure 11 displays this SQL query:\\n\\n# SELECT COUNT \"School\" WHERE \"No.\" = \"3\"\\nHere agg-op is COUNT ;sel-col is “school”, which is a column name from the table headers. One cond-col is “No.”, which also comes from the table headers. The cond-op is “=”. The cond-val is “3”, which we assume is from the input query. This example has one condition but multiple conditions are allowed.  \\n\\nDefinition of Constraints. 
Existing generative neural models for this task are not guaranteed to generate a query that follows the grammar of a SQL query. To avoid grammar violations, we compile a set of common SQL grammars as constraints into the Core-Sp module. The Core-Sp module will ensure that all the generated SQL queries follow the grammatical constraints. Our constraints are defined on the operators, namely the conditional operator cond-op and the aggregation operator agg-op . The domains of these operators are dependent upon the data types of the entities (namely, cond-col and sel-col )they operate on. Consider the previous example. The agg-op can only take values between $\\\\{\\\\mathrm{empty,~\\\\coUNT}\\\\}$ , because the sel-col is “school”, which is of the string type. More precisely, let $s$ be a column header (the value of sel-col or cond-col ). We define $F_{a}(s)$ as  \\n\\nInput Table:   \\n\\n\\n<html><body><table><tr><td></td><td>Player</td><td>No.</td><td>Position</td><td>School</td></tr><tr><td>0</td><td>Antonio</td><td>21</td><td>Guard-Forward</td><td>Duke</td></tr><tr><td>1</td><td>Voshon</td><td>2</td><td>Guard</td><td>Minnesota</td></tr><tr><td>2</td><td>Marin</td><td>3</td><td>Guard-Forward</td><td>Butler CC</td></tr></table></body></html>\\n\\n# Input Query:\\nHow many schools did player number 3 play at?\\n\\n# Output SQL Query:\\nFigure 11: An example for the Text2SQL generation task. The input is the text query “How many schools did player number 3 play at?” and the table header “ Player, No., Position, School ” from the relational database. The output should be the SQL query: SELECT COUNT \"School\" WHERE \"No. $\"~=~\"3\"$ .  \\n\\nthe set of aggregation operators agg-op that can be associated with $s$ , and $F_{c}(s)$ as the set of condition operators cond-op that can be associated with $s$ . 
That is:  \\n\\n$$\\n\\\\begin{array}{r l}&{F_{a}(s)=\\\\left\\\\{\\\\begin{array}{l l}{\\\\{\\\\mathrm{empty~,~\\\\varsigma0UNT,~\\\\forall\\\\mathrm{IIN},~\\\\forall\\\\mathrm{IAX},~\\\\forall\\\\mathrm{II},~\\\\mathrm{AVG}\\\\}}}&{\\\\mathrm{if~}s\\\\mathrm{~of~is~numeric~type}}\\\\\\\\ {\\\\{\\\\mathrm{empty~,~\\\\varsigma0UNT}\\\\}}&{\\\\mathrm{if~}s\\\\mathrm{~of~is~string~type}}\\\\end{array}\\\\right.}\\\\\\\\ &{F_{c}(s)=\\\\left\\\\{\\\\begin{array}{l l}{\\\\{\\\\mathrm{=,~\\\\displaystyle>,~\\\\varsigma\\\\}}}&{\\\\mathrm{if~}s\\\\mathrm{~is~of~numeric~type}}\\\\\\\\ {\\\\{\\\\mathrm{=}\\\\}}&{\\\\mathrm{if~}s\\\\mathrm{~is~of~string~type}}\\\\end{array}\\\\right.}\\\\end{array}\\n$$  \\n\\nWe also introduce dataype constraints, which are defined as:  \\n\\n$$\\n\\\\begin{array}{r l}&{\\\\mathtt{s e l-c o l}=s\\\\Rightarrow\\\\mathtt{a g g-o p}\\\\in F_{a}(s),}\\\\\\\\ &{\\\\mathtt{c o n d-c o l}=s\\\\Rightarrow\\\\mathtt{c o n d-o p}\\\\in F_{c}(s).}\\\\end{array}\\n$$  \\n\\nModel Structure. We embed the Core-Sp module to SQLova (Hwang et al., 2019), the state-of-the-art neural network for text2SQL generation. SQLova has a sequence-tosequence architecture. It first encodes a natural language sentence and the table headers into a high-dimensional vector. Then the decoder of SQLova decodes the hidden representation into the predictions of various entities in the SQL query. SQLova first determines the number of conditions in the SQL query and then fills in the ( cond-col ,cond-op ,cond-val ) for each condition. The operators agg-op, cond-op are predicted as a classification task from a fixed set of operators. Column names cond-col, sel-col are predicted from the set of table headers in the relational database. The cond-val is predicted by a pointer neural network which points at a span of the input natural language sentence. The selected span of the query is used as the cond-val (Dong and Lapata, 2018).  \\n\\nMDD Construction. 
The associated MDD that encodes the constraints for text2SQL generation is similar to the MDD for if-then program synthesis. The MDD is split into layers and every two layers form a group. One two-layer group is used to enforce constraints on an operator-column name pair. The operator-column name pair can be $\\\\mathsf{a g g-o p}$ and sel-col ,or can be cond-op and cond-col . Note that there can be only one group of $\\\\mathsf{a g g-o p}$ and sel-col and more than one group of cond-op and cond-col . In the first layer of the group, the column name is determined. In the second layer, the invalid operators are ruled out based on the type of the column name selected in the first layer. The two-layer group is copied several times because the SQL query can contain multiple conditions.  \\n\\nConstraint Reasoning Embedded Structured Prediction',\n",
       "   'original_filename': 'Journal_Paper_Meta_Data_Journal_of_Machine_Learning_Research_with_whole_text.db'}},\n",
       " {'id': 454845641360425782,\n",
       "  'distance': 0.6537715196609497,\n",
       "  'entity': {'paper_id': '6461b9c9d68f896efad43133',\n",
       "   'paper_title': 'Interactive Text-to-SQL Generation Via Editable Step-by-Step Explanations',\n",
       "   'chunk_id': 1,\n",
       "   'chunk_text': '# 2 Related Work\\n\\n# 2.1 Text-to-SQL Generation\\nNatural language interfaces have long been recognized as a way to expand access to databases ( Hendrix et al. ,1978 ).The construction of several large text-to-SQL datasets, such as WikiSQL ( Zhong et al. ,2017 ) and Spider ( Yu et al. ,2018 ), has enabled the adoption of deep learning models in this task, achieving unprecedented performance in recent years ( Rubin and Berant ,2021 ;Wang et al. ,2020a ;Scholak et al. ,2021 ;Yu et al. ,2020 ;Hwang et al. ,2019 ). Our technique is based on the recent success of neural text-to-SQL models. Unlike existing models that perform end-to-end SQL generation, we propose a new interaction mechanism for users to validate and refine generated queries through step-by-step explanations.  \\n\\nAs the first step to demonstrate the feasibility of our approach, we focus on single-turn SQL generation ( Yu et al. ,2018 ) in this work. There has also been recent work that supports multi-turn SQL generation ( Yu et al. ,2019a ,b;Guo et al. ,2021 ), where a sequence of interdependent queries are expressed in multiple utterances in a dialog. Models designed for multi-turn SQL generation typically need to reason about the dialog context and effectively encode the historical queries ( Wang et al. ,2021 ;Hui et al. ,2021 ;Zhang et al. ,2019 ;Cai and Wan ,2020 ;Wang et al. ,2020b ). Our approach can be extended to support multi-turn SQL generation by initiating separate refinement sessions for individual queries while incorporating the contextual information of previous queries into explanation generation and text-to-clause generation.\\n\\n# 2.2 Interactive Semantic Parsing for SQL\\nRecently, there has been a growing interest in interactive approaches that elicit user feedback to guide SQL generation. Iyer et al. (2017 ) proposed to allow users to flag incorrect queries and continuously retrain the model. Both DIY ( Narechania et al. 
,2021 ) and NaLIR ( Li and Jagadish ,2014a ,b) enable users to select alternative values or subexpressions to fix an incorrect SQL query. PIIA ( Li et al. ,2020 ), MISP ( Yao et al. ,2019 ), and DialSQL ( Gur et al. ,2018 ) proactively ask for user feedback via multiple-choice questions. A common limitation of these methods is that they only solicit feedback in constrained forms, hindering their flexibility and effectiveness in addressing the variability of SQL errors. In contrast, our approach allows more flexible feedback through direct edits to the explanations generated by the model.  \\n\\nThe only work that supports open-ended user feedback in SQL generation is NL-EDIT ( Elgohary et al. ,2021 ). NL-EDIT is trained on SPLASH ( Elgohary et al. ,2020 ), a dataset of SQL errors and user feedback utterances. Given an incorrect query, NL-EDIT allows users to provide a clarification utterance. Based on the utterance, the model generates a sequence of edits to the SQL query. Incorporating feedback expressed in a completely free-text utterance is challenging for two reasons:  \\n\\n  \\nFigure 2: An Overview of Interactive SQL Generation and Refinement with Editable Step-by-Step Explanations  \\n\\n(1) the model needs to infer which part of the SQL query to fix; (2) the model needs to determine what changes are being requested. In contrast, STEPS asks users to directly edit an NL explanation and make corrections to the explanation. Comparing the initial explanation with the user-corrected explanation makes it easier to locate the part of a SQL query that needs to be changed and infer what change to make.  \\n\\nThe idea of SQL decomposition is similar to recent work that decomposes a user question to sub-questions on SPARQL ( Mo et al. ,2022 ). Their approach requires a crowd-sourced dataset to train a question decomposition model. In contrast, our rule-based method generates step-by-step explanations without the need for training a model. 
This also allows our system to map each entity in the explanation to the corresponding SQL element, making it easier for SQL correction (Sec. 3.2 ).\\n\\n# 2.3 Explaining SQL Queries in NL\\nOur approach is also related to prior work that generates NL explanations for SQL queries. Simitsis and Ioannidis (2009 ) argued that databases should “talk back” in human language so that users can verify results. Kokkalis et al. (2012 ) and Koutrika et al. (2010 ) used a graph-based SQL translation approach, where each query is represented as a graph and the explanation is generated by traversing the graph. Elgohary et al. (2021 ,2020 ) employed a template-based explanation approach, where they manually curated 57 templates for explanation generation. These approaches have limited capability to handle arbitrary SQL queries. To address this limitation, we propose a rule-based method to first explain terminal tokens (e.g., operators, keywords) and gradually compose them into a complete explanation based on the derivation rules in the SQL grammar. Another key difference is that none of the existing approaches supports editable explanations for SQL correction, which is a key feature to solicit user feedback in our approach.',\n",
       "   'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}},\n",
       " {'id': 454845681760490286,\n",
       "  'distance': 0.6474056243896484,\n",
       "  'entity': {'paper_id': '6535d747939a5f408295c649',\n",
       "   'paper_title': 'Benchmarking and Improving Text-to-SQL Generation under Ambiguity',\n",
       "   'chunk_id': 1,\n",
"   'chunk_text': '# 2 Background and Related Work\\nA Text-to-SQL model takes as input a question expressed as a natural language text x, and a database schema s comprising table and column names, and outputs an SQL program y which can be executed against the database to answer the user’s question. Figure 1 shows an example. The training data for the task comprises (text, schema, SQL) triplets spanning multiple distinct databases.  \\n\\n  \\nFigure 1: A Text-to-SQL system converts a user question to an SQL query, conditioned on the database schema and/or content.  \\n\\nBenchmarks. Popular benchmarks for the Text-to-SQL task are WikiSQL ( Zhong et al. ,2018 ) and SPIDER ( Yu et al. ,2018 ). A few others have been proposed recently to capture real-world scenarios, such as KaggleDBQA ( Lee et al. ,2021 ), SEDE ( Hazoom et al. ,2021 ), and EHRSQL ( Lee et al. ,2022 ). They all attach one SQL per text, though some of them mention the problem of ambiguity in real-world datasets. Dr. SPIDER ( Chang et al. ,2023 ), designed to test the robustness of existing models, perturbs either the text or schema of SPIDER but still assigns one SQL per text.  \\n\\nAmbiguity in SQL. Although ambiguity has been studied in other fields of NLP ( Pilault et al. ,2023 ;Li et al. ,2022 ;Futeral et al. ,2022 ), it has been unexplored in the context of semantic parsing. Ambiguity in SQL arising from related column names is discussed in ( Wang et al. ,2022 ), but they only consider column ambiguity. Their method of recognizing ambiguous queries depends on labeling words of the text and does not generalize to other kinds of ambiguity. To the best of our discernment, AmbiQT represents the first open benchmark for testing coverage of ambiguous alternatives.  \\n\\nDiverse Decoding. Prior work has critiqued the lack of meaningful diversity in beam-search outputs ( Finkel et al. ,2006 ;Gimpel et al. ,2013 ;Li et al. ,2016 ;Li and Jurafsky ,2016 ). In response, many fixes have been proposed. 
Some proposals attempt to restrict the tokens sampled, using strategies like Nucleus sampling ( Holtzman et al. ,2020 ), Truncated Sampling ( Hewitt et al. ,2022 ), and Typical Sampling ( Meister et al. ,2023 ), while some rely on Template-Based decoding ( Wiseman et al. ,2018 ;Zhang et al. ,2022 ;Fu et al. ,2023 ;Elgohary et al. ,2020 ;Awasthi et al. ,2022 ). A third approach is to generate a prefix with high diversity first, then generate the rest of the sentence with lower diversity. Narayan et al. (2022 ) follow this recipe but focus on incorporating diverse entity orders in text summarization.  \\n\\n<html><body><table><tr><td rowspan=\"2\">Kind of ambiguity</td><td rowspan=\"2\">Count</td><td colspan=\"3\">Example</td></tr><tr><td>Question Text</td><td>SQL#1</td><td>SQL#2</td></tr><tr><td>Column Ambiguity (C)</td><td>1240</td><td>List the ids of all students.</td><td>SELECT roll_number FROM students</td><td>SELECT admission_number FROM students</td></tr><tr><td>Table Ambiguity (T)</td><td>1417</td><td>How many singers do we have?</td><td>SELECT COUNT(*) FROM artist</td><td>SELECT COUNT(*) FROM performer</td></tr><tr><td>Join Ambiguity (J)</td><td>288</td><td>What are the makers and models?</td><td>SELECT maker, model FROM model</td><td>SELECT t2.maker, t1.model FROM model AS t1 JOIN model_maker AS t2 ON t1.model_id = t2.model_id</td></tr><tr><td>Precomputed Aggregates (P)</td><td>101</td><td>Find the average weight for each pet type.</td><td>SELECT AVG(weight), pettype FROM pets GROUP BY pettype</td><td>SELECT avg_weight, pettype FROM pets_weight</td></tr></table></body></html>\\n\\nTable 1: The AmbiQT benchmark. For each question, we list two valid SQL queries as per the schema. The schema is not shown here, but the ambiguity in it can be inferred based on the two SQL queries.\\n\\n# 3 AmbiQT: A Benchmark of Ambiguous Text-to-SQL Conversion\\nAmbiQT is constructed so that each text query has two distinct valid SQL interpretations. 
Motivated by our experience working with real-life databases, we designed AmbiQT to encompass four types of ambiguity. Each entry is designed so that both alternatives have a similar relevance to the question, and a well-calibrated decoding method is expected to rank them close by in their outputs.  \\n\\nWe create AmbiQT by modifying the SPIDER (Yu et al. ,2018 ) dataset, and use ChatGPT ( OpenAI ,2022 ) to aid with the creation. In each case, we modify the schema instead of the text as that provides greater control over the modification process. We explain the kinds of ambiguity in AmbiQT below and portray examples of each in Table 1 .  \\n\\nColumn Ambiguity (C). Unlike the SPIDER benchmark where column names usually appear verbatim in the question text (like born state for the column born_state ), when users unaware of the schema pose a natural question, they introduce column ambiguity ( Wang et al. ,2022 ). For example, “ What is the capacity of O2 Arena? ” could be ambiguous if the schema has separate columns for standing and seating capacity. Likewise, a query on the number of under-nourished children is ambiguous if we have different columns for “under-weight children” and “stunted growth in children”.  \\n\\nTo simulate column ambiguity, for each text $\\\\mathbf{x}$ , schema s, and SQL y in SPIDER, we prompt ChatGPT to generate two synonyms for each column name of s in a one-shot manner. Appendix A furnishes more details of the prompt. We then modify s by replacing $c$ with two columns $c_{1},c_{2}$ , and we use y to generate two queries $\\\\mathbf{y}_{1},\\\\mathbf{y}_{2}$ where all mentions of $c$ are replaced with $c_{1}$ in $\\\\mathbf{y}_{1}$ and with $c_{2}$ in $\\\\mathbf{y}_{2}$ . An example appears in the first row of Table 1 . We do not reuse $c$ because the SPIDER dataset often contains column names verbatim in the question, and that would violate our attempt at keeping the two options at similar relevance levels. 
We modify one column at a time and generate up to 3 examples from each original entry.  \\n\\nTable Ambiguity (T). Table name ambiguity is common in databases obtained by integrating multiple data sources, as in web tables ( Cafarella et al. ,2008 ;Pimplikar and Sarawagi ,2012 ). Here again, we prompt ChatGPT to generate two alternate names for each table. We then modify one SQL y to generate two candidates ${\\\\bf y}_{1},{\\\\bf y}_{2}$ as shown in Table 1 .  \\n\\nJoin Ambiguity (J). In production databases, a logical table is often vertically partitioned across several tables for efficient clustered access ( Stonebraker et al. ,2019 ). Column names overlapping across tables lead to Join Ambiguity. Suppose we have two tables: (1) person with columns id, name, email_address , and (2) person_details with columns id, postal_address, photo . A question asking for a person’s name and address is ambiguous on whether a JOIN with the person_details is necessary. We expose such ambiguity by modifying the schema as follows.  \\n\\nConsider a $(\\\\mathbf{x},\\\\mathbf{s},\\\\mathbf{y})$ triplet. Suppose y involves selecting two or more columns $c_{1},c_{2},\\\\ldots$ , not necessarily in the same order, from a table $t$ . Suppose further that $c_{1}$ is not a primary key of $t$ . We create a table called $t\\\\_c_{1}$ that includes just the primary key $pk_{t}$ of $t$ , and $c_{1}$ . The first alternative $\\\\mathbf{y}_{1}$ is $\\\\mathbf{y}$ and the second alternative $\\\\mathbf{y}_{2}$ uses a join over $t$ and $t\\\\_c_{1}$ , with everything else staying the same as y.  \\n\\n  \\nFigure 2: Beam Search works well when targeting only one output, but leads to superficial diversity, for example via different grouping and erroneous variants of column names.  \\n\\nPrecomputed Aggregates (P). This ambiguity is particularly common in data warehouses such as Data Commons which pre-aggregate certain variables. 
For instance, the “ total rice production ” of a state might refer to the column rice_production of state rather than a sum over it. Text-to-SQL models have a bias toward introducing a sum()...group-by clause every time total appears in the text. The non-aggregated alternative is usually missing in the top-$k$ options. We incorporate this ambiguity as follows.  \\n\\nFor each $(\\\\mathbf{x},\\\\mathbf{s},\\\\mathbf{y})$ , where $\\\\mathbf{y}$ has at least one aggregate, we construct a new table $t^{\\\\prime}$ . For each aggregate $\\\\mathcal{A}$ over column $c$ in y, we add to $t^{\\\\prime}$ the columns $\\\\mathcal{A}^{\\\\prime}\\\\_c$ for all $\\\\mathcal{A}^{\\\\prime}\\\\,\\\\in\\\\,\\\\{\\\\mathsf{avg},\\\\mathsf{sum},\\\\mathsf{min},\\\\mathsf{max}\\\\}$ , along with the columns grouped by in y. For count $(\\\\star)$ , we add a column called number . We get two gold queries: the original y and a second with the group-by replaced by a direct SELECT on $t^{\\\\prime}$ as shown in the example in Table 1 . We also support aggregates across multiple tables but skip the details here.',\n",
       "   'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}},\n",
       " {'id': 454845489174282794,\n",
       "  'distance': 0.632977306842804,\n",
       "  'entity': {'paper_id': '644744fb71ac66d2cbf9b886',\n",
       "   'paper_title': 'A Lightweight Constrained Generation Alternative for Query-focused Summarization',\n",
       "   'chunk_id': 1,\n",
"   'chunk_text': '# 2 Related Work\\nQuery-focused Summarization: To generate a query-focused summary, several studies used an additional query-attention mechanism. QR-BERTSUM-TL [13] incorporates query relevance scores into a pre-trained summarization model. Su et al. [29] propose merging the representation of an answer span predicted by a separate QA model into the Seq2Seq model’s training and inference process to enforce the summary’s coherence w.r.t. the query. QSG Transformer [23] suggests using a separate graph neural network model to learn per-token representations and fuse them to the Seq2Seq model to effectively generate a QFS. These mechanisms can be viewed as enforcing soft semantic constraints during the generation process, and require additional modules and parameters to function effectively. We opt for a different approach, i.e. explicitly enforcing lexical constraints during the generation process, without the additional machinery that is necessary to handle the soft semantic constraints.  \\n\\nConstrained Generation (or Conditional Generation) is a family of natural language generation (NLG) methods that aim to generate natural language including/excluding a set of specific words, i.e. lexical constraints. The NLG domain recipe leverages pre-trained large language models (LLM) finetuned on specific datasets [7]. However, as pointed out by Lu et al. [18], such models only finetuned in an end-to-end manner do not learn to follow the underlying constraints reliably even when supervised with large amounts of training examples. Therefore, a line of works [1, 10, 17, 18] in constrained generation proposes to explicitly modify the likelihood of next word prediction in the generation stage, such that the predefined lexical constraints can be better satisfied.',\n",
       "   'original_filename': 'Conf_Paper_Meta_Data_SIGIR2023_with_whole_text.db'}},\n",
       " {'id': 454845641378251576,\n",
       "  'distance': 0.6278348565101624,\n",
       "  'entity': {'paper_id': '6461b9c9d68f896efad43133',\n",
       "   'paper_title': 'Interactive Text-to-SQL Generation Via Editable Step-by-Step Explanations',\n",
       "   'chunk_id': 2,\n",
"   'chunk_text': '# 3 Approach\\nFig. 2 provides an overview of STEPS . Given a natural language (NL) question, STEPS invokes a text-to-SQL model to generate an initial SQL query. Then, it decomposes the generated SQL query into individual query clauses and re-orders them based on their execution order. Each clause is then translated into an NL description of the underlying data operation, which is then used to form a step-by-step explanation. By reading the NL explanation along with the query result, users can easily understand the behavior of the generated query and locate any errors, even if they are unfamiliar with SQL.  \\n\\nIf one step is incorrect, users can directly edit its explanation to specify the correct behavior. STEPS will then regenerate the clause based on the user-corrected explanation and update the SQL query, rather than regenerate the entire query from scratch. If multiple steps are incorrect, the user can add, remove, and modify all steps as needed.\\n\\n# 3.1 Rule-based SQL Explanation\\nTo generate explanations for arbitrarily complex SQL queries (e.g., a query with nested subqueries), we design a rule-based method to first decompose a query into individual clauses. Specifically, STEPS first parses a SQL query to its Abstract Syntax Tree (AST) based on the SQL grammar in Table 6 . Then, it traverses the AST to identify the subtree of each clause while preserving their hierarchical relations.  \\n\\nGiven the subtree of a clause, STEPS performs an in-order traversal and translates each leaf node (i.e., a terminal token in the grammar) to the corresponding NL description based on a set of translation rules (see Table 7 in the appendices). For example, SELECT is translated to “Return”, and Order By is translated to “Sort the records based on.” STEPS concatenates these descriptions to form a complete sentence as the explanation of the clause.  
\\n\\n  \\nFigure 3: An example of the explanation generation process  \\n\\nSince SQL engines follow a specific order to execute individual clauses in a query 2 , STEPS further reorders the clause explanations to reflect their execution order. We believe this is a more faithful representation of the query behavior and thus can help users better understand the underlying data operations, compared with rendering them based on the syntactic order of clauses. Fig. 3 shows an example translation.\\n\\n# 3.2 Text-to-Clause Generation\\nUsers make edits to the explanation produced by our system to make it consistent with their goal. Given these edits, STEPS uses a hybrid method to generate the corresponding SQL clause. For simple edits, such as replacing a column name, STEPS directly edits the original clause to fix the error using three direct transformation rules $(\\\\S\\\\ 3.2.1)$ . For more complex edits, STEPS uses a neural text-to-clause model to generate the clause based on the user-corrected explanation $(\\\\S\\\\ 3.2.2)$ .  \\n\\nThe hybrid method is inspired by the findings from our recent study ( Ning et al. ,2023 ). Specifically, a large portion of SQL generation errors are simple errors (e.g., incorrect column names and operators), which can be fixed with small edits. After SQL decomposition by our approach, many larger errors are further decomposed into a set of simpler errors, contained within separate clauses. Thus, it is not necessary to regenerate the entire clause to fix such errors. Furthermore, compared to using a large model, direct transformation is more computationally efficient. 
Our experiment shows that direct transformation is 22K times faster than the text-to-clause model (Table 4 ).\\n\\n# Algorithm 1: Direct transformation\\nInput: The original explanation $e_{o}$ ; the new edited explanation $e_{n}$ ; the original SQL clause $s$   \\nOutput: the updated SQL clause   \\n1 $C_{o}\\\\gets\\\\mathrm{CHUNK}(e_{o})$   \\n2 $C_{n}\\\\gets\\\\mathrm{CHUNK}(e_{n})$   \\n3 foreach $(c_{o},\\\\,c_{n})$ in $\\\\mathrm{ALIGN}(C_{o},C_{n})$ do   \\n4 // Replace   \\n5 if BOTHCOLUMN $(c_{o},\\\\,c_{n})$ or   \\n6 BOTHTABLE $(c_{o},\\\\,c_{n})$ or   \\n7 BOTHLITERAL $(c_{o},\\\\,c_{n})$ then   \\n8 $s\\\\gets s.\\\\mathrm{REPLACE}(c_{o},c_{n})$   \\n9 // Add   \\n10 else if $c_{o}$ is ∅ and ISCOLUMN $(c_{n})$ then   \\n11 if s. STARTWITH (\"Select\" ) then   \\n12 $s\\\\gets s.\\\\mathrm{APPEND}(c_{n})$   \\n13 // Remove   \\n14 else if $c_{n}$ is ∅ and ISCOLUMN $(c_{o})$ then   \\n15 $s\\\\gets s.\\\\mathrm{REMOVE}(c_{o})$   \\n16 end   \\n17 return $s$\\n\\n# 3.2.1 Direct Transformation\\nWe define three types of atomic edits that can be directly converted into SQL edits by STEPS : (1) replacing a column name, a table name, or a literal value (i.e., string, number), (2) adding a new column name in the explanation of a SELECT clause, and (3) removing a column name.  \\n\\nAlgorithm 1 describes our direct transformation algorithm. After chunking the text (Lines 1 -2 ), STEPS aligns and compares the chunks in the original explanation with those in the user-corrected explanation, using the Needleman and Wunsch (1970 ) algorithm (Line 3 ). This allows STEPS to detect any replacements (Line 4 ), additions (Line 9 ), or removals (Line 13 ) of database entities in the explanation. Based on this information, STEPS automatically edits the corresponding SQL clause without calling a neural model (Lines 8 ,12 ,15 ). More details of this algorithm can be found in Appendix E.',\n",
       "   'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}},\n",
       " {'id': 454847842914282426,\n",
       "  'distance': 0.6255221366882324,\n",
       "  'entity': {'paper_id': '646edc9cd68f896efaddab9b',\n",
       "   'paper_title': 'Faithful Low-Resource Data-to-Text Generation Through Cycle Training.',\n",
       "   'chunk_id': 0,\n",
       "   'chunk_text': '# Faithful Low-Resource Data-to-Text Generation through Cycle Training\\nZhuoer Wang †1 Marcus Collins ⋆2 Nikhita Vedula ⋆2   \\nSimone Filice 2 Shervin Malmasi 2 Oleg Rokhlenko 2  \\n\\n1 Texas A&M University 2 Amazon  {collmr,veduln,filicesf,malmasi,olegro}@amazon.com\\n\\n# Abstract\\nMethods to generate text from structured data have advanced significantly in recent years, primarily due to fine-tuning of pre-trained language models on large datasets. However, such models can fail to produce output faithful to the input data, particularly on out-of-domain data. Sufficient annotated data is often not available for specific domains, leading us to seek an unsupervised approach to improve the faithfulness of output text. Since the problem is fundamentally one of consistency between the representations of the structured data and text, we evaluate the effectiveness of cycle training in this work. Cycle training uses two models which are inverses of each other: one that generates text from structured data, and one which generates the structured data from natural language text. We show that cycle training, when initialized with a small amount of supervised data (100 samples in our case), achieves nearly the same performance as fully supervised approaches for the data-to-text generation task on the WebNLG, E2E, WTQ, and WSQL datasets. We perform extensive empirical analysis with automated evaluation metrics and a newly designed human evaluation schema to reveal different cycle training strategies’ effectiveness of reducing various types of generation errors. Our code is publicly available at https:// github.com/Edillower/CycleNLG .',\n",
       "   'original_filename': 'Conf_Paper_Meta_Data_ACL_2023_with_whole_text.db'}},\n",
       " {'id': 454845652744816630,\n",
       "  'distance': 0.6249851584434509,\n",
       "  'entity': {'paper_id': '646d8642d68f896efa0a3040',\n",
       "   'paper_title': 'Exploring Chain-of-Thought Style Prompting for Text-to-SQL',\n",
       "   'chunk_id': 1,\n",
"   'chunk_text': '# 1 Introduction\\nText-to-SQL parsing, the task of translating a natural language question into a SQL query, has found wide applications in building natural language interfaces to databases and thus piqued significant research interest in recent years ( Wang et al. ,2020 ;Deng et al. ,2021 ;Yu et al. ,2021 ;Rajkumar et al. ,2022 ;Hongjin et al. ,2023 ;Ni et al. ,2023 ). To develop a text-to-SQL parser, a prevalent approach is to collect labeled data and train a model via supervised learning ( Shaw et al. ,2021 ;Scholak et al. ,2021 ). While effective, this approach necessitates a considerable amount of training data, which is costly to obtain because annotating SQL queries requires programming expertise. Consequently, the lack of data hinders real-life applications of state-of-the-art parsers, especially on novel databases and unseen domains ( Suhr et al. ,2020 ).  \\n\\nAs an alternative to supervised learning, in-context learning ( Brown et al. ,2020 ), an emergent capability of large language models (LLMs), alleviates the need for large-scale data. With only a few examples, in-context learning enables LLMs to demonstrate performance comparable to or even better than fully supervised models on many NLP tasks, such as question answering, machine translation, and natural language inference ( Chowdhery et al. ,2022 ;Kojima et al. ,2022 ;Wei et al. ,2022b ,a ;Brohan et al. ,2023 ). When applied to text-to-SQL parsing, in-context learning also shows encouraging results, but it still lags behind supervised approaches ( Rajkumar et al. ,2022 ;Chang et al. ,2023 ;Liu et al. ,2023a ).  \\n\\nWe hypothesize that the under-performance is because text-to-SQL parsing requires complex, multi-step reasoning. Even for a seemingly simple question, such as “What is the ID of Kyle,” a model has to ground it to database schemas, infer the relational algebra among schema items, and construct syntactically correct SQL clauses. 
Recently, chain-of-thought (CoT) style prompting ( Wei et al. ,2022b ;Zhou et al. ,2023 ) has been proposed and has shown promising multi-step reasoning capabilities. To enhance LLMs’ reasoning ability, we systematically explore CoT style prompting for text-to-SQL parsing. Specifically, we seek to answer two research questions: (1) Which prompting style is better, generating all reasoning steps in a single pass, or iterative prompting and problem solving? (2) Does including more detailed information in the reasoning steps lead to better results for text-to-SQL parsing?  \\n\\nTo address the questions, we adopt two widely used prompting methods for text-to-SQL parsing. As the first method, we apply chain-of-thought prompting (Wei et al. ,2022b ) by drawing an analogy between its problem-solving process and the execution procedure of a SQL query. Referring to the logical execution order of SQL clauses (Narechania et al. ,2021 ), we compose the intermediate execution steps in natural language and prompt LLMs to derive them before generating the SQL query. As the second method, we follow Zhou et al. (2023 ) to apply least-to-most prompting in two stages: (1) reduction: generate a series of sub-questions from the original question and (2) solving: iteratively translate each sub-question into its corresponding SQL query, with the original question as the last sub-question. However, in our case study 1 , we find that directly applying chain-of-thought and least-to-most prompting leads to error propagation issues. Their rationales contain very demonstration-specific information and can more easily mislead the reasoning process. Furthermore, the least-to-most prompting technique leads to additional computational and time cost due to the multiple stages of reduction and solving.  \\n\\n  \\nFigure 1: Different prompting methods with multi-step reasoning for text-to-SQL parsing: (a) Chain-of-Thought, (b) Least-to-Most, and our proposed (c) QDecomp , and (d) QDecomp $^+$ InterCOL .  
\\n\\nTherefore, we propose a new method called question-decomposition prompting (QDecomp ). Similar to chain-of-thought prompting, QDecomp generates a sequence of reasoning steps and then the SQL query in one pass. However, we modify the steps to instruct LLMs to decompose the original complex question, akin to the problem reduction stage in least-to-most prompting. Also, to help LLMs ground database schemas, we design a variant of question decomposition prompting (QDecomp $^+$ InterCOL ) by including the table and column names involved in each sub-question. We conduct comprehensive evaluations on two text-to-SQL datasets, Spider ( Yu et al. ,2018 ) and Spider Realistic ( Deng et al. ,2021 ). Our proposed prompting methods substantially outperform existing prompting methods by 2.4 and 1.5 point absolute gains on the development set of Spider and Spider Realistic, respectively. The results suggest that iterative prompting, which is costly due to the additional computational resources required as in least-to-most prompting, may not be necessary (RQ1). In addition, our analysis shows the proposed question decomposition prompting methods, which do not instruct LLMs to generate detailed reasoning steps, reduce the chance of error propagation when generating the reasoning steps (RQ2). Finally, we evaluate the robustness of our proposed prompting methods by varying the number, selection, and format of in-context examples and show that they can achieve consistently strong performance across different settings.',\n",
       "   'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}},\n",
       " {'id': 454845681740829484,\n",
       "  'distance': 0.6236740350723267,\n",
       "  'entity': {'paper_id': '6535d747939a5f408295c649',\n",
       "   'paper_title': 'Benchmarking and Improving Text-to-SQL Generation under Ambiguity',\n",
       "   'chunk_id': 0,\n",
"   'chunk_text': '# Benchmarking and Improving Text-to-SQL Generation under Ambiguity\\nAdithya Bhaskar ∗♢ Tushar Tomar ∗♠ Ashutosh Sathe ♠ Sunita Sarawagi ♠\\n♠IIT Bombay ♢Princeton University\\n\\n# Abstract\\nResearch in Text-to-SQL conversion has been largely benchmarked against datasets where each text query corresponds to one correct SQL. However, natural language queries over real-life databases frequently involve significant ambiguity about the intended SQL due to overlapping schema names and multiple confusing relationship paths. To bridge this gap, we develop a novel benchmark called AmbiQT with over 3000 examples where each text is interpretable as two plausible SQLs due to lexical and/or structural ambiguity.  \\n\\nWhen faced with ambiguity, an ideal top-$k$ decoder should generate all valid interpretations for possible disambiguation by the user ( Elgohary et al. ,2021 ;Zhong et al. ,2022 ). We evaluate several Text-to-SQL systems and decoding algorithms, including those employing state-of-the-art LLMs, and find them to be far from this ideal. The primary reason is that the prevalent beam search algorithm and its variants treat SQL queries as a string and produce unhelpful token-level diversity in the top-$k$ .  \\n\\nWe propose LogicalBeam, a new decoding algorithm that navigates the SQL logic space using a blend of plan-based template generation and constrained infilling. Counterfactually generated plans diversify templates while in-filling with a beam-search, that branches solely on schema names, provides value diversity. LogicalBeam is up to $2.5\\\\times$ more effective than state-of-the-art models at generating all candidate SQLs in the top-$k$ ranked outputs. 
It also enhances the top-5 Exact and Execution Match Accuracies on SPIDER and Kaggle DBQA 1 .\\n\\n# 1 Introduction\\nResearch on Text-to-SQL generation has focused on scenarios where each natural language question is associated with one correct SQL ( Zelle and Mooney ,1996 ;Tang and Mooney ,2000 ;Scholak et al. ,2021a ;Wang et al. ,2020 ;Rubin and Berant ,2021 ;Xie et al. ,2022 ;Arcadinho et al. ,2022 ;Zeng et al. ,2022 ;Scholak et al. ,2021b ;Pourreza and Rafiei ,2023 ). Popular benchmarks driving such research, including WikiSQL ( Zhong et al. ,2018 ), SPIDER ( Yu et al. ,2018 ), its robust perturbations ( Chang et al. ,2023 ), and even “in-the-wild” benchmarks such as KaggleDBQA ( Lee et al. ,2021 ) and SEDE ( Hazoom et al. ,2021 ) all associate one correct SQL with text. Meanwhile, ambiguity is prevalent in real-life databases — particularly the ones obtained by integrating several data sources for data analysis, where a natural language interface is most in demand. The sources of ambiguity are several — inherent ambiguity of natural language, the user’s ignorance of table/column names, overlapping strings in column names, underspecified clauses, and confusion about whether aggregates are pre-computed, or if a join is required. Hazoom et al. (2021 ) observe that up to $87\\\\%$ of queries on the stack exchange database are underspecified, and Wang et al. (2022 ) mention that $11\\\\%$ of queries exhibited ambiguity in column names. Although prior work has brought up ambiguity, there is no publicly available benchmark with ambiguous queries, nor a comprehensive evaluation of systems under ambiguity.  \\n\\nOur first contribution is to bridge this gulf by developing a benchmark, AmbiQT, that tests performance under ambiguity in the context of current models. AmbiQT includes over 3000 examples, each associating a natural question on a database with two valid SQLs. 
Inspired by our experience with several real-world datasets, we target four types of ambiguity spanning both lexical (ambiguous column and table names) and structural (whether a join is necessary, and an aggregate is pre-computed) ambiguity. The benchmark is generated via a combination of ChatGPT ( OpenAI ,2022 ) based synonym generation and perturbation, and standard rule-based perturbation.  \\n\\nWhen faced with ambiguity, an ideal Text-toSQL system should incorporate all valid alternatives in their top$k$ SQL outputs, for user resolution. We show that present approaches, ranging from T5-3B ( Raffel et al. ,2019 ) to SOTA models, fail to generate all ambiguous outputs with any decoding strategy, including beam search and diversity-promoting sampling methods such as Nucleus ( Holtzman et al. ,2020 ) and Typical sampling ( Meister et al. ,2023 ). Most outputs are small lexical tweaks of the top choice, bringing about little meaningful diversity in SQL structures or schema alternatives. Even SOTA LLMs like ChatGPT ( OpenAI ,2022 ) suffer from this issue.  \\n\\nTo remedy the lack of diversity, we propose a new decoding algorithm, LogicalBeam, that allocates branching to explore underlying logical variants of the SQL rather than the string form. We catalog the errors of T5-3B ( Raffel et al. ,2019 ) on the SPIDER dev split and use our insights to encourage targeted types of diversity — the number of JOIN s and selections, and table/column names.  \\n\\nOur main contributions are:   \\n•We develop AmbiQT, the first benchmark that tests performance under four types of ambiguity over $\\\\mathbf{3000+}$ examples.   \\n•We show that SOTA methods, including a finetuned T5-3B, RESDSQL ( Li et al. ,2023 ), OpenAI Codex, and ChatGPT, provide a poor representation of ambiguity despite their high accuracy on conventional benchmarks.   
\\n•We present LogicalBeam, a two-step algorithm that generates plan-based templates with counterfactually controlled plan diversity and fills them via a beam search that branches only on schema names.   \\n•We show that LogicalBeam consistently increases the fraction of time when all gold SQLs get generated in the Top-5 choices by $1.5-2.5\\\\times$ over the baselines across the board on AmbiQT.',\n",
       "   'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}},\n",
       " {'id': 454848078959234800,\n",
       "  'distance': 0.621777355670929,\n",
       "  'entity': {'paper_id': '633ba44790e50fcafdfe4af3',\n",
       "   'paper_title': 'Calibrating Sequence likelihood Improves Conditional Language Generation',\n",
       "   'chunk_id': 1,\n",
       "   'chunk_text': '# 1 INTRODUCTION\\nConditional language generation aims to generate natural language text based on input context, and includes many useful and hard tasks such as abstractive summarization (Mani, 2001; Nenkova and McKeown, 2011), generative question answering (Bajaj et al., 2016), question generation (Zhou et al., 2017) and data-to-text (Wiseman et al., 2017; Gardent et al., 2017) tasks. Pretraining large Transformer encoder-decoder models and fine-tuning them on downstream tasks is the common paradigm to address these tasks (Raffel et al., 2020; Lewis et al., 2019; Tay et al., 2022; Zhang et al., 2019a).  \\n\\nConditional language generation tasks are modeled by learning the probability of a target sequence $\\\\mathbf{y}$ given a context sequence $\\\\mathbf{x}$ . Since directly modeling sequence probability $P(\\\\mathbf{y}|\\\\mathbf{x})$ over all possible generated text sequences is intractable, the canonical solution is to auto-regressively factor the probability and share the parameters at all token prediction steps as $P_{\\\\theta}(\\\\mathbf{y}|\\\\mathbf{x})=\\\\prod_{t=0}^{l}P_{\\\\theta}(y^{t}|y^{0}...y^{t-1},\\\\mathbf{x})$ , where $l$ is the sequence length. These models are often trained with maximum likelihood estimation (MLE) over observed target sequences. The learning objective thus becomes $L=\\\\sum_{i}^{N}-\\\\log(P_{\\\\theta}(\\\\mathbf{y}_{i}|\\\\mathbf{x}_{i}))=\\\\sum_{i}^{N}\\\\sum_{t=0}^{l}-\\\\log(P_{\\\\theta}(y_{i}^{t}|y_{i}^{0}...y_{i}^{t-1},\\\\mathbf{x}_{i}))$ , where $N$ is the number of training instances. It is also referred to as next token prediction loss as it is mathematically equivalent.  
\\n\\nIn the ideal setting of MLE training, a large number of target sequences are observed for each context, and the relative frequencies of output sequences can calibrate the assigned model probabilities. However, in practice most language generation training datasets have only a single target sequence given the context. While the subsequent MLE trained models learn to assign relatively high probability to plausible sequences, they lack the direct supervision to compare such sequences, and solely rely on models’ generalization capability. We refer to this phenomenon as models’ sequence likelihood not being calibrated . Prior works (Liu and Liu, 2021; Liu et al., 2022) has shown that the correlation between sequence probability and its quality for MLE trained models can be low. Liu et al. (2022) attributed this similarly as the deterministic (one-point) target distribution problem. Exposure bias (Ranzato et al., 2016) further aggravates the problem, as sequence likelihood estimation is noisier when models’ decoded sequences shift from exposed training data distribution.  \\n\\n  \\n\\nFigure 1: Calibrating sequence likelihood improves language generation across model scales. Scores are averaged ROUGE across 4 datasets ( $R_{m}$ in subsection 3.2)  \\n\\nMany effective heuristics have been proposed during training and decoding to combat the problem of uncalibrated sequence likelihood. Label smoothing (Szegedy et al., 2016) prevents the network from becoming over-confident towards the observed target. This is particularly necessary in language generation, since the gold target represents just one of many possibilities. It has been observed that increasing number of decoding candidates past a certain point leads to worse quality for beam search decoding (Yang et al., 2018; Koehn and Knowles, 2017) and sampling (Adiwardana et al., 2020). 
An optimal number of decoding candidates is often determined empirically by decoding models on the validation set and measuring their performance. Using length normalization is also essential for beam search decoding (Wu et al., 2016) and sampling (Adiwardana et al., 2020) as models tend to underestimate sequence likelihood of longer sentences. Repetition is another common failure mode when models overestimate the probability of repeated sequences (Holtzman et al., 2019). Trigram blocking (Paulus et al., 2018) and nucleus sampling (Holtzman et al., 2020) have been used to interrupt repeating sequences. These techniques are pervasive and often the default in modern Transformer libraries (Wolf et al., 2020; Lewis et al., 2019; Raffel et al., 2020; Zhang et al., 2019a).  \\n\\nSince the lack of observed target sequences in MLE training is the root problem, solutions involving learning with multiple sequence candidates have been proposed to directly address it. They can be loosely put in three categories: (1) reinforcement learning with sequence-level rewards (Paulus et al., 2018; Ziegler et al., 2019; Stiennon et al., 2020); (2) two-stage systems that generate and rerank candidates (Liu and Liu, 2021; Ravaut et al., 2022b; Liu et al., 2022); and (3) multi-task learning with sequence-level losses (Edunov et al., 2018; Liu et al., 2022). Refer to Related Works (section 4) for a more comprehensive discussion.  \\n\\nIn this paper, we propose to first decode candidates from a fine-tuned model on its own training dataset, and then continue training the model with a new objective. The new objective aims to align candidates’ sequence likelihoods according to their similarities to the target sequence in the model’s latent space. We refer to this process as sequence likelihood calibration (SLiC). Our approach is related to multi-task learning with sequence-level losses in Liu et al. (2022). 
However, we propose a simple yet effective recipe that eliminates decoding heuristics and doesn’t risk directly optimizing the same metrics that are used to report text generation quality. Unlike reinforcement learning, it is a one-time offline process that avoids costly online decoding processes. Also, when compared to two-stage reranking systems, it doesn’t require a separate reranking model that incurs additional complexity and compute. As depicted in Figure 1, our calibration stage naturally extends the current paradigm of pretraining and fine-tuning, and we show that calibrated models have strong improvements over fine-tuned-only models across model sizes.  \\n\\nOur main contributions include:  \\n\\n• Proposed a sequence likelihood calibration (SLiC) stage that consistently improves model quality, exceeding or matching state-of-the-art results on abstractive summarization, generative question answering, question generation and data-to-text generation tasks.  \\n\\n• Proposed a novel calibration similarity metric between model decodes and targets measured in the model’s latent space rather than resorting to external metrics or human feedback. • Demonstrated that SLiC eliminates the need for popular decoding heuristics, such as beam size optimization, length normalization and repetition prevention for the calibrated models. • Demonstrated that SLiC has persistent significant benefits on model performance even as the number of model parameters scales up. Under the same inference budget, smaller calibrated models might outperform larger counterparts by decoding more candidates.',\n",
       "   'original_filename': 'Conf_Paper_Meta_Data_ICLR_2023_with_whole_text.db'}},\n",
       " {'id': 454896058206782956,\n",
       "  'distance': 0.621261477470398,\n",
       "  'entity': {'paper_id': '62393e7e5aee126c0f125b6b',\n",
       "   'paper_title': 'Probing Factually Grounded Content Transfer with Factual Ablation',\n",
       "   'chunk_id': 1,\n",
       "   'chunk_text': '# 2 Related Work and Background\\n\\n# 2.1 Textually Grounded Generation\\nTextual grounding is a common element of natural language generation tasks, wherein a textual input is used to provide facts and information for decoding. One of the most popular tasks following this paradigm is abstractive summarization ( Narayan et al. ,2018 ;Rush et al. ,2015 ), in which generation $y$ should shorten and capture the salient information in source $g$ . Other tasks extend beyond summarization, for example grounded dialogue (Dziri et al. ,2021 ) and content transfer ( Prabhumoye et al. ,2019 ) (studied here). These tasks add the additional constraint that the generation $y$ must adhere to some existing context $c$ , either previous dialogue turns or a document being extended (respectively).\\n\\n# 2.2 Factuality and Factual Consistency\\nRecent work ( Maynez et al. ,2020 ) observes that strong neural models, although fluent and creative, often hallucinate information. Indeed, for all summarization models tested by Maynez et al. (2020 ), over $70\\\\%$ of generations included information not directly entailed by the grounding $g$ . However, they observe that some of this information is still factually correct. This naturally yields 2 notions of correctness for textually grounded generation: factuality and factual consistency (or faithfulness ). Factuality concerns the universal correctness of a generation–is the model output factual regardless of grounding $g$ ? Factual consistency more specifically probes whether the generation adheres to grounding $g$ . Our work probes the much more tractable problem of factual consistency.  \\n\\nA significant portion of past work on factuality and factual consistency in generation has focused on abstractive summarization ( Pagnoni et al. ,2021 ;Goyal and Durrett ,2021 ;Cao and Wang ,2021 ;Aralikatte et al. ,2021 ). 
Yet as mentioned above, textually grounded generation extends beyond summarization, and some works explore notions of factuality in other domains such as conversation (Shuster et al. ,2021 ) or table-to-text generation (Liu et al. ,2021 ). Similarly, we explore these notions outside of direct summarization, instead focusing on grounded content transfer ( Prabhumoye et al. ,2019 ).  \\n\\nMuch work in this area concerns improving factuality and factual consistency ( Shuster et al. ,2021 ;Zhu et al. ,2021 ;Nan et al. ,2021 ;Mao et al. ,2020 ;Aralikatte et al. ,2021 ). While this is one aspect of our work, we also aim to improve automatic evaluation, for which a single standard metric has not emerged. Some works evaluate factuality and consistency with extraction ( Goodrich et al. ,2019 ;Zhang et al. ,2020 ) or question answering (Wang et al. ,2020 ;Durmus et al. ,2020 ;Nan et al. ,2021 ). Others use notions of entailment ( Falke et al. ,2019 ), or simply train end-to-end models to judge these aspects directly ( Kryscinski et al. ,2020 ). We instead focus on the effect of excluding relevant information from the grounding–for a factual model, removing this information should lower the probability of the ground-truth generation.  \\n\\nSome works follow a similar intuition to ours. Xie et al. (2021 ) also understand factuality by estimating the effect of the source document on generative model output, although they explicitly mask relevant information while we offer a plausible alternative grounding. Similarly, Xu and Durrett (2021 ) ablate information from a source document to understand aspects of conditional generation, although factuality is not the focus.  \\n\\nFinally, some work in this area studies the need to evaluate metrics of factuality and consistency (Gabriel et al. ,2020 ;Pagnoni et al. ,2021 ), and to generally characterize and annotate the mistakes of models ( Maynez et al. ,2020 ;Pagnoni et al. 
,2021 ;Goyal and Durrett ,2021 )\\n\\n# 2.3 Loss Truncation\\nLoss Truncation ( Kang and Hashimoto ,2020 ) improves conditional models by only training on the top-c examples, ranked by dynamically updated model loss. This is broadly applicable to conditional models with a noisy learning signal, and we include two baselines using this approach.\\n\\n# 3 Methodology\\nHere, we bring factual consistency to a new domain, content transfer, which is the task of extending context $c$ with content from a grounding document $g$ . We discuss the task (§ 3.1 ), and our major contributions: novel methods for judging (§ 3.2 ) and improving (§ 3.3 ) factual consistency in this setting.\\n\\n# 3.1 Task: Content Transfer\\nRecent work studying factual consistency has largely focused on summarization: models are given a source document $g$ (grounding) as input, and output a shorter summary text $y$ capturing the most salient information from $g$ . Summarization is a natural domain to study factual consistency–the source document typically contains all information needed for the summary–but the need for factual consistency is not exclusive to summarization, and more domains should be explored.  \\n\\nHere, we expand this study to the content transfer task. As in summarization, models are given grounding $g$ , and must output text $y$ using information from $g$ . However, $y$ must also fit a context c, which significantly narrows the range of reasonable outputs from the open-ended summarization task, to those that fit the context. Prabhumoye et al. (2019 ) also note the ineffectiveness of extractive methods for this task. This obviates issues of model understanding that underlie factual consistency errors: while summarization models can often copy text directly, ensuring factual consistency regardless of understanding, content transfer models must reformulate information to fit the context.  \\n\\nPrabhumoye et al. 
(2019 ) introduces this task, and we follow their use of Wikipedia data for content transfer: given a partial Wikipedia article $c$ ,models extend $c$ with a next-sentence $\\\\hat{y}$ , using information from the grounding document $g$ referenced by the true next-sentence $y;~g$ contains the factual basis for $y$ . The dataset contains 600K training examples, 6K validation examples, and 50K test examples. Measuring factual ablation on this original dataset is not an option as there is only one piece of grounding per-example, and so we describe two paths to generating evaluation data for this purpose below.  \\n\\nContent transfer is formally defined as the task of generating a next-sentence $\\\\hat{y}$ for context $c$ which is (i) coherent, and fits $c$ (ii) factually and (iii) stylistically, while (iv) only utilizing information from grounding document $g$ . Note here, (iv) requires factual consistency, which is a stronger notion than overall factuality (§ 2.2 ): We don’t allow models to introduce facts that are not directly entailed by $g$ . Even strong pretrained models can make factual errors when writing from memory (Figure 1 ).  \\n\\nCentral to our study is the degree to which each above condition must be met to have an effective model. Conditions i-iii are not absolute constraints. A reasonable generation may be a bit awkward or not perfectly fit $c$ . On the other hand, an effective model must follow condition iv completely. While satisfaction of all of i-iv may be noisy in both the training dataset and tuned models, our approach will focus on addressing this noise for condition iv.',\n",
       "   'original_filename': 'Conf_Paper_Meta_Data_ACL_2022_Annual_Meeting_of_the_Association_for_Computational_Linguistics_with_whole_text.db'}},\n",
       " {'id': 454845641449030464,\n",
       "  'distance': 0.6193163990974426,\n",
       "  'entity': {'paper_id': '6461b9c9d68f896efad43133',\n",
       "   'paper_title': 'Interactive Text-to-SQL Generation Via Editable Step-by-Step Explanations',\n",
       "   'chunk_id': 6,\n",
       "   'chunk_text': '# 7 Discussion\\nBoth the quantitative experiments and the user study demonstrate STEPS can significantly improve the accuracy of SQL generation. This is largely attributed to the interaction design, which allows users to precisely pinpoint which part of the SQL is wrong and only regenerates the incorrect clauses rather than the entire SQL query. In contrast, existing approaches do not support expressive ease or error isolation. Users either cannot regenerate new content (e.g., DIY), or can only regenerate the entire query rather than just the erroneous part (e.g., MISP). Ning et al. (2023 ) showed that this lack of error isolation often introduces new errors, which frustrates users and makes errors hard to fix.  \\n\\nError Analysis. While simple errors are prevalent in SQL generation, our ablation study (Table 4 ) shows that only fixing simple errors is insufficient, which motivates the design of our hybrid method. Our hybrid method can handle a broad range of errors because users can flexibly correct entities or clauses in a query. This ability helps reduce the difficulty of tasks by dividing complex errors into simpler ones, allowing users to solve them separately.  \\n\\nIn our automated user simulation, STEPS failed in a few cases when the text-to-clause model predicted the wrong clause type. For example, the paraphrased ground truth explanation of one step was: “ Ensure that all categories where the total cost of therapy exceeds 1000 are included. ” The text-to-clause model predicted a WHERE clause instead of a HAVING clause.  \\n\\nIn the user study, one common challenge arose when multiple tables in the database had the same column name. If users did not look carefully at the database schema, they may have not explicitly indicated the table to be used. That creates an ambiguity for the model.  \\n\\nOther Datasets and Domains. 
Our system should work for any SQL dataset, as our approach is domain-agnostic and covers general SQL structures. For other forms of code, such as WebAPI (Su et al. ,2017 ) and SPARQL ( Ngonga Ngomo et al. ,2013 ;Mo et al. ,2022 ), the general idea is applicable, but new models would be needed for (a) code generation, (b) explanation generation, and (c) code correction.\\n\\n# 8 Conclusion\\nThis work presents STEPS , a new interactive approach for text-to-SQL generation. STEPS decomposes a text-to-SQL task into smaller text-to-clause tasks and enables users to validate and refine a generated query via editable explanations. Experiments on four benchmarks and a user study show STEPS can significantly boost the accuracy of end-to-end models by incorporating user feedback. STEPS significantly outperforms three state-of-the-art approaches for interactive SQL generation across all metrics considered.\\n\\n# 9 Limitations\\nOur automated user simulation is an optimistic experiment that does not account for user errors, such as not being able to identify mistakes in the explanation. The simulation was designed to test a scenario in which a user can perfectly identify which step of the explanation is wrong and accurately describe a corrected version in natural language. Creating such a perfect user required the use of the ground truth, both for the identification step and to generate the natural language correction. This simulation is not representative of real-world use. That limitation was the motivation for our study with real users, in which we had actual people use different tools without information about correct answers. As shown in Table 5 , the accuracy of the user study is lower than the simulation, but STEPS is still very effective and outperforms other tools. We choose to include the simulation study because it shows the potential for STEPS to make corrections if there is no human error.  
\\n\\nIn this paper, we only evaluate STEPS on single-turn SQL generation. In future work, our approach can be extended to multi-turn SQL generation by incorporating contextual information when editing the natural language explanation.  \\n\\nWhile our approach is designed to be general for SQL generation and potentially other code generation tasks, the current version only supports SQL keywords that appear in the Spider dataset. Like other text-to-SQL datasets, Spider only covers query operations (e.g., SELECT ) and does not cover update operations (e.g., INSERT ) for evaluation convenience. But it would be straightforward to cover unsupported operations by adding new translation rules.  \\n\\nprocedure, potential risks, data usage, and confidentiality. We obtained consent from each user before proceeding with the study. All collected data were anonymized and de-identified to protect the privacy of users.\\n\\n# Acknowledgments\\nThis material is based in part on work supported by an Amazon Research Award, the Australian Research Council through a Discovery Early Career Researcher Award and by the Defense Advanced Research Projects Agency (grant #HR00112290056).',\n",
       "   'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}},\n",
       " {'id': 454845641342337844,\n",
       "  'distance': 0.6175075769424438,\n",
       "  'entity': {'paper_id': '6461b9c9d68f896efad43133',\n",
       "   'paper_title': 'Interactive Text-to-SQL Generation Via Editable Step-by-Step Explanations',\n",
       "   'chunk_id': 0,\n",
       "   'chunk_text': '# Interactive Text-to-SQL Generation via Editable Step-by-Step Explanations\\nYuan Tian 1 , Zheng Zhang 2 , Zheng Ning 2 , Toby Jia-Jun Li 2 , Jonathan K. Kummerfeld 3 , and Tianyi Zhang 1 Purdue University 1 , University of Notre Dame 2 , The University of Sydney 3\\n\\n# Abstract\\nRelational databases play an important role in business, science, and more. However, many users cannot fully unleash the analytical power of relational databases, because they are not familiar with database languages such as SQL. Many techniques have been proposed to automatically generate SQL from natural language, but they suffer from two issues: (1) they still make many mistakes, particularly for complex queries, and (2) they do not provide a flexible way for non-expert users to validate and refine incorrect queries. To address these issues, we introduce a new interaction mechanism that allows users to directly edit a step-by-step explanation of a query to fix errors. Our experiments on multiple datasets, as well as a user study with 24 participants, demonstrate that our approach can achieve better performance than multiple SOTA approaches. Our code and datasets are available at https://github.com/magic-YuanTian/STEPS .\\n\\n# 1 Introduction\\nNatural language interfaces significantly lower the barrier to accessing databases and performing data analytics tasks for users who are not familiar with database query languages. Many approaches have been proposed for generating SQL queries from natural language ( Popescu et al. ,2004 ;Giordani and Moschitti ,2012 ;Rubin and Berant ,2021 ;Scholak et al. ,2021 ;Zhao et al. ,2022 ). Using recent large language models, systems have reached $86.6\\\\%$ execution accuracy ( Gao et al. ,2023 ) on the Spider benchmark ( Yu et al. ,2018 ).  \\n\\nHowever, the rate of improvement has slowed, with a gain of only $10\\\\%$ since mid-2021. 
This is partly due to the inherent ambiguity of natural language and the complex structure of SQL queries (e.g., nested or joined queries). Thus, it is challenging to generate a fully correct query in one step, especially for complex tasks ( Yao et al. ,2019 ).  \\n\\n  \\nFigure 1: Refining a SQL query by directly editing a step-by-step explanation.  \\n\\nThere has been growing interest in developing “human-in-the-loop” approaches that elicit user feedback to guide SQL generation. However, most approaches only support feedback in constrained forms, e.g., answering multiple-choice questions (MISP, PIIA, DialSQL Yao et al. ,2019 ;Li et al. ,2020 ;Gur et al. ,2018 ), changing SQL elements in a drop-down menu (DIY, Narechania et al. ,2021 ), etc. Such constrained feedback is not sufficient to fix many complex errors in real-world SQL tasks. One exception is NL-EDIT ( Elgohary et al. ,2021 ), which allows users to provide feedback as new utterances. However, since the feedback is open-ended, interpreting it can be just as hard as processing the original request.  \\n\\nIn this paper, we seek to strike a balance between constrained feedback and open-ended feedback by proposing a new interaction mechanism: editable step-by-step explanations. Fig. 1 illustrates our idea. This mechanism consists of three core components: (a) a text-to-SQL model, (b) an explanation generation method, and (c) a SQL correction model. Our key insight is that using a step-by-step explanation as the basis to suggest fixes allows users to precisely specify where the error is and how to fix it via direct edits. This not only saves users’ time but also makes it easier for the model to locate the error and apply fixes.  \\n\\nBased on this idea, we implemented an interactive SQL generation and refinement system called STEPS . STEPS adopts a rule-based method to generate step-by-step explanations and uses a hybrid rule/neural method to convert a user-corrected explanation back to a SQL query.  
\\n\\nAn evaluation with a simulated user on Spider ( Yu et al. ,2018 ) shows that STEPS can achieve $97.9\\\\%$ exact set match accuracy, outperforming prior interactive text-to-SQL systems— MISP, DIY, and NL-EDIT—by $33.5\\\\%$ ,$33.2\\\\%$ , and $31.3\\\\%$ respectively. We further evaluate STEPS on other datasets, including Spider-DK ( Gan et al. ,2021b ), Spider-Syn ( Gan et al. ,2021a ), and WikiSQL ( Zhong et al. ,2017 ). STEPS consistently achieves at least $96\\\\%$ exact set match accuracy and execution accuracy across all datasets.  \\n\\nFinally, we conducted a within-subjects user study with 24 real users. We found that within the same amount of time, STEPS helped users complete almost 2X and 4X more tasks correctly than DIY and MISP respectively, with significantly higher self-reported confidence and lower mental load.  \\n\\nThis work makes the following contributions: (1) we propose a new interaction mechanism for the text-to-SQL task; (2) we develop an interactive text-to-SQL system based on the new interaction mechanism and a new training method for SQL correction; (3) we conduct a comprehensive evaluation with both simulated and real users and demonstrate its effectiveness over state-of-the-art interactive systems. Our dataset and code are publicly available.',\n",
       "   'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}},\n",
       " {'id': 454845706688551984,\n",
       "  'distance': 0.613497793674469,\n",
       "  'entity': {'paper_id': '65406320939a5f40826491aa',\n",
       "   'paper_title': 'Evaluating Cross-Domain Text-to-SQL Models and Benchmarks',\n",
       "   'chunk_id': 0,\n",
       "   'chunk_text': '# Evaluating Cross-Domain Text-to-SQL Models and Benchmarks\\nMohammadreza Pourreza University of Alberta   \\n\\nDavood Rafiei University of Alberta\\n\\n# Abstract\\nText-to-SQL benchmarks play a crucial role in evaluating the progress made in the field and the ranking of different models. However, accurately matching a model-generated SQL query to a reference SQL query in a benchmark fails for various reasons, such as underspecified natural language queries, inherent assumptions in both model-generated and reference queries, and the non-deterministic nature of SQL output under certain conditions. In this paper, we conduct an extensive study of several prominent cross-domain text-to-SQL benchmarks and reevaluate some of the top-performing models within these benchmarks, by both manually evaluating the SQL queries and rewriting them in equivalent expressions. Our evaluation reveals that attaining a perfect performance on these benchmarks is unfeasible due to the multiple interpretations that can be derived from the provided samples. Furthermore, we find that the true performance of the models is underestimated and their relative performance changes after a re-evaluation. Most notably, our evaluation reveals a surprising discovery: a recent GPT4-based model surpasses the gold standard reference queries in the Spider benchmark in our human evaluation. This finding highlights the importance of interpreting benchmark evaluations cautiously, while also acknowledging the critical role of additional independent evaluations in driving advancements in the field.\\n\\n# 1 Introduction\\nSignificant progress has been made in translating natural language text to SQL statements over the past few years. The execution accuracy on the hold-out test of Spider ( Yu et al. ,2018b )–a large-scale cross-domain text-to-SQL benchmark– has improved from 53.5 in May, 2020 ( Zhong et al. ,2020b ) to 85.3 in March, 2023 ( Pourreza and Rafiei ,2023 ). The exact set match accuracy, without considering database cell values, on the same benchmark and over the same period has improved from 65.6 ( Wang et al. ,2019 ) to 74.0 ( Li et al. ,2023a ). Measuring such progress is hinged on reliable benchmarks and evaluation metrics.  \\n\\nTwo standard metrics for evaluating the performance in this domain have been exact set match accuracy and execution accuracy . 
The former measures if a model-generated SQL query lexically matches a reference SQL query, whereas the latter measures if a model-generated SQL query produces the same output as a reference query $(\\\\S\\\\,4)$ .  \\n\\n  \\nFigure 1: An example question with two correct SQL queries, each corresponding to a different interpretation. There is an ambiguity in schema mapping, with two different database columns describing the name.\\n\\nConsider the example in Figure 1 , which consists of a model-generated query (shown on the left) and a reference query (shown on the right). Both SQL queries return the id and name of makers that have more than 3 models. However, the model-generated query returns the column FullName, which gives the full name of a maker (e.g., “Ford Motor Company”), whereas the reference query given in the benchmark returns the column Maker, which gives the short common name of a maker (e.g., “Ford”). The model-generated query fails an exact set match since the column names in the select clause are different. The query outputs are also different and the model-generated query fails the execution accuracy as well. The natural language utterance is not specific about the type of name to be returned, and a human evaluator tags both queries correct.  \\n\\nAs the models improve, these types of failures make up most of the errors, and the performance metrics become less relevant, as shown in our evaluation. 
In particular, we re-evaluated all development set queries of Spider on which two top-performing models, one using a fine-tuned model ( Scholak et al. ,2021 ) and another using a large language model ( Pourreza and Rafiei ,2023 ), failed. We found that $25\\%$ of the queries generated by one model and $87\\%$ of the queries generated by the other model were indeed correct but were wrongly evaluated by the benchmark. For the same set of queries, our re-evaluation of the ground truth queries found $33\\%$ of the SQL queries incorrect, which was more than the number of incorrect queries generated by one of the models. This places one of the models above the ground truth queries in our re-evaluation.  \\n\\nWe further re-evaluated two well-known benchmarks, Spider ( Yu et al. ,2018b ) and SpiderDK ( Gan et al. ,2021b ), and a newly released benchmark, BIRD ( Li et al. ,2023b ), and found similar problems in all three benchmarks that affect the evaluation. Our evaluation reveals that $18\\%$ of the queries in the train sets and $20\\%{-23\\%}$ of the queries in the dev sets of these benchmarks are subject to ties, and the returned result depends on which one of the tied rows is selected. This means a model-generated query will be deemed incorrect if it does not return the same row, among tied rows, as the ground truth query. This can severely impact the evaluation, especially when there is a tight race among models. Considering these observations, it is crucial to emphasize the significance of additional independent evaluations when utilizing these benchmarks. To enhance the evaluation process further, a potential solution is to incorporate multiple SQL queries as the ground truth, each representing a different interpretation that may be valid.  \\n\\nOur objective in this paper is to provide a comprehensive evaluation of existing Text-to-SQL benchmarks, underscoring the inherent issues they possess. 
We refrain from introducing a new dataset due to several considerations. First, addressing the identified issues by updating these benchmarks requires considerable human effort. Additionally, benchmarks in the Text-to-SQL domain, like Spider and BIRD, have holdout test sets used for official leaderboards and comparisons of text-to-SQL methodologies. We only have access to the development and training sets of these benchmarks, which limits our capability to alter the test sets. As a result, making changes only to the development and training sets would not completely address the benchmark’s inherent problems, given that final performance is gauged using the problematic test sets.',\n",
       "   'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}},\n",
       " {'id': 454919258070598578,\n",
       "  'distance': 0.6086536049842834,\n",
       "  'entity': {'paper_id': '628ef0495aee126c0f82db30',\n",
       "   'paper_title': 'PLOG: Table-to-Logic Pretraining for Logical Table-to-Text Generation.',\n",
       "   'chunk_id': 1,\n",
       "   'chunk_text': '# 1 Introduction\\nTable-to-text generation is a sub-task of data-to-text generation, aiming to generate natural language descriptions from structured tables. There are two steps to perform table-to-text generation: content planning (to select table contents and determine the plan to describe them) and surface realization (to realize the plan into fluent natural language). Traditional table-to-text systems take a pipeline manner, to complete the two procedures with separate modules ( Kukich ,1983 ;McKeown ,1985 ). Recent works have shown the advantage of using a neural encoder-decoder model to directly generate sentences from the tables, which shows the strong capability to produce fluent and natural generations ( Wiseman et al. ,2017 ;Nie et al. ,2018 ;Puduppully et al. ,2019b ). Reseachers have also attempted to finetune pretrained language models such as BART ( Lewis et al. ,2020 ) and T5 ( Raffel et al. ,2020 ) on downstream table-to-text tasks and achieve remarkable success on a broad range of benchmarks ( Xie et al. ,2022 ;Kale and Rastogi ,2020 ).  \\n\\nPrevious studies have mainly focused on surfacelevel generation, i.e. generating plain restatements of table records with little logical inference ( Wiseman et al. ,2017 ;Liu et al. ,2018 ;Puduppully et al. ,2019a ,b). Recently, logical table-to-text generation ( Chen et al. ,2020a ), i.e., generating textual descriptions that require logical reasoning over table records, has attracted increasing attention. Logical table-to-text generation poses a new challenge on content planning, requiring models to perform logical inference to derive facts from surface-level table records. End-to-end neural models often suffer from low logical fidelity on this task, i.e. the generated sentences are not logically entailed by the tables even showing reasonable fluency ( Chen et al. ,2020a ,2021 ). There are two reasons for the low fidelity. 
(1) Directly learning logical inference knowledge from table-text pairs is too difficult for neural models because of the ambiguity and diversity of natural language references. (2) The amount of such paired data is limited because of the labor-intensive annotation work, which further limits the performance of neural models.  \\n\\nTo achieve high-fidelity of logical-level generation, Chen et al. (2020b ) have attempted to annotate logical forms to guide the text generation and proposed a LOGIC2TEXT dataset. With logical forms as mediators conveying accurate logical-level facts, models can just focus on surface realization from associated logical forms and achieve high fidelity. However, annotating pairs of logical forms requires intensive human efforts. Moreover, generating from a self-contained logical form is actually a different task from table-to-text generation. Prior studies on this dataset ( Liu et al. ,2021a ;Shu et al. ,2021 ;Xie et al. ,2022 ) mostly focus on converting the logical forms into texts.  \\n\\nInspired by this, we propose a Pre-trained LOgical Form Generator (PLOG) to achieve faithful logical table-to-text. Specifically, PLOG is first pre-trained on a large-scale synthetic corpus of table-to-logical-form generation ( table-to-logic ) to learn how to generate accurate logical forms from tables, then fine-tuned on downstream table-to-text tasks to transfer the logical inference knowledge learned from pre-training to text generation. 
Our insights are threefold: (i) unlike natural language sentences, logical forms are formally defined with unambiguous semantics, hence it is much easier for models to acquire the logical inference knowledge via learning from logical form generation; (ii) via pre-training on large amounts of logical form generation data, the model can better understand the table and organize the logical-level content planning, leading to faithful table-to-text generation; (iii) it is viable to collect large-scale logical form corpora via rule-based search over tables without the efforts of human annotators. In this framework, logical forms can be regarded as an intermediate meaning representation to bridge the gap between logical planning and surface realization, while we do not need explicit logical forms when performing the downstream task.  \\n\\nTo achieve smooth knowledge transfer, we formulate the pre-training task in the same sequence-to-sequence generation way as the downstream table-to-text. We adopt a strong pre-trained language model T5 as the backbone model. To evaluate our method, we consider two typical scenarios of current table-to-text generation tasks: uncontrolled and controlled generation. For uncontrolled generation, we adopt the LOGICNLG task which requires generating logical descriptions only based on the table, and imposes an additional challenge on content selection. Inspired by ToTTo ( Parikh et al. ,2020 ) and HiTab ( Cheng et al. ,2021 ), we further consider controlled generation, a recently popular task formulation. In this task setting, additional control features such as highlighted cells in the tables are explicitly specified to guide the topic of generation and narrow down the scope of content selection. Because most examples of ToTTo and HiTab do not involve logical reasoning, we reformulate the LOGIC2TEXT dataset into a new CONTrolled LOGical Natural Language Generation (CONTLOG) task for our evaluation. 
We detect highlighted cells via execution-based search with their annotated logical forms. For each dataset, we collect large amounts of logical forms from the tables in the training data via an execution-based search, where the validity of them can be fully guaranteed. To evaluate the fidelity of generated texts, we mainly adopt two state-of-the-art Table Fact Verification ( Chen et al. ,2019 ) models, TAPEX ( Liu et al. ,2021b ) and TAPAS ( Eisenschlos et al. ,2020 ), to evaluate whether the texts are entailed by the input tables. Experimental results on both LOGICNLG and CONTLOG demonstrate that PLOG outperforms the baseline T5 by a large margin in terms of logical fidelity. In particular, PLOG improves the fidelity accuracy (evaluated by TAPEX) by $9.3\\%$ and $9.2\\%$ on LOGICNLG and CONTLOG , respectively. Human evaluation and case studies further demonstrate the effectiveness of our pretraining framework. In addition, the results of table-to-logic pretraining demonstrate that the pretraining task indeed contributes to more accurate logical inference. We will make our code publicly available at https://github.com/Aolius/logic-pretraining .',\n",
       "   'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2022_Empirical_Methods_in_Natural_Language_Processing_with_whole_text.db'}},\n",
       " {'id': 454919258123551672,\n",
       "  'distance': 0.6042478084564209,\n",
       "  'entity': {'paper_id': '628ef0495aee126c0f82db30',\n",
       "   'paper_title': 'PLOG: Table-to-Logic Pretraining for Logical Table-to-Text Generation.',\n",
       "   'chunk_id': 4,\n",
       "   'chunk_text': '# 5 Table-to-Logic Pretraining\\nAs described in Section 1 , pretraining the table-totext model on table-to-logic generation is effective in generating more faithful natural language. In this section, we introduce our pretraining task and the procedure of collecting pretraining corpora.\\n\\n# 5.1 Pretraining Task\\nIn the pretraining task, the input is a (sub-) table while the target is a logical form that abstracts a reasoning process on the table. We follow the same schema in ( Chen et al. ,2020b ) to  \\n\\n$$\\nt^{*}=\\\\arg\\\\operatorname*{max}\\\\prod_{i=1}^{n}P(t_{i}|t_{<i},S;\\\\theta),\\n$$  \\n\\nWe adopt the same data serialization described in Section 4 for the pretraining task. The only difference between table-to-text and table-to-logic lies in the generation target.',\n",
       "   'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2022_Empirical_Methods_in_Natural_Language_Processing_with_whole_text.db'}},\n",
       " {'id': 454845631964659546,\n",
       "  'distance': 0.6036479473114014,\n",
       "  'entity': {'paper_id': '63a1751790e50fcafd1f48e7',\n",
       "   'paper_title': 'CiteBench: A Benchmark for Scientific Citation Text Generation',\n",
       "   'chunk_id': 1,\n",
       "   'chunk_text': '# 2 Related work\\n\\n# 2.1 Benchmarking\\nNLP benchmarks are unified dataset collections coupled with evaluation metrics and baselines that are used to systematically compare the performance of NLP systems for the targeted tasks in a standardized evaluation setup. Well-constructed benchmarks can boost progress in the corresponding research areas, such as SQuAD ( Rajpurkar et al. ,2016 ) for question answering, GLUE ( Wang et al. ,2018 ) for natural language understanding, KILT ( Petroni et al. ,2021 ) for knowledge-intensive tasks, GEM ( Gehrmann et al. ,2021 ,2022 ) for general-purpose text generation, and DynaBench (Kiela et al. ,2021 ) for dynamic benchmark data collection. C ITE BENCH is the first benchmark for the citation text generation task.\\n\\n# 2.2 Text generation for scientific documents\\nScientific documents are characterized by academic vocabulary and writing style, wide use of nonlinguistic elements like formulae, tables and figures, as well as structural elements like abstracts and citation anchors. Recent years have seen a rise in natural language generation for scientific text, including text simplification ( Luo et al. ,2022 ), summarization ( Qazvinian and Radev ,2008 ;Erera et al. ,2019 ;Cachola et al. ,2020 ), slides generation ( Sun et al. ,2021 ), table-to-text generation (Moosavi et al. ,2021 ), and citation text generation ( Li and Ouyang ,2022 ). Closely related to the task of citation text generation, Luu et al. (2021 )study how scientific papers can relate to each other, and how these relations can be expressed in text. Related to our work, Mao et al. (2022 ) propose a benchmark for scientific extreme summarization. Compared to extreme summarization, which amounts to generating short context-independent summaries of individual manuscripts, citation text generation focuses on context-dependent descriptions that relate the cited papers to the citing paper. 
In line with the recent efforts that address the lack of systematic automated evaluation of natural language generation in general ( Gehrmann et al. ,2021 ), our paper contributes the first unified benchmark for citation text generation in the scientific domain.\\n\\n# 2.3 Citation text generation\\nThe task of citation text generation was introduced in Hoang and Kan (2010 ), who generate a summary of related work specific to the citing paper. Since then, several task definitions and setups have been proposed (Table 1 ). Lu et al. (2020 ) cast the task as generating a multi-paragraph related work section given the abstracts of the citing paper and of the cited papers. AbuRa’ed et al. (2020 ) use the cited paper’s title and abstract to generate a citation sentence. Xing et al. (2020 ) use the abstract of the cited paper and include context before and after the citation sentence as the input, and produce the citation sentence as the output. A recent work by Chen et al. (2021 ) uses multiple cited abstracts as input to generate a related work paragraph. The great variability of the task definitions and setups in citation text generation prevents the study of citation text generation methods across datasets and evaluation setups. Unlike prior work that explores varying task settings, CITEBENCH brings the diverging task definitions and datasets together in a unified setup. This allows us to compare citation text generation models across different datasets in a standardized manner using an extensive set of quantitative metrics, as well as novel automated qualitative metrics.  
\\n\\n<html><body><table><tr><td rowspan=\"2\">Dataset</td><td colspan=\"3\">Input</td><td rowspan=\"2\">Output Citation text (T)</td><td rowspan=\"2\">Datasources</td></tr><tr><td>Cited document (D*) SingleAbs MultiAbs</td><td>Abs</td><td>Citing context (C\") Text</td></tr><tr><td>ABURAED</td><td></td><td>Title √</td><td></td><td>Sent Para</td><td>Multiple</td></tr><tr><td>CHEN</td><td>√</td><td></td><td></td><td></td><td>S2ORCandDelve</td></tr><tr><td>LU</td><td></td><td></td><td></td><td></td><td>arXiv.org and MAG</td></tr><tr><td>XING</td><td></td><td></td><td></td><td></td><td>AAN</td></tr></table></body></html>  \\n\\nTable 1: Overview of datasets in CITEBENCH . Single Abs $=$ Single abstract, i.e., one cited document per instance. Multi Abs $=$ Multiple abstracts, i.e., multiple cited documents per instance. Abs $=$ Abstract, i.e., a dataset contains the abstract of the citing paper. Text $=$ a dataset contains additional context from the citing paper. Sent $=$ generation target is a single sentence. Para $=$ generation target is a paragraph.   \\nTable 2: Datasets statistics. The validation set for XING has been created by us via randomly sampling $10\\\\%$ of the original training data. Across datasets, very few inputs contain more than 4,096 tokens, and few outputs are longer than 1,024 tokens. We exploit this property to speed up the evaluation in Section 3.3 .  
\\n\\n\\n<html><body><table><tr><td>Dataset</td><td>#Train</td><td>#Validation</td><td>#Test</td><td>Inputs>4,096tok.</td><td>Outputs>1,024tok.</td></tr><tr><td>ABURAED</td><td>15,000</td><td>1,384</td><td>219</td><td>0%</td><td>0%</td></tr><tr><td>LU</td><td>30,369</td><td>5,066</td><td>5,093</td><td><0.001%</td><td>0%</td></tr><tr><td>XING</td><td>77,086</td><td>8,566</td><td>400</td><td><0.001%</td><td><0.001%</td></tr><tr><td>CHEN -Delve</td><td>72,927</td><td>3,000</td><td>3,000</td><td><0.001%</td><td>0.004%</td></tr><tr><td>-S2ORC</td><td>126,655</td><td>5,000</td><td>5,000</td><td>0.017%</td><td><0.001%</td></tr><tr><td>Total</td><td>322,037</td><td>23,016</td><td>13,712</td><td>0.007%</td><td><0.001%</td></tr></table></body></html>\\n\\n# 3 Benchmark\\n\\n# 3.1 Task definition and datasets\\nWe formalize the task of citation text generation as follows: Given a set of $n$ (cited) target documents $\\\\{D_{1}^{t}...D_{n}^{t}\\\\}$ , a (citing) source document $D^{s}$ , and a set of $m$ citing document contexts $\\\\{C_{1}^{s}...C_{m}^{s}\\\\}\\\\in D^{s}$ , generate a citation text $T^{\\\\prime}$ that is as close as possible to the original citation text $T\\\\in D^{s}$ . This general definition allows wide variation in how the task is implemented. The cited document $D_{i}^{t}$ can be represented by the abstract $a^{t_{i}}$ , the concatenation of the title and the abstract, or even the full text of the paper. The context set $C^{s}$ covers sentences before and after the citation text in $D^{s}$ , as well as the abstract $a^{s}\\\\in D^{s}$ .  \\n\\nSuch a general, open definition allows us to accommodate diverse approaches to the task within one framework (Table 1 ). To populate the benchmark, we select four datasets, focusing on the task design and domain variety: ABURAED ( AbuRa’ed et al. ,2020 ), CHEN ( Chen et al. ,2021 ), LU ( Lu et al. ,2020 ), and XING ( Xing et al. ,2020 ). 
Dataset transformation details are provided in Appendix A.1 . Table 2 shows the quantitative statistics, and Figure 2 provides data examples from each dataset. The CHEN dataset has two subsets – CHEN Delve and CHEN S2ORC – based on the data source; we use CHEN to denote the union of the two subsets. The datasets are distributed under varying licenses; we have obtained explicit permissions from the authors to use the data for research purposes in cases when licensing was underspecified (see Ethics statement).  \\n\\nWe note that the datasets included in the benchmark are not only structurally diverse, but also cover a wide range of domains, from medicine to computer science. In particular, ABURAED and XING exemplify citation text generation in the computational linguistics domain, CHEN Delve covers the computer science domain; LU and CHEN S2ORC span a wide range of domains represented on arxiv.org and in the S2ORC corpus, respectively, including biology, medicine and physics.',\n",
       "   'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}},\n",
       " {'id': 454919258054345648,\n",
       "  'distance': 0.6017794013023376,\n",
       "  'entity': {'paper_id': '628ef0495aee126c0f82db30',\n",
       "   'paper_title': 'PLOG: Table-to-Logic Pretraining for Logical Table-to-Text Generation.',\n",
       "   'chunk_id': 0,\n",
       "   'chunk_text': '# PL OG: Table-to-Logic Pretraining for Logical Table-to-Text Generation\\nAo Liu 1 , Haoyu Dong 2 , Naoaki Okazaki 1 , Shi Han 2 , Dongmei Zhang 2 1 Tokyo Institute of Technology, 2 Microsoft Research   \\nliu.ao,@nlp.c.titech.ac.jp , {hadong,shihan,dongmeiz}@microsoft.com\\n\\n# Abstract\\nLogical table-to-text generation is a task that involves generating logically faithful sentences from tables, which requires models to derive logical-level facts from table records via logical inference. It raises a new challenge on the logical-level content planning of table-totext models. However, directly learning the logical inference knowledge from table-text pairs is very difficult for neural models because of the ambiguity of natural language and the scarcity of parallel data. Hence even largescale pre-trained language models present low logical fidelity on logical table-to-text. In this work, we propose a PL OG (Pretrained Logical Form Generator) framework to improve the generation fidelity. Specifically, PL OG is first pretrained on a table-to-logic-form generation (table-to-logic ) task, then finetuned on downstream table-to-text tasks. The formal definition of logical forms enables us to collect large amount of accurate logical forms from tables without human annotation. In addition, PL OGcan learn logical inference from table-logic pairs much more definitely than from tabletext pairs. To evaluate our model, we further collect a controlled logical table-to-text dataset CONT LOG based on an existing dataset. On two benchmarks, L OGIC NLG and C ONT LOG ,PL OG outperforms strong baselines by a large margin on the logical fidelity, demonstrating the effectiveness of table-to-logic pretraining.',\n",
       "   'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2022_Empirical_Methods_in_Natural_Language_Processing_with_whole_text.db'}},\n",
       " {'id': 454919253827011150,\n",
       "  'distance': 0.5987098217010498,\n",
       "  'entity': {'paper_id': '634e194190e50fcafd24e749',\n",
       "   'paper_title': 'Investigating the Robustness of Natural Language Generation from Logical Forms Via Counterfactual Samples',\n",
       "   'chunk_id': 5,\n",
       "   'chunk_text': '# 7 Related Work\\n\\n# 7.1 Text Generation from Tables\\nTable-to-text is a popular area in recent years ( Wiseman et al. ,2018 ;Lee ,2018 ;Liang et al. ,2009 ;Chen et al. ,2021 ). As previous methods generate superfacial and uncontrollable logic, Chen et al. (2020e ) introduced Logic2Text as a controllable and fidelity text generation task conditioned on a logical form. Since then, many works on Logic2Text have been proposed. In order to unify the studies of structural knowledge grounding, Xie et al. (2022 ) proposed the UNIFIEDSKG framework and unified 21 structural knowledge grounding tasks into a text-to-text format, including Logic2Text. Zhang et al. (2021a ) proposed a unified framework for logical knowledge-conditioned text generation in few shot setting. To solve the data scarcity problem of Logic2Text, Shu et al. (2021 ) iteratively augmented the original dataset with a generator and proposed an evaluator for highfidelity text generation.  \\n\\n  \\n  \\nFigure 6: Attention values during decoding. The baseline pays more attention to “attendance” as we expected, which verifies our hypothesis.  \\n\\nHowever, they all ignored the spurious correlation in logical forms, which is investigated in our work.\\n\\n# 7.2 Causal Inference For NLP\\nCausal Inference ( Pearl et al. ,2016 ;Kuang et al. ,2020 ) is a powerful statistical modeling tool for explanatory analysis. In NLP, many methods have been proposed based on the causal inference theory ( Zhang et al. ,2021b ;Chen et al. ,2020a ;Zhang et al. ,2021c ;Hu and Li ,2021 ). Yang et al. (2021 )and Wang and Culotta (2021b ) exploit causal inference to reduce the bias from the context for text classification tasks. For named entity recognition, Zeng et al. (2020 ) replaced the entities in sentences with counterfactual tokens to remove spurious correlation between the context and the entity token. 
Wang and Culotta (2021a ) generated counterfactual samples by replacing causal terms with their antonyms in sentiment classification. Wu et al. (2020 ) proposed to use a counterfactual decoder to generate unbiased court’s view.  \\n\\nOur work proposes to improve the robustness of Logic2Text models with causality.\\n\\n# 8 Conclusion\\nWe investigate the robustness of current methods for Logic2Text via a set of manually constructed counterfactual samples. A significant decline on the counterfactual dataset verifies the existence of bias in the training dataset. Then we leverage causal inference to analyze the bias, based on which two approaches are proposed to reduce the spurious correlations. Automatic and manual experimental results on both Logic2Text and the counterfactual data demonstrate that our method is effective to alleviate the spurious correlations.\\n\\n# Limitations\\nAlthough our method has achieved high logical consistency, we find that for some unseen headers, the model cannot understand them and generates some logically correct but not fluent sentences, which is related to the method of generation of counterfactual samples. Due to the limited number of high-quality logical forms, future work may continue to explore more advanced counterfactual data generation methods considering the context.  \\n\\nBesides, our structure-aware logical form encoder works based on the attention mechanism, so it can’t be applied to models without attention. Fortunately, the current attention-based models are widely used not only because of their better performance but also because of their high interpretability.\\n\\n# Acknowledgment\\nWe would like to thank anonymous reviewers for their comments and suggestions. This work is supported in part by the National Natural Science Foundation of China (NO. 62037001), the Key R&D Projects of the Ministry of Science and Technology (NO. 2020YFC0832500), the Zhejiang Province Science and Technology Project (NO. 
2022C01044), the Starry Night Science Fund of Zhejiang University Shanghai Institute for Advanced Study (NO. SN-ZJU-SIAS-0010), and CAAI-Huawei MindSpore Open Fund (NO. CAAIXSJLJJ-2021-015A).\\n\\n\\n\\n# A Details of Attention Mask\\nThe attention value from token $w_{i}$ to token $w_{j}$ is masked if there is no direct edge connecting them on the logical form. To clarify how the value of the Attention Mask is calculated, we use the left logical form in Figure 7 as an example. The attention mask matrix for the tokenized logical form is shown on the right of Figure 7 . For each token in the logical form, the parent node can be seen (such as $\\\\mathbf{M}_{\\\\mathrm{hop,result}}=1$ ). Besides, an operator token can also see the child nodes (such as $\\\\mathbf{M}_{\\\\mathrm{win,eq}}=1$ ). Otherwise, the attention value is masked.\\n\\n# B Replacement Methods\\nWe match the headers from each logical form to the tokens in the label and then replace the headers in a specific way if found. Concretely, we propose the following three strategies for replacement.  \\n\\nRandom Replacement Intuitively, when a layman tries to describe some domain-specific table, he simply replicates the obscure table headers (such as technical terms). So we train the model’s ability to replicate the header from the logical form. We use completely random strings to replace the headers.  \\n\\nHeader Disturb Another straightforward idea is to select another header token from a set of all table headers to replace the header token in the logical form. However, such a method ignores the attribute of the data type carried by the columns, thus it will produce unreasonable counterfactual samples. In order to solve this problem, we group all the headers by their data type, including three categories: strings, numbers, and time. A header in the logical form is only replaced by another header with the same data type.  \\n\\n  \\nFigure 7: Sample of Attention Mask matrix. 
The attention of each token to others with no directly connected edges is masked.  \\n\\nMixing Replacement We take turns performing the above two replacement strategies.',\n",
       "   'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2022_Empirical_Methods_in_Natural_Language_Processing_with_whole_text.db'}},\n",
       " {'id': 454919307595371694,\n",
       "  'distance': 0.5985562801361084,\n",
       "  'entity': {'paper_id': '63608e5090e50fcafdee1152',\n",
       "   'paper_title': 'Diverse Parallel Data Synthesis for Cross-Database Adaptation of   Text-to-SQL Parsers',\n",
       "   'chunk_id': 2,\n",
       "   'chunk_text': '# 2.2 Translating text of related queries\\nOur next goal is to translate the retrieved $x_{r}$ from being a text for SQL $q_{r}$ to a text $\\\\hat{x}$ for SQL $q$ ,where $q\\\\approx q_{r}$ structurally. However, we do not have a readily labeled dataset to learn a model that translates $x_{r}$ to $\\\\hat{x}$ while being consistent with $q$ . We therefore decompose this task into two steps: 1) A simpler task of masking schema-specific tokens in $x_{r}$ to get a template $x_{r}^{\\\\mathrm{masked}}$ and 2) A conditional text generation model that maps $(x_{r}^{\\\\mathrm{masked}},q)$ to the text $\\\\hat{x}$ consistent with $q$ , by filling the masked positions in $x_{r}^{\\\\mathrm{masked}}$ as per $q$ . We re-purpose $\\\\mathcal{D}_{\\\\mathrm{train}}$ to get indirect supervision for training the text generation model. We now present each step in detail.  \\n\\nfrom different schemas, we modify the tree-editdistance algorithm to ignore the schema names and the database values. The tree-edit-distance is further normalized by the size of the larger tree. We $\\\\{q_{r}\\\\}$ only consider the have a distance of less than $\\\\{q_{r},x_{r}\\\\}$ pairs where the SQLs 0 .1 w.r.t. the SQL $q$ . Within datasets like Spider that span hundreds of schemas, it is often possible to find several SQLs structurally similar to a given SQL $q$ . For example, in Spider we found that $76\\\\%$ of the train SQLs contain at least three zero-distance (structurally identical) neighbours in other schemas. In Figure 2 ,we present more detailed statistics.  \\n\\nMasking the retrieved text Converting the re$\\\\{x_{r}^{\\\\mathrm{masked}}\\\\}$ trieved text queries }is a critical component of R $\\\\{x_{r}\\\\}$ to masked templates EFILL ’s pipeline since irrelevant tokens like references to schema elements of the original database can potentially misguide the text generation module. 
Our initial approach was to mask tokens based on a match of text tokens with schema names and manually refined schema-to-text linked annotations as in Lei et al. (2020 ). However, this approach failed to mask all schema-related terms since their occurrences in natural text often differed significantly from schema names in the database. Table A7 shows some anecdotes. Consequently, we designed a simple frequency-based method of masking that is significantly more effective for our goal of using the masked text to just guide the diversity. For each word that appears in the text queries of the train set, we count the number of distinct databases where that word gets mentioned at least once. For example, common words like $\\\\{^{\\\\prime}{\\\\mathsf{s h o w}}\\\\}$ , ‘what’, ‘list’, ‘order’} get mentioned in more than $90\\\\%$ of the schemas, and domain specific words like {‘countries’, ‘government $^\\\\prime\\\\}$ occur only in text queries of a few schemas. We mask out all the words that appear in less than $50\\\\%$ of the schemas. The words to be masked are replaced by a special token MASK , and consecutive occurrences of MASK are collapsed into a single MASK token. Thus we obtain masked templates minimal information about their original schema. $\\\\{x_{r}^{\\\\mathrm{{masked}}}\\\\}$ }retaining Editing and Filling the masked text Given a masked template $x_{r}^{\\\\mathrm{masked}}$ , and an SQL query $q$ ,we wish to edit and fill the masked portions in $x_{r}^{\\\\mathrm{masked}}$ to make it consistent with the $\\\\operatorname{SQL}q$ . We utilize a conditional text generation model BART ( Lewis et al. ,2020 ) for this purpose. We $\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}_{\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}}$ like first convert $q$ into a pseudo-English representation $q^{\\\\mathrm{Eng}}$ similar to Shu et al. (2021 ), to make it easier for $\\\\boldsymbol{\\\\beta}$ to encode $q$ . 
In addition, we wrap the table, column, or value tokens in $q^{\\\\mathrm{Eng}}$ with special tokens to provide explicit signals to the text generation model $\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}_{\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}}$ that ch tokens are likely to appear in the output text ˆ. Next, we concatenate the tokens in $x_{r}^{\\\\mathrm{masked}}$ and $q^{\\\\mathrm{Eng}}$ for jointly encoding them as which is expected to be consistent with the an input to $\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}_{\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}}$ . The output of $\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}_{\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}}$ ’s decoder is text ${\\\\mathrm{SQL~}}q$ $\\\\hat{x}$ ,.  \\n\\nSince we do not have direct supervision to finetune purposing SQL-Text pairs $\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}_{\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}}$ for this task, we presen $\\\\mathcal{D}_{\\\\mathrm{train}}$ for fine-tuning $(q_{i},x_{i})$ from various schemas B.a method of re$\\\\mathcal{D}_{\\\\mathrm{train}}$ contains $s_{i}$ .A Naïve way to train $\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}_{\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}}$ is to provide $[x_{i}^{\\\\mathrm{{masked}}}|q_{i}^{\\\\mathrm{{Eng}}}]$ |,the concatenation of $x_{i}^{\\\\mathrm{masked}}$ and $q_{i}^{\\\\mathrm{Eng}}$ as an input to the encoder and maximize the likelihood of $x_{i}$ in the decoder’s output. This way the decoder of $\\\\boldsymbol{\\\\beta}$ learns to refill the masked tokens in $x_{i}^{\\\\mathrm{masked}}$ by attending to $q_{i}^{\\\\mathrm{Eng}}$ to recover $x_{i}$ in the output. While useful for learning to refill the masked positions, this from its use during inference in two ways: (i) For a Naïve method of training $\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}_{\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}}$ is mismatched given SQL $q$ , R EFILL might fail to retrieve a similar str cture neighbour of $q_{i}$ from $\\\\mathcal{D}_{\\\\mathrm{train}}$ . 
In such cases, SQL-to-Text generation mode to directly translate Bshould be capable of falling back to pure $q$ into $\\\\hat{x}$ . (ii) During inference, $x_{r}^{\\\\mathrm{masked}}$ and $q$ come from different schemas. However, during Naïve training, the masked text $x_{i}^{\\\\mathrm{masked}}$ and the SQL $q_{i}$ are derived from the same example $(q_{i},x_{i})$ . To Robust address these two limitations, we train manner as follows: (a) For a random one$\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}_{\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}}$ in a more third of t allowing using $q_{i}^{\\\\tilde{\\\\mathrm{Eng}}}$ B. (b) For another one-third, we pass only to learn the filling of the masked tokens train steps we train $\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}_{\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}}$ in the Naïve way, $q_{i}^{\\\\mathrm{Eng}}$ as an input and maximize the likelihood of $x_{i}$ .This ensures that model is capable of generating the text from the $q_{i}^{\\\\mathrm{Eng}}$ alone, if the templates $\\\\boldsymbol{x}_{i}^{\\\\mathrm{{n}}}$ asked are unavailable or noisy. (c) For the remaining onethird, we first retrieve an SQL-Text pair $(q_{j},x_{j})$ ,from a different schema such that the ${\\\\mathrm{SQL~}}q_{j}$ is structurally similar to $q_{i}$ (§ 2.1 ), and the word edit distance between the masked templates $x_{i}^{\\\\mathrm{masked}}$ and $x_{j}^{\\\\mathrm{masked}}$ is also small. We can then replace $x_{i}^{\\\\mathrm{{n}}}$ asked with $x_{j}^{\\\\mathrm{masked}}$ and encode $[x_{j}^{\\\\mathrm{masked}}|q_{i}^{\\\\mathrm{Eng}}]$ as an input to $\\\\boldsymbol{\\\\beta}$ and maximize the likelihood of $x_{i}$ in the decoder’s output. This step makes the training more consistent with the inference, as $x_{j}^{\\\\mathrm{masked}}$ and $q_{i}^{\\\\mathrm{Eng}}$ now come from different schemas. In $\\\\S\\\\,5.4$ , we justify training Robustly compared to Naïve training.',\n",
       "   'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2022_Empirical_Methods_in_Natural_Language_Processing_with_whole_text.db'}},\n",
       " {'id': 454847854731210130,\n",
       "  'distance': 0.5963490605354309,\n",
       "  'entity': {'paper_id': '64702deed68f896efa5202bb',\n",
       "   'paper_title': 'Uncovering and Categorizing Social Biases in Text-to-SQL.',\n",
       "   'chunk_id': 4,\n",
       "   'chunk_text': '# 7 Conclusion\\nIn this paper, we propose to uncover and categorize social biases in the Text-to-SQL task. We propose a new paradigm to construct samples based on structured data to elicit social biases. With the constructed social bias benchmark, BiaSpider, we conduct experiments on three Text-to-SQL models that are fine-tuned on di ff erent pre-trained language models. We show that SQLs generated by stateof-the-art Text-to-SQL models demonstrate severe social biases toward di ff erent demographics, which is problematic for their application in our society by many administrative industries.\\n\\n# Limitations\\nIn this work, we are the first to uncover the social bias problem in the Text-to-SQL task. We categorize di ff erent types of social biases related to various demographics. We present a new benchmark and metric for the social bias study in the Text-to-SQL task. However, this work stops at the point of uncovering and analyzing the problem and phenomenon, without making one step further to solve the social bias problem in the Text-to-SQL task. Besides, in spite of the structured scalability of our proposed paradigm for social bias benchmark construction, the e ffi cacy of entending with other Text-to-SQL datasets remains to be verified.',\n",
       "   'original_filename': 'Conf_Paper_Meta_Data_ACL_2023_with_whole_text.db'}}]"
      ]
     },
     "execution_count": 16,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "papers"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0.36119923897096357 0.6550747752189636 Constraint Reasoning Embedded Structured Prediction. 10\n",
      "0.5187755010426045 0.6537715196609497 Interactive Text-to-SQL Generation Via Editable Step-by-Step Explanations 1\n",
      "0.4338798970217926 0.6474056243896484 Benchmarking and Improving Text-to-SQL Generation under Ambiguity 1\n",
      "0.22700478864830817 0.632977306842804 A Lightweight Constrained Generation Alternative for Query-focused Summarization 1\n",
      "0.5286605679871271 0.6278348565101624 Interactive Text-to-SQL Generation Via Editable Step-by-Step Explanations 2\n",
      "0.11417866900415914 0.6255221366882324 Faithful Low-Resource Data-to-Text Generation Through Cycle Training. 0\n",
      "0.9478368094750292 0.6249851584434509 Exploring Chain-of-Thought Style Prompting for Text-to-SQL 1\n",
      "0.8588924509770979 0.6236740350723267 Benchmarking and Improving Text-to-SQL Generation under Ambiguity 0\n",
      "0.3097136234293514 0.621777355670929 Calibrating Sequence likelihood Improves Conditional Language Generation 1\n",
      "0.9941632611740909 0.621261477470398 Probing Factually Grounded Content Transfer with Factual Ablation 1\n",
      "0.8972843990755315 0.6193163990974426 Interactive Text-to-SQL Generation Via Editable Step-by-Step Explanations 6\n",
      "0.4029298963857586 0.6175075769424438 Interactive Text-to-SQL Generation Via Editable Step-by-Step Explanations 0\n",
      "0.8609844832480328 0.613497793674469 Evaluating Cross-Domain Text-to-SQL Models and Benchmarks 0\n",
      "0.4797324843969384 0.6086536049842834 PLOG: Table-to-Logic Pretraining for Logical Table-to-Text Generation. 1\n",
      "0.0 0.6042478084564209 PLOG: Table-to-Logic Pretraining for Logical Table-to-Text Generation. 4\n",
      "1.0 0.6036479473114014 CiteBench: A Benchmark for Scientific Citation Text Generation 1\n",
      "0.19039209424825404 0.6017794013023376 PLOG: Table-to-Logic Pretraining for Logical Table-to-Text Generation. 0\n",
      "0.04687463579745902 0.5987098217010498 Investigating the Robustness of Natural Language Generation from Logical Forms Via Counterfactual Samples 5\n",
      "0.5194692789937194 0.5985562801361084 Diverse Parallel Data Synthesis for Cross-Database Adaptation of   Text-to-SQL Parsers 2\n",
      "0.07584859838515177 0.5963490605354309 Uncovering and Categorizing Social Biases in Text-to-SQL. 4\n"
     ]
    }
   ],
   "source": [
    "papers_content = [x[\"entity\"][\"chunk_text\"] for x in papers]\n",
    "bm25_scores = get_bm25_scores(statement[\"statement_hyde\"], papers_content)\n",
    "for x,y in zip(bm25_scores,papers):\n",
    "    print(x,y[\"distance\"],y[\"entity\"][\"paper_title\"],y[\"entity\"][\"chunk_id\"])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Embedding similarity computation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "import numpy as np\n",
    "from zhipuai import ZhipuAI\n",
    "\n",
    "# Initialize the client; read the API key from an environment variable\n",
    "# (assumed name: ZHIPUAI_API_KEY) rather than hard-coding a secret\n",
    "client = ZhipuAI(api_key=os.getenv(\"ZHIPUAI_API_KEY\"))\n",
    "\n",
    "def get_embedding(text):\n",
    "    \"\"\"Return the embedding vector for a single text.\"\"\"\n",
    "    response = client.embeddings.create(\n",
    "        model=\"embedding-3\",\n",
    "        input=[text]\n",
    "    )\n",
    "    return np.array(response.data[0].embedding)\n",
    "\n",
    "def cosine_similarity(vec_a, vec_b):\n",
    "    \"\"\"Compute the cosine similarity of two vectors.\"\"\"\n",
    "    dot_product = np.dot(vec_a, vec_b)\n",
    "    norm_a = np.linalg.norm(vec_a)\n",
    "    norm_b = np.linalg.norm(vec_b)\n",
    "    return dot_product / (norm_a * norm_b)\n",
    "\n",
    "def get_embeddings(texts):\n",
    "    \"\"\"Fetch embedding vectors for a batch of texts.\n",
    "    \n",
    "    Args:\n",
    "        texts: a list of text strings\n",
    "    \n",
    "    Returns:\n",
    "        A list of numpy arrays, one embedding per input text\n",
    "        (a list is returned even when a single text is passed).\n",
    "    \"\"\"\n",
    "    # One batched API call for all texts\n",
    "    response = client.embeddings.create(\n",
    "        model=\"embedding-3\",\n",
    "        input=texts\n",
    "    )\n",
    "    \n",
    "    # Convert each embedding to a numpy array\n",
    "    embeddings = [np.array(data.embedding) for data in response.data]\n",
    "    return embeddings\n",
    "\n",
    "def cosine_similarities(vec, vecs):\n",
    "    \"\"\"Compute cosine similarity between one vector and one or more vectors.\n",
    "    \n",
    "    Args:\n",
    "        vec: a single vector (numpy array)\n",
    "        vecs: a single vector (numpy array) or a list of vectors\n",
    "    \n",
    "    Returns:\n",
    "        A single similarity value if vecs is one vector,\n",
    "        otherwise a list of similarity values.\n",
    "    \"\"\"\n",
    "    if isinstance(vecs, np.ndarray) and vecs.ndim == 1:\n",
    "        # Single-vector case\n",
    "        return cosine_similarity(vec, vecs)\n",
    "    else:\n",
    "        # List-of-vectors case\n",
    "        return [cosine_similarity(vec, v) for v in vecs]\n"
   ]
  },
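  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick offline sanity check of the similarity helpers above, using toy stand-in vectors rather than real API embeddings (the vectors below are illustrative assumptions, not model output): identical vectors should score 1.0, orthogonal vectors 0.0."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Toy stand-in vectors (no API call needed)\n",
    "a = np.array([1.0, 0.0, 0.0])\n",
    "b = np.array([0.0, 1.0, 0.0])\n",
    "\n",
    "print(cosine_similarity(a, a))   # identical vectors -> 1.0\n",
    "print(cosine_similarity(a, b))   # orthogonal vectors -> 0.0\n",
    "print(cosine_similarities(a, [a, b]))"
   ]
  },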
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_cos_scores(statement, papers_content):\n",
    "    # Embed the query statement and the candidate documents\n",
    "    query_embedding = get_embedding(statement)  # defined in the cell above\n",
    "    doc_embeddings = get_embeddings(papers_content)  # defined in the cell above\n",
    "\n",
    "    # Cosine similarity between the query and each document\n",
    "    embedding_scores = np.array([cosine_similarity(query_embedding, de) for de in doc_embeddings])\n",
    "\n",
    "    return embedding_scores"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "metadata": {},
   "outputs": [],
   "source": [
    "query_embedding = get_embedding(statement[\"statement_hyde\"])\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "EmbeddingsResponded(object='list', data=[Embedding(object='embedding', index=0, embedding=[-0.015329621, 0.030239513, 0.0073070875, -0.012296131, 0.011523449, -0.0018613518, -0.043499112, -0.0057474156, 0.023294918, 0.055098876, 0.017809834, -0.0021856872, 0.014060897, -0.006138526, 0.015453632, -0.05105422, 0.027148787, 0.004745791, -0.004378529, -0.02419161, 0.037145954, -0.009734834, -0.044109624, 0.03611571, 0.033387475, 0.026137624, -0.005108284, 0.017046692, 0.010216568, 0.0084470315, 0.030353986, 0.02340939, 0.010703071, 0.0017671514, -0.044796452, -0.008952613, -0.0057998816, 0.0028999408, 0.010903396, 0.029705314, -0.022398226, -0.009205404, 0.0048721866, 0.020604841, 0.013450383, 0.014785882, -0.010397814, -0.011447134, 3.398368e-05, -0.0041018897, 0.00727847, 0.0038729473, 0.008709362, 0.03134607, 0.028465208, 0.0059477403, 0.0010737651, 0.022169285, 0.004509694, 0.009400959, 0.0072975485, 0.04010313, 0.005442159, 0.01548225, 0.009663289, -0.008518576, -0.028388893, -0.039454456, -0.009291258, 0.012744476, -0.0030621085, 0.0603264, 0.016989456, 0.008804754, -0.0053229174, -0.0026781526, -0.0047696396, -0.011313585, -0.031155284, -0.021005493, -6.0179435e-05, 0.048841108, 0.004261673, 0.016741434, -0.019240726, -0.011990873, 0.028293502, -0.029877022, -0.0022763105, 0.0070447573, 0.012486916, 0.004512079, -0.015682574, 0.008986001, -0.019488746, 0.010149793, 0.0193075, 0.0001931704, -0.0052466034, 0.016455255, -0.01891639, 0.0012013529, -0.00846134, 0.0018804304, -0.04071364, -0.0073213964, 0.036134787, -0.026004074, -0.0037966329, 0.004407147, -0.032357235, -0.027473124, 0.008757058, 0.003965955, 0.00634839, 0.004721943, 0.012620466, -0.017361488, 0.024630418, -0.016369402, 0.026881687, -0.015644418, 0.021654163, -0.037489366, 0.017418724, 0.023104133, 0.0066345683, -0.0050748964, -0.0046861707, 0.013335912, -0.007884214, -0.029915178, -0.03317761, -0.0024384782, 0.08051151, -0.0019221647, -0.00794145, 0.015625339, -0.024248846, -0.011456674, 
0.001018318, 0.0045025395, 0.042621497, -0.0039754943, -0.009143399, -0.040522855, 0.010483667, -0.012782633, -0.004776794, 0.008356408, 0.007970068, 0.014223064, -0.006720422, -0.027950088, -0.014251683, -0.009214943, 0.010683992, 0.0036821617, 0.0078126695, 0.024859361, -0.035676904, -0.0023967437, 0.02381004, -0.027473124, -0.020452214, -0.0010475321, 0.0066250293, 0.0018923545, 0.0037704, -0.02884678, 0.00443338, 0.0024969063, 0.025698816, -0.017418724, 0.0031002655, 0.03478021, 0.02195942, -0.047124036, 0.02762575, -0.011847785, -0.006243458, 0.0012108922, 0.024535025, 0.018620672, -0.011685616, 0.00833733, 0.06452368, 0.0050081215, -0.036802538, 0.0142612215, 0.015176993, 0.0070447573, 0.013641168, 0.010626757, -0.026461959, -0.017666744, -0.0034722975, 0.01639802, 0.023065977, -0.005380153, -0.032700647, 0.0007881829, 0.0046694768, 0.010569521, 0.017142083, -0.0075408, -0.03172764, -0.0033387477, 0.026347488, 0.04430041, -0.016960837, 0.04487277, 0.0026161473, 0.024859361, 0.003581999, 0.012897105, 0.007106763, 0.0153582385, 0.032051977, 0.0041495864, 0.005623405, -0.014471086, 0.02884678, -0.015816124, 0.021806791, 0.023275841, -0.017065769, -0.016426638, -0.079061545, 0.04498724, 0.036211103, -0.018153248, -0.0090813935, 0.021482456, -0.004457228, 0.027186945, -0.016894063, -0.004600317, -0.004812566, 0.0058094207, 0.019555522, 0.02592776, 0.0039898036, 0.031174364, 0.017180242, -0.0011673692, 0.02724418, 0.0037393973, -0.026709981, 0.023848196, -0.023752805, -0.033292085, 0.0056424835, 7.508456e-05, -0.017485498, 0.002554142, 0.019030862, -0.015663495, -0.048077967, 0.012811252, 0.0047672545, -0.009410499, -0.0014261222, 0.004213976, 0.026366567, 0.02024235, -0.007841287, 0.051931836, 0.014099054, -0.0038705624, -0.034608502, 0.038595922, 0.019374276, 0.034837447, 0.006648877, -0.004552621, -0.010407353, -0.008881069, -0.019631837, 0.09066131, 0.002222652, 0.03212829, -0.04365174, 0.0030835718, 0.04506355, -0.019107176, -0.0028403203, -0.0049413466, 
0.043193854, -0.0193075, -0.01812463, 0.00011022339, -0.034608502, -0.0015656342, -0.0009634672, 0.02367649, 0.0070066005, -0.022054812, 0.021635083, 0.022856113, -0.032853276, 0.038977493, -0.0070161396, -0.034570348, -0.034722976, 0.016216774, 0.0038848713, -0.0410189, -0.0046074716, 0.011399439, 0.012143502, 0.02155877, -0.010750767, -0.004469152, -0.013555315, 0.0074931034, -0.01892593, -0.015921056, -0.020146957, 0.02211205, 0.020833785, -0.0037060098, 0.0016383711, 0.034017067, -0.0047147884, -0.03504731, 0.0074310983, -0.0124773765, 0.00077923987, -0.01851574, -0.003519994, 0.0067347307, 0.016569728, -0.0040828115, 0.0057187974, -0.008981231, 0.025488952, -0.0041996674, -0.04456751, -0.007101993, 0.033826284, -0.0203759, -0.008566272, 0.016302628, 0.020681156, -0.019708151, -0.02077655, 0.0288277, -0.015787506, 0.0048984196, -0.013660247, 0.027873773, -0.000800107, -0.023180448, -0.041743886, -0.0040780418, 0.017561812, -0.018840076, 0.00086747814, 0.007750664, 0.02222652, -0.016235853, 0.024363318, -0.02249362, -0.011075103, 0.011017867, -0.013354991, 0.009911311, -0.014404311, -0.019937092, 0.0013915423, 0.019765386, 0.0029380978, -0.05441205, -0.011628381, 0.018935468, -0.027549438, 0.015310543, 0.02169232, 0.0030239513, -0.024725812, 0.006935056, -0.040675484, 0.0029953336, -0.023313997, 0.015167453, -0.017275633, 0.02527909, -0.03720319, 0.03308222, -0.008389796, -0.0309645, 0.015138836, -0.007750664, 0.014366154, -0.0070781447, 0.03851961, 0.023466626, -0.090813935, 0.0071496894, -0.033406556, 0.018954547, 0.014156289, 0.0132500585, -0.006252997, 0.015243768, 0.01759043, 0.021730477, -0.012620466, -0.0004215169, 0.020471292, -0.01678913, -0.0064390134, -0.0018136554, 0.019879857, 0.0124773765, -0.0006772888, 0.039568927, 0.029304665, -0.028274423, -0.0020843325, 0.0022989663, 0.0011572337, 0.05742646, -0.016083224, -0.021883106, 0.024248846, -0.02577513, -0.010674453, -0.04849769, 0.015091139, -0.0105218245, -0.016312167, 0.013288216, -0.021539692, 
0.0052418336, -0.010502746, -0.00020062296, 0.0053086085, 0.012105345, 0.011580684, -0.039492615, -0.023371233, 0.004628935, -0.020700235, -1.0703741e-05, 0.010798464, 0.0037227036, 0.03399799, 0.04391884, 0.008308711, 0.011256349, -0.009949468, 0.008780906, 0.03357826, 0.013841494, -0.024916597, -0.0301632, 0.017180242, 0.0038133268, 0.005093975, 0.018677907, 0.092645474, -0.006944595, 0.022894269, -0.005079666, -0.0014654717, -0.004194898, 0.01759043, -0.01586382, 0.051931836, 0.019402893, 0.018563436, -0.0014177753, -0.0047839484, 0.01270632, 0.01628355, 0.004485846, -0.01653157, 0.025756054, -0.0022989663, -0.005804651, -0.012448759, -0.0018243871, 0.024420554, -0.0030621085, 0.00039588008, 0.0007732778, 0.0021976114, 0.02419161, 0.01031196, -0.027606674, -0.03653544, -0.0068492023, -0.0055280123, -0.036401886, 0.0021272593, 0.020452214, -0.00634839, -0.013269137, 0.02115812, -0.014804961, -0.03081187, -0.011838245, -0.011275427, 0.020146957, 0.008280094, -0.038023565, 0.016913142, 0.000916367, 0.03292959, 0.010893856, 0.009534509, -0.04708588, 0.020261427, 0.014976668, 0.0128207905, -0.015692113, 0.009572666, -0.0065677934, -0.010998788, -0.04987135, 0.011761931, -0.013211901, -0.023714647, 0.042430714, 0.017361488, 0.0068301237, 0.02115812, -0.013593473, 0.0063388506, -0.021577848, -0.007726816, -0.0067299614, 0.003410292, -0.037336737, 0.026023153, -0.039721556, -0.002260809, 0.00833256, 0.015110218, 0.012219816, 0.018754221, -0.0035772296, -0.00080726144, -0.013812875, -0.0085328845, -0.029380979, -0.014137211, -0.047200352, -0.031956583, 0.03441772, -0.02222652, -0.023199527, 0.0054803155, -0.003555766, -0.0035963082, -0.023581097, 0.021654163, -0.03403615, 0.056358058, 0.037909094, -0.007631423, 0.039111044, -0.05609096, -0.0101974895, 0.007979607, 0.013860572, 0.021902185, 0.001891162, -0.0111227995, -0.021539692, 0.014623715, 0.01694176, -0.023886355, -0.00846611, -0.006987522, -0.033005904, 0.001672951, -0.0037632454, 0.0042664423, 0.00873321, 
0.018582515, 0.020204192, -0.025431717, -0.0076934285, -0.035753217, -0.015176993, 0.004590778, 0.0301632, 0.034513112, 0.010540904, -0.023104133, 0.019107176, 0.025717895, 0.005661562, -0.030449377, 0.006892129, 0.003016797, 0.036859773, 0.018410807, -0.011094181, 0.013946426, -0.026481038, -0.020852864, -0.02445871, -0.01283033, 0.023600176, -0.039568927, -0.02129167, -0.003386444, 0.011447134, -0.01202903, 0.033368398, 0.02396267, -0.004442919, 0.013984582, 0.007392941, -0.039568927, 0.022169285, -0.020127878, -0.018859154, -0.01824864, -0.02050945, 0.010273803, -0.023180448, -0.015434553, 0.04098074, -0.008938304, 0.014537861, -0.025717895, -0.018544357, 0.0051369015, 0.00052555464, -0.011800088, -0.009243561, 0.005437389, 0.03083095, 0.0031288834, -0.028598757, -0.012009952, -0.01746642, -0.011714234, 0.0014296994, -0.022569934, 0.018820997, -0.00087344024, -0.0075169518, 0.017199319, 0.013354991, -0.0066822646, 0.0030764174, 0.016598346, 0.027301416, 0.02421069, 0.020929178, -0.005804651, 0.020929178, -0.010044861, 0.0070018307, -0.03663083, 0.008742749, 0.013517158, 0.011389899, -0.013660247, 0.015396396, 0.026252095, 0.010855699, 0.01707531, 0.01018795, -0.00793191, 0.01387965, 0.005065357, 0.029171115, -0.016083224, -0.00701137, 0.016998995, 0.031269755, 0.0010654182, 0.03344471, 0.023771882, -0.013431305, 0.013812875, -0.02381004, -0.030067807, 0.03611571, 0.026004074, 0.0030072576, -0.02726326, 0.016455255, -0.017304251, 0.016998995, -0.025965918, 0.0060431333, 0.029934257, 0.05834223, -0.016045067, 0.025031067, -0.0320329, 0.0035295333, -0.0037203187, 0.014127672, -0.015901977, -0.035333488, -0.011542527, 0.009458195, 0.017895687, -0.0038395596, -0.030048728, -0.03649728, 0.00076612335, -0.004576469, -0.0105218245, -0.0024969063, -0.017027613, 0.025355402, -0.027549438, 0.017294712, -0.01654111, -0.005399232, -0.007836518, -0.008265785, -0.0012281821, -0.006987522, -0.05784619, -0.008175162, 0.019078558, -0.005451698, 0.0061432957, 0.029991493, 
0.032509863, 0.033921674, 0.0028474748, 0.008022534, 0.013431305, -0.007793591, 0.0106935315, 0.021425221, 0.0201088, 0.017886147, -0.0013343067, 0.019259805, -0.039988656, -0.00075479544, -0.053801533, -0.0008722478, -0.0032695879, -0.03890118, -0.035409804, 0.032109212, -0.0100257825, -0.033826284, -0.00072558137, -0.013288216, -0.013803337, -0.03188027, -0.009038467, -0.046780623, -0.011437595, 0.037527524, 0.0086187385, -0.026423803, -0.0067299614, -0.0017504577, 0.041743886, 0.02381004, -0.015444092, -0.027206024, 0.017685823, -0.02260809, 0.011151417, 0.01771444, 0.08615877, -0.0064962488, -0.018840076, -0.0019507825, -0.017371027, -0.01057906, 0.04521618, 0.0022298065, 0.026118545, -0.023161368, 0.001693222, 0.013202362, -0.060898755, 0.0037012403, -0.001061841, -0.012248434, -0.0036678526, -0.012505994, -0.04842138, 0.029018486, -0.014404311, -0.03333024, -0.05479362, 0.015653957, -0.018096011, -0.017285174, -0.011399439, -0.03645912, -0.007912831, 0.047963493, -0.003639235, -0.013364529, 0.03218553, 0.01124681, -0.0076648104, -0.014242143, -0.0129924975, 0.0024277465, -0.0032266611, 0.022398226, -0.0007488334, -0.0021868797, -0.011609302, 0.0003875332, -0.007369093, 0.014356615, -0.001070784, 0.0032886665, -0.0003633869, 0.011408977, -0.0055613997, -0.006992291, 0.06448553, 0.008394565, -0.015625339, -0.0036225412, -0.008804754, -0.017809834, -0.008776137, -0.027415887, 0.003958801, -0.030621085, -0.0043308325, 0.016102303, -0.0007500258, 0.01308789, 0.015224689, 0.0021534923, -0.02592776, -0.036058474, -0.008623508, -0.00089192257, -0.034970995, 0.034818366, 0.015787506, -0.0301632, -0.017371027, -0.009329415, -0.035009153, -0.053381804, -0.014700029, 0.0006588065, 0.016741434, -0.024992911, -0.00047845446, -0.0019686688, -0.01455694, -0.0017909996, -0.012658623, 0.015663495, -0.044529352, -0.04510171, -0.031155284, 0.024401475, -0.021501534, 0.021119963, -0.029304665, -0.0029547915, 0.022932427, 0.014843117, -0.024763968, 0.021196278, 0.014251683, 
0.0027067703, 0.030773714, -0.02711063, -0.011828706, 0.029514529, 0.022569934, 0.0016526801, 9.449848e-05, 0.011714234, -0.0045287726, 0.021520613, -0.0049031894, -0.036993325, -0.054297574, 0.025603425, -0.0082753245, -0.01826772, -0.011275427, -0.025183696, -0.003267203, -0.01217212, -0.02976255, 0.0017540349, 0.01533916, -0.029457293, 0.019326579, -0.002056907, 0.000398563, -0.0153487, -0.010512285, 0.029323744, -0.0078460565, -0.00024995892, -0.002228614, 0.020871943, 0.022054812, 0.007340475, 0.038138036, -0.0143470755, 0.01746642, -0.043193854, -0.030850029, -0.015854282, -0.016598346, 0.018563436, 0.03716503, -0.012410602, -0.01308789, 0.01679867, 0.02049037, 0.00469571, -0.014633254, -0.027549438, -0.047047723, 0.010779385, 0.022054812, 0.0068492023, 0.00925787, -0.028217187, 0.018439425, 0.009806379, 0.034474954, -0.038576845, 0.014032279, 0.015653957, 0.004192513, 0.00688259, 0.031555936, -0.0034222163, -0.013021115, -0.02155877, -0.05093975, -0.007698198, -0.032986827, 0.03199474, -0.00064926717, 0.04826875, 0.021978498, -0.011389899, -0.023771882, 0.0050987448, 0.015777968, 0.016331246, -0.0028641685, -0.016636502, -0.036726225, 0.0140895145, 0.008690283, 0.009324645, -0.0054326192, 0.012868487, -0.0024420554, 3.1077183e-05, 0.0052561425, 0.07234589, 0.013364529, 0.015682574, -0.006706113, -0.015396396, 0.010168871, -0.010760306, -0.012544151, -0.013469461, -0.0141753685, 0.0044930005, 0.0027353882, -0.004512079, 0.0030859567, -0.013335912, -0.012868487, 0.020223271, 0.011666538, -0.0008048767, 0.0139178075, -0.020223271, -0.045406967, -0.00688259, 0.018964086, 0.0082753245, 0.04868848, 0.0005622212, 0.053000234, -0.01548225, -0.008270555, 0.003784709, -0.03819527, 0.022837033, 0.01177147, -0.019154873, -0.0034579886, -0.022188362, 0.030621085, -0.026729058, 0.026633667, 0.006691804, -0.003980264, -0.0022047658, 0.0042402092, -0.015138836, -0.019918013, -0.005661562, -0.019555522, -0.026309332, -0.02447779, 0.014566478, 0.011819166, -0.0073213964, 
0.015806586, 0.019202568, 0.01572073, -0.008766597, 0.00794145, 0.029457293, -0.0014869351, 0.022569934, 0.016493414, -0.013841494, 0.0037060098, -0.044796452, -0.007927141, 0.014060897, -0.0038729473, -0.0034603735, 0.012191199, 0.008404105, -0.016855905, 0.03872947, 0.011552067, -0.016302628, 0.034570348, -0.040561013, -0.02764483, 0.02871323, -0.015825663, 0.023867276, 0.029400058, 0.01269678, -0.029419135, -0.038920257, 0.00068026985, -0.0053563053, 0.0113803595, -0.020719314, 0.009687138, -0.0014213525, 0.015625339, -0.03661175, -0.021387063, -0.014366154, 0.017561812, -0.021711398, -0.01865883, 0.013326373, -0.0014511627, 0.0086521255, -0.02236007, 0.019164411, 0.0051941373, -0.0040398845, -0.0012687241, -0.011504371, 0.011027406, 0.04792534, 0.05303839, 0.0089955395, 0.0039778794, 0.017943384, 0.035219017, -0.004256903, -0.03268157, -0.045521438, -0.0037274733, 0.018935468, -0.015205611, 0.02445871, 0.0066584167, 0.032776963, 0.01813417, -0.017876608, 0.034474954, -0.022760719, 0.004111429, 0.003150347, -0.028083637, 0.033234846, 0.0054707765, -0.0076791192, 0.037489366, -0.0072307736, 0.019078558, -0.024554104, -0.0029118648, 0.012124423, 0.011017867, -0.015768427, 0.005213216, -0.010817542, 0.0071973857, -0.024573183, 0.043003067, 0.01283987, 0.010722149, 0.024859361, 0.012362906, 0.0044095316, 0.002778315, 0.009911311, 0.0017540349, 0.021329828, -0.03851961, 0.017170701, 0.0010254724, 0.023313997, -0.018477583, -0.013727022, 0.03478021, 0.0036082321, 0.01838219, 0.0046384744, -0.019526904, 0.01997525, 0.047314823, 0.009706216, -0.010827081, -0.018668368, -0.006014515, -0.0037370124, 0.023294918, 0.011618841, 0.031117128, 0.013908269, 0.01308789, -0.021921262, 0.016359864, 0.0039611855, -0.0049031894, -0.017609509, -0.0031932737, -0.021367984, 0.003386444, -0.032719728, -0.021902185, -0.0038395596, 0.015281925, -0.01825818, 0.0035605358, -0.0054803155, 0.049947664, 0.04189651, 0.032509863, -0.019517364, -0.0038133268, -0.010884318, -0.0042425944, 
0.045597754, -0.04178204, -0.012525073, 0.00045699108, 0.018010158, 0.008757058, 0.0056043263, -0.028904015, -0.009863614, -0.0010463396, 0.0015715961, -0.027587594, 0.014442468, 0.00012445777, -0.011857323, 0.04193467, 0.018754221, -0.0012997268, 0.0012985343, 0.024038984, -0.005103514, 0.022321913, -0.0039087194, -0.02236007, -0.0396834, -0.0040661176, 0.0049938126, -0.008213319, 0.016169077, 0.025469875, 0.017294712, -0.025164617, 0.03504731, -0.01044551, 0.017504577, -0.00030883416, -0.018496662, -0.0069779824, -0.031288836, 0.03172764, 0.036325574, -0.029800707, 0.03226184, 0.049260836, 0.014709568, -0.06872097, 0.0019209723, 0.0144329285, -0.037775543, -0.0017695362, 0.010626757, -0.01905948, -0.06482894, 0.01217212, 0.057464615, 0.010226107, 0.0017182626, 0.015176993, -0.019956172, -0.0032910511, -0.0015727886, 0.030907264, 0.0035629207, 0.01891639, 0.019631837, -0.043460954, -0.010102096, 0.05540413, 0.0027139247, 0.033387475, 0.01931704, 0.037355814, -0.019918013, 0.0018088857, -0.0018673139, -0.0038586382, 0.054068632, -0.008504267, 0.013975044, -0.029438214, -0.0030072576, -0.047162194, 0.038424216, -0.024535025, -0.0032433548, -0.0033148993, 0.008628278, 0.00820855, 0.034951918, 0.038805787, 0.005671101, -0.017838452, -0.018582515, 0.032738805, 0.036745302, 0.013984582, -0.0046909405, 0.007893753, 0.018324954, 0.008318251, 0.00455739, 0.02249362, -0.011323124, 0.05013845, -0.01639802, 0.0061576045, 0.004142432, 0.0026543043, -0.004671862, -0.014833579, -0.02220744, -0.0057044886, -0.01613092, -0.009343724, -0.0094248075, 0.0067871967, 0.0415531, -0.0012508379, -0.00039617816, 0.039454456, -0.034818366, -0.0021248744, -0.009152938, -0.002604223, 0.0068444326, 0.011628381, -0.0035653054, -0.03588677, 0.0034365251, 0.04162941, 0.020948255, -0.022264676, -0.021749556, 0.009100472, -0.011714234, 0.012381984, 0.0097157555, -0.0048745717, -0.0060526724, -0.0076409625, 0.024611339, -0.018849615, -0.01388919, -0.0037393973, 0.023638332, -0.0012687241, 
0.0121339625, -0.011456674, 0.00674904, ...])], model='embedding-3', usage=CompletionUsage(prompt_tokens=41, completion_tokens=0, total_tokens=41))"
      ]
     },
     "execution_count": 38,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "response = client.embeddings.create(\n",
    "    model=\"embedding-3\",\n",
    "    input=statement[\"statement_hyde\"]\n",
    ")\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 40,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "41"
      ]
     },
     "execution_count": 40,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "response.usage.total_tokens"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 47,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "102119"
      ]
     },
     "execution_count": 47,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "sum([len(p) for p in papers_content])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "26065"
      ]
     },
     "execution_count": 41,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "paper__content_response = client.embeddings.create(\n",
    "    model=\"embedding-3\",\n",
    "    input=papers_content\n",
    ")\n",
    "# Open question: papers_content holds 20 papers with 102119 characters in total, yet the request consumed 26065 tokens.\n",
    "# embedding-3 takes the text to vectorize as input and supports arrays of strings.\n",
    "# embedding-2: a single request supports at most 512 tokens per item, and the array may not exceed 8K tokens in total;\n",
    "# embedding-3: a single request supports at most 3072 tokens per item, and the array may not exceed 8K tokens in total;\n",
    "# the array may also contain at most 64 items.\n"
   ]
  },
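  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The limits noted above imply that lists longer than 64 strings must be split before calling `client.embeddings.create`. A minimal batching sketch (the helper name is an assumption, not part of any client library):\n",
    "\n",
    "```python\n",
    "def batch_for_embedding(texts, max_items=64):\n",
    "    # Yield slices of `texts` with at most `max_items` entries each,\n",
    "    # matching the 64-item array limit of embedding-3.\n",
    "    for start in range(0, len(texts), max_items):\n",
    "        yield texts[start:start + max_items]\n",
    "\n",
    "# e.g. 130 texts would be sent as three requests of 64, 64 and 2 items\n",
    "batches = list(batch_for_embedding([str(i) for i in range(130)]))\n",
    "```\n"
   ]
  },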
  {
   "cell_type": "code",
   "execution_count": 50,
   "metadata": {},
   "outputs": [],
   "source": [
    "paper_embeddings = [np.array(data.embedding) for data in paper__content_response.data]\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 51,
   "metadata": {},
   "outputs": [],
   "source": [
    "query_embedding = get_embedding(statement[\"statement_hyde\"])  # assumed to be defined elsewhere"
   ]
  },
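  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`cosine_similarity`, used in the next cell, is presumably defined earlier in the notebook; a minimal NumPy sketch consistent with how it is called here:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def cosine_similarity(a, b):\n",
    "    # Cosine of the angle between two embedding vectors.\n",
    "    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)\n",
    "    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))\n",
    "```\n"
   ]
  },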
  {
   "cell_type": "code",
   "execution_count": 52,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([0.48633673, 0.54856038, 0.54749634, 0.47721006, 0.47075218,\n",
       "       0.44072269, 0.56337998, 0.53869614, 0.45702559, 0.44775399,\n",
       "       0.49128021, 0.53277574, 0.53265494, 0.48158977, 0.4545446 ,\n",
       "       0.50440799, 0.43219584, 0.44288487, 0.50983904, 0.44883318])"
      ]
     },
     "execution_count": 52,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "embedding_scores = np.array([cosine_similarity(query_embedding, de) for de in paper_embeddings])\n",
    "embedding_scores"
   ]
  },
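  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To pick the best-matching papers from these scores, the indices can be ranked with `np.argsort`; a small self-contained sketch (the score values below are illustrative, not the real ones):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# Illustrative similarity scores for four papers.\n",
    "scores = np.array([0.486, 0.549, 0.547, 0.563])\n",
    "# Indices of the top-2 papers, highest similarity first.\n",
    "top_k = np.argsort(scores)[::-1][:2]\n",
    "```\n"
   ]
  },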
  {
   "cell_type": "code",
   "execution_count": 55,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'statement': 'Research on Text-to-SQL generation has traditionally focused on scenarios where each natural language question is associated with one correct SQL query.',\n",
       " 'related_sen_id': [0],\n",
       " 'statement_hyde': 'Traditional research in the domain of Text-to-SQL generation has predominantly concentrated on scenarios wherein a single natural language query corresponds to a unique, correct SQL query, as documented in [Reference].'}"
      ]
     },
     "execution_count": 55,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "statement"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 54,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[(0.563379980311657,\n",
       "  {'id': 454845652744816630,\n",
       "   'distance': 0.6249851584434509,\n",
       "   'entity': {'paper_id': '646d8642d68f896efa0a3040',\n",
       "    'paper_title': 'Exploring Chain-of-Thought Style Prompting for Text-to-SQL',\n",
       "    'chunk_id': 1,\n",
       "    'chunk_text': '# 1 Introduction\\nText-to-SQL parsing, the task of translating a natural language question into a SQL query, has found wide applications in building natural language interfaces to databases and thus piqued significant research interest in recent years ( Wang et al. ,2020 ;Deng et al. ,2021 ;Yu et al. ,2021 ;Rajkumar et al. ,2022 ;Hongjin et al. ,2023 ;Ni et al. ,2023 ). To develop a text-to-SQL parser, a prevalent approach is to collect labeled data and train a model via supervised learning ( Shaw et al. ,2021 ;Scholak et al. ,2021 ). While effective, this approach necessitates a considerable amount of training data, which is costly to obtain because annotating SQL queries requires programming expertise. Consequently, the lack of data hinders real-life applications of stateof-the-art parsers, especially on novel databases and unseen domains ( Suhr et al. ,2020 ).  \\n\\nAs an alternative to supervised learning, incontext learning ( Brown et al. ,2020 ), an emergent capability of large language models (LLMs), alleviates the need for large-scale data. With only a few examples, in-context learning enables LLMs to demonstrate performance comparable to or even better than fully supervised models on many NLP tasks, such as question answering, machine translation, and natural language inference ( Chowdhery et al. ,2022 ;Kojima et al. ,2022 ;Wei et al. ,2022b ,a ;Brohan et al. ,2023 ). When applied to text-to-SQL parsing, in-context learning also shows encouraging results, but it still lags behind supervised approaches ( Rajkumar et al. ,2022 ;Chang et al. ,2023 ;Liu et al. ,2023a ).  \\n\\nWe hypothesize that the under-performance is because text-to-SQL parsing requires complex, multistep reasoning. Even for a seemingly simple question, such as “What is the ID of Kyle,\" a model has to ground it to database schemas, infer the relational algebra among schema items, and construct syntactically correct SQL clauses. 
Recently, the chain-of-thought (CoT) style promptings ( Wei et al. ,2022b ;Zhou et al. ,2023 ) are proposed and have shown promising multi-step reasoning capabilities. To enhance LLMs’ reasoning ability, we systematically explore CoT style prompting for text-to-SQL parsing. Specifically, we seek to answer two research questions: (1) Which prompting style is better, generating all reasoning steps in a single pass, or iterative prompting and problem solving? (2) Does including more detailed information in the reasoning steps lead to better results for text-to-SQL parsing?  \\n\\nTo address the questions, we adopt two widely used prompting methods for text-to-SQL parsing As the first method, we apply chain-of-thought prompting (Wei et al. ,2022b ) by drawing an analogy between its problem-solving process and the execution procedure of a SQL query. Referring to the logical execution order of SQL clauses (Narechania et al. ,2021 ), we compose the intermediate execution steps in natural language and prompt LLMs to derive them before generating the SQL query. As the second method, we follow Zhou et al. (2023 ) to apply least-to-most prompting in two stages: (1) reduction: generate a series of sub-questions from the original question and (2) solving: iteratively translate each sub-question into its corresponding SQL query, with the original question as the last sub-question. However, in our case study 1 , we find that directly applying chainof-thought and lease-to-most promptings leads to error propagation issues. Their rationales contain very demonstration-specific information and are easier to mislead the reasoning process. Furthermore, least-to-most prompting technique leads to additional computational and time cost due to the multiple stages of reduction and solving.  \\n\\n  \\nFigure 1: Different prompting methods with multi-step reasoning for text-to-SQL parsing: (a) Chain-of-Thought, (b) Least-toMost, and our proposed (c) QDecomp , and (d) QDecomp $^+$ InterCOL .  
\\n\\nTherefore, we propose a new method called question-decomposition prompting (QDecomp ). Similar to chain-of-thought prompting, QDecomp generates a sequence of reasoning steps and then the SQL query in one pass. However, we modify the steps to instruct LLMs to decompose the original complex question, akin to the problem reduction stage in least-to-most prompting. Also, to help LLMs ground database schemas, we design a variant of question decomposition prompting (QDecomp $^+$ InterCOL ) by including the table and column names involved in each sub-question. We conduct comprehensive evaluations on two textto-SQL datasets, Spider ( Yu et al. ,2018 ) and Spider Realistic ( Deng et al. ,2021 ). Our proposed prompting methods substantially outperform existing prompting ones by 2.4 and 1.5 point absolute gains on the development set of Spider and Spider Realistic, respectively. The results suggest that the iterative prompting which is costly due to additional computational resources requirement as in least-to-most prompting may not be necessary $(R Q I)$ . In addition, our analysis shows the proposed question decomposition prompting methods, which do not instruct LLMs to generate detailed reasoning steps, reduce the chance of error propagation when generating the reasoning steps. ( RQ2 ). Finally, we evaluate the robustness of our proposed prompting methods by varying the number, selection, and format of in-context examples and show that they can achieve consistently strong performance across different settings.',\n",
       "    'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}}),\n",
       " (0.5485603762313501,\n",
       "  {'id': 454845641360425782,\n",
       "   'distance': 0.6537715196609497,\n",
       "   'entity': {'paper_id': '6461b9c9d68f896efad43133',\n",
       "    'paper_title': 'Interactive Text-to-SQL Generation Via Editable Step-by-Step Explanations',\n",
       "    'chunk_id': 1,\n",
       "    'chunk_text': '# 2 Related Work\\n\\n# 2.1 Text-to-SQL Generation\\nNatural language interfaces have long been recognized as a way to expand access to databases ( Hendrix et al. ,1978 ).The construction of several large text-to-SQL datasets, such as WikiSQL ( Zhong et al. ,2017 ) and Spider ( Yu et al. ,2018 ), has enabled the adoption of deep learning models in this task, achieving unprecedented performance in recent years ( Rubin and Berant ,2021 ;Wang et al. ,2020a ;Scholak et al. ,2021 ;Yu et al. ,2020 ;Hwang et al. ,2019 ). Our technique is based on the recent success of neural text-to-SQL models. Unlike existing models that perform end-to-end SQL generation, we propose a new interaction mechanism for users to validate and refine generated queries through step-by-step explanations.  \\n\\nAs the first step to demonstrate the feasibility of our approach, we focus on single-turn SQL generation ( Yu et al. ,2018 ) in this work. There has also been recent work that supports multi-turn SQL generation ( Yu et al. ,2019a ,b;Guo et al. ,2021 ), where a sequence of interdependent queries are expressed in multiple utterances in a dialog. Models designed for multi-turn SQL generation typically need to reason about the dialog context and effectively encode the historical queries ( Wang et al. ,2021 ;Hui et al. ,2021 ;Zhang et al. ,2019 ;Cai and Wan ,2020 ;Wang et al. ,2020b ). Our approach can be extended to support multi-turn SQL generation by initiating separate refinement sessions for individual queries while incorporating the contextual information of previous queries into explanation generation and text-to-clause generation.\\n\\n# 2.2 Interactive Semantic Parsing for SQL\\nRecently, there has been a growing interest in interactive approaches that elicit user feedback to guide SQL generation. Iyer et al. (2017 ) proposed to allow users to flag incorrect queries and continuously retrain the model. Both DIY ( Narechania et al. 
,2021 ) and NaLIR ( Li and Jagadish ,2014a ,b)enable users to select alternative values or subexpressions to fix an incorrect SQL query. PIIA ( Li et al. ,2020 ), MISP ( Yao et al. ,2019 ), and DialSQL ( Gur et al. ,2018 ) proactively ask for user feedback via multiple-choice questions. A common limitation of these methods is that they only solicit feedback in constrained forms, hindering their flexibility and effectiveness in addressing the variability of SQL errors. In contrast, our approach allows more flexible feedback through direct edits to the explanations generated by the model.  \\n\\nThe only work that supports open-ended user feedback in SQL generation is NL-EDIT ( Elgohary et al. ,2021 ). NL-EDIT is trained on SPLASH ( Elgohary et al. ,2020 ), a dataset of SQL errors and user feedback utterances. Given an incorrect query, NL-EDIT allows users to provide a clarification utterance. Based on the utterance, the model generates a sequence of edits to the SQL query. Incorporating feedback expressed in a completely free-text utterance is challenging for two reasons:  \\n\\n  \\nFigure 2: An Overview of Interactive SQL Generation and Refinement with Editable Step-by-Step Explanations  \\n\\n(1) the model needs to infer which part of the SQL query to fix; (2) the model needs to determine what changes are being requested. In contrast, S TEPS asks users to directly edit an NL explanation and make corrections to the explanation. Comparing the initial explanation with the user-corrected explanation makes it easier to locate the part of a SQL query that needs to be changed and infer what change to make.  \\n\\nThe idea of SQL decomposition is similar to recent work that decomposes a user question to sub-questions on SPARQL ( Mo et al. ,2022 ). Their approach requires a crowd-sourced dataset to train a question decomposition model. In contrast, our rule-based method generates step-by-step explanations without the need for training a model. 
This also allows our system to map each entity in the explanation to the corresponding SQL element, making it easier for SQL correction (Sec. 3.2 ).\\n\\n# 2.3 Explaining SQL Queries in NL\\nOur approach is also related to prior work that generates NL explanations for SQL queries. Simitsis and Ioannidis (2009 ) argued that databases should “talk back” in human language so that users can verify results. Kokkalis et al. (2012 ) and Koutrika et al. (2010 ) used a graph-based SQL translation approach, where each query is represented as a graph and the explanation is generated by traversing the graph. Elgohary et al. (2021 ,2020 ) employed a template-based explanation approach, where they manually curated 57 templates for explanation generation. These approaches have limited capability to handle arbitrary SQL queries. To address this limitation, we propose a rule-based method to first explain terminal tokens (e.g., operators, keywords) and gradually compose them into a complete explanation based on the derivation rules in the SQL grammar. Another key difference is that none of the existing approaches supports editable explanations for SQL correction, which is a key feature to solicit user feedback in our approach.',\n",
       "    'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}}),\n",
       " (0.5474963359987777,\n",
       "  {'id': 454845681760490286,\n",
       "   'distance': 0.6474056243896484,\n",
       "   'entity': {'paper_id': '6535d747939a5f408295c649',\n",
       "    'paper_title': 'Benchmarking and Improving Text-to-SQL Generation under Ambiguity',\n",
       "    'chunk_id': 1,\n",
       "    'chunk_text': '# 2 Background and Related Work\\nA Text-to-SQL model takes as input a question expressed as a natural language text x, and a database schema scomprising of table and column names, and outputs an SQL program ywhich can be executed against the database to answer the user’s question. Figure 1 shows an example. The training data for the task comprises (text, schema, SQL) triplets spanning multiple distinct databases.  \\n\\n  \\nFigure 1: A Text-to-SQL system converts a user question to an SQL query, conditioned on the database schema and/or content.  \\n\\nBenchmarks. Popular benchmarks for the Textto-SQL task are WikiSQL ( Zhong et al. ,2018 )and SPIDER ( Yu et al. ,2018 ). A few others have been proposed recently to capture real-world scenarios, such as KaggleDBQA ( Lee et al. ,2021 ), SEDE ( Hazoom et al. ,2021 ), and EHRSQL ( Lee et al. ,2022 ). They all attach one SQL per text, though some of them mention the problem of ambiguity in real-world datasets. Dr. SPIDER ( Chang et al. ,2023 ), designed to test the robustness of existing models, perturbs either the text or schema of SPIDER but still assigns one SQL per text.  \\n\\nAmbiguity in SQL Although ambiguity has been studied in other fields of NLP ( Pilault et al. ,2023 ;Li et al. ,2022 ;Futeral et al. ,2022 ), it has been unexplored in the context of semantic parsing. Ambiguity in SQL arising from related column names is discussed in ( Wang et al. ,2022 ), but they only consider column ambiguity. Their method of recognizing ambiguous queries depends on labeling words of the text and does not generalize to other kinds of ambiguity. To the best of our discernment, AmbiQT represents the first open benchmark for testing coverage of ambiguous alternatives.  \\n\\nDiverse Decoding. Prior work has critiqued the lack of meaningful diversity in beam-search outputs ( Finkel et al. ,2006 ;Gimpel et al. ,2013 ;Li et al. ,2016 ;Li and Jurafsky ,2016 ). In response, many fixes have been proposed. 
Some proposals attempt to restrict the tokens sampled, using strategies like Nucleus sampling ( Holtzman et al. ,2020 ), Truncated Sampling ( Hewitt et al. ,2022 ), and Typical Sampling ( Meister et al. ,2023 ), while some rely on Template-Based decoding ( Wiseman et al. ,2018 ;Zhang et al. ,2022 ;Fu et al. ,2023 ;Elgohary et al. ,2020 ;Awasthi et al. ,2022 ). A third approach is to generate a prefix with high diversity first, then generate the rest of the sentence with lower diversity. Narayan et al. (2022 ) follow this recipe but focus on incorporating diverse entity orders in text summarization.  \\n\\n<html><body><table><tr><td rowspan=\"2\">Kind of ambiguity</td><td rowspan=\"2\">Count</td><td colspan=\"3\">Example</td></tr><tr><td>QuestionText</td><td>SQL#1</td><td>SQL#2</td></tr><tr><td>Column Ambiguity (C)</td><td>1240</td><td>List the ids of all students.</td><td>SELECTroll_number FROMstudents</td><td>SELECTadmission_number FROMstudents</td></tr><tr><td>Table Ambiguity (T)</td><td>1417</td><td>How many singers do wehave?</td><td>SELECT COUNT(*) FROM artist</td><td>SELECT COUNT(*) FROM performer</td></tr><tr><td>Join Ambiguity (J)</td><td>288</td><td>Whatarethemakers and models?</td><td>SELECT maker，model FROM model</td><td>SELECT t2.maker，t1.model FROM modelASt1JOINmodel_maker AS t2 ON t1.model_id = t2.model_id</td></tr><tr><td>Precomputed Aggregates (P)</td><td>101</td><td>for each pet type.</td><td>Find the average weight|SELECT AVG(weight)， pettype FROM pets GROUP BY pettype</td><td>SELECT avg_weight，pettype FROM pets_weight</td></tr></table></body></html>\\n\\nTable 1: The AmbiQT benchmark. For each question, we list two valid SQL queries as per the schema. The schema is not shown here, but the ambiguity in it can be inferred based on the two SQL queries.\\n\\n# 3 AmbiQT: A Benchmark of Ambiguous Text-to-SQL Conversion\\nAmbiQT is constructed so that each text query has two distinct valid SQL interpretations. 
Motivated by our experience working with real-life databases, we designed AmbiQT to encompass four types of ambiguity. Each entry is designed so that both alternatives have a similar relevance to the question, and a well-calibrated decoding method is expected to rank them close by in their outputs.  \\n\\nWe create AmbiQT by modifying the SPIDER (Yu et al. ,2018 ) dataset, and use ChatGPT ( OpenAI ,2022 ) to aid with the creation. In each case, we modify the schema instead of the text as that provides greater control over the modification process. We explain the kinds of ambiguity in AmbiQT below and portray examples of each in Table 1 .  \\n\\nColumn Ambiguity (C). Unlike the SPIDER benchmark where column names usually appear verbatim in the question text (like born state for the column born_state ), when users unaware of the schema pose a natural question, they introduce column ambiguity ( Wang et al. ,2022 ). For example, “ What is the capacity of O2 Arena? ” could be ambiguous if the schema has separate columns for standing and seating capacity. Likewise, a query on the number of under-nourished children is ambiguous if we have different columns for “under-weight children” and “stunted growth in children”.  \\n\\nTo simulate column ambiguity, for each text $\\\\mathbf{x}$ ,schema s, and SQL yin SPIDER, we prompt ChatGPT to generate two synonyms for each column name of sin a one-shot manner. Appendix A furnishes more details of the prompt. We then modify sby replacing $c$ with two columns $c_{1},c_{2}$ , and we use yto generate two queries $\\\\mathbf{y}_{1},\\\\mathbf{y}_{2}$ where all mentions of $c$ are replaced with $c_{1}$ in $\\\\mathbf{y}_{1}$ and with $c_{2}$ in $\\\\mathbf{y}_{2}$ . An example appears in the first row of Table 1 . We do not reuse $c$ because the SPIDER dataset often contains column names verbatim in the question, and that would violate our attempt at keeping the two options at similar relevance levels. 
We modify one column at a time and generate up to 3 examples from each original entry.  \\n\\nTable Ambiguity (T). Table name ambiguity is common in databases obtained by integrating multiple data sources, as in web tables ( Cafarella et al. ,2008 ;Pimplikar and Sarawagi ,2012 ). Here again, we prompt ChatGPT to generate two alternate names for each table. We then modify one SQL yto generate two candidates ${\\\\bf y}_{1},{\\\\bf y}_{2}$ as shown in Table 1 .  \\n\\nJoin Ambiguity (J). In production databases, a logical table is often vertically partitioned across several tables for efficient clustered access ( Stonebraker et al. ,2019 ). Column names overlapping across tables leads to Join Ambiguity. Suppose we have two tables: (1) person with columns id, name, email_address , and (2) person_details with columns id, postal_address, photo .A question asking for a person’s name and address is ambiguous on whether a JOIN with the person_details is necessary. We expose such ambiguity by modifying the schema as follows.  \\n\\nConsider a $(\\\\mathbf{x},\\\\mathbf{s},\\\\mathbf{y})$ triplet. Suppose yinvolves selecting two or more columns $c_{1},c_{2},\\\\ldots.$ not necessarily in the same order, from a table $t$ . Suppose further that $c_{1}$ is not a primary key of $t$ . We create a table called $t\\\\_c_{1}$ that includes just the primary key $p k_{t}$ of $t$ , and $c_{1}$ . The first alternative $\\\\mathbf{y}_{1}$ is $\\\\mathbf{y}$ and the second alternative $\\\\mathbf{y}_{2}$ uses a join over $t$ and $t\\\\_c_{1}$ , with everything else staying the same as y.  \\n\\n  \\nFigure 2: Beam Search works well when targeting only one output, but leads to superficial diversity, for example via different grouping and erroneous variants of column names.  \\n\\nPrecomputed Aggregates $(\\\\mathbf{P})$ :. This ambiguity is particularly common in data warehouses such as Data Commons which pre-aggregate certain variables. 
For instance, the “ total rice production ” of a state might refer to the column rice_production of state rather than a sum over it. Text-toSQL models have a bias toward introducing a sum()...group-by clause every time total appears in the text. The non-aggregated alternative is usually missing in the top$k$ options. We incorporate this ambiguity as follows.  \\n\\nFor each $(\\\\mathbf{x},\\\\mathbf{s},\\\\mathbf{y})$ , where $\\\\mathbf{y}$ has at least one aggregate, we construct a new table $t^{\\\\prime}$ . For each aggregate $\\\\boldsymbol{\\\\mathcal{A}}$ over column $c$ in y, we add to $t^{\\\\prime}$ the columns and the columns grouped by in $A^{\\\\prime}\\\\_c$ for all $\\\\mathcal{A}^{\\\\prime}\\\\,\\\\in\\\\,\\\\{\\\\mathsf{a v g},\\\\mathsf{s u m},\\\\mathsf{m i n},\\\\mathsf{m a x}\\\\}$ y. For count $(\\\\star)$ ,we add a column called number . We get two gold queries, the original yand a second with the groupby replaced by a direct SELECT on $t^{\\\\prime}$ as shown in the example in Table 1 . We also support aggregates across multiple tables but skip the details here.',\n",
       "    'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}}),\n",
       " (0.5386961442023729,\n",
       "  {'id': 454845681740829484,\n",
       "   'distance': 0.6236740350723267,\n",
       "   'entity': {'paper_id': '6535d747939a5f408295c649',\n",
       "    'paper_title': 'Benchmarking and Improving Text-to-SQL Generation under Ambiguity',\n",
       "    'chunk_id': 0,\n",
       "    'chunk_text': '# Benchmarking and Improving Text-to-SQL Generation under Ambiguity\\nAdithya Bhaskar $\\\\mathbf{\\\\nabla}_{*}\\\\bigotimes\\\\bigtriangleup$ Tushar Tomar ∗♠ Ashutosh Sathe ♠Sunita Sarawagi ♠♠IIT Bombay ♢Princeton University\\n\\n# Abstract\\nResearch in Text-to-SQL conversion has been largely benchmarked against datasets where each text query corresponds to one correct SQL. However, natural language queries over reallife databases frequently involve significant ambiguity about the intended SQL due to overlapping schema names and multiple confusing relationship paths. To bridge this gap, we develop a novel benchmark called AmbiQT with over 3000 examples where each text is interpretable as two plausible SQLs due to lexical and/or structural ambiguity.  \\n\\nWhen faced with ambiguity, an ideal top$k$ decoder should generate all valid interpretations for possible disambiguation by the user ( Elgohary et al. ,2021 ;Zhong et al. ,2022 ). We evaluate several Text-to-SQL systems and decoding algorithms, including those employing state-of-the-art LLMs, and find them to be far from this ideal. The primary reason is that the prevalent beam search algorithm and its variants, treat SQL queries as a string and produce unhelpful token-level diversity in the top$k$ .  \\n\\nWe propose LogicalBeam, a new decoding algorithm that navigates the SQL logic space using a blend of plan-based template generation and constrained infilling. Counterfactually generated plans diversify templates while in-filling with a beam-search, that branches solely on schema names, provides value diversity. LogicalBeam is up to $2.5\\\\times$ more effective than state-of-the-art models at generating all candidate SQLs in the top$k$ ranked outputs. 
It also enhances the top-5 Exact and Execution Match Accuracies on SPIDER and KaggleDBQA.\\n\\n# 1 Introduction\\nResearch on Text-to-SQL generation has focused on scenarios where each natural language question is associated with one correct SQL (Zelle and Mooney, 1996; Tang and Mooney, 2000; Scholak et al., 2021a; Wang et al., 2020; Rubin and Berant, 2021; Xie et al., 2022; Arcadinho et al., 2022; Zeng et al., 2022; Scholak et al., 2021b; Pourreza and Rafiei, 2023). Popular benchmarks driving such research, including WikiSQL (Zhong et al., 2018), SPIDER (Yu et al., 2018), its robust perturbations (Chang et al., 2023), and even “in-the-wild” benchmarks such as KaggleDBQA (Lee et al., 2021) and SEDE (Hazoom et al., 2021) all associate one correct SQL with text. Meanwhile, ambiguity is prevalent in real-life databases, particularly the ones obtained by integrating several data sources for data analysis, where a natural language interface is most in demand. The sources of ambiguity are several: inherent ambiguity of natural language, the user’s ignorance of table/column names, overlapping strings in column names, underspecified clauses, and confusion about whether aggregates are pre-computed, or if a join is required. Hazoom et al. (2021) observe that up to $87\\\\%$ of queries on the stack exchange database are underspecified, and Wang et al. (2022) mention that $11\\\\%$ of queries exhibited ambiguity in column names. Although prior work has brought up ambiguity, there is no publicly available benchmark with ambiguous queries, nor a comprehensive evaluation of systems under ambiguity.  \\n\\nOur first contribution is to bridge this gulf by developing a benchmark, AmbiQT, that tests performance under ambiguity in the context of current models. AmbiQT includes over 3000 examples, each associating a natural question on a database with two valid SQLs. 
Inspired by our experience with several real-world datasets, we target four types of ambiguity spanning both lexical (ambiguous column and table names) and structural (whether a join is necessary, and whether an aggregate is pre-computed) ambiguity. The benchmark is generated via a combination of ChatGPT (OpenAI, 2022) based synonym generation and perturbation, and standard rule-based perturbation.  \\n\\nWhen faced with ambiguity, an ideal Text-to-SQL system should incorporate all valid alternatives in its top-$k$ SQL outputs, for user resolution. We show that present approaches, ranging from T5-3B (Raffel et al., 2019) to SOTA models, fail to generate all ambiguous outputs with any decoding strategy, including beam search and diversity-promoting sampling methods such as Nucleus (Holtzman et al., 2020) and Typical sampling (Meister et al., 2023). Most outputs are small lexical tweaks of the top choice, bringing about little meaningful diversity in SQL structures or schema alternatives. Even SOTA LLMs like ChatGPT (OpenAI, 2022) suffer from this issue.  \\n\\nTo remedy the lack of diversity, we propose a new decoding algorithm, LogicalBeam, that allocates branching to explore underlying logical variants of the SQL rather than the string form. We catalog the errors of T5-3B (Raffel et al., 2019) on the SPIDER dev split and use our insights to encourage targeted types of diversity: the number of JOINs and selections, and table/column names.  \\n\\nOur main contributions are:   \\n• We develop AmbiQT, the first benchmark that tests performance under four types of ambiguity over 3000+ examples.   \\n• We show that SOTA methods, including a fine-tuned T5-3B, RESDSQL (Li et al., 2023), OpenAI Codex, and ChatGPT, provide a poor representation of ambiguity despite their high accuracy on conventional benchmarks.   
\\n•We present LogicalBeam, a two-step algorithm that generates plan-based templates with counterfactually controlled plan diversity and fills them via a beam search that branches only on schema names.   \\n•We show that LogicalBeam consistently increases the fraction of time when all gold SQLs get generated in the Top-5 choices by $1.5-2.5\\\\times$ over the baselines across the board on AmbiQT.',\n",
       "    'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}}),\n",
       " (0.5327757362113476,\n",
       "  {'id': 454845641342337844,\n",
       "   'distance': 0.6175075769424438,\n",
       "   'entity': {'paper_id': '6461b9c9d68f896efad43133',\n",
       "    'paper_title': 'Interactive Text-to-SQL Generation Via Editable Step-by-Step Explanations',\n",
       "    'chunk_id': 0,\n",
       "    'chunk_text': '# Interactive Text-to-SQL Generation via Editable Step-by-Step Explanations\\nYuan Tian (1), Zheng Zhang (2), Zheng Ning (2), Toby Jia-Jun Li (2), Jonathan K. Kummerfeld (3), and Tianyi Zhang (1); (1) Purdue University, (2) University of Notre Dame, (3) The University of Sydney\\n\\n# Abstract\\nRelational databases play an important role in business, science, and more. However, many users cannot fully unleash the analytical power of relational databases, because they are not familiar with database languages such as SQL. Many techniques have been proposed to automatically generate SQL from natural language, but they suffer from two issues: (1) they still make many mistakes, particularly for complex queries, and (2) they do not provide a flexible way for non-expert users to validate and refine incorrect queries. To address these issues, we introduce a new interaction mechanism that allows users to directly edit a step-by-step explanation of a query to fix errors. Our experiments on multiple datasets, as well as a user study with 24 participants, demonstrate that our approach can achieve better performance than multiple SOTA approaches. Our code and datasets are available at https://github.com/magic-YuanTian/STEPS.\\n\\n# 1 Introduction\\nNatural language interfaces significantly lower the barrier to accessing databases and performing data analytics tasks for users who are not familiar with database query languages. Many approaches have been proposed for generating SQL queries from natural language (Popescu et al., 2004; Giordani and Moschitti, 2012; Rubin and Berant, 2021; Scholak et al., 2021; Zhao et al., 2022). Using recent large language models, systems have reached $86.6\\\\%$ execution accuracy (Gao et al., 2023) on the Spider benchmark (Yu et al., 2018).  \\n\\nHowever, the rate of improvement has slowed, with a gain of only $10\\\\%$ since mid-2021. 
This is partly due to the inherent ambiguity of natural language and the complex structure of SQL queries (e.g., nested or joined queries). Thus, it is challenging to generate a fully correct query in one step, especially for complex tasks (Yao et al., 2019).  \\n\\nFigure 1: Refining a SQL query by directly editing a step-by-step explanation.  \\n\\nThere has been growing interest in developing “human-in-the-loop” approaches that elicit user feedback to guide SQL generation. However, most approaches only support feedback in constrained forms, e.g., answering multiple-choice questions (MISP, PIIA, DialSQL; Yao et al., 2019; Li et al., 2020; Gur et al., 2018), changing SQL elements in a drop-down menu (DIY; Narechania et al., 2021), etc. Such constrained feedback is not sufficient to fix many complex errors in real-world SQL tasks. One exception is NL-EDIT (Elgohary et al., 2021), which allows users to provide feedback as new utterances. However, since the feedback is open-ended, interpreting it can be just as hard as processing the original request.  \\n\\nIn this paper, we seek to strike a balance between constrained feedback and open-ended feedback by proposing a new interaction mechanism: editable step-by-step explanations. Fig. 1 illustrates our idea. This mechanism consists of three core components: (a) a text-to-SQL model, (b) an explanation generation method, and (c) a SQL correction model. Our key insight is that using a step-by-step explanation as the basis to suggest fixes allows users to precisely specify where the error is and how to fix it via direct edits. This not only saves users’ time but also makes it easier for the model to locate the error and apply fixes.  \\n\\nBased on this idea, we implemented an interactive SQL generation and refinement system called STEPS. STEPS adopts a rule-based method to generate step-by-step explanations and uses a hybrid rule/neural method to convert a user-corrected explanation back to a SQL query.  
\\n\\nAn evaluation with a simulated user on Spider (Yu et al., 2018) shows that STEPS can achieve $97.9\\%$ exact set match accuracy, outperforming prior interactive text-to-SQL systems—MISP, DIY, and NL-EDIT—by $33.5\\%$, $33.2\\%$, and $31.3\\%$, respectively. We further evaluate STEPS on other datasets, including Spider-DK (Gan et al., 2021b), Spider-Syn (Gan et al., 2021a), and WikiSQL (Zhong et al., 2017). STEPS consistently achieves at least $96\\%$ exact set match accuracy and execution accuracy across all datasets.  \\n\\nFinally, we conducted a within-subjects user study with 24 real users. We found that within the same amount of time, STEPS helped users complete almost 2X and 4X more tasks correctly than DIY and MISP respectively, with significantly higher self-reported confidence and lower mental load.  \\n\\nThis work makes the following contributions: (1) we propose a new interaction mechanism for the text-to-SQL task; (2) we develop an interactive text-to-SQL system based on the new interaction mechanism and a new training method for SQL correction; (3) we conduct a comprehensive evaluation with both simulated and real users and demonstrate its effectiveness over state-of-the-art interactive systems. Our dataset and code are publicly available.',\n",
       "    'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}}),\n",
       " (0.5326549360928662,\n",
       "  {'id': 454845706688551984,\n",
       "   'distance': 0.613497793674469,\n",
       "   'entity': {'paper_id': '65406320939a5f40826491aa',\n",
       "    'paper_title': 'Evaluating Cross-Domain Text-to-SQL Models and Benchmarks',\n",
       "    'chunk_id': 0,\n",
       "    'chunk_text': '# Evaluating Cross-Domain Text-to-SQL Models and Benchmarks\\nMohammadreza Pourreza, University of Alberta\\n\\nDavood Rafiei, University of Alberta\\n\\n# Abstract\\nText-to-SQL benchmarks play a crucial role in evaluating the progress made in the field and the ranking of different models. However, accurately matching a model-generated SQL query to a reference SQL query in a benchmark fails for various reasons, such as underspecified natural language queries, inherent assumptions in both model-generated and reference queries, and the non-deterministic nature of SQL output under certain conditions. In this paper, we conduct an extensive study of several prominent cross-domain text-to-SQL benchmarks and reevaluate some of the top-performing models within these benchmarks, by both manually evaluating the SQL queries and rewriting them in equivalent expressions. Our evaluation reveals that attaining a perfect performance on these benchmarks is unfeasible due to the multiple interpretations that can be derived from the provided samples. Furthermore, we find that the true performance of the models is underestimated and their relative performance changes after a re-evaluation. Most notably, our evaluation reveals a surprising discovery: a recent GPT4-based model surpasses the gold standard reference queries in the Spider benchmark in our human evaluation. This finding highlights the importance of interpreting benchmark evaluations cautiously, while also acknowledging the critical role of additional independent evaluations in driving advancements in the field.\\n\\n# 1 Introduction\\nSignificant progress has been made in translating natural language text to SQL statements over the past few years. The execution accuracy on the hold-out test of Spider (Yu et al., 2018b) – a large-scale cross-domain text-to-SQL benchmark – has improved from 53.5 in May 2020 (Zhong et al., 2020b) to 85.3 in March 2023 (Pourreza and Rafiei, 2023). The exact set match accuracy, without considering database cell values, on the same benchmark and over the same period has improved from 65.6 (Wang et al., 2019) to 74.0 (Li et al., 2023a). Measuring such progress is hinged on reliable benchmarks and evaluation metrics.  \\n\\nTwo standard metrics for evaluating the performance in this domain have been exact set match accuracy and execution accuracy. 
The former measures if a model-generated SQL query lexically matches a reference SQL query, whereas the latter measures if a model-generated SQL query produces the same output as a reference query $(\\\\S\\\\,4)$.  \\n\\nFigure 1: An example question with two correct SQL queries, each corresponding to a different interpretation. There is an ambiguity in schema mapping, with two different database columns describing the name.\\n\\nConsider the example in Figure 1, which consists of a model-generated query (shown on the left) and a reference query (shown on the right). Both SQL queries return the id and name of makers that have more than 3 models. However, the model-generated query returns the column FullName, which gives the full name of a maker (e.g., “Ford Motor Company”), whereas the reference query given in the benchmark returns the column Maker, which gives the short common name of a maker (e.g., “Ford”). The model-generated query fails an exact set match since the column names in the select clause are different. The query outputs are also different and the model-generated query fails the execution accuracy as well. The natural language utterance is not specific about the type of name to be returned, and a human evaluator tags both queries correct.  \\n\\nAs the models improve, these types of failures make up most of the errors, and the performance metrics become less relevant, as shown in our evaluation. 
In particular, we re-evaluated all development set queries of Spider on which two top-performing models, one using a fine-tuned model ( Scholak et al. ,2021 ) and another using a large language model ( Pourreza and Rafiei ,2023 ), failed. We found out that $25\\\\%$ of the queries generated by one model and $87\\\\%$ of the queries generated by the other model were indeed correct but were wrongly evaluated by the benchmark. For the same set of queries, our re-evaluation of the ground truth queries found $33\\\\%$ of the SQL queries incorrect, which was more than the number of incorrect queries generated by one of the models. This evaluation places one of the models above the ground truth queries in this re-evaluation.  \\n\\nWe further re-evaluated two well-known benchmarks, Spider ( Yu et al. ,2018b ) and SpiderDK ( Gan et al. ,2021b ), and a newly released benchmark, BIRD ( Li et al. ,2023b ), and found similar problems in all three benchmarks that affect the evaluation. Our evaluation reveals that $18\\\\%$ of the queries in the train sets and $20\\\\%{-23\\\\%}$ of the queries in the dev sets of these benchmarks are subject to ties in the dataset and which one of the tied rows are returned. This means a model-generated query will be deemed incorrect if it does not return the same row, among tied rows, as the ground truth query. This can severely impact the evaluation, especially when there is a tight race among models. Considering these observations, it is crucial to emphasize the significance of additional independent evaluations when utilizing these benchmarks. To enhance the evaluation process further, a potential solution is to incorporate multiple SQL queries as the ground truth, each representing a different interpretation that may be valid.  \\n\\nOur objective in this paper is to provide a comprehensive evaluation of existing Text-to-SQL benchmarks, underscoring the inherent issues they possess. 
We refrain from introducing a new dataset due to several considerations. First, addressing the identified issues by updating these benchmarks requires considerable human effort. Additionally, benchmarks in the Text-to-SQL domain, like Spider and BIRD, have holdout test sets used for official leaderboards and comparisons of text-to-SQL methodologies. We only have access to the development and training sets of these benchmarks, which limits our capability to alter the test sets. As a result, making changes only to the development and training sets would not completely address the benchmark’s inherent problems, given that final performance is gauged using the problematic test sets.',\n",
       "    'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}}),\n",
       " (0.5098390411196269,\n",
       "  {'id': 454919307595371694,\n",
       "   'distance': 0.5985562801361084,\n",
       "   'entity': {'paper_id': '63608e5090e50fcafdee1152',\n",
       "    'paper_title': 'Diverse Parallel Data Synthesis for Cross-Database Adaptation of   Text-to-SQL Parsers',\n",
       "    'chunk_id': 2,\n",
       "    'chunk_text': '# 2.2 Translating text of related queries\\nOur next goal is to translate the retrieved $x_{r}$ from being a text for SQL $q_{r}$ to a text $\\hat{x}$ for SQL $q$, where $q\\approx q_{r}$ structurally. However, we do not have a readily labeled dataset to learn a model that translates $x_{r}$ to $\\hat{x}$ while being consistent with $q$. We therefore decompose this task into two steps: 1) A simpler task of masking schema-specific tokens in $x_{r}$ to get a template $x_{r}^{\\mathrm{masked}}$ and 2) A conditional text generation model that maps $(x_{r}^{\\mathrm{masked}},q)$ to the text $\\hat{x}$ consistent with $q$, by filling the masked positions in $x_{r}^{\\mathrm{masked}}$ as per $q$. We re-purpose $\\mathcal{D}_{\\mathrm{train}}$ to get indirect supervision for training the text generation model. We now present each step in detail.  \\n\\nSince the SQLs being compared come from different schemas, we modify the tree-edit-distance algorithm to ignore the schema names and the database values. The tree-edit-distance is further normalized by the size of the larger tree. We only consider the $\\{q_{r},x_{r}\\}$ pairs where the SQLs $\\{q_{r}\\}$ have a distance of less than 0.1 w.r.t. the SQL $q$. Within datasets like Spider that span hundreds of schemas, it is often possible to find several SQLs structurally similar to a given SQL $q$. For example, in Spider we found that $76\\%$ of the train SQLs contain at least three zero-distance (structurally identical) neighbours in other schemas. In Figure 2, we present more detailed statistics.  \\n\\nMasking the retrieved text: Converting the retrieved text queries $\\{x_{r}\\}$ to masked templates $\\{x_{r}^{\\mathrm{masked}}\\}$ is a critical component of REFILL’s pipeline since irrelevant tokens like references to schema elements of the original database can potentially misguide the text generation module. 
Our initial approach was to mask tokens based on a match of text tokens with schema names and manually refined schema-to-text linked annotations as in Lei et al. (2020). However, this approach failed to mask all schema-related terms since their occurrences in natural text often differed significantly from schema names in the database. Table A7 shows some anecdotes. Consequently, we designed a simple frequency-based method of masking that is significantly more effective for our goal of using the masked text to just guide the diversity. For each word that appears in the text queries of the train set, we count the number of distinct databases where that word gets mentioned at least once. For example, common words like {‘show’, ‘what’, ‘list’, ‘order’} get mentioned in more than $90\\%$ of the schemas, and domain-specific words like {‘countries’, ‘government’} occur only in text queries of a few schemas. We mask out all the words that appear in less than $50\\%$ of the schemas. The words to be masked are replaced by a special token MASK, and consecutive occurrences of MASK are collapsed into a single MASK token. Thus we obtain masked templates $\\{x_{r}^{\\mathrm{masked}}\\}$ retaining minimal information about their original schema.  \\n\\nEditing and Filling the masked text: Given a masked template $x_{r}^{\\mathrm{masked}}$, and an SQL query $q$, we wish to edit and fill the masked portions in $x_{r}^{\\mathrm{masked}}$ to make it consistent with the SQL $q$. We utilize a conditional text generation model $\\mathcal{B}$, instantiated as BART (Lewis et al., 2020), for this purpose. We first convert $q$ into a pseudo-English representation $q^{\\mathrm{Eng}}$, similar to Shu et al. (2021), to make it easier for $\\mathcal{B}$ to encode $q$. 
In addition, we wrap the table, column, or value tokens in $q^{\\mathrm{Eng}}$ with special tokens to provide explicit signals to the text generation model $\\mathcal{B}$ that such tokens are likely to appear in the output text $\\hat{x}$. Next, we concatenate the tokens in $x_{r}^{\\mathrm{masked}}$ and $q^{\\mathrm{Eng}}$ for jointly encoding them as an input to $\\mathcal{B}$. The output of $\\mathcal{B}$’s decoder is the text $\\hat{x}$, which is expected to be consistent with the SQL $q$.  \\n\\nSince we do not have direct supervision to fine-tune $\\mathcal{B}$ for this task, we present a method of re-purposing the SQL-Text pairs in $\\mathcal{D}_{\\mathrm{train}}$ for fine-tuning $\\mathcal{B}$. $\\mathcal{D}_{\\mathrm{train}}$ contains pairs $(q_{i},x_{i})$ from various schemas $s_{i}$. A Naïve way to train $\\mathcal{B}$ is to provide $[x_{i}^{\\mathrm{masked}}|q_{i}^{\\mathrm{Eng}}]$, the concatenation of $x_{i}^{\\mathrm{masked}}$ and $q_{i}^{\\mathrm{Eng}}$, as an input to the encoder and maximize the likelihood of $x_{i}$ in the decoder’s output. This way the decoder of $\\mathcal{B}$ learns to refill the masked tokens in $x_{i}^{\\mathrm{masked}}$ by attending to $q_{i}^{\\mathrm{Eng}}$ to recover $x_{i}$ in the output. While useful for learning to refill the masked positions, this Naïve method of training $\\mathcal{B}$ is mismatched from its use during inference in two ways: (i) For a given SQL $q$, REFILL might fail to retrieve a structurally similar neighbour of $q$ from $\\mathcal{D}_{\\mathrm{train}}$. 
In such cases, $\\mathcal{B}$ should be capable of falling back to pure SQL-to-Text generation to directly translate $q$ into $\\hat{x}$. (ii) During inference, $x_{r}^{\\mathrm{masked}}$ and $q$ come from different schemas. However, during Naïve training, the masked text $x_{i}^{\\mathrm{masked}}$ and the SQL $q_{i}$ are derived from the same example $(q_{i},x_{i})$. To address these two limitations, we train $\\mathcal{B}$ in a more Robust manner as follows: (a) For a random one-third of the train steps we train $\\mathcal{B}$ in the Naïve way, allowing $\\mathcal{B}$ to learn the filling of the masked tokens using $q_{i}^{\\mathrm{Eng}}$. (b) For another one-third, we pass only $q_{i}^{\\mathrm{Eng}}$ as an input and maximize the likelihood of $x_{i}$. This ensures that the model is capable of generating the text from $q_{i}^{\\mathrm{Eng}}$ alone, if the templates $x_{i}^{\\mathrm{masked}}$ are unavailable or noisy. (c) For the remaining one-third, we first retrieve an SQL-Text pair $(q_{j},x_{j})$ from a different schema such that the SQL $q_{j}$ is structurally similar to $q_{i}$ (§ 2.1), and the word edit distance between the masked templates $x_{i}^{\\mathrm{masked}}$ and $x_{j}^{\\mathrm{masked}}$ is also small. We can then replace $x_{i}^{\\mathrm{masked}}$ with $x_{j}^{\\mathrm{masked}}$ and encode $[x_{j}^{\\mathrm{masked}}|q_{i}^{\\mathrm{Eng}}]$ as an input to $\\mathcal{B}$ and maximize the likelihood of $x_{i}$ in the decoder’s output. This step makes the training more consistent with the inference, as $x_{j}^{\\mathrm{masked}}$ and $q_{i}^{\\mathrm{Eng}}$ now come from different schemas. In $\\S\\,5.4$, we justify training Robustly compared to Naïve training.',\n",
       "    'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2022_Empirical_Methods_in_Natural_Language_Processing_with_whole_text.db'}}),\n",
       " (0.50440799396981,\n",
       "  {'id': 454845631964659546,\n",
       "   'distance': 0.6036479473114014,\n",
       "   'entity': {'paper_id': '63a1751790e50fcafd1f48e7',\n",
       "    'paper_title': 'CiteBench: A Benchmark for Scientific Citation Text Generation',\n",
       "    'chunk_id': 1,\n",
       "    'chunk_text': '# 2 Related work\\n\\n# 2.1 Benchmarking\\nNLP benchmarks are unified dataset collections coupled with evaluation metrics and baselines that are used to systematically compare the performance of NLP systems for the targeted tasks in a standardized evaluation setup. Well-constructed benchmarks can boost progress in the corresponding research areas, such as SQuAD (Rajpurkar et al., 2016) for question answering, GLUE (Wang et al., 2018) for natural language understanding, KILT (Petroni et al., 2021) for knowledge-intensive tasks, GEM (Gehrmann et al., 2021, 2022) for general-purpose text generation, and DynaBench (Kiela et al., 2021) for dynamic benchmark data collection. CITEBENCH is the first benchmark for the citation text generation task.\\n\\n# 2.2 Text generation for scientific documents\\nScientific documents are characterized by academic vocabulary and writing style, wide use of nonlinguistic elements like formulae, tables and figures, as well as structural elements like abstracts and citation anchors. Recent years have seen a rise in natural language generation for scientific text, including text simplification (Luo et al., 2022), summarization (Qazvinian and Radev, 2008; Erera et al., 2019; Cachola et al., 2020), slides generation (Sun et al., 2021), table-to-text generation (Moosavi et al., 2021), and citation text generation (Li and Ouyang, 2022). Closely related to the task of citation text generation, Luu et al. (2021) study how scientific papers can relate to each other, and how these relations can be expressed in text. Related to our work, Mao et al. (2022) propose a benchmark for scientific extreme summarization. Compared to extreme summarization, which amounts to generating short context-independent summaries of individual manuscripts, citation text generation focuses on context-dependent descriptions that relate the cited papers to the citing paper. 
In line with the recent efforts that address the lack of systematic automated evaluation of natural language generation in general (Gehrmann et al., 2021), our paper contributes the first unified benchmark for citation text generation in the scientific domain.\\n\\n# 2.3 Citation text generation\\nThe task of citation text generation was introduced in Hoang and Kan (2010), who generate a summary of related work specific to the citing paper. Since then, several task definitions and setups have been proposed (Table 1). Lu et al. (2020) cast the task as generating a multi-paragraph related work section given the abstracts of the citing paper and of the cited papers. AbuRa’ed et al. (2020) use the cited paper’s title and abstract to generate a citation sentence. Xing et al. (2020) use the abstract of the cited paper and include context before and after the citation sentence as the input, and produce the citation sentence as the output. A recent work by Chen et al. (2021) uses multiple cited abstracts as input to generate a related work paragraph. The great variability of the task definitions and setups in citation text generation prevents the study of citation text generation methods across datasets and evaluation setups. Unlike prior work that explores varying task settings, CITEBENCH brings the diverging task definitions and datasets together in a unified setup. This allows us to compare citation text generation models across different datasets in a standardized manner using an extensive set of quantitative metrics, as well as novel automated qualitative metrics.  
\\n\\n<html><body><table><tr><td rowspan=\"2\">Dataset</td><td colspan=\"3\">Input</td><td rowspan=\"2\">Output Citation text (T)</td><td rowspan=\"2\">Datasources</td></tr><tr><td>Cited document (D*) SingleAbs MultiAbs</td><td>Abs</td><td>Citing context (C\") Text</td></tr><tr><td>ABURAED</td><td></td><td>Title √</td><td></td><td>Sent Para</td><td>Multiple</td></tr><tr><td>CHEN</td><td>√</td><td></td><td></td><td></td><td>S2ORCandDelve</td></tr><tr><td>LU</td><td></td><td></td><td></td><td></td><td>arXiv.org and MAG</td></tr><tr><td>XING</td><td></td><td></td><td></td><td></td><td>AAN</td></tr></table></body></html>  \\n\\nTable 1: Overview of datasets in CITEBENCH. Single Abs $=$ Single abstract, i.e., one cited document per instance. Multi Abs $=$ Multiple abstracts, i.e., multiple cited documents per instance. Abs $=$ Abstract, i.e., a dataset contains the abstract of the citing paper. Text $=$ a dataset contains additional context from the citing paper. Sent $=$ generation target is a single sentence. Para $=$ generation target is a paragraph.   \\nTable 2: Datasets statistics. The validation set for XING has been created by us via randomly sampling $10\\\\%$ of the original training data. Across datasets, very few inputs contain more than 4,096 tokens, and few outputs are longer than 1,024 tokens. We exploit this property to speed up the evaluation in Section 3.3.  
\\n\\n\\n<html><body><table><tr><td>Dataset</td><td>#Train</td><td>#Validation</td><td>#Test</td><td>Inputs>4,096tok.</td><td>Outputs>1,024tok.</td></tr><tr><td>ABURAED</td><td>15,000</td><td>1,384</td><td>219</td><td>0%</td><td>0%</td></tr><tr><td>LU</td><td>30,369</td><td>5,066</td><td>5,093</td><td><0.001%</td><td>0%</td></tr><tr><td>XING</td><td>77,086</td><td>8,566</td><td>400</td><td><0.001%</td><td><0.001%</td></tr><tr><td>CHEN -Delve</td><td>72,927</td><td>3,000</td><td>3,000</td><td><0.001%</td><td>0.004%</td></tr><tr><td>-S2ORC</td><td>126,655</td><td>5,000</td><td>5,000</td><td>0.017%</td><td><0.001%</td></tr><tr><td>Total</td><td>322,037</td><td>23,016</td><td>13,712</td><td>0.007%</td><td><0.001%</td></tr></table></body></html>\\n\\n# 3 Benchmark\\n\\n# 3.1 Task definition and datasets\\nWe formalize the task of citation text generation as follows: Given a set of $n$ (cited) target documents $\\\\{D_{1}^{t}...D_{n}^{t}\\\\}$, a (citing) source document $D^{s}$, and a set of $m$ citing document contexts $\\\\{C_{1}^{s}...C_{m}^{s}\\\\}\\\\in D^{s}$, generate a citation text $T^{\\\\prime}$ that is as close as possible to the original citation text $T\\\\in D^{s}$. This general definition allows wide variation in how the task is implemented. The cited document $D_{i}^{t}$ can be represented by the abstract $a^{t_{i}}$, the concatenation of the title and the abstract, or even the full text of the paper. The context set $C^{s}$ covers sentences before and after the citation text in $D^{s}$, as well as the abstract $a^{s}\\in D^{s}$.  \\n\\nSuch a general, open definition allows us to accommodate diverse approaches to the task within one framework (Table 1). To populate the benchmark, we select four datasets, focusing on the task design and domain variety: ABURAED (AbuRa’ed et al., 2020), CHEN (Chen et al., 2021), LU (Lu et al., 2020), and XING (Xing et al., 2020). 
Dataset transformation details are provided in Appendix A.1. Table 2 shows the quantitative statistics, and Figure 2 provides data examples from each dataset. The CHEN dataset has two subsets – CHEN Delve and CHEN S2ORC – based on the data source; we use CHEN to denote the union of the two subsets. The datasets are distributed under varying licenses; we have obtained explicit permissions from the authors to use the data for research purposes in cases when licensing was underspecified (see Ethics statement).  \\n\\nWe note that the datasets included in the benchmark are not only structurally diverse, but also cover a wide range of domains, from medicine to computer science. In particular, ABURAED and XING exemplify citation text generation in the computational linguistics domain, CHEN Delve covers the computer science domain; LU and CHEN S2ORC span a wide range of domains represented on arxiv.org and in the S2ORC corpus, respectively, including biology, medicine and physics.',\n",
       "    'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}}),\n",
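The retrieved task definition above (cited target documents, citing-paper contexts, gold citation text $T$) maps naturally onto a small data structure. The sketch below is illustrative only — the field names and the `[SEP]`-joined flattening are assumptions, not the CITEBENCH implementation:

```python
from dataclasses import dataclass


@dataclass
class CitationGenInstance:
    """One citation-text-generation example: cited target documents,
    citing-paper contexts, and the gold citation text T."""
    target_abstracts: list      # abstracts of cited documents D^t_1..D^t_n
    citing_contexts: list       # contexts C^s_1..C^s_m from the citing paper D^s
    source_abstract: str = ""   # abstract a^s of the citing paper, if used
    citation_text: str = ""     # gold citation text T


def to_model_input(inst: CitationGenInstance, sep: str = " [SEP] ") -> str:
    """Flatten one instance into a single seq2seq input string."""
    parts = list(inst.citing_contexts) + list(inst.target_abstracts)
    if inst.source_abstract:
        parts = [inst.source_abstract] + parts
    return sep.join(parts)
```

A dataset with multiple cited documents per instance (the "Multi Abs" setting) simply carries several entries in `target_abstracts`; the single-abstract setting is the one-element case.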
       " (0.4912802086181397,\n",
       "  {'id': 454845641449030464,\n",
       "   'distance': 0.6193163990974426,\n",
       "   'entity': {'paper_id': '6461b9c9d68f896efad43133',\n",
       "    'paper_title': 'Interactive Text-to-SQL Generation Via Editable Step-by-Step Explanations',\n",
       "    'chunk_id': 6,\n",
"    'chunk_text': '# 7 Discussion\\nBoth the quantitative experiments and the user study demonstrate STEPS can significantly improve the accuracy of SQL generation. This is largely attributed to the interaction design, which allows users to precisely pinpoint which part of the SQL is wrong and only regenerates the incorrect clauses rather than the entire SQL query. In contrast, existing approaches do not support expressive ease or error isolation. Users either cannot regenerate new content (e.g., DIY), or can only regenerate the entire query rather than just the erroneous part (e.g., MISP). Ning et al. (2023) showed that this lack of error isolation often introduces new errors, which frustrates users and makes errors hard to fix.  \\n\\nError Analysis. While simple errors are prevalent in SQL generation, our ablation study (Table 4) shows that only fixing simple errors is insufficient, which motivates the design of our hybrid method. Our hybrid method can handle a broad range of errors because users can flexibly correct entities or clauses in a query. This ability helps reduce the difficulty of tasks by dividing complex errors into simpler ones, allowing users to solve them separately.  \\n\\nIn our automated user simulation, STEPS failed in a few cases when the text-to-clause model predicted the wrong clause type. For example, the paraphrased ground truth explanation of one step was: “Ensure that all categories where the total cost of therapy exceeds 1000 are included.” The text-to-clause model predicted a WHERE clause instead of a HAVING clause.  \\n\\nIn the user study, one common challenge arose when multiple tables in the database had the same column name. If users did not look carefully at the database schema, they may not have explicitly indicated the table to be used. That creates an ambiguity for the model.  \\n\\nOther Datasets and Domains. 
Our system should work for any SQL dataset, as our approach is domain-agnostic and covers general SQL structures. For other forms of code, such as WebAPI (Su et al., 2017) and SPARQL (Ngonga Ngomo et al., 2013; Mo et al., 2022), the general idea is applicable, but new models would be needed for (a) code generation, (b) explanation generation, and (c) code correction.\\n\\n# 8 Conclusion\\nThis work presents STEPS, a new interactive approach for text-to-SQL generation. STEPS decomposes a text-to-SQL task into smaller text-to-clause tasks and enables users to validate and refine a generated query via editable explanations. Experiments on four benchmarks and a user study show STEPS can significantly boost the accuracy of end-to-end models by incorporating user feedback. STEPS significantly outperforms three state-of-the-art approaches for interactive SQL generation across all metrics considered.\\n\\n# 9 Limitations\\nOur automated user simulation is an optimistic experiment that does not account for user errors, such as not being able to identify mistakes in the explanation. The simulation was designed to test a scenario in which a user can perfectly identify which step of the explanation is wrong and accurately describe a corrected version in natural language. Creating such a perfect user required the use of the ground truth, both for the identification step and to generate the natural language correction. This simulation is not representative of real-world use. That limitation was the motivation for our study with real users, in which we had actual people use different tools without information about correct answers. As shown in Table 5, the accuracy of the user study is lower than the simulation, but STEPS is still very effective and outperforms other tools. We choose to include the simulation study because it shows the potential for STEPS to make corrections if there is no human error.  
\\n\\nIn this paper, we only evaluate STEPS on single-turn SQL generation. In future work, our approach can be extended to multi-turn SQL generation by incorporating contextual information when editing the natural language explanation.  \\n\\nWhile our approach is designed to be general for SQL generation and potentially other code generation tasks, the current version only supports SQL keywords that appear in the Spider dataset. Like other text-to-SQL datasets, Spider only covers query operations (e.g., SELECT) and does not cover update operations (e.g., INSERT) for evaluation convenience. But it would be straightforward to cover unsupported operations by adding new translation rules.  \\n\\nprocedure, potential risks, data usage, and confidentiality. We obtained consent from each user before proceeding with the study. All collected data were anonymized and de-identified to protect the privacy of users.\\n\\n# Acknowledgments\\nThis material is based in part on work supported by an Amazon Research Award, the Australian Research Council through a Discovery Early Career Researcher Award and by the Defense Advanced Research Projects Agency (grant #HR00112290056).',\n",
       "    'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}}),\n",
       " (0.48633673475183836,\n",
       "  {'id': 454846633586283990,\n",
       "   'distance': 0.6550747752189636,\n",
       "   'entity': {'paper_id': '64a29654d68f896efa29af31',\n",
       "    'paper_title': 'Constraint Reasoning Embedded Structured Prediction.',\n",
       "    'chunk_id': 10,\n",
"    'chunk_text': '# 5.3 Text2SQL Generation\\nTask Definition. Formatted data such as travel records and stock market transactions are stored in relational databases. Currently, accessing the database requires a data scientist who masters the SQL query language. Our task is to automatically synthesize SQL queries from natural language sentences using machine learning. Compared with the data expert approach, SQL query generation requires deeper reasoning across the structure of the database, the semantics of the structured query language, and the understanding of natural language. As shown in Figure 11, the input of the text2SQL generation is a sentence that describes the query in natural language and the table headers in the relational database. The output is a SQL query with the following structure:  \\n\\nSELECT agg-op sel-col WHERE (cond-col cond-op cond-val) AND ...  \\n\\nHere, SELECT and WHERE are keywords in the SQL language. What we need to predict are: (1) the aggregation operator agg-op, which chooses among the set {empty, COUNT, MIN, MAX, SUM, AVG}; (2) the column name in selection sel-col and (3) the column name in condition cond-col, both of which are chosen from the table headers; (4) the conditional operator cond-op, which is in $\\\\{=,<,>\\\\}$; (5) the conditional value cond-val, which is assumed to be a sub-sequence of the given query. Here, one bracket pair () represents one conditional statement. The SQL query may have multiple conditions, which are denoted above by “ ... ”. Figure 11 displays this SQL query:\\n\\n# SELECT COUNT \"School\" WHERE \"No.\" = \"3\"\\nHere agg-op is COUNT; sel-col is “school”, which is a column name from the table headers. One cond-col is “No.”, which also comes from the table headers. The cond-op is “=”. The cond-val is “3”, which we assume is from the input query. This example has one condition but multiple conditions are allowed.  \\n\\nDefinition of Constraints. 
Existing generative neural models for this task are not guaranteed to generate a query that follows the grammar of a SQL query. To avoid grammar violations, we compile a set of common SQL grammars as constraints into the Core-Sp module. The Core-Sp module will ensure that all the generated SQL queries follow the grammatical constraints. Our constraints are defined on the operators, namely the conditional operator cond-op and the aggregation operator agg-op. The domains of these operators are dependent upon the data types of the entities (namely, cond-col and sel-col) they operate on. Consider the previous example. The agg-op can only take values in $\\\\{\\\\mathrm{empty,~COUNT}\\\\}$, because the sel-col is “school”, which is of the string type. More precisely, let $s$ be a column header (the value of sel-col or cond-col). We define $F_{a}(s)$ as the set of aggregation operators agg-op that can be associated with $s$, and $F_{c}(s)$ as the set of condition operators cond-op that can be associated with $s$.  \\n\\nInput Table:   \\n\\n\\n<html><body><table><tr><td></td><td>Player</td><td>No.</td><td>Position</td><td>School</td></tr><tr><td>0</td><td>Antonio</td><td>21</td><td>Guard-Forward</td><td>Duke</td></tr><tr><td>1</td><td>Voshon</td><td>2</td><td>Guard</td><td>Minnesota</td></tr><tr><td>2</td><td>Marin</td><td>3</td><td>Guard-Forward</td><td>Butler CC</td></tr></table></body></html>\\n\\n# Input Query:\\nHow many schools did player number 3 play at?\\n\\n# Output SQL Query:\\nSELECT COUNT \"School\" WHERE \"No.\" = \"3\"\\n\\nFigure 11: An example for the Text2SQL generation task. The input is the text query “How many schools did player number 3 play at?” and the table header “ Player, No., Position, School ” from the relational database. The output should be the SQL query: SELECT COUNT \"School\" WHERE \"No.\" = \"3\". 
That is:  \\n\\n$$\\n\\\\begin{array}{r l}&{F_{a}(s)=\\\\left\\\\{\\\\begin{array}{l l}{\\\\{\\\\mathrm{empty,~COUNT,~MIN,~MAX,~SUM,~AVG}\\\\}}&{\\\\mathrm{if~}s\\\\mathrm{~is~of~numeric~type}}\\\\\\\\ {\\\\{\\\\mathrm{empty,~COUNT}\\\\}}&{\\\\mathrm{if~}s\\\\mathrm{~is~of~string~type}}\\\\end{array}\\\\right.}\\\\\\\\ &{F_{c}(s)=\\\\left\\\\{\\\\begin{array}{l l}{\\\\{=,~>,~<\\\\}}&{\\\\mathrm{if~}s\\\\mathrm{~is~of~numeric~type}}\\\\\\\\ {\\\\{=\\\\}}&{\\\\mathrm{if~}s\\\\mathrm{~is~of~string~type}}\\\\end{array}\\\\right.}\\\\end{array}\\n$$  \\n\\nWe also introduce datatype constraints, which are defined as:  \\n\\n$$\\n\\\\begin{array}{r l}&{\\\\mathtt{sel{-}col}=s\\\\Rightarrow\\\\mathtt{agg{-}op}\\\\in F_{a}(s),}\\\\\\\\ &{\\\\mathtt{cond{-}col}=s\\\\Rightarrow\\\\mathtt{cond{-}op}\\\\in F_{c}(s).}\\\\end{array}\\n$$  \\n\\nModel Structure. We embed the Core-Sp module into SQLova (Hwang et al., 2019), the state-of-the-art neural network for text2SQL generation. SQLova has a sequence-to-sequence architecture. It first encodes a natural language sentence and the table headers into a high-dimensional vector. Then the decoder of SQLova decodes the hidden representation into the predictions of various entities in the SQL query. SQLova first determines the number of conditions in the SQL query and then fills in the (cond-col, cond-op, cond-val) for each condition. The operators agg-op, cond-op are predicted as a classification task from a fixed set of operators. Column names cond-col, sel-col are predicted from the set of table headers in the relational database. The cond-val is predicted by a pointer neural network which points at a span of the input natural language sentence. The selected span of the query is used as the cond-val (Dong and Lapata, 2018).  \\n\\nMDD Construction. 
The associated MDD that encodes the constraints for text2SQL generation is similar to the MDD for if-then program synthesis. The MDD is split into layers and every two layers form a group. One two-layer group is used to enforce constraints on an operator-column name pair. The operator-column name pair can be agg-op and sel-col, or can be cond-op and cond-col. Note that there can be only one group of agg-op and sel-col and more than one group of cond-op and cond-col. In the first layer of the group, the column name is determined. In the second layer, the invalid operators are ruled out based on the type of the column name selected in the first layer. The two-layer group is copied several times because the SQL query can contain multiple conditions.',\n",
       "    'original_filename': 'Journal_Paper_Meta_Data_Journal_of_Machine_Learning_Research_with_whole_text.db'}}),\n",
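The datatype constraints in the chunk above (legal `agg-op`/`cond-op` values depend on the column's type) can be checked directly. This is a minimal illustrative sketch of that rule, not the Core-Sp/MDD implementation; the query dict shape and type labels are assumptions:

```python
# Operator domains by column type, mirroring F_a(s) and F_c(s) from the text.
NUMERIC_AGG = {"", "COUNT", "MIN", "MAX", "SUM", "AVG"}  # "" stands for the empty agg-op
STRING_AGG = {"", "COUNT"}
NUMERIC_COND = {"=", "<", ">"}
STRING_COND = {"="}


def allowed_agg_ops(col_type: str) -> set:
    """F_a(s): aggregation operators valid for a column of the given type."""
    return NUMERIC_AGG if col_type == "numeric" else STRING_AGG


def allowed_cond_ops(col_type: str) -> set:
    """F_c(s): condition operators valid for a column of the given type."""
    return NUMERIC_COND if col_type == "numeric" else STRING_COND


def query_satisfies_constraints(query: dict, col_types: dict) -> bool:
    """Check (sel-col, agg-op) and every (cond-col, cond-op) pair
    against the type-dependent operator domains."""
    if query["agg_op"] not in allowed_agg_ops(col_types[query["sel_col"]]):
        return False
    return all(
        op in allowed_cond_ops(col_types[col])
        for col, op, _ in query["conditions"]
    )
```

On the paper's running example, `SELECT COUNT "School" WHERE "No." = "3"` passes (COUNT is legal on a string column, `=` on a numeric one), while `SUM` over the string column "School" is rejected.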
       " (0.48158976750916177,\n",
       "  {'id': 454919258070598578,\n",
       "   'distance': 0.6086536049842834,\n",
       "   'entity': {'paper_id': '628ef0495aee126c0f82db30',\n",
       "    'paper_title': 'PLOG: Table-to-Logic Pretraining for Logical Table-to-Text Generation.',\n",
       "    'chunk_id': 1,\n",
"    'chunk_text': '# 1 Introduction\\nTable-to-text generation is a sub-task of data-to-text generation, aiming to generate natural language descriptions from structured tables. There are two steps to perform table-to-text generation: content planning (to select table contents and determine the plan to describe them) and surface realization (to realize the plan into fluent natural language). Traditional table-to-text systems take a pipeline manner, to complete the two procedures with separate modules (Kukich, 1983; McKeown, 1985). Recent works have shown the advantage of using a neural encoder-decoder model to directly generate sentences from the tables, which shows the strong capability to produce fluent and natural generations (Wiseman et al., 2017; Nie et al., 2018; Puduppully et al., 2019b). Researchers have also attempted to finetune pretrained language models such as BART (Lewis et al., 2020) and T5 (Raffel et al., 2020) on downstream table-to-text tasks and achieve remarkable success on a broad range of benchmarks (Xie et al., 2022; Kale and Rastogi, 2020).  \\n\\nPrevious studies have mainly focused on surface-level generation, i.e. generating plain restatements of table records with little logical inference (Wiseman et al., 2017; Liu et al., 2018; Puduppully et al., 2019a,b). Recently, logical table-to-text generation (Chen et al., 2020a), i.e., generating textual descriptions that require logical reasoning over table records, has attracted increasing attention. Logical table-to-text generation poses a new challenge on content planning, requiring models to perform logical inference to derive facts from surface-level table records. End-to-end neural models often suffer from low logical fidelity on this task, i.e. the generated sentences are not logically entailed by the tables even showing reasonable fluency (Chen et al., 2020a, 2021). There are two reasons for the low fidelity. 
(1) Directly learning logical inference knowledge from table-text pairs is too difficult for neural models because of the ambiguity and diversity of natural language references. (2) The amount of such paired data is limited because of the labor-intensive annotation work, which further limits the performance of neural models.  \\n\\nTo achieve high fidelity of logical-level generation, Chen et al. (2020b) have attempted to annotate logical forms to guide the text generation and proposed a LOGIC2TEXT dataset. With logical forms as mediators conveying accurate logical-level facts, models can just focus on surface realization from associated logical forms and achieve high fidelity. However, annotating pairs of logical forms requires intensive human efforts. Moreover, generating from a self-contained logical form is actually a different task from table-to-text generation. Prior studies on this dataset (Liu et al., 2021a; Shu et al., 2021; Xie et al., 2022) mostly focus on converting the logical forms into texts.  \\n\\nInspired by this, we propose a Pre-trained LOgical Form Generator (PLOG) to achieve faithful logical table-to-text generation. Specifically, PLOG is first pre-trained on a large-scale synthetic corpus of table-to-logical-form generation (table-to-logic) to learn how to generate accurate logical forms from tables, then fine-tuned on downstream table-to-text tasks to transfer the logical inference knowledge learned from pre-training to text generation. 
Our insights are threefold: (i) unlike natural language sentences, logical forms are formally defined with unambiguous semantics, hence it is much easier for models to acquire the logical inference knowledge via learning from logical form generation; (ii) via pre-training on large amounts of logical form generation data, the model can better understand the table and organize the logical-level content planning, leading to faithful table-to-text generation; (iii) it is viable to collect large-scale logical form corpora via rule-based search over tables without the efforts of human annotators. In this framework, logical forms can be regarded as an intermediate meaning representation to bridge the gap between logical planning and surface realization, while we do not need explicit logical forms when performing the downstream task.  \\n\\nTo achieve smooth knowledge transfer, we formulate the pre-training task in the same sequence-to-sequence generation way as the downstream table-to-text task. We adopt a strong pre-trained language model T5 as the backbone model. To evaluate our method, we consider two typical scenarios of current table-to-text generation tasks: uncontrolled and controlled generation. For uncontrolled generation, we adopt the LOGICNLG task which requires generating logical descriptions only based on the table, and imposes an additional challenge on content selection. Inspired by ToTTo (Parikh et al., 2020) and HiTab (Cheng et al., 2021), we further consider controlled generation, a recently popular task formulation. In this task setting, additional control features such as highlighted cells in the tables are explicitly specified to guide the topic of generation and narrow down the scope of content selection. Because most examples of ToTTo and HiTab do not involve logical reasoning, we reformulate the LOGIC2TEXT dataset into a new CONTrolled LOGical Natural Language Generation (CONTLOG) task for our evaluation. 
We detect highlighted cells via execution-based search with their annotated logical forms. For each dataset, we collect large amounts of logical forms from the tables in the training data via an execution-based search, where their validity can be fully guaranteed. To evaluate the fidelity of generated texts, we mainly adopt two state-of-the-art Table Fact Verification (Chen et al., 2019) models, TAPEX (Liu et al., 2021b) and TAPAS (Eisenschlos et al., 2020), to evaluate whether the texts are entailed by the input tables. Experimental results on both LOGICNLG and CONTLOG demonstrate that PLOG outperforms the baseline T5 by a large margin in terms of logical fidelity. In particular, PLOG improves the fidelity accuracy (evaluated by TAPEX) by $9.3\\\\%$ and $9.2\\\\%$ on LOGICNLG and CONTLOG, respectively. Human evaluation and case studies further demonstrate the effectiveness of our pretraining framework. In addition, the results of table-to-logic pretraining demonstrate that the pretraining task indeed contributes to more accurate logical inference. We will make our code publicly available at https://github.com/Aolius/logic-pretraining.',\n",
       "    'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2022_Empirical_Methods_in_Natural_Language_Processing_with_whole_text.db'}}),\n",
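The chunk above describes collecting logical forms via execution-based search over tables, where validity is guaranteed because each candidate is executed against the table itself. A toy version of that idea is sketched below; the fact format and the restriction to max/min/count facts are assumptions for illustration, not PLOG's actual corpus-construction procedure:

```python
def search_aggregation_facts(table: dict) -> list:
    """Enumerate simple logical facts (max/min/count) per column.
    Each fact is 'executed' on the table as it is built, so every
    emitted fact is valid by construction."""
    facts = []
    for col, values in table.items():
        # Numeric columns additionally admit max/min facts.
        if all(isinstance(v, (int, float)) for v in values):
            facts.append(f"max {{ all_rows ; {col} }} = {max(values)}")
            facts.append(f"min {{ all_rows ; {col} }} = {min(values)}")
        facts.append(f"count {{ all_rows ; {col} }} = {len(values)}")
    return facts
```

Running this over many tables yields a synthetic table-to-logic corpus with no human annotation, which is the property the paper exploits for pre-training.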
       " (0.4772100599438958,\n",
       "  {'id': 454845489174282794,\n",
       "   'distance': 0.632977306842804,\n",
       "   'entity': {'paper_id': '644744fb71ac66d2cbf9b886',\n",
       "    'paper_title': 'A Lightweight Constrained Generation Alternative for Query-focused Summarization',\n",
       "    'chunk_id': 1,\n",
"    'chunk_text': '# 2 Related Work\\nQuery-focused Summarization: To generate a query-focused summary, several studies used an additional query-attention mechanism. QR-BERTSUM-TL [13] incorporates query relevance scores into a pre-trained summarization model. Su et al. [29] propose merging the representation of an answer span predicted by a separate QA model into the Seq2Seq model’s training and inference process to enforce the summary’s coherence w.r.t. the query. QSG Transformer [23] suggests using a separate graph neural network model to learn per-token representations and fuse them into the Seq2Seq model to effectively generate a QFS. These mechanisms can be viewed as enforcing soft semantic constraints during the generation process, and require additional modules and parameters to function effectively. We opt for a different approach, i.e. explicitly enforcing lexical constraints during the generation process, without the additional machinery that is necessary to handle the soft semantic constraints.  \\n\\nConstrained Generation (or Conditional Generation) is a family of natural language generation (NLG) methods that aim to generate natural language including/excluding a set of specific words, i.e. lexical constraints. The standard NLG recipe leverages pre-trained large language models (LLMs) finetuned on specific datasets [7]. However, as pointed out by Lu et al. [18], such models finetuned only in an end-to-end manner do not learn to follow the underlying constraints reliably even when supervised with large amounts of training examples. Therefore, a line of work [1, 10, 17, 18] in constrained generation proposes to explicitly modify the likelihood of next word prediction in the generation stage, such that the predefined lexical constraints can be better satisfied.',\n",
       "    'original_filename': 'Conf_Paper_Meta_Data_SIGIR2023_with_whole_text.db'}}),\n",
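The constrained-generation line of work described above modifies next-word likelihoods at decoding time so that lexical constraints get satisfied. The sketch below shows the bare mechanism over a toy vocabulary; the additive boost scheme and all names are illustrative assumptions, not any cited paper's method:

```python
def boost_constraint_logits(logits: dict, constraints: set,
                            satisfied: set, boost: float = 2.0) -> dict:
    """Additively up-weight the logits of constraint words that have
    not yet been generated, leaving all other tokens untouched."""
    return {
        tok: logit + (boost if tok in constraints and tok not in satisfied else 0.0)
        for tok, logit in logits.items()
    }


def greedy_pick(logits: dict) -> str:
    """Pick the highest-scoring token (greedy decoding step)."""
    return max(logits, key=logits.get)
```

A single decoding step illustrates the effect: without the boost the model prefers its own top token; with the boost an unsatisfied constraint word wins, and once it is added to `satisfied` the distribution reverts to normal.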
       " (0.4707521763287927,\n",
       "  {'id': 454845641378251576,\n",
       "   'distance': 0.6278348565101624,\n",
       "   'entity': {'paper_id': '6461b9c9d68f896efad43133',\n",
       "    'paper_title': 'Interactive Text-to-SQL Generation Via Editable Step-by-Step Explanations',\n",
       "    'chunk_id': 2,\n",
"    'chunk_text': '# 3 Approach\\nFig. 2 provides an overview of STEPS. Given a natural language (NL) question, STEPS invokes a text-to-SQL model to generate an initial SQL query. Then, it decomposes the generated SQL query into individual query clauses and re-orders them based on their execution order. Each clause is then translated into an NL description of the underlying data operation, which is then used to form a step-by-step explanation. By reading the NL explanation along with the query result, users can easily understand the behavior of the generated query and locate any errors, even if they are unfamiliar with SQL.  \\n\\nIf one step is incorrect, users can directly edit its explanation to specify the correct behavior. STEPS will then regenerate the clause based on the user-corrected explanation and update the SQL query, rather than regenerate the entire query from scratch. If multiple steps are incorrect, the user can add, remove, and modify all steps as needed.\\n\\n# 3.1 Rule-based SQL Explanation\\nTo generate explanations for arbitrarily complex SQL queries (e.g., a query with nested subqueries), we design a rule-based method to first decompose a query into individual clauses. Specifically, STEPS first parses a SQL query into its Abstract Syntax Tree (AST) based on the SQL grammar in Table 6. Then, it traverses the AST to identify the subtree of each clause while preserving their hierarchical relations.  \\n\\nGiven the subtree of a clause, STEPS performs an in-order traversal and translates each leaf node (i.e., a terminal token in the grammar) to the corresponding NL description based on a set of translation rules (see Table 7 in the appendices). For example, SELECT is translated to “Return”, and ORDER BY is translated to “Sort the records based on.” STEPS concatenates these descriptions to form a complete sentence as the explanation of the clause.  
\\n\\nFigure 3: An example of the explanation generation process  \\n\\nSince SQL engines follow a specific order to execute individual clauses in a query, STEPS further reorders the clause explanations to reflect their execution order. We believe this is a more faithful representation of the query behavior and thus can help users better understand the underlying data operations, compared with rendering them based on the syntactic order of clauses. Fig. 3 shows an example translation.\\n\\n# 3.2 Text-to-Clause Generation\\nUsers make edits to the explanation produced by our system to make it consistent with their goal. Given these edits, STEPS uses a hybrid method to generate the corresponding SQL clause. For simple edits, such as replacing a column name, STEPS directly edits the original clause to fix the error using three direct transformation rules $(\\\\S\\\\ 3.2.1)$. For more complex edits, STEPS uses a neural text-to-clause model to generate the clause based on the user-corrected explanation $(\\\\S\\\\ 3.2.2)$.  \\n\\nThe hybrid method is inspired by the findings from our recent study (Ning et al., 2023). Specifically, a large portion of SQL generation errors are simple errors (e.g., incorrect column names and operators), which can be fixed with small edits. After SQL decomposition by our approach, many larger errors are further decomposed into a set of simpler errors, contained within separate clauses. Thus, it is not necessary to regenerate the entire clause to fix such errors. Furthermore, compared to using a large model, direct transformation is more computationally efficient. 
Our experiment shows that direct transformation is 22K times faster than the text-to-clause model (Table 4).\\n\\n# Algorithm 1: Direct transformation\\nInput: The original explanation $e_{o}$ ; the new edited explanation $e_{n}$ ; the original SQL clause $s$   \\nOutput: the updated SQL clause   \\n1 $C_{o}\\\\gets\\\\mathrm{CHUNK}(e_{o})$   \\n2 $C_{n}\\\\gets\\\\mathrm{CHUNK}(e_{n})$   \\n3 foreach $(c_{o},c_{n})$ in $\\\\mathrm{ALIGN}(C_{o},C_{n})$ do   \\n4 // Replace   \\n5 if BOTHCOLUMN $(c_{o},c_{n})$ or   \\n6 BOTHTABLE $(c_{o},c_{n})$ or   \\n7 BOTHLITERAL $(c_{o},c_{n})$ then   \\n8 $s\\\\gets s.\\\\mathrm{REPLACE}(c_{o},c_{n})$   \\n9 // Add   \\n10 else if $c_{o}$ is $\\\\emptyset$ and ISCOLUMN $(c_{n})$ then   \\n11 if $s$ .STARTWITH(\"Select\") then   \\n12 $s\\\\gets s.\\\\mathrm{APPEND}(c_{n})$   \\n13 // Remove   \\n14 else if $c_{n}$ is $\\\\emptyset$ and ISCOLUMN $(c_{o})$ then   \\n15 $s\\\\gets s.\\\\mathrm{REMOVE}(c_{o})$   \\n16 end   \\n17 return $s$\\n\\n# 3.2.1 Direct Transformation\\nWe define three types of atomic edits that can be directly converted into SQL edits by STEPS: (1) replacing a column name, a table name, or a literal value (i.e., string, number), (2) adding a new column name in the explanation of a SELECT clause, and (3) removing a column name.  \\n\\nAlgorithm 1 describes our direct transformation algorithm. After chunking the text (Lines 1-2), STEPS aligns and compares the chunks in the original explanation with those in the user-corrected explanation, using the Needleman and Wunsch (1970) algorithm (Line 3). This allows STEPS to detect any replacements (Line 4), additions (Line 9), or removals (Line 13) of database entities in the explanation. Based on this information, STEPS automatically edits the corresponding SQL clause without calling a neural model (Lines 8, 12, 15). More details of this algorithm can be found in Appendix E.',\n",
       "    'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}}),\n",
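The rule-based explanation step in the chunk above maps terminal SQL tokens to NL phrases and concatenates them into a sentence. A toy version of that translation table is sketched below; only the SELECT and ORDER BY rules come from the text, the others are assumed fillers, and this is not STEPS's actual Table 7 rule set:

```python
# Toy translation rules: SQL terminal tokens -> NL phrases.
# SELECT and ORDER BY mappings are quoted from the paper; the rest are
# hypothetical examples. Multi-word terminals are assumed pre-merged.
TRANSLATION_RULES = {
    "SELECT": "Return",
    "ORDER BY": "Sort the records based on",
    "WHERE": "only keeping records where",
    "COUNT": "the number of",
}


def explain_clause(tokens: list) -> str:
    """Translate each terminal via the rules (identifiers pass through
    unchanged) and join the pieces into one explanation sentence."""
    words = [TRANSLATION_RULES.get(tok, tok) for tok in tokens]
    return " ".join(words) + "."
```

For the clause `SELECT COUNT School` this produces "Return the number of School.", mirroring the per-leaf, in-order translation the paper describes.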
       " (0.45702558947978295,\n",
       "  {'id': 454848078959234800,\n",
       "   'distance': 0.621777355670929,\n",
       "   'entity': {'paper_id': '633ba44790e50fcafdfe4af3',\n",
       "    'paper_title': 'Calibrating Sequence likelihood Improves Conditional Language Generation',\n",
       "    'chunk_id': 1,\n",
"    'chunk_text': '# 1 INTRODUCTION\\nConditional language generation aims to generate natural language text based on input context, and includes many useful and hard tasks such as abstractive summarization (Mani, 2001; Nenkova and McKeown, 2011), generative question answering (Bajaj et al., 2016), question generation (Zhou et al., 2017) and data-to-text (Wiseman et al., 2017; Gardent et al., 2017) tasks. Pretraining large Transformer encoder-decoder models and fine-tuning them on downstream tasks is the common paradigm to address these tasks (Raffel et al., 2020; Lewis et al., 2019; Tay et al., 2022; Zhang et al., 2019a).  \\n\\nConditional language generation tasks are modeled by learning the probability of a target sequence $\\\\mathbf{y}$ given a context sequence $\\\\mathbf{x}$ . Since directly modeling sequence probability $P(\\\\mathbf{y}|\\\\mathbf{x})$ over all possible generated text sequences is intractable, the canonical solution is to auto-regressively factor the probability and share the parameters at all token prediction steps as $P_{\\\\theta}(\\\\mathbf{y}|\\\\mathbf{x})=\\\\prod_{t=0}^{l}P_{\\\\theta}(y^{t}|y^{0}\\\\dots y^{t-1},\\\\mathbf{x})$ , where $l$ is the sequence length. These models are often trained with maximum likelihood estimation (MLE) over observed target sequences. The learning objective thus becomes $L=\\\\sum_{i}^{N}-\\\\log(P_{\\\\theta}(\\\\mathbf{y}_{i}|\\\\mathbf{x}_{i}))=\\\\sum_{i}^{N}\\\\sum_{t=0}^{l}-\\\\log(P_{\\\\theta}(y_{i}^{t}|y_{i}^{0}\\\\dots y_{i}^{t-1},\\\\mathbf{x}_{i}))$ , where $N$ is the number of training instances. It is also referred to as next token prediction loss as it is mathematically equivalent.  
\\n\\nIn the ideal setting of MLE training, a large number of target sequences are observed for each context, and the relative frequencies of output sequences can calibrate the assigned model probabilities. However, in practice most language generation training datasets have only a single target sequence given the context. While the subsequent MLE-trained models learn to assign relatively high probability to plausible sequences, they lack the direct supervision to compare such sequences, and solely rely on models’ generalization capability. We refer to this phenomenon as models’ sequence likelihood not being calibrated. Prior works (Liu and Liu, 2021; Liu et al., 2022) have shown that the correlation between sequence probability and its quality for MLE-trained models can be low. Liu et al. (2022) attributed this similarly to the deterministic (one-point) target distribution problem. Exposure bias (Ranzato et al., 2016) further aggravates the problem, as sequence likelihood estimation is noisier when models’ decoded sequences shift from the exposed training data distribution.  \\n\\nFigure 1: Calibrating sequence likelihood improves language generation across model scales. Scores are averaged ROUGE across 4 datasets ($R_{m}$ in subsection 3.2)  \\n\\nMany effective heuristics have been proposed during training and decoding to combat the problem of uncalibrated sequence likelihood. Label smoothing (Szegedy et al., 2016) prevents the network from becoming over-confident towards the observed target. This is particularly necessary in language generation, since the gold target represents just one of many possibilities. It has been observed that increasing the number of decoding candidates past a certain point leads to worse quality for beam search decoding (Yang et al., 2018; Koehn and Knowles, 2017) and sampling (Adiwardana et al., 2020). 
An optimal number of decoding candidates is often determined empirically by decoding models on the validation set and measuring their performance. Using length normalization is also essential for beam search decoding (Wu et al., 2016) and sampling (Adiwardana et al., 2020) as models tend to underestimate sequence likelihood of longer sentences. Repetition is another common failure mode when models overestimate the probability of repeated sequences (Holtzman et al., 2019). Trigram blocking (Paulus et al., 2018) and nucleus sampling (Holtzman et al., 2020) have been used to interrupt repeating sequences. These techniques are pervasive and often the default in modern Transformer libraries (Wolf et al., 2020; Lewis et al., 2019; Raffel et al., 2020; Zhang et al., 2019a).  \\n\\nSince the lack of observed target sequences in MLE training is the root problem, solutions involving learning with multiple sequence candidates have been proposed to directly address it. They can be loosely put in three categories: (1) reinforcement learning with sequence-level rewards (Paulus et al., 2018; Ziegler et al., 2019; Stiennon et al., 2020); (2) two-stage systems that generate and rerank candidates (Liu and Liu, 2021; Ravaut et al., 2022b; Liu et al., 2022); and (3) multi-task learning with sequence-level losses (Edunov et al., 2018; Liu et al., 2022). Refer to Related Works (section 4) for a more comprehensive discussion.  \\n\\nIn this paper, we propose to first decode candidates from a fine-tuned model on its own training dataset, and then continue training the model with a new objective. The new objective aims to align candidates’ sequence likelihoods according to their similarities to the target sequence in the model’s latent space. We refer to this process as sequence likelihood calibration (SLiC). Our approach is related to multi-task learning with sequence-level losses in Liu et al. (2022). 
However, we propose a simple yet effective recipe that eliminates decoding heuristics and doesn’t risk directly optimizing the same metrics that are used to report text generation quality. Unlike reinforcement learning, it is a one-time offline process that avoids costly online decoding processes. Also, when compared to two-stage reranking systems, it doesn’t require a separate reranking model that incurs additional complexity and compute. As depicted in Figure 1, our calibration stage naturally extends the current paradigm of pretraining and fine-tuning, and we show that calibrated models have strong improvements over fine-tuned-only models across model sizes.  \\n\\nOur main contributions include:  \\n\\n• Proposed a sequence likelihood calibration (SLiC) stage that consistently improves model quality, exceeding or matching state-of-the-art results on abstractive summarization, generative question answering, question generation and data-to-text generation tasks.  \\n\\n• Proposed a novel calibration similarity metric between model decodes and targets measured in the model’s latent space rather than resorting to external metrics or human feedback. • Demonstrated that SLiC eliminates the need for popular decoding heuristics, such as beam size optimization, length normalization and repetition prevention for the calibrated models. • Demonstrated that SLiC has persistent significant benefits on model performance even as the number of model parameters scales up. Under the same inference budget, smaller calibrated models might outperform larger counterparts by decoding more candidates.',\n",
       "    'original_filename': 'Conf_Paper_Meta_Data_ICLR_2023_with_whole_text.db'}}),\n",
       " (0.4545446034636898,\n",
       "  {'id': 454919258123551672,\n",
       "   'distance': 0.6042478084564209,\n",
       "   'entity': {'paper_id': '628ef0495aee126c0f82db30',\n",
       "    'paper_title': 'PLOG: Table-to-Logic Pretraining for Logical Table-to-Text Generation.',\n",
       "    'chunk_id': 4,\n",
       "    'chunk_text': '# 5 Table-to-Logic Pretraining\\nAs described in Section 1 , pretraining the table-totext model on table-to-logic generation is effective in generating more faithful natural language. In this section, we introduce our pretraining task and the procedure of collecting pretraining corpora.\\n\\n# 5.1 Pretraining Task\\nIn the pretraining task, the input is a (sub-) table while the target is a logical form that abstracts a reasoning process on the table. We follow the same schema in ( Chen et al. ,2020b ) to  \\n\\n$$\\nt^{*}=\\\\arg\\\\operatorname*{max}\\\\prod_{i=1}^{n}P(t_{i}|t_{<i},S;\\\\theta),\\n$$  \\n\\nWe adopt the same data serialization described in Section 4 for the pretraining task. The only difference between table-to-text and table-to-logic lies in the generation target.',\n",
       "    'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2022_Empirical_Methods_in_Natural_Language_Processing_with_whole_text.db'}}),\n",
       " (0.4488331848970645,\n",
       "  {'id': 454847854731210130,\n",
       "   'distance': 0.5963490605354309,\n",
       "   'entity': {'paper_id': '64702deed68f896efa5202bb',\n",
       "    'paper_title': 'Uncovering and Categorizing Social Biases in Text-to-SQL.',\n",
       "    'chunk_id': 4,\n",
       "    'chunk_text': '# 7 Conclusion\\nIn this paper, we propose to uncover and categorize social biases in the Text-to-SQL task. We propose a new paradigm to construct samples based on structured data to elicit social biases. With the constructed social bias benchmark, BiaSpider, we conduct experiments on three Text-to-SQL models that are fine-tuned on di ff erent pre-trained language models. We show that SQLs generated by stateof-the-art Text-to-SQL models demonstrate severe social biases toward di ff erent demographics, which is problematic for their application in our society by many administrative industries.\\n\\n# Limitations\\nIn this work, we are the first to uncover the social bias problem in the Text-to-SQL task. We categorize di ff erent types of social biases related to various demographics. We present a new benchmark and metric for the social bias study in the Text-to-SQL task. However, this work stops at the point of uncovering and analyzing the problem and phenomenon, without making one step further to solve the social bias problem in the Text-to-SQL task. Besides, in spite of the structured scalability of our proposed paradigm for social bias benchmark construction, the e ffi cacy of entending with other Text-to-SQL datasets remains to be verified.',\n",
       "    'original_filename': 'Conf_Paper_Meta_Data_ACL_2023_with_whole_text.db'}}),\n",
       " (0.4477539854120189,\n",
       "  {'id': 454896058206782956,\n",
       "   'distance': 0.621261477470398,\n",
       "   'entity': {'paper_id': '62393e7e5aee126c0f125b6b',\n",
       "    'paper_title': 'Probing Factually Grounded Content Transfer with Factual Ablation',\n",
       "    'chunk_id': 1,\n",
       "    'chunk_text': '# 2 Related Work and Background\\n\\n# 2.1 Textually Grounded Generation\\nTextual grounding is a common element of natural language generation tasks, wherein a textual input is used to provide facts and information for decoding. One of the most popular tasks following this paradigm is abstractive summarization ( Narayan et al. ,2018 ;Rush et al. ,2015 ), in which generation $y$ should shorten and capture the salient information in source $g$ . Other tasks extent beyond summarization, for example grounded dialogue (Dziri et al. ,2021 ) and content transfer ( Prabhumoye et al. ,2019 ) (studied here). These tasks add the additional constraint that the generation $y$ must adhere to some existing context $c$ , either previous dialogue turns or a document being extended (respectively).\\n\\n# 2.2 Factuality and Factual Consistency\\nRecent work ( Maynez et al. ,2020 ) observes that strong neural models, although fluent and creative, often hallucinate information. Indeed, for all summarization models tested by Maynez et al. (2020 ), over $70\\\\%$ of generations included information not directly entailed by the grounding $g$ . However, they observe that some of this information is still factually correct. This naturally yields 2 notions of correctness for textually grounded generation: factuality and factual consistency (or faithfulness ). Factuality concerns the universal correctness of a generation–is the model output factual regardless of grounding $g?$ Factual consistency more specifically probes whether the generation adheres to grounding $g$ . Our work probes the much more tractable problem of factual consistency.  \\n\\nA significant portion of past work on factuality and factual consistency in generation has focused on abstractive summarization ( Pagnoni et al. ,2021 ;Goyal and Durrett ,2021 ;Cao and Wang ,2021 ;Aralikatte et al. ,2021 ). 
Yet as mentioned above, textually grounded generation extends beyond summarization, and some works explore notions of factuality in other domains such as conversation (Shuster et al. ,2021 ) or table-to-text generation (Liu et al. ,2021 ). Similarly, we explore these notions outside of direct summarization, instead focusing on grounded content transfer ( Prabhumoye et al. ,2019 ).  \\n\\nMuch work in this area concerns improving factuality and factual consistency ( Shuster et al. ,2021 ;Zhu et al. ,2021 ;Nan et al. ,2021 ;Mao et al. ,2020 ;Aralikatte et al. ,2021 ). While this is one aspect of our work, we also aim to improve automatic evaluation, for which a single standard metric has not emerged. Some works evaluate factuality and consistency with extraction ( Goodrich et al. ,2019 ;Zhang et al. ,2020 ) or question answering (Wang et al. ,2020 ;Durmus et al. ,2020 ;Nan et al. ,2021 ). Others use notions of entailment ( Falke et al. ,2019 ), or simply train end-to-end models to judge these aspects directly ( Kryscinski et al. ,2020 ). We instead focus on the effect of excluding relevant information from the grounding–for a factual model, removing this information should lower the probability of the ground-truth generation.  \\n\\nSome works follow a similar intuition to ours. Xie et al. (2021 ) also understand factuality by estimating the effect of the source document on generative model output, although they explicitly mask relevant information while we offer a plausible alternative grounding. Similarly, Xu and Durrett (2021 ) ablate information from a source document to understand aspects of conditional generation, although factuality is not the focus.  \\n\\nFinally, some work in this area studies the need to evaluate metrics of factuality and consistency (Gabriel et al. ,2020 ;Pagnoni et al. ,2021 ), and to generally characterize and annotate the mistakes of models ( Maynez et al. ,2020 ;Pagnoni et al. 
,2021 ;Goyal and Durrett ,2021 )\\n\\n# 2.3 Loss Truncation\\nLoss Truncation ( Kang and Hashimoto ,2020 ) improves conditional models by only training on the top-c examples, ranked by dynamically updated model loss. This is broadly applicable to conditional models with a noisy learning signal, and we include two baselines using this approach.\\n\\n# 3 Methodology\\nHere, we bring factual consistency to a new domain, content transfer, which is the task of extending context $c$ with content from a grounding document $g$ . We discuss the task (§ 3.1 ), and our major contributions: novel methods for judging (§ 3.2 ) and improving (§ 3.3 ) factual consistency in this setting.\\n\\n# 3.1 Task: Content Transfer\\nRecent work studying factual consistency has largely focused on summarization: models are given a source document $g$ (grounding) as input, and output a shorter summary text $y$ capturing the most salient information from $g$ . Summarization is a natural domain to study factual consistency–the source document typically contains all information needed for the summary–but the need for factual consistency is not exclusive to summarization, and more domains should be explored.  \\n\\nHere, we expand this study to the content transfer task. As in summarization, models are given grounding $g$ , and must output text $y$ using information from $g$ . However, $y$ must also fit a context c, which significantly narrows the range of reasonable outputs from the open-ended summarization task, to those that fit the context. Prabhumoye et al. (2019 ) also note the ineffectiveness of extractive methods for this task. This obviates issues of model understanding that underlie factual consistency errors: while summarization models can often copy text directly, ensuring factual consistency regardless of understanding, content transfer models must reformulate information to fit the context.  \\n\\nPrabhumoye et al. 
(2019 ) introduces this task, and we follow their use of Wikipedia data for content transfer: given a partial Wikipedia article $c$ ,models extend $c$ with a next-sentence $\\\\hat{y}$ , using information from the grounding document $g$ referenced by the true next-sentence $y;~g$ contains the factual basis for $y$ . The dataset contains 600K training examples, 6K validation examples, and 50K test examples. Measuring factual ablation on this original dataset is not an option as there is only one piece of grounding per-example, and so we describe two paths to generating evaluation data for this purpose below.  \\n\\nContent transfer is formally defined as the task of generating a next-sentence $\\\\hat{y}$ for context $c$ which is (i) coherent, and fits $c$ (ii) factually and (iii) stylistically, while (iv) only utilizing information from grounding document $g$ . Note here, (iv) requires factual consistency, which is a stronger notion than overall factuality (§ 2.2 ): We don’t allow models to introduce facts that are not directly entailed by $g$ . Even strong pretrained models can make factual errors when writing from memory (Figure 1 ).  \\n\\nCentral to our study is the degree to which each above condition must be met to have an effective model. Conditions i-iii are not absolute constraints. A reasonable generation may be a bit awkward or not perfectly fit $c$ . On the other hand, an effective model must follow condition iv completely. While satisfaction of all of i-iv may be noisy in both the training dataset and tuned models, our approach will focus on addressing this noise for condition iv.',\n",
       "    'original_filename': 'Conf_Paper_Meta_Data_ACL_2022_Annual_Meeting_of_the_Association_for_Computational_Linguistics_with_whole_text.db'}}),\n",
       " (0.44288487174262436,\n",
       "  {'id': 454919253827011150,\n",
       "   'distance': 0.5987098217010498,\n",
       "   'entity': {'paper_id': '634e194190e50fcafd24e749',\n",
       "    'paper_title': 'Investigating the Robustness of Natural Language Generation from Logical Forms Via Counterfactual Samples',\n",
       "    'chunk_id': 5,\n",
       "    'chunk_text': '# 7 Related Work\\n\\n# 7.1 Text Generation from Tables\\nTable-to-text is a popular area in recent years ( Wiseman et al. ,2018 ;Lee ,2018 ;Liang et al. ,2009 ;Chen et al. ,2021 ). As previous methods generate superfacial and uncontrollable logic, Chen et al. (2020e ) introduced Logic2Text as a controllable and fidelity text generation task conditioned on a logical form. Since then, many works on Logic2Text have been proposed. In order to unify the studies of structural knowledge grounding, Xie et al. (2022 ) proposed the UNIFIEDSKG framework and unified 21 structural knowledge grounding tasks into a text-to-text format, including Logic2Text. Zhang et al. (2021a ) proposed a unified framework for logical knowledge-conditioned text generation in few shot setting. To solve the data scarcity problem of Logic2Text, Shu et al. (2021 ) iteratively augmented the original dataset with a generator and proposed an evaluator for highfidelity text generation.  \\n\\n  \\n  \\nFigure 6: Attention values during decoding. The baseline pays more attention to “attendance” as we expected, which verifies our hypothesis.  \\n\\nHowever, they all ignored the spurious correlation in logical forms, which is investigated in our work.\\n\\n# 7.2 Causal Inference For NLP\\nCausal Inference ( Pearl et al. ,2016 ;Kuang et al. ,2020 ) is a powerful statistical modeling tool for explanatory analysis. In NLP, many methods have been proposed based on the causal inference theory ( Zhang et al. ,2021b ;Chen et al. ,2020a ;Zhang et al. ,2021c ;Hu and Li ,2021 ). Yang et al. (2021 )and Wang and Culotta (2021b ) exploit causal inference to reduce the bias from the context for text classification tasks. For named entity recognition, Zeng et al. (2020 ) replaced the entities in sentences with counterfactual tokens to remove spurious correlation between the context and the entity token. 
Wang and Culotta (2021a ) generated counterfactual samples by replacing causal terms with their antonyms in sentiment classification. Wu et al. (2020 ) proposed to use a counterfactual decoder to generate unbiased court’s view.  \\n\\nOur work proposes to improve the robustness of Logic2Text models with causality.\\n\\n# 8 Conclusion\\nWe investigate the robustness of current methods for Logic2Text via a set of manually constructed counterfactual samples. A significant decline on the counterfactual dataset verifies the existence of bias in the training dataset. Then we leverage causal inference to analyze the bias, based on which, two approaches are proposed to reduce the spurious correlations. Automatic and manual experimental results on both Logic2Text and the counterfactual data demonstrate that our method is effective to alleviate the spurious correlations.\\n\\n# Limitations\\nAlthough our method has achieved high logical consistency, we find that for some unseen headers, the model cannot understand them and generate some logically correct but not fluent sentences, which is related to the method of generation of counterfactual samples. Due to the limited number of high-quality logical forms, future work may continue to explore more advanced counterfactual data generation methods considering the context.  \\n\\nBesides, our structure-aware logical form encoder works based on the attention mechanism, so it can’t be applied to models without attention. Fortunately, the current attention-based models are widely used not only because of their better performance but also because of their high interpretability.\\n\\n# Acknowledgment\\nWe would like to thank anonymous reviewers for their comments and suggestions. This work is supported in part by National Natural Science Foundation of China (NO. 62037001), the Key R&D Projects of the Ministry of Science and Technology (NO. 2020 YFC 0832500), the Zhejiang Province Science and Technology Project (NO. 
2022C01044), the Starry Night Science Fund of Zhejiang University Shanghai Institute for Advanced Study (NO. SN-ZJU-SIAS-0010), and CAAI-Huawei MindSpore Open Fund (NO. CAAIXSJLJJ-2021-015A).\\n\\n\\n\\n# A Details of Attention Mask\\nThe attention value from token $w_{i}$ to token $w_{j}$ is masked if there is no direct edge connecting them on the logical form. To clarify how the value of the Attention Mask is calculated, we use the left logical form in Figure 7 as an example. And the attention mask matrix for the tokenized logical form is shown on the right of Figure 7 . For each token in the logical form, the parent node can be seen (such as $\\\\mathbf{M}_{\\\\mathrm{hop,result}}=1$ ). Besides, an operator token can also see the child nodes (such as $\\\\mathbf{M}_{\\\\mathrm{win,eq}}=1$ ). Otherwise, the attention value is masked.\\n\\n# B Replacement Methods\\nWe match the headers from each logical form to the tokens in the label and then replace the headers in a specific way if found. Concretely, we propose the following three strategies for replacement.  \\n\\nRandom Replacement Intuitively, when a layman tries to describe some domain-specific table, he simply replicates the obscure table headers (such as technical terms). So we train the model’s ability to replicate the header from the logical form. We use completely random strings to replace the headers.  \\n\\nHeader Disturb Another straightforward idea is to select another header token from a set of all table headers to replace the header token in the logical form. However, such a method ignores the attribute of the data type carried by the columns, thus it will produce unreasonable counterfactual samples. In order to solve this problem, we group all the headers by their data type, including three categories: strings, numbers, and time. A header in the logical form is only replaced by another header with the same data type.  \\n\\n  \\nFigure 7: Sample of Attention Mask matrix. 
The attention of each token to others with no directly connected edges is masked.  \\n\\nMixing Replacement We take turns performing the above two replacement strategies.',\n",
       "    'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2022_Empirical_Methods_in_Natural_Language_Processing_with_whole_text.db'}}),\n",
       " (0.44072268968856626,\n",
       "  {'id': 454847842914282426,\n",
       "   'distance': 0.6255221366882324,\n",
       "   'entity': {'paper_id': '646edc9cd68f896efaddab9b',\n",
       "    'paper_title': 'Faithful Low-Resource Data-to-Text Generation Through Cycle Training.',\n",
       "    'chunk_id': 0,\n",
       "    'chunk_text': '# Faithful Low-Resource Data-to-Text Generation through Cycle Training\\nZhuoer Wang †1 Marcus Collins ⋆2 Nikhita Vedula ⋆2   \\nSimone Filice 2 Shervin Malmasi 2 Oleg Rokhlenko 2  \\n\\n1 Texas A&M University 2 Amazon  {collmr,veduln,filicesf,malmasi,olegro}@amazon.com\\n\\n# Abstract\\nMethods to generate text from structured data have advanced significantly in recent years, primarily due to fine-tuning of pre-trained language models on large datasets. However, such models can fail to produce output faithful to the input data, particularly on out-of-domain data. Sufficient annotated data is often not available for specific domains, leading us to seek an unsupervised approach to improve the faithfulness of output text. Since the problem is fundamentally one of consistency between the representations of the structured data and text, we evaluate the effectiveness of cycle training in this work. Cycle training uses two models which are inverses of each other: one that generates text from structured data, and one which generates the structured data from natural language text. We show that cycle training, when initialized with a small amount of supervised data (100 samples in our case), achieves nearly the same performance as fully supervised approaches for the data-to-text generation task on the WebNLG, E2E, WTQ, and WSQL datasets. We perform extensive empirical analysis with automated evaluation metrics and a newly designed human evaluation schema to reveal different cycle training strategies’ effectiveness of reducing various types of generation errors. Our code is publicly available at https:// github.com/Edillower/CycleNLG .',\n",
       "    'original_filename': 'Conf_Paper_Meta_Data_ACL_2023_with_whole_text.db'}}),\n",
       " (0.43219583894362806,\n",
       "  {'id': 454919258054345648,\n",
       "   'distance': 0.6017794013023376,\n",
       "   'entity': {'paper_id': '628ef0495aee126c0f82db30',\n",
       "    'paper_title': 'PLOG: Table-to-Logic Pretraining for Logical Table-to-Text Generation.',\n",
       "    'chunk_id': 0,\n",
       "    'chunk_text': '# PL OG: Table-to-Logic Pretraining for Logical Table-to-Text Generation\\nAo Liu 1 , Haoyu Dong 2 , Naoaki Okazaki 1 , Shi Han 2 , Dongmei Zhang 2 1 Tokyo Institute of Technology, 2 Microsoft Research   \\nliu.ao,@nlp.c.titech.ac.jp , {hadong,shihan,dongmeiz}@microsoft.com\\n\\n# Abstract\\nLogical table-to-text generation is a task that involves generating logically faithful sentences from tables, which requires models to derive logical-level facts from table records via logical inference. It raises a new challenge on the logical-level content planning of table-totext models. However, directly learning the logical inference knowledge from table-text pairs is very difficult for neural models because of the ambiguity of natural language and the scarcity of parallel data. Hence even largescale pre-trained language models present low logical fidelity on logical table-to-text. In this work, we propose a PL OG (Pretrained Logical Form Generator) framework to improve the generation fidelity. Specifically, PL OG is first pretrained on a table-to-logic-form generation (table-to-logic ) task, then finetuned on downstream table-to-text tasks. The formal definition of logical forms enables us to collect large amount of accurate logical forms from tables without human annotation. In addition, PL OGcan learn logical inference from table-logic pairs much more definitely than from tabletext pairs. To evaluate our model, we further collect a controlled logical table-to-text dataset CONT LOG based on an existing dataset. On two benchmarks, L OGIC NLG and C ONT LOG ,PL OG outperforms strong baselines by a large margin on the logical fidelity, demonstrating the effectiveness of table-to-logic pretraining.',\n",
       "    'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2022_Empirical_Methods_in_Natural_Language_Processing_with_whole_text.db'}})]"
      ]
     },
     "execution_count": 54,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "sorted_papers = list(zip(embedding_scores, papers))\n",
    "sorted_papers.sort(key=lambda x: x[0], reverse=True)\n",
    "sorted_papers"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 87,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Error making request: 502, message='Bad Gateway', url='http://180.184.65.98:38880/atomgit/search_papers?query=%E6%9C%89%E4%BA%BA%E6%8F%90%E5%87%BA%EF%BC%8C%E8%A7%A3%E5%86%B3%E5%B7%B2%E8%AF%86%E5%88%AB%E7%9A%84%E6%8C%91%E6%88%98%E5%92%8C%E5%B1%80%E9%99%90%E6%80%A7%E5%B0%86%E6%98%BE%E8%91%97%E6%8E%A8%E5%8A%A8%E5%A4%9A%E6%A8%A1%E6%80%81%E5%A4%A7%E5%9E%8B%E8%AF%AD%E8%A8%80%E6%A8%A1%E5%9E%8B%EF%BC%88MLLM%EF%BC%89%E7%BC%96%E8%BE%91%E9%A2%86%E5%9F%9F%E7%9A%84%E5%8F%91%E5%B1%95%EF%BC%8C%E4%BB%8E%E8%80%8C%E9%87%8A%E6%94%BE%E8%BF%99%E4%BA%9B%E6%A8%A1%E5%9E%8B%E5%9C%A8%E5%90%84%E7%B1%BB%E5%BA%94%E7%94%A8%E4%B8%AD%E7%9A%84%E5%85%A8%E9%83%A8%E6%BD%9C%E5%8A%9B%E3%80%82&top_k=5'\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "[]"
      ]
     },
     "execution_count": 87,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from research_agent.core.query import Query\n",
    "query = Query() \n",
    "await query.query_by_content(statement)\n"
   ]
  },
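  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The request above failed with `502 Bad Gateway` and silently returned `[]`. A minimal retry sketch with exponential backoff (assumptions: `query_by_content` is the coroutine used in the cell above and returns a falsy value or raises on failure; the retry count and delays are illustrative):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import asyncio\n",
    "\n",
    "async def query_with_retry(query, statement, retries=3, base_delay=1.0):\n",
    "    # Retry transient failures (e.g. 502 Bad Gateway) with exponential backoff.\n",
    "    for attempt in range(retries):\n",
    "        try:\n",
    "            result = await query.query_by_content(statement)\n",
    "            if result:\n",
    "                return result\n",
    "        except Exception as exc:\n",
    "            print(f\"Attempt {attempt + 1} failed: {exc}\")\n",
    "        await asyncio.sleep(base_delay * 2 ** attempt)\n",
    "    return []\n",
    "\n",
    "# results = await query_with_retry(query, statement)\n"
   ]
  },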
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# BM25+embeddings相似度计算"
   ]
  },
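  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of the hybrid scoring this section refers to (assumptions: the 0.5 weight and min-max normalization are illustrative choices, and `bm25_scores`, `embedding_scores`, and `papers` are aligned lists as in the cells above):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def minmax(xs):\n",
    "    # Scale scores to [0, 1] so BM25 and embedding similarities are comparable.\n",
    "    lo, hi = min(xs), max(xs)\n",
    "    return [0.0 if hi == lo else (x - lo) / (hi - lo) for x in xs]\n",
    "\n",
    "def fuse_scores(bm25_scores, emb_scores, alpha=0.5):\n",
    "    # Weighted sum of the two normalized score lists.\n",
    "    b, e = minmax(bm25_scores), minmax(emb_scores)\n",
    "    return [alpha * bi + (1 - alpha) * ei for bi, ei in zip(b, e)]\n",
    "\n",
    "# fused = fuse_scores(bm25_scores, embedding_scores)\n",
    "# sorted_papers = sorted(zip(fused, papers), key=lambda x: x[0], reverse=True)\n"
   ]
  },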
  {
   "cell_type": "code",
   "execution_count": 57,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[(0.6756865891093204,\n",
       "  {'id': 454845652744816630,\n",
       "   'distance': 0.6249851584434509,\n",
       "   'entity': {'paper_id': '646d8642d68f896efa0a3040',\n",
       "    'paper_title': 'Exploring Chain-of-Thought Style Prompting for Text-to-SQL',\n",
       "    'chunk_id': 1,\n",
       "    'chunk_text': '# 1 Introduction\\nText-to-SQL parsing, the task of translating a natural language question into a SQL query, has found wide applications in building natural language interfaces to databases and thus piqued significant research interest in recent years ( Wang et al. ,2020 ;Deng et al. ,2021 ;Yu et al. ,2021 ;Rajkumar et al. ,2022 ;Hongjin et al. ,2023 ;Ni et al. ,2023 ). To develop a text-to-SQL parser, a prevalent approach is to collect labeled data and train a model via supervised learning ( Shaw et al. ,2021 ;Scholak et al. ,2021 ). While effective, this approach necessitates a considerable amount of training data, which is costly to obtain because annotating SQL queries requires programming expertise. Consequently, the lack of data hinders real-life applications of stateof-the-art parsers, especially on novel databases and unseen domains ( Suhr et al. ,2020 ).  \\n\\nAs an alternative to supervised learning, incontext learning ( Brown et al. ,2020 ), an emergent capability of large language models (LLMs), alleviates the need for large-scale data. With only a few examples, in-context learning enables LLMs to demonstrate performance comparable to or even better than fully supervised models on many NLP tasks, such as question answering, machine translation, and natural language inference ( Chowdhery et al. ,2022 ;Kojima et al. ,2022 ;Wei et al. ,2022b ,a ;Brohan et al. ,2023 ). When applied to text-to-SQL parsing, in-context learning also shows encouraging results, but it still lags behind supervised approaches ( Rajkumar et al. ,2022 ;Chang et al. ,2023 ;Liu et al. ,2023a ).  \\n\\nWe hypothesize that the under-performance is because text-to-SQL parsing requires complex, multistep reasoning. Even for a seemingly simple question, such as “What is the ID of Kyle,\" a model has to ground it to database schemas, infer the relational algebra among schema items, and construct syntactically correct SQL clauses. 
Recently, the chain-of-thought (CoT) style promptings ( Wei et al. ,2022b ;Zhou et al. ,2023 ) are proposed and have shown promising multi-step reasoning capabilities. To enhance LLMs’ reasoning ability, we systematically explore CoT style prompting for text-to-SQL parsing. Specifically, we seek to answer two research questions: (1) Which prompting style is better, generating all reasoning steps in a single pass, or iterative prompting and problem solving? (2) Does including more detailed information in the reasoning steps lead to better results for text-to-SQL parsing?  \\n\\nTo address the questions, we adopt two widely used prompting methods for text-to-SQL parsing. As the first method, we apply chain-of-thought prompting (Wei et al. ,2022b ) by drawing an analogy between its problem-solving process and the execution procedure of a SQL query. Referring to the logical execution order of SQL clauses (Narechania et al. ,2021 ), we compose the intermediate execution steps in natural language and prompt LLMs to derive them before generating the SQL query. As the second method, we follow Zhou et al. (2023 ) to apply least-to-most prompting in two stages: (1) reduction: generate a series of sub-questions from the original question and (2) solving: iteratively translate each sub-question into its corresponding SQL query, with the original question as the last sub-question. However, in our case study 1 , we find that directly applying chain-of-thought and least-to-most promptings leads to error propagation issues. Their rationales contain very demonstration-specific information and are easier to mislead the reasoning process. Furthermore, least-to-most prompting technique leads to additional computational and time cost due to the multiple stages of reduction and solving.  \\n\\n  \\nFigure 1: Different prompting methods with multi-step reasoning for text-to-SQL parsing: (a) Chain-of-Thought, (b) Least-to-Most, and our proposed (c) QDecomp , and (d) QDecomp $^+$ InterCOL .  
\\n\\nTherefore, we propose a new method called question-decomposition prompting (QDecomp ). Similar to chain-of-thought prompting, QDecomp generates a sequence of reasoning steps and then the SQL query in one pass. However, we modify the steps to instruct LLMs to decompose the original complex question, akin to the problem reduction stage in least-to-most prompting. Also, to help LLMs ground database schemas, we design a variant of question decomposition prompting (QDecomp $^+$ InterCOL ) by including the table and column names involved in each sub-question. We conduct comprehensive evaluations on two text-to-SQL datasets, Spider ( Yu et al. ,2018 ) and Spider Realistic ( Deng et al. ,2021 ). Our proposed prompting methods substantially outperform existing prompting ones by 2.4 and 1.5 point absolute gains on the development set of Spider and Spider Realistic, respectively. The results suggest that the iterative prompting which is costly due to additional computational resources requirement as in least-to-most prompting may not be necessary (RQ1). In addition, our analysis shows the proposed question decomposition prompting methods, which do not instruct LLMs to generate detailed reasoning steps, reduce the chance of error propagation when generating the reasoning steps (RQ2). Finally, we evaluate the robustness of our proposed prompting methods by varying the number, selection, and format of in-context examples and show that they can achieve consistently strong performance across different settings.',\n",
       "    'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}}),\n",
       " (0.6530855957788669,\n",
       "  {'id': 454845631964659546,\n",
       "   'distance': 0.6036479473114014,\n",
       "   'entity': {'paper_id': '63a1751790e50fcafd1f48e7',\n",
       "    'paper_title': 'CiteBench: A Benchmark for Scientific Citation Text Generation',\n",
       "    'chunk_id': 1,\n",
"    'chunk_text': '# 2 Related work\\n\\n# 2.1 Benchmarking\\nNLP benchmarks are unified dataset collections coupled with evaluation metrics and baselines that are used to systematically compare the performance of NLP systems for the targeted tasks in a standardized evaluation setup. Well-constructed benchmarks can boost progress in the corresponding research areas, such as SQuAD (Rajpurkar et al., 2016) for question answering, GLUE (Wang et al., 2018) for natural language understanding, KILT (Petroni et al., 2021) for knowledge-intensive tasks, GEM (Gehrmann et al., 2021, 2022) for general-purpose text generation, and DynaBench (Kiela et al., 2021) for dynamic benchmark data collection. CITEBENCH is the first benchmark for the citation text generation task.\\n\\n# 2.2 Text generation for scientific documents\\nScientific documents are characterized by academic vocabulary and writing style, wide use of non-linguistic elements like formulae, tables and figures, as well as structural elements like abstracts and citation anchors. Recent years have seen a rise in natural language generation for scientific text, including text simplification (Luo et al., 2022), summarization (Qazvinian and Radev, 2008; Erera et al., 2019; Cachola et al., 2020), slides generation (Sun et al., 2021), table-to-text generation (Moosavi et al., 2021), and citation text generation (Li and Ouyang, 2022). Closely related to the task of citation text generation, Luu et al. (2021) study how scientific papers can relate to each other, and how these relations can be expressed in text. Related to our work, Mao et al. (2022) propose a benchmark for scientific extreme summarization. Compared to extreme summarization, which amounts to generating short context-independent summaries of individual manuscripts, citation text generation focuses on context-dependent descriptions that relate the cited papers to the citing paper. 
In line with the recent efforts that address the lack of systematic automated evaluation of natural language generation in general (Gehrmann et al., 2021), our paper contributes the first unified benchmark for citation text generation in the scientific domain.\\n\\n# 2.3 Citation text generation\\nThe task of citation text generation was introduced in Hoang and Kan (2010), who generate a summary of related work specific to the citing paper. Since then, several task definitions and setups have been proposed (Table 1). Lu et al. (2020) cast the task as generating a multi-paragraph related work section given the abstracts of the citing paper and of the cited papers. AbuRa’ed et al. (2020) use the cited paper’s title and abstract to generate a citation sentence. Xing et al. (2020) use the abstract of the cited paper and include context before and after the citation sentence as the input, and produce the citation sentence as the output. A recent work by Chen et al. (2021) uses multiple cited abstracts as input to generate a related work paragraph. The great variability of the task definitions and setups in citation text generation prevents the study of citation text generation methods across datasets and evaluation setups. Unlike prior work that explores varying task settings, CITEBENCH brings the diverging task definitions and datasets together in a unified setup. This allows us to compare citation text generation models across different datasets in a standardized manner using an extensive set of quantitative metrics, as well as novel automated qualitative metrics.  
\\n\\n<html><body><table><tr><td rowspan=\"2\">Dataset</td><td colspan=\"3\">Input</td><td rowspan=\"2\">Output Citation text (T)</td><td rowspan=\"2\">Datasources</td></tr><tr><td>Cited document (D*) SingleAbs MultiAbs</td><td>Abs</td><td>Citing context (C\") Text</td></tr><tr><td>ABURAED</td><td></td><td>Title √</td><td></td><td>Sent Para</td><td>Multiple</td></tr><tr><td>CHEN</td><td>√</td><td></td><td></td><td></td><td>S2ORCandDelve</td></tr><tr><td>LU</td><td></td><td></td><td></td><td></td><td>arXiv.org and MAG</td></tr><tr><td>XING</td><td></td><td></td><td></td><td></td><td>AAN</td></tr></table></body></html>  \\n\\nTable 1: Overview of datasets in C ITE BENCH . Single Abs $=$ Single abstract, i.e., one cited document per instance. Multi Abs $=$ Multiple abstracts, i.e., multiple cited documents per instance. Abs $=$ Abstract, i.e., a dataset contains the abstract of the citing paper. Text $=\\\\xi$ a dataset contains additional context from the citing paper. Sent $=$ generation target is a single sentence. Para $=$ generation target is a paragraph.   \\nTable 2: Datasets statistics. The validation set for XING has been created by us via randomly sampling $10\\\\%$ of the original training data. Across datasets, very few inputs contain more than 4,096 tokens, and few outputs are longer than 1,024 tokens. We exploit this property to speed up the evaluation in Section 3.3 .  
\\n\\n\\n<html><body><table><tr><td>Dataset</td><td>#Train</td><td>#Validation</td><td>#Test</td><td>Inputs>4,096tok.</td><td>Outputs>1,024tok.</td></tr><tr><td>ABURAED</td><td>15,000</td><td>1,384</td><td>219</td><td>0%</td><td>0%</td></tr><tr><td>LU</td><td>30,369</td><td>5,066</td><td>5,093</td><td><0.001%</td><td>0%</td></tr><tr><td>XING</td><td>77,086</td><td>8,566</td><td>400</td><td><0.001%</td><td><0.001%</td></tr><tr><td>CHEN -Delve</td><td>72,927</td><td>3,000</td><td>3,000</td><td><0.001%</td><td>0.004%</td></tr><tr><td>-S2ORC</td><td>126,655</td><td>5,000</td><td>5,000</td><td>0.017%</td><td><0.001%</td></tr><tr><td>Total</td><td>322,037</td><td>23,016</td><td>13,712</td><td>0.007%</td><td><0.001%</td></tr></table></body></html>\\n\\n# 3 Benchmark\\n\\n# 3.1 Task definition and datasets\\nWe formalize the task of citation text generation as follows: Given a set of $n$ (cited) target documents $\\\\{D_{1}^{t}...D_{n}^{t}\\\\}$, a (citing) source document $D^{s}$, and a set of $m$ citing document contexts $\\\\{C_{1}^{s}...C_{m}^{s}\\\\}\\\\in D^{s}$, generate a citation text $T^{\\\\prime}$ that is as close as possible to the original citation text $T\\\\in D^{s}$. This general definition allows wide variation in how the task is implemented. The cited document $D_{i}^{t}$ can be represented by the abstract $a^{t_{i}}$, the concatenation of the title and the abstract, or even the full text of the paper. The context set $C^{s}$ covers sentences before and after the citation text in $D^{s}$, as well as the abstract $a^{s}\\\\in D^{s}$.  \\n\\nSuch a general, open definition allows us to accommodate diverse approaches to the task within one framework (Table 1). To populate the benchmark, we select four datasets, focusing on the task design and domain variety: ABURAED (AbuRa’ed et al., 2020), CHEN (Chen et al., 2021), LU (Lu et al., 2020), and XING (Xing et al., 2020). 
Dataset transformation details are provided in Appendix A.1. Table 2 shows the quantitative statistics, and Figure 2 provides data examples from each dataset. The CHEN dataset has two subsets – CHEN Delve and CHEN S2ORC – based on the data source; we use CHEN to denote the union of the two subsets. The datasets are distributed under varying licenses; we have obtained explicit permissions from the authors to use the data for research purposes in cases when licensing was underspecified (see Ethics statement).  \\n\\nWe note that the datasets included in the benchmark are not only structurally diverse, but also cover a wide range of domains, from medicine to computer science. In particular, ABURAED and XING exemplify citation text generation in the computational linguistics domain, CHEN Delve covers the computer science domain; LU and CHEN S2ORC span a wide range of domains represented on arxiv.org and in the S2ORC corpus, respectively, including biology, medicine and physics.',\n",
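The chunk's formal definition (cited documents $D_{i}^{t}$, citing contexts $C_{j}^{s}$, target citation text $T$) maps naturally onto a small record type. A minimal sketch of that unified instance format, with field names invented for illustration (this is not CiteBench's actual API):

```python
from dataclasses import dataclass, field

@dataclass
class CitationInstance:
    """One example in a CiteBench-style unified format: n cited target
    documents, m optional citing-paper contexts, and the gold citation
    text T. Field names here are illustrative only."""
    cited_docs: list[str]                                      # D^t_1..D^t_n (e.g. abstracts)
    citing_contexts: list[str] = field(default_factory=list)   # C^s_1..C^s_m from D^s
    target_text: str = ""                                      # gold citation text T

# A single-abstract, sentence-output example in the style of XING/ABURAED:
ex = CitationInstance(
    cited_docs=["Abstract of the cited paper ..."],
    citing_contexts=["Sentence preceding the citation anchor ..."],
    target_text="Smith et al. (2020) proposed a related approach ...",
)
```

Because the definition leaves the cited-document representation open (abstract, title+abstract, or full text), each of the four source datasets can be loaded into this one shape, which is what makes cross-dataset comparison possible.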
       "    'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}}),\n",
       " (0.6270342255476944,\n",
       "  {'id': 454845681740829484,\n",
       "   'distance': 0.6236740350723267,\n",
       "   'entity': {'paper_id': '6535d747939a5f408295c649',\n",
       "    'paper_title': 'Benchmarking and Improving Text-to-SQL Generation under Ambiguity',\n",
       "    'chunk_id': 0,\n",
"    'chunk_text': '# Benchmarking and Improving Text-to-SQL Generation under Ambiguity\\nAdithya Bhaskar, Tushar Tomar, Ashutosh Sathe, Sunita Sarawagi (IIT Bombay; Princeton University)\\n\\n# Abstract\\nResearch in Text-to-SQL conversion has been largely benchmarked against datasets where each text query corresponds to one correct SQL. However, natural language queries over real-life databases frequently involve significant ambiguity about the intended SQL due to overlapping schema names and multiple confusing relationship paths. To bridge this gap, we develop a novel benchmark called AmbiQT with over 3000 examples where each text is interpretable as two plausible SQLs due to lexical and/or structural ambiguity.  \\n\\nWhen faced with ambiguity, an ideal top-$k$ decoder should generate all valid interpretations for possible disambiguation by the user (Elgohary et al., 2021; Zhong et al., 2022). We evaluate several Text-to-SQL systems and decoding algorithms, including those employing state-of-the-art LLMs, and find them to be far from this ideal. The primary reason is that the prevalent beam search algorithm and its variants treat SQL queries as a string and produce unhelpful token-level diversity in the top-$k$.  \\n\\nWe propose LogicalBeam, a new decoding algorithm that navigates the SQL logic space using a blend of plan-based template generation and constrained infilling. Counterfactually generated plans diversify templates, while in-filling with a beam search that branches solely on schema names provides value diversity. LogicalBeam is up to $2.5\\\\times$ more effective than state-of-the-art models at generating all candidate SQLs in the top-$k$ ranked outputs. 
It also enhances the top-5 Exact and Execution Match Accuracies on SPIDER and KaggleDBQA.\\n\\n# 1 Introduction\\nResearch on Text-to-SQL generation has focused on scenarios where each natural language question is associated with one correct SQL (Zelle and Mooney, 1996; Tang and Mooney, 2000; Scholak et al., 2021a; Wang et al., 2020; Rubin and Berant, 2021; Xie et al., 2022; Arcadinho et al., 2022; Zeng et al., 2022; Scholak et al., 2021b; Pourreza and Rafiei, 2023). Popular benchmarks driving such research, including WikiSQL (Zhong et al., 2018), SPIDER (Yu et al., 2018), its robust perturbations (Chang et al., 2023), and even “in-the-wild” benchmarks such as KaggleDBQA (Lee et al., 2021) and SEDE (Hazoom et al., 2021) all associate one correct SQL with text. Meanwhile, ambiguity is prevalent in real-life databases — particularly the ones obtained by integrating several data sources for data analysis, where a natural language interface is most in demand. The sources of ambiguity are several — inherent ambiguity of natural language, the user’s ignorance of table/column names, overlapping strings in column names, underspecified clauses, and confusion about whether aggregates are pre-computed, or if a join is required. Hazoom et al. (2021) observe that up to $87\\\\%$ of queries on the stack exchange database are underspecified, and Wang et al. (2022) mention that $11\\\\%$ of queries exhibited ambiguity in column names. Although prior work has brought up ambiguity, there is no publicly available benchmark with ambiguous queries, nor a comprehensive evaluation of systems under ambiguity.  \\n\\nOur first contribution is to bridge this gulf by developing a benchmark, AmbiQT, that tests performance under ambiguity in the context of current models. AmbiQT includes over 3000 examples, each associating a natural question on a database with two valid SQLs. 
Inspired by our experience with several real-world datasets, we target four types of ambiguity spanning both lexical (ambiguous column and table names) and structural (whether a join is necessary, and whether an aggregate is pre-computed) ambiguity. The benchmark is generated via a combination of ChatGPT (OpenAI, 2022) based synonym generation and perturbation, and standard rule-based perturbation.  \\n\\nWhen faced with ambiguity, an ideal Text-to-SQL system should incorporate all valid alternatives in its top-$k$ SQL outputs, for user resolution. We show that present approaches, ranging from T5-3B (Raffel et al., 2019) to SOTA models, fail to generate all ambiguous outputs with any decoding strategy, including beam search and diversity-promoting sampling methods such as Nucleus (Holtzman et al., 2020) and Typical sampling (Meister et al., 2023). Most outputs are small lexical tweaks of the top choice, bringing about little meaningful diversity in SQL structures or schema alternatives. Even SOTA LLMs like ChatGPT (OpenAI, 2022) suffer from this issue.  \\n\\nTo remedy the lack of diversity, we propose a new decoding algorithm, LogicalBeam, that allocates branching to explore underlying logical variants of the SQL rather than the string form. We catalog the errors of T5-3B (Raffel et al., 2019) on the SPIDER dev split and use our insights to encourage targeted types of diversity — the number of JOINs and selections, and table/column names.  \\n\\nOur main contributions are:   \\n•We develop AmbiQT, the first benchmark that tests performance under four types of ambiguity over $\\\\mathbf{3000+}$ examples.   \\n•We show that SOTA methods, including a finetuned T5-3B, RESDSQL (Li et al., 2023), OpenAI Codex, and ChatGPT, provide a poor representation of ambiguity despite their high accuracy on conventional benchmarks.   
\\n•We present LogicalBeam, a two-step algorithm that generates plan-based templates with counterfactually controlled plan diversity and fills them via a beam search that branches only on schema names.   \\n•We show that LogicalBeam consistently increases the fraction of time when all gold SQLs get generated in the Top-5 choices by $1.5-2.5\\\\times$ over the baselines across the board on AmbiQT.',\n",
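LogicalBeam's second step fills plan-based templates while branching only on schema names, so the top-$k$ differs in meaningful schema choices rather than token-level tweaks. A toy sketch of that idea follows; the `<slot>` template syntax and helper are invented here, not the paper's implementation (which uses constrained beam-search infilling over a model):

```python
import itertools
import re

def fill_template(template: str, candidates: dict[str, list[str]]) -> list[str]:
    """Expand a SQL template whose <slot> placeholders stand for schema
    names, branching only on those slots; all other tokens stay fixed."""
    slots = re.findall(r"<(\w+)>", template)
    filled = []
    for combo in itertools.product(*(candidates[s] for s in slots)):
        sql = template
        for slot, value in zip(slots, combo):
            sql = sql.replace(f"<{slot}>", value, 1)
        filled.append(sql)
    return filled

# The ambiguous "capacity of O2 Arena" example: two column candidates
# yield two structurally identical but semantically distinct SQLs.
queries = fill_template(
    "SELECT <col> FROM arena WHERE name = 'O2 Arena'",
    {"col": ["seating_capacity", "standing_capacity"]},
)
```

The point of the restriction is visible even in this toy: enumeration happens only where the ambiguity lives (the schema slot), so every emitted candidate is a distinct interpretation rather than a paraphrase of the top beam.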
       "    'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}}),\n",
       " (0.62353612232355,\n",
       "  {'id': 454845706688551984,\n",
       "   'distance': 0.613497793674469,\n",
       "   'entity': {'paper_id': '65406320939a5f40826491aa',\n",
       "    'paper_title': 'Evaluating Cross-Domain Text-to-SQL Models and Benchmarks',\n",
       "    'chunk_id': 0,\n",
"    'chunk_text': '# Evaluating Cross-Domain Text-to-SQL Models and Benchmarks\\nMohammadreza Pourreza University of Alberta   \\n\\nDavood Rafiei University of Alberta\\n\\n# Abstract\\nText-to-SQL benchmarks play a crucial role in evaluating the progress made in the field and the ranking of different models. However, accurately matching a model-generated SQL query to a reference SQL query in a benchmark fails for various reasons, such as underspecified natural language queries, inherent assumptions in both model-generated and reference queries, and the non-deterministic nature of SQL output under certain conditions. In this paper, we conduct an extensive study of several prominent cross-domain text-to-SQL benchmarks and reevaluate some of the top-performing models within these benchmarks, by both manually evaluating the SQL queries and rewriting them in equivalent expressions. Our evaluation reveals that attaining a perfect performance on these benchmarks is unfeasible due to the multiple interpretations that can be derived from the provided samples. Furthermore, we find that the true performance of the models is underestimated and their relative performance changes after a re-evaluation. Most notably, our evaluation reveals a surprising discovery: a recent GPT4-based model surpasses the gold standard reference queries in the Spider benchmark in our human evaluation. This finding highlights the importance of interpreting benchmark evaluations cautiously, while also acknowledging the critical role of additional independent evaluations in driving advancements in the field.  \\n\\n# 1 Introduction\\nSignificant progress has been made in translating natural language text to SQL statements over the past few years. The execution accuracy on the hold-out test of Spider (Yu et al., 2018b) – a large-scale cross-domain text-to-SQL benchmark – has improved from 53.5 in May 2020 (Zhong et al., 2020b) to 85.3 in March 2023 (Pourreza and Rafiei, 2023). The exact set match accuracy, without considering database cell values, on the same benchmark and over the same period has improved from 65.6 (Wang et al., 2019) to 74.0 (Li et al., 2023a). Measuring such progress is hinged on reliable benchmarks and evaluation metrics.  \\n\\nTwo standard metrics for evaluating the performance in this domain have been exact set match accuracy and execution accuracy .
The former measures if a model-generated SQL query lexically matches a reference SQL query, whereas the latter measures if a model-generated SQL query produces the same output as a reference query $(\\\\S\\\\,4)$.  \\n\\nFigure 1: An example question with two correct SQL queries, each corresponding to a different interpretation. There is an ambiguity in schema mapping, with two different database columns describing the name.\\n\\nConsider the example in Figure 1, which consists of a model-generated query (shown on the left) and a reference query (shown on the right). Both SQL queries return the id and name of makers that have more than 3 models. However, the model-generated query returns the column FullName, which gives the full name of a maker (e.g., “Ford Motor Company”), whereas the reference query given in the benchmark returns the column Maker, which gives the short common name of a maker (e.g., “Ford”). The model-generated query fails an exact set match since the column names in the select clause are different. The query outputs are also different and the model-generated query fails the execution accuracy as well. The natural language utterance is not specific about the type of name to be returned, and a human evaluator tags both queries correct.  \\n\\nAs the models improve, these types of failures make up most of the errors, and the performance metrics become less relevant, as shown in our evaluation. 
In particular, we re-evaluated all development set queries of Spider on which two top-performing models, one using a fine-tuned model ( Scholak et al. ,2021 ) and another using a large language model ( Pourreza and Rafiei ,2023 ), failed. We found out that $25\\\\%$ of the queries generated by one model and $87\\\\%$ of the queries generated by the other model were indeed correct but were wrongly evaluated by the benchmark. For the same set of queries, our re-evaluation of the ground truth queries found $33\\\\%$ of the SQL queries incorrect, which was more than the number of incorrect queries generated by one of the models. This evaluation places one of the models above the ground truth queries in this re-evaluation.  \\n\\nWe further re-evaluated two well-known benchmarks, Spider ( Yu et al. ,2018b ) and SpiderDK ( Gan et al. ,2021b ), and a newly released benchmark, BIRD ( Li et al. ,2023b ), and found similar problems in all three benchmarks that affect the evaluation. Our evaluation reveals that $18\\\\%$ of the queries in the train sets and $20\\\\%{-23\\\\%}$ of the queries in the dev sets of these benchmarks are subject to ties in the dataset and which one of the tied rows are returned. This means a model-generated query will be deemed incorrect if it does not return the same row, among tied rows, as the ground truth query. This can severely impact the evaluation, especially when there is a tight race among models. Considering these observations, it is crucial to emphasize the significance of additional independent evaluations when utilizing these benchmarks. To enhance the evaluation process further, a potential solution is to incorporate multiple SQL queries as the ground truth, each representing a different interpretation that may be valid.  \\n\\nOur objective in this paper is to provide a comprehensive evaluation of existing Text-to-SQL benchmarks, underscoring the inherent issues they possess. 
We refrain from introducing a new dataset due to several considerations. First, addressing the identified issues by updating these benchmarks requires considerable human effort. Additionally, benchmarks in the Text-to-SQL domain, like Spider and BIRD, have holdout test sets used for official leaderboards and comparisons of text-to-SQL methodologies. We only have access to the development and training sets of these benchmarks, which limits our capability to alter the test sets. As a result, making changes only to the development and training sets would not completely address the benchmark’s inherent problems, given that final performance is gauged using the problematic test sets.',\n",
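The two metrics the paper contrasts can disagree in both directions. A minimal, self-contained `sqlite3` illustration of the Figure 1 situation (toy schema and queries, not taken from any benchmark):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE makers (id INTEGER, maker TEXT, full_name TEXT);
    INSERT INTO makers VALUES (1, 'Ford', 'Ford Motor Company');
""")

def execution_match(sql_a: str, sql_b: str) -> bool:
    """Execution accuracy: do the two queries return identical rows?"""
    return conn.execute(sql_a).fetchall() == conn.execute(sql_b).fetchall()

reference = "SELECT id, maker FROM makers"
predicted = "SELECT id, full_name FROM makers"   # a valid reading of "name"

# Both metrics reject the prediction, though a human would accept it:
fails_exec = not execution_match(reference, predicted)  # 'Ford' vs 'Ford Motor Company'
fails_string = reference != predicted                   # lexical mismatch too

# The opposite direction: lexically different, execution-equivalent.
predicted2 = "SELECT id, maker FROM makers WHERE 1=1"
passes_exec = execution_match(reference, predicted2)
```

This is exactly the failure mode the chunk describes: as models improve, most remaining "errors" are interpretation mismatches like `predicted`, which both automatic metrics score as wrong.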
       "    'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}}),\n",
       " (0.6113277921557354,\n",
       "  {'id': 454896058206782956,\n",
       "   'distance': 0.621261477470398,\n",
       "   'entity': {'paper_id': '62393e7e5aee126c0f125b6b',\n",
       "    'paper_title': 'Probing Factually Grounded Content Transfer with Factual Ablation',\n",
       "    'chunk_id': 1,\n",
"    'chunk_text': '# 2 Related Work and Background\\n\\n# 2.1 Textually Grounded Generation\\nTextual grounding is a common element of natural language generation tasks, wherein a textual input is used to provide facts and information for decoding. One of the most popular tasks following this paradigm is abstractive summarization (Narayan et al., 2018; Rush et al., 2015), in which generation $y$ should shorten and capture the salient information in source $g$. Other tasks extend beyond summarization, for example grounded dialogue (Dziri et al., 2021) and content transfer (Prabhumoye et al., 2019) (studied here). These tasks add the additional constraint that the generation $y$ must adhere to some existing context $c$, either previous dialogue turns or a document being extended (respectively).\\n\\n# 2.2 Factuality and Factual Consistency\\nRecent work (Maynez et al., 2020) observes that strong neural models, although fluent and creative, often hallucinate information. Indeed, for all summarization models tested by Maynez et al. (2020), over $70\\\\%$ of generations included information not directly entailed by the grounding $g$. However, they observe that some of this information is still factually correct. This naturally yields two notions of correctness for textually grounded generation: factuality and factual consistency (or faithfulness). Factuality concerns the universal correctness of a generation – is the model output factual regardless of grounding $g$? Factual consistency more specifically probes whether the generation adheres to grounding $g$. Our work probes the much more tractable problem of factual consistency.  \\n\\nA significant portion of past work on factuality and factual consistency in generation has focused on abstractive summarization (Pagnoni et al., 2021; Goyal and Durrett, 2021; Cao and Wang, 2021; Aralikatte et al., 2021). 
Yet as mentioned above, textually grounded generation extends beyond summarization, and some works explore notions of factuality in other domains such as conversation (Shuster et al. ,2021 ) or table-to-text generation (Liu et al. ,2021 ). Similarly, we explore these notions outside of direct summarization, instead focusing on grounded content transfer ( Prabhumoye et al. ,2019 ).  \\n\\nMuch work in this area concerns improving factuality and factual consistency ( Shuster et al. ,2021 ;Zhu et al. ,2021 ;Nan et al. ,2021 ;Mao et al. ,2020 ;Aralikatte et al. ,2021 ). While this is one aspect of our work, we also aim to improve automatic evaluation, for which a single standard metric has not emerged. Some works evaluate factuality and consistency with extraction ( Goodrich et al. ,2019 ;Zhang et al. ,2020 ) or question answering (Wang et al. ,2020 ;Durmus et al. ,2020 ;Nan et al. ,2021 ). Others use notions of entailment ( Falke et al. ,2019 ), or simply train end-to-end models to judge these aspects directly ( Kryscinski et al. ,2020 ). We instead focus on the effect of excluding relevant information from the grounding–for a factual model, removing this information should lower the probability of the ground-truth generation.  \\n\\nSome works follow a similar intuition to ours. Xie et al. (2021 ) also understand factuality by estimating the effect of the source document on generative model output, although they explicitly mask relevant information while we offer a plausible alternative grounding. Similarly, Xu and Durrett (2021 ) ablate information from a source document to understand aspects of conditional generation, although factuality is not the focus.  \\n\\nFinally, some work in this area studies the need to evaluate metrics of factuality and consistency (Gabriel et al. ,2020 ;Pagnoni et al. ,2021 ), and to generally characterize and annotate the mistakes of models ( Maynez et al. ,2020 ;Pagnoni et al. 
,2021 ;Goyal and Durrett ,2021 )\\n\\n# 2.3 Loss Truncation\\nLoss Truncation ( Kang and Hashimoto ,2020 ) improves conditional models by only training on the top-c examples, ranked by dynamically updated model loss. This is broadly applicable to conditional models with a noisy learning signal, and we include two baselines using this approach.\\n\\n# 3 Methodology\\nHere, we bring factual consistency to a new domain, content transfer, which is the task of extending context $c$ with content from a grounding document $g$ . We discuss the task (§ 3.1 ), and our major contributions: novel methods for judging (§ 3.2 ) and improving (§ 3.3 ) factual consistency in this setting.\\n\\n# 3.1 Task: Content Transfer\\nRecent work studying factual consistency has largely focused on summarization: models are given a source document $g$ (grounding) as input, and output a shorter summary text $y$ capturing the most salient information from $g$ . Summarization is a natural domain to study factual consistency–the source document typically contains all information needed for the summary–but the need for factual consistency is not exclusive to summarization, and more domains should be explored.  \\n\\nHere, we expand this study to the content transfer task. As in summarization, models are given grounding $g$ , and must output text $y$ using information from $g$ . However, $y$ must also fit a context c, which significantly narrows the range of reasonable outputs from the open-ended summarization task, to those that fit the context. Prabhumoye et al. (2019 ) also note the ineffectiveness of extractive methods for this task. This obviates issues of model understanding that underlie factual consistency errors: while summarization models can often copy text directly, ensuring factual consistency regardless of understanding, content transfer models must reformulate information to fit the context.  \\n\\nPrabhumoye et al. 
(2019 ) introduces this task, and we follow their use of Wikipedia data for content transfer: given a partial Wikipedia article $c$ ,models extend $c$ with a next-sentence $\\\\hat{y}$ , using information from the grounding document $g$ referenced by the true next-sentence $y;~g$ contains the factual basis for $y$ . The dataset contains 600K training examples, 6K validation examples, and 50K test examples. Measuring factual ablation on this original dataset is not an option as there is only one piece of grounding per-example, and so we describe two paths to generating evaluation data for this purpose below.  \\n\\nContent transfer is formally defined as the task of generating a next-sentence $\\\\hat{y}$ for context $c$ which is (i) coherent, and fits $c$ (ii) factually and (iii) stylistically, while (iv) only utilizing information from grounding document $g$ . Note here, (iv) requires factual consistency, which is a stronger notion than overall factuality (§ 2.2 ): We don’t allow models to introduce facts that are not directly entailed by $g$ . Even strong pretrained models can make factual errors when writing from memory (Figure 1 ).  \\n\\nCentral to our study is the degree to which each above condition must be met to have an effective model. Conditions i-iii are not absolute constraints. A reasonable generation may be a bit awkward or not perfectly fit $c$ . On the other hand, an effective model must follow condition iv completely. While satisfaction of all of i-iv may be noisy in both the training dataset and tuned models, our approach will focus on addressing this noise for condition iv.',\n",
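The factual-ablation intuition in this chunk is that swapping the true grounding $g$ for a plausible alternative should lower the model's score for the ground-truth continuation $y$. A toy sketch of that comparison, using a crude token-overlap scorer as a stand-in for a conditional LM's log-likelihood (the scorer, sentences, and threshold behavior are all invented for illustration):

```python
def overlap_score(generation: str, grounding: str) -> float:
    """Crude stand-in for log P(y | c, g): the fraction of generation
    tokens supported by the grounding. A real factual-ablation probe
    would compare a conditional LM's likelihood under g vs. g'."""
    gen_tokens = generation.lower().split()
    ground_tokens = set(grounding.lower().split())
    return sum(tok in ground_tokens for tok in gen_tokens) / len(gen_tokens)

y = "the bridge opened in 1937"                                   # ground-truth next sentence
g_true = "construction finished and the bridge opened to traffic in 1937"
g_ablated = "the river is crossed by several ferries"             # relevant fact removed

score_true = overlap_score(y, g_true)
score_ablated = overlap_score(y, g_ablated)
# Factual ablation: a consistent model should prefer y under g_true.
```

The probe's signal is the gap `score_true - score_ablated`; a model that writes `y` from memory regardless of grounding would show little or no gap.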
       "    'original_filename': 'Conf_Paper_Meta_Data_ACL_2022_Annual_Meeting_of_the_Association_for_Computational_Linguistics_with_whole_text.db'}}),\n",
       " (0.6073092615003175,\n",
       "  {'id': 454845641449030464,\n",
       "   'distance': 0.6193163990974426,\n",
       "   'entity': {'paper_id': '6461b9c9d68f896efad43133',\n",
       "    'paper_title': 'Interactive Text-to-SQL Generation Via Editable Step-by-Step Explanations',\n",
       "    'chunk_id': 6,\n",
"    'chunk_text': '# 7 Discussion\\nBoth the quantitative experiments and the user study demonstrate STEPS can significantly improve the accuracy of SQL generation. This is largely attributed to the interaction design, which allows users to precisely pinpoint which part of the SQL is wrong and only regenerates the incorrect clauses rather than the entire SQL query. In contrast, existing approaches do not support expressive ease or error isolation. Users either cannot regenerate new content (e.g., DIY), or can only regenerate the entire query rather than just the erroneous part (e.g., MISP). Ning et al. (2023) showed that this lack of error isolation often introduces new errors, which frustrates users and makes errors hard to fix.  \\n\\nError Analysis. While simple errors are prevalent in SQL generation, our ablation study (Table 4) shows that only fixing simple errors is insufficient, which motivates the design of our hybrid method. Our hybrid method can handle a broad range of errors because users can flexibly correct entities or clauses in a query. This ability helps reduce the difficulty of tasks by dividing complex errors into simpler ones, allowing users to solve them separately.  \\n\\nIn our automated user simulation, STEPS failed in a few cases when the text-to-clause model predicted the wrong clause type. For example, the paraphrased ground truth explanation of one step was: “Ensure that all categories where the total cost of therapy exceeds 1000 are included.” The text-to-clause model predicted a WHERE clause instead of a HAVING clause.  \\n\\nIn the user study, one common challenge arose when multiple tables in the database had the same column name. If users did not look carefully at the database schema, they may have not explicitly indicated the table to be used. That creates an ambiguity for the model.  \\n\\nOther Datasets and Domains. 
Our system should work for any SQL dataset, as our approach is domain-agnostic and covers general SQL structures. For other forms of code, such as WebAPI (Su et al., 2017) and SPARQL (Ngonga Ngomo et al., 2013; Mo et al., 2022), the general idea is applicable, but new models would be needed for (a) code generation, (b) explanation generation, and (c) code correction.\\n\\n# 8 Conclusion\\nThis work presents STEPS, a new interactive approach for text-to-SQL generation. STEPS decomposes a text-to-SQL task into smaller text-to-clause tasks and enables users to validate and refine a generated query via editable explanations. Experiments on four benchmarks and a user study show STEPS can significantly boost the accuracy of end-to-end models by incorporating user feedback. STEPS significantly outperforms three state-of-the-art approaches for interactive SQL generation across all metrics considered.\\n\\n# 9 Limitations\\nOur automated user simulation is an optimistic experiment that does not account for user errors, such as not being able to identify mistakes in the explanation. The simulation was designed to test a scenario in which a user can perfectly identify which step of the explanation is wrong and accurately describe a corrected version in natural language. Creating such a perfect user required the use of the ground truth, both for the identification step and to generate the natural language correction. This simulation is not representative of real-world use. That limitation was the motivation for our study with real users, in which we had actual people use different tools without information about correct answers. As shown in Table 5, the accuracy of the user study is lower than the simulation, but STEPS is still very effective and outperforms other tools. We choose to include the simulation study because it shows the potential for STEPS to make corrections if there is no human error.  
\\n\\nIn this paper, we only evaluate STEPS on single-turn SQL generation. In future work, our approach can be extended to multi-turn SQL generation by incorporating contextual information when editing the natural language explanation.  \\n\\nWhile our approach is designed to be general for SQL generation and potentially other code generation tasks, the current version only supports SQL keywords that appear in the Spider dataset. Like other text-to-SQL datasets, Spider only covers query operations (e.g., SELECT ) and does not cover update operations (e.g., INSERT ) for evaluation convenience. But it would be straightforward to cover unsupported operations by adding new translation rules.  \\n\\nprocedure, potential risks, data usage, and confidentiality. We obtained consent from each user before proceeding with the study. All collected data were anonymized and de-identified to protect the privacy of users.\\n\\n# Acknowledgments\\nThis material is based in part on work supported by an Amazon Research Award, the Australian Research Council through a Discovery Early Career Researcher Award and by the Defense Advanced Research Projects Agency (grant #HR00112290056).',\n",
       "    'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}}),\n",
       " (0.5204809343348678,\n",
       "  {'id': 454845641360425782,\n",
       "   'distance': 0.6537715196609497,\n",
       "   'entity': {'paper_id': '6461b9c9d68f896efad43133',\n",
       "    'paper_title': 'Interactive Text-to-SQL Generation Via Editable Step-by-Step Explanations',\n",
       "    'chunk_id': 1,\n",
       "    'chunk_text': '# 2 Related Work\\n\\n# 2.1 Text-to-SQL Generation\\nNatural language interfaces have long been recognized as a way to expand access to databases ( Hendrix et al. ,1978 ).The construction of several large text-to-SQL datasets, such as WikiSQL ( Zhong et al. ,2017 ) and Spider ( Yu et al. ,2018 ), has enabled the adoption of deep learning models in this task, achieving unprecedented performance in recent years ( Rubin and Berant ,2021 ;Wang et al. ,2020a ;Scholak et al. ,2021 ;Yu et al. ,2020 ;Hwang et al. ,2019 ). Our technique is based on the recent success of neural text-to-SQL models. Unlike existing models that perform end-to-end SQL generation, we propose a new interaction mechanism for users to validate and refine generated queries through step-by-step explanations.  \\n\\nAs the first step to demonstrate the feasibility of our approach, we focus on single-turn SQL generation ( Yu et al. ,2018 ) in this work. There has also been recent work that supports multi-turn SQL generation ( Yu et al. ,2019a ,b;Guo et al. ,2021 ), where a sequence of interdependent queries are expressed in multiple utterances in a dialog. Models designed for multi-turn SQL generation typically need to reason about the dialog context and effectively encode the historical queries ( Wang et al. ,2021 ;Hui et al. ,2021 ;Zhang et al. ,2019 ;Cai and Wan ,2020 ;Wang et al. ,2020b ). Our approach can be extended to support multi-turn SQL generation by initiating separate refinement sessions for individual queries while incorporating the contextual information of previous queries into explanation generation and text-to-clause generation.\\n\\n# 2.2 Interactive Semantic Parsing for SQL\\nRecently, there has been a growing interest in interactive approaches that elicit user feedback to guide SQL generation. Iyer et al. (2017 ) proposed to allow users to flag incorrect queries and continuously retrain the model. Both DIY ( Narechania et al. 
,2021 ) and NaLIR ( Li and Jagadish ,2014a ,b) enable users to select alternative values or subexpressions to fix an incorrect SQL query. PIIA ( Li et al. ,2020 ), MISP ( Yao et al. ,2019 ), and DialSQL ( Gur et al. ,2018 ) proactively ask for user feedback via multiple-choice questions. A common limitation of these methods is that they only solicit feedback in constrained forms, hindering their flexibility and effectiveness in addressing the variability of SQL errors. In contrast, our approach allows more flexible feedback through direct edits to the explanations generated by the model.  \\n\\nThe only work that supports open-ended user feedback in SQL generation is NL-EDIT ( Elgohary et al. ,2021 ). NL-EDIT is trained on SPLASH ( Elgohary et al. ,2020 ), a dataset of SQL errors and user feedback utterances. Given an incorrect query, NL-EDIT allows users to provide a clarification utterance. Based on the utterance, the model generates a sequence of edits to the SQL query. Incorporating feedback expressed in a completely free-text utterance is challenging for two reasons:  \\n\\n  \\nFigure 2: An Overview of Interactive SQL Generation and Refinement with Editable Step-by-Step Explanations  \\n\\n(1) the model needs to infer which part of the SQL query to fix; (2) the model needs to determine what changes are being requested. In contrast, STEPS asks users to directly edit an NL explanation and make corrections to the explanation. Comparing the initial explanation with the user-corrected explanation makes it easier to locate the part of a SQL query that needs to be changed and infer what change to make.  \\n\\nThe idea of SQL decomposition is similar to recent work that decomposes a user question to sub-questions on SPARQL ( Mo et al. ,2022 ). Their approach requires a crowd-sourced dataset to train a question decomposition model. In contrast, our rule-based method generates step-by-step explanations without the need for training a model. 
This also allows our system to map each entity in the explanation to the corresponding SQL element, making it easier for SQL correction (Sec. 3.2 ).\\n\\n# 2.3 Explaining SQL Queries in NL\\nOur approach is also related to prior work that generates NL explanations for SQL queries. Simitsis and Ioannidis (2009 ) argued that databases should “talk back” in human language so that users can verify results. Kokkalis et al. (2012 ) and Koutrika et al. (2010 ) used a graph-based SQL translation approach, where each query is represented as a graph and the explanation is generated by traversing the graph. Elgohary et al. (2021 ,2020 ) employed a template-based explanation approach, where they manually curated 57 templates for explanation generation. These approaches have limited capability to handle arbitrary SQL queries. To address this limitation, we propose a rule-based method to first explain terminal tokens (e.g., operators, keywords) and gradually compose them into a complete explanation based on the derivation rules in the SQL grammar. Another key difference is that none of the existing approaches supports editable explanations for SQL correction, which is a key feature to solicit user feedback in our approach.',\n",
       "    'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}}),\n",
       " (0.4935950667467299,\n",
       "  {'id': 454919307595371694,\n",
       "   'distance': 0.5985562801361084,\n",
       "   'entity': {'paper_id': '63608e5090e50fcafdee1152',\n",
       "    'paper_title': 'Diverse Parallel Data Synthesis for Cross-Database Adaptation of   Text-to-SQL Parsers',\n",
       "    'chunk_id': 2,\n",
       "    'chunk_text': '# 2.2 Translating text of related queries\\nOur next goal is to translate the retrieved $x_{r}$ from being a text for SQL $q_{r}$ to a text $\\\\hat{x}$ for SQL $q$ ,where $q\\\\approx q_{r}$ structurally. However, we do not have a readily labeled dataset to learn a model that translates $x_{r}$ to $\\\\hat{x}$ while being consistent with $q$ . We therefore decompose this task into two steps: 1) A simpler task of masking schema-specific tokens in $x_{r}$ to get a template $x_{r}^{\\\\mathrm{masked}}$ and 2) A conditional text generation model that maps $(x_{r}^{\\\\mathrm{masked}},q)$ to the text $\\\\hat{x}$ consistent with $q$ , by filling the masked positions in $x_{r}^{\\\\mathrm{masked}}$ as per $q$ . We re-purpose $\\\\mathcal{D}_{\\\\mathrm{train}}$ to get indirect supervision for training the text generation model. We now present each step in detail.  \\n\\nfrom different schemas, we modify the tree-editdistance algorithm to ignore the schema names and the database values. The tree-edit-distance is further normalized by the size of the larger tree. We $\\\\{q_{r}\\\\}$ only consider the have a distance of less than $\\\\{q_{r},x_{r}\\\\}$ pairs where the SQLs 0 .1 w.r.t. the SQL $q$ . Within datasets like Spider that span hundreds of schemas, it is often possible to find several SQLs structurally similar to a given SQL $q$ . For example, in Spider we found that $76\\\\%$ of the train SQLs contain at least three zero-distance (structurally identical) neighbours in other schemas. In Figure 2 ,we present more detailed statistics.  \\n\\nMasking the retrieved text Converting the re$\\\\{x_{r}^{\\\\mathrm{masked}}\\\\}$ trieved text queries }is a critical component of R $\\\\{x_{r}\\\\}$ to masked templates EFILL ’s pipeline since irrelevant tokens like references to schema elements of the original database can potentially misguide the text generation module. 
Our initial approach was to mask tokens based on a match of text tokens with schema names and manually refined schema-to-text linked annotations as in Lei et al. (2020 ). However, this approach failed to mask all schema-related terms since their occurrences in natural text often differed significantly from schema names in the database. Table A7 shows some anecdotes. Consequently, we designed a simple frequency-based method of masking that is significantly more effective for our goal of using the masked text to just guide the diversity. For each word that appears in the text queries of the train set, we count the number of distinct databases where that word gets mentioned at least once. For example, common words like $\\\\{^{\\\\prime}{\\\\mathsf{s h o w}}\\\\}$ , ‘what’, ‘list’, ‘order’} get mentioned in more than $90\\\\%$ of the schemas, and domain specific words like {‘countries’, ‘government $^\\\\prime\\\\}$ occur only in text queries of a few schemas. We mask out all the words that appear in less than $50\\\\%$ of the schemas. The words to be masked are replaced by a special token MASK , and consecutive occurrences of MASK are collapsed into a single MASK token. Thus we obtain masked templates minimal information about their original schema. $\\\\{x_{r}^{\\\\mathrm{{masked}}}\\\\}$ }retaining Editing and Filling the masked text Given a masked template $x_{r}^{\\\\mathrm{masked}}$ , and an SQL query $q$ ,we wish to edit and fill the masked portions in $x_{r}^{\\\\mathrm{masked}}$ to make it consistent with the $\\\\operatorname{SQL}q$ . We utilize a conditional text generation model BART ( Lewis et al. ,2020 ) for this purpose. We $\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}_{\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}}$ like first convert $q$ into a pseudo-English representation $q^{\\\\mathrm{Eng}}$ similar to Shu et al. (2021 ), to make it easier for $\\\\boldsymbol{\\\\beta}$ to encode $q$ . 
In addition, we wrap the table, column, or value tokens in $q^{\\\\mathrm{Eng}}$ with special tokens to provide explicit signals to the text generation model $\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}_{\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}}$ that ch tokens are likely to appear in the output text ˆ. Next, we concatenate the tokens in $x_{r}^{\\\\mathrm{masked}}$ and $q^{\\\\mathrm{Eng}}$ for jointly encoding them as which is expected to be consistent with the an input to $\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}_{\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}}$ . The output of $\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}_{\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}}$ ’s decoder is text ${\\\\mathrm{SQL~}}q$ $\\\\hat{x}$ ,.  \\n\\nSince we do not have direct supervision to finetune purposing SQL-Text pairs $\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}_{\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}}$ for this task, we presen $\\\\mathcal{D}_{\\\\mathrm{train}}$ for fine-tuning $(q_{i},x_{i})$ from various schemas B.a method of re$\\\\mathcal{D}_{\\\\mathrm{train}}$ contains $s_{i}$ .A Naïve way to train $\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}_{\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}}$ is to provide $[x_{i}^{\\\\mathrm{{masked}}}|q_{i}^{\\\\mathrm{{Eng}}}]$ |,the concatenation of $x_{i}^{\\\\mathrm{masked}}$ and $q_{i}^{\\\\mathrm{Eng}}$ as an input to the encoder and maximize the likelihood of $x_{i}$ in the decoder’s output. This way the decoder of $\\\\boldsymbol{\\\\beta}$ learns to refill the masked tokens in $x_{i}^{\\\\mathrm{masked}}$ by attending to $q_{i}^{\\\\mathrm{Eng}}$ to recover $x_{i}$ in the output. While useful for learning to refill the masked positions, this from its use during inference in two ways: (i) For a Naïve method of training $\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}_{\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}}$ is mismatched given SQL $q$ , R EFILL might fail to retrieve a similar str cture neighbour of $q_{i}$ from $\\\\mathcal{D}_{\\\\mathrm{train}}$ . 
In such cases, SQL-to-Text generation mode to directly translate Bshould be capable of falling back to pure $q$ into $\\\\hat{x}$ . (ii) During inference, $x_{r}^{\\\\mathrm{masked}}$ and $q$ come from different schemas. However, during Naïve training, the masked text $x_{i}^{\\\\mathrm{masked}}$ and the SQL $q_{i}$ are derived from the same example $(q_{i},x_{i})$ . To Robust address these two limitations, we train manner as follows: (a) For a random one$\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}_{\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}}$ in a more third of t allowing using $q_{i}^{\\\\tilde{\\\\mathrm{Eng}}}$ B. (b) For another one-third, we pass only to learn the filling of the masked tokens train steps we train $\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}_{\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}}$ in the Naïve way, $q_{i}^{\\\\mathrm{Eng}}$ as an input and maximize the likelihood of $x_{i}$ .This ensures that model is capable of generating the text from the $q_{i}^{\\\\mathrm{Eng}}$ alone, if the templates $\\\\boldsymbol{x}_{i}^{\\\\mathrm{{n}}}$ asked are unavailable or noisy. (c) For the remaining onethird, we first retrieve an SQL-Text pair $(q_{j},x_{j})$ ,from a different schema such that the ${\\\\mathrm{SQL~}}q_{j}$ is structurally similar to $q_{i}$ (§ 2.1 ), and the word edit distance between the masked templates $x_{i}^{\\\\mathrm{masked}}$ and $x_{j}^{\\\\mathrm{masked}}$ is also small. We can then replace $x_{i}^{\\\\mathrm{{n}}}$ asked with $x_{j}^{\\\\mathrm{masked}}$ and encode $[x_{j}^{\\\\mathrm{masked}}|q_{i}^{\\\\mathrm{Eng}}]$ as an input to $\\\\boldsymbol{\\\\beta}$ and maximize the likelihood of $x_{i}$ in the decoder’s output. This step makes the training more consistent with the inference, as $x_{j}^{\\\\mathrm{masked}}$ and $q_{i}^{\\\\mathrm{Eng}}$ now come from different schemas. In $\\\\S\\\\,5.4$ , we justify training Robustly compared to Naïve training.',\n",
       "    'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2022_Empirical_Methods_in_Natural_Language_Processing_with_whole_text.db'}}),\n",
       " (0.49339241022940994,\n",
       "  {'id': 454845681760490286,\n",
       "   'distance': 0.6474056243896484,\n",
       "   'entity': {'paper_id': '6535d747939a5f408295c649',\n",
       "    'paper_title': 'Benchmarking and Improving Text-to-SQL Generation under Ambiguity',\n",
       "    'chunk_id': 1,\n",
       "    'chunk_text': '# 2 Background and Related Work\\nA Text-to-SQL model takes as input a question expressed as a natural language text x, and a database schema scomprising of table and column names, and outputs an SQL program ywhich can be executed against the database to answer the user’s question. Figure 1 shows an example. The training data for the task comprises (text, schema, SQL) triplets spanning multiple distinct databases.  \\n\\n  \\nFigure 1: A Text-to-SQL system converts a user question to an SQL query, conditioned on the database schema and/or content.  \\n\\nBenchmarks. Popular benchmarks for the Textto-SQL task are WikiSQL ( Zhong et al. ,2018 )and SPIDER ( Yu et al. ,2018 ). A few others have been proposed recently to capture real-world scenarios, such as KaggleDBQA ( Lee et al. ,2021 ), SEDE ( Hazoom et al. ,2021 ), and EHRSQL ( Lee et al. ,2022 ). They all attach one SQL per text, though some of them mention the problem of ambiguity in real-world datasets. Dr. SPIDER ( Chang et al. ,2023 ), designed to test the robustness of existing models, perturbs either the text or schema of SPIDER but still assigns one SQL per text.  \\n\\nAmbiguity in SQL Although ambiguity has been studied in other fields of NLP ( Pilault et al. ,2023 ;Li et al. ,2022 ;Futeral et al. ,2022 ), it has been unexplored in the context of semantic parsing. Ambiguity in SQL arising from related column names is discussed in ( Wang et al. ,2022 ), but they only consider column ambiguity. Their method of recognizing ambiguous queries depends on labeling words of the text and does not generalize to other kinds of ambiguity. To the best of our discernment, AmbiQT represents the first open benchmark for testing coverage of ambiguous alternatives.  \\n\\nDiverse Decoding. Prior work has critiqued the lack of meaningful diversity in beam-search outputs ( Finkel et al. ,2006 ;Gimpel et al. ,2013 ;Li et al. ,2016 ;Li and Jurafsky ,2016 ). In response, many fixes have been proposed. 
Some proposals attempt to restrict the tokens sampled, using strategies like Nucleus sampling ( Holtzman et al. ,2020 ), Truncated Sampling ( Hewitt et al. ,2022 ), and Typical Sampling ( Meister et al. ,2023 ), while some rely on Template-Based decoding ( Wiseman et al. ,2018 ;Zhang et al. ,2022 ;Fu et al. ,2023 ;Elgohary et al. ,2020 ;Awasthi et al. ,2022 ). A third approach is to generate a prefix with high diversity first, then generate the rest of the sentence with lower diversity. Narayan et al. (2022 ) follow this recipe but focus on incorporating diverse entity orders in text summarization.  \\n\\n<html><body><table><tr><td rowspan=\"2\">Kind of ambiguity</td><td rowspan=\"2\">Count</td><td colspan=\"3\">Example</td></tr><tr><td>Question Text</td><td>SQL #1</td><td>SQL #2</td></tr><tr><td>Column Ambiguity (C)</td><td>1240</td><td>List the ids of all students.</td><td>SELECT roll_number FROM students</td><td>SELECT admission_number FROM students</td></tr><tr><td>Table Ambiguity (T)</td><td>1417</td><td>How many singers do we have?</td><td>SELECT COUNT(*) FROM artist</td><td>SELECT COUNT(*) FROM performer</td></tr><tr><td>Join Ambiguity (J)</td><td>288</td><td>What are the makers and models?</td><td>SELECT maker, model FROM model</td><td>SELECT t2.maker, t1.model FROM model AS t1 JOIN model_maker AS t2 ON t1.model_id = t2.model_id</td></tr><tr><td>Precomputed Aggregates (P)</td><td>101</td><td>Find the average weight for each pet type.</td><td>SELECT AVG(weight), pettype FROM pets GROUP BY pettype</td><td>SELECT avg_weight, pettype FROM pets_weight</td></tr></table></body></html>\\n\\nTable 1: The AmbiQT benchmark. For each question, we list two valid SQL queries as per the schema. The schema is not shown here, but the ambiguity in it can be inferred based on the two SQL queries.\\n\\n# 3 AmbiQT: A Benchmark of Ambiguous Text-to-SQL Conversion\\nAmbiQT is constructed so that each text query has two distinct valid SQL interpretations. 
Motivated by our experience working with real-life databases, we designed AmbiQT to encompass four types of ambiguity. Each entry is designed so that both alternatives have a similar relevance to the question, and a well-calibrated decoding method is expected to rank them close by in their outputs.  \\n\\nWe create AmbiQT by modifying the SPIDER (Yu et al. ,2018 ) dataset, and use ChatGPT ( OpenAI ,2022 ) to aid with the creation. In each case, we modify the schema instead of the text as that provides greater control over the modification process. We explain the kinds of ambiguity in AmbiQT below and portray examples of each in Table 1 .  \\n\\nColumn Ambiguity (C). Unlike the SPIDER benchmark where column names usually appear verbatim in the question text (like born state for the column born_state ), when users unaware of the schema pose a natural question, they introduce column ambiguity ( Wang et al. ,2022 ). For example, “ What is the capacity of O2 Arena? ” could be ambiguous if the schema has separate columns for standing and seating capacity. Likewise, a query on the number of under-nourished children is ambiguous if we have different columns for “under-weight children” and “stunted growth in children”.  \\n\\nTo simulate column ambiguity, for each text $\\mathbf{x}$ , schema s, and SQL y in SPIDER, we prompt ChatGPT to generate two synonyms for each column name of s in a one-shot manner. Appendix A furnishes more details of the prompt. We then modify s by replacing $c$ with two columns $c_{1},c_{2}$ , and we use y to generate two queries $\\mathbf{y}_{1},\\mathbf{y}_{2}$ where all mentions of $c$ are replaced with $c_{1}$ in $\\mathbf{y}_{1}$ and with $c_{2}$ in $\\mathbf{y}_{2}$ . An example appears in the first row of Table 1 . We do not reuse $c$ because the SPIDER dataset often contains column names verbatim in the question, and that would violate our attempt at keeping the two options at similar relevance levels. 
We modify one column at a time and generate up to 3 examples from each original entry.  \\n\\nTable Ambiguity (T). Table name ambiguity is common in databases obtained by integrating multiple data sources, as in web tables ( Cafarella et al. ,2008 ;Pimplikar and Sarawagi ,2012 ). Here again, we prompt ChatGPT to generate two alternate names for each table. We then modify one SQL y to generate two candidates ${\\bf y}_{1},{\\bf y}_{2}$ as shown in Table 1 .  \\n\\nJoin Ambiguity (J). In production databases, a logical table is often vertically partitioned across several tables for efficient clustered access ( Stonebraker et al. ,2019 ). Column names overlapping across tables leads to Join Ambiguity. Suppose we have two tables: (1) person with columns id, name, email_address , and (2) person_details with columns id, postal_address, photo . A question asking for a person’s name and address is ambiguous on whether a JOIN with the person_details is necessary. We expose such ambiguity by modifying the schema as follows.  \\n\\nConsider a $(\\mathbf{x},\\mathbf{s},\\mathbf{y})$ triplet. Suppose y involves selecting two or more columns $c_{1},c_{2},\\ldots$ , not necessarily in the same order, from a table $t$ . Suppose further that $c_{1}$ is not a primary key of $t$ . We create a table called $t\\_c_{1}$ that includes just the primary key $p k_{t}$ of $t$ , and $c_{1}$ . The first alternative $\\mathbf{y}_{1}$ is $\\mathbf{y}$ and the second alternative $\\mathbf{y}_{2}$ uses a join over $t$ and $t\\_c_{1}$ , with everything else staying the same as y.  \\n\\n  \\nFigure 2: Beam Search works well when targeting only one output, but leads to superficial diversity, for example via different grouping and erroneous variants of column names.  \\n\\nPrecomputed Aggregates $(\\mathbf{P})$ . This ambiguity is particularly common in data warehouses such as Data Commons which pre-aggregate certain variables. 
For instance, the “ total rice production ” of a state might refer to the column rice_production of state rather than a sum over it. Text-to-SQL models have a bias toward introducing a sum()...group-by clause every time total appears in the text. The non-aggregated alternative is usually missing in the top-$k$ options. We incorporate this ambiguity as follows.  \\n\\nFor each $(\\mathbf{x},\\mathbf{s},\\mathbf{y})$ , where $\\mathbf{y}$ has at least one aggregate, we construct a new table $t^{\\prime}$ . For each aggregate $\\mathcal{A}$ over column $c$ in y, we add to $t^{\\prime}$ the columns $\\mathcal{A}^{\\prime}\\_c$ for all $\\mathcal{A}^{\\prime}\\,\\in\\,\\{\\mathsf{avg},\\mathsf{sum},\\mathsf{min},\\mathsf{max}\\}$ , and the columns grouped by in y. For count $(\\star)$ , we add a column called number . We get two gold queries, the original y and a second with the group-by replaced by a direct SELECT on $t^{\\prime}$ as shown in the example in Table 1 . We also support aggregates across multiple tables but skip the details here.',\n",
       "    'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}}),\n",
       " (0.4737281882711575,\n",
       "  {'id': 454845641342337844,\n",
       "   'distance': 0.6175075769424438,\n",
       "   'entity': {'paper_id': '6461b9c9d68f896efad43133',\n",
       "    'paper_title': 'Interactive Text-to-SQL Generation Via Editable Step-by-Step Explanations',\n",
       "    'chunk_id': 0,\n",
       "    'chunk_text': '# Interactive Text-to-SQL Generation via Editable Step-by-Step Explanations\\nYuan Tian 1 , Zheng Zhang 2 , Zheng $\\\\mathbf{Ning^{2}}$ ,Toby Jia-Jun $\\\\mathbf{Li}^{2}$ ,Jonathan K. Kummerfeld 3 , and Tianyi Zhang 1 Purdue University 1 , University of Notre Dame 2 , The University of Sydney 3  , , , , ,\\n\\n# Abstract\\nRelational databases play an important role in business, science, and more. However, many users cannot fully unleash the analytical power of relational databases, because they are not familiar with database languages such as SQL. Many techniques have been proposed to automatically generate SQL from natural language, but they suffer from two issues: (1) they still make many mistakes, particularly for complex queries, and (2) they do not provide a flexible way for non-expert users to validate and refine incorrect queries. To address these issues, we introduce a new interaction mechanism that allows users to directly edit a stepby-step explanation of a query to fix errors. Our experiments on multiple datasets, as well as a user study with 24 participants, demonstrate that our approach can achieve better performance than multiple SOTA approaches. Our code and datasets are available at https: //github.com/magic-YuanTian/STEPS .\\n\\n# 1 Introduction\\nNatural language interfaces significantly lower the barrier to accessing databases and performing data analytics tasks for users who are not familiar with database query languages. Many approaches have been proposed for generating SQL queries from natural language ( Popescu et al. ,2004 ;Giordani and Moschitti ,2012 ;Rubin and Berant ,2021 ;Scholak et al. ,2021 ;Zhao et al. ,2022 ). Using recent large language models, systems have reached $86.6\\\\%$ execution accuracy ( Gao et al. ,2023 ) on the Spider benchmark ( Yu et al. ,2018 ).  \\n\\nHowever, the rate of improvement has slowed, with a gain of only $10\\\\%$ since mid-2021. 
This is partly due to the inherent ambiguity of natural language and the complex structure of SQL queries (e.g., nested or joined queries). Thus, it is challenging to generate a fully correct query in one step, especially for complex tasks ( Yao et al. ,2019 ).  \\n\\n  \\nFigure 1: Refining a SQL query by directly editing a step-by-step explanation.  \\n\\nThere has been growing interest in developing “human-in-the-loop” approaches that elicit user feedback to guide SQL generation. However, most approaches only support feedback in constrained forms, e.g., answering multiple-choice questions (MISP, PIIA, DialSQL Yao et al. ,2019 ;Li et al. ,2020 ;Gur et al. ,2018 ), changing SQL elements in a drop-down menu (DIY, Narechania et al. ,2021 ), etc. Such constrained feedback is not sufficient to fix many complex errors in real-world SQL tasks. One exception is NL-EDIT ( Elgohary et al. ,2021 ), which allows users to provide feedback as new utterances. However, since the feedback is open-ended, interpreting it can be just as hard as processing the original request.  \\n\\nIn this paper, we seek to strike a balance between constrained feedback and open-ended feedback by proposing a new interaction mechanism: editable step-by-step explanations. Fig. 1 illustrates our idea. This mechanism consists of three core components: (a) a text-to-SQL model, (b) an explanation generation method, and (c) a SQL correction model. Our key insight is that using a step-by-step explanation as the basis to suggest fixes allows users to precisely specify where the error is and how to fix it via direct edits. This not only saves users’ time but also makes it easier for the model to locate the error and apply fixes.  \\n\\nBased on this idea, we implemented an interactive SQL generation and refinement system called STEPS. STEPS adopts a rule-based method to generate step-by-step explanations and uses a hybrid rule/neural method to convert a user-corrected explanation back to a SQL query.  
\\n\\nAn evaluation with a simulated user on Spider ( Yu et al. ,2018 ) shows that STEPS can achieve $97.9\\\\%$ exact set match accuracy, outperforming prior interactive text-to-SQL systems—MISP, DIY, and NL-EDIT—by $33.5\\\\%$ , $33.2\\\\%$ , and $31.3\\\\%$ respectively. We further evaluate STEPS on other datasets, including Spider-DK ( Gan et al. ,2021b ), Spider-Syn ( Gan et al. ,2021a ), and WikiSQL ( Zhong et al. ,2017 ). STEPS consistently achieves at least $96\\\\%$ exact set match accuracy and execution accuracy across all datasets.  \\n\\nFinally, we conducted a within-subjects user study with 24 real users. We found that within the same amount of time, STEPS helped users complete almost 2X and 4X more tasks correctly than DIY and MISP respectively, with significantly higher self-reported confidence and lower mental load.  \\n\\nThis work makes the following contributions: (1) we propose a new interaction mechanism for the text-to-SQL task; (2) we develop an interactive text-to-SQL system based on the new interaction mechanism and a new training method for SQL correction; (3) we conduct a comprehensive evaluation with both simulated and real users and demonstrate its effectiveness over state-of-the-art interactive systems. Our dataset and code are publicly available.',\n",
       "    'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}}),\n",
       " (0.4691419979070297,\n",
       "  {'id': 454845641378251576,\n",
       "   'distance': 0.6278348565101624,\n",
       "   'entity': {'paper_id': '6461b9c9d68f896efad43133',\n",
       "    'paper_title': 'Interactive Text-to-SQL Generation Via Editable Step-by-Step Explanations',\n",
       "    'chunk_id': 2,\n",
       "    'chunk_text': '# 3 Approach\\nFig. 2 provides an overview of S TEPS . Given a natural language (NL) question, S TEPS invokes a text-to-SQL model to generate an initial SQL query. Then, it decomposes the generated SQL query into individual query clauses and re-orders them based on their execution order. Each clause is then translated into an NL description of the underlying data operation, which is then used to form a step-by-step explanation. By reading the NL explanation along with the query result, users can easily understand the behavior of the generated query and locate any errors, even if they are unfamiliar with SQL.  \\n\\nIf one step is incorrect, users can directly edit its explanation to specify the correct behavior. S TEPS will then regenerate the clause based on the usercorrected explanation and update the SQL query, rather than regenerate the entire query from scratch. If multiple steps are incorrect, the user can add, remove, and modify all steps as needed.\\n\\n# 3.1 Rule-based SQL Explanation\\nTo generate explanations for arbitrarily complex SQL queries (e.g., a query with nested subqueries), we design a rule-based method to first decompose a query into individual clauses. Specifically, S TEPS first parses a SQL query to its Abstract Syntax Tree (AST) based on the SQL grammar in Table 6 . Then, it traverses the AST to identify the subtree of each clause while preserving their hierarchical relations.  \\n\\nGiven the subtree of a clause, S TEPS performs an in-order traversal and translates each leaf node (i.e., a terminal token in the grammar) to the corresponding NL description based on a set of translation rules (see Table 7 in the appendices). For example, SELECT is translated to “Return”, and Order By is translated to “Sort the records based on.” S TEPS concatenates these descriptions to form a complete sentence as the explanation of the clause.  
\\n\\n  \\nFigure 3: An example of the explanation generation process  \\n\\nSince SQL engines follow a specific order to execute individual clauses in a query 2 , STEPS further reorders the clause explanations to reflect their execution order. We believe this is a more faithful representation of the query behavior and thus can help users better understand the underlying data operations, compared with rendering them based on the syntactic order of clauses. Fig. 3 shows an example translation.\\n\\n# 3.2 Text-to-Clause Generation\\nUsers make edits to the explanation produced by our system to make it consistent with their goal. Given these edits, STEPS uses a hybrid method to generate the corresponding SQL clause. For simple edits, such as replacing a column name, STEPS directly edits the original clause to fix the error using three direct transformation rules $(\\\\S\\\\ 3.2.1)$. For more complex edits, STEPS uses a neural text-to-clause model to generate the clause based on the user-corrected explanation $(\\\\S\\\\ 3.2.2)$.  \\n\\nThe hybrid method is inspired by the findings from our recent study (Ning et al., 2023). Specifically, a large portion of SQL generation errors are simple errors (e.g., incorrect column names and operators), which can be fixed with small edits. After SQL decomposition by our approach, many larger errors are further decomposed into a set of simpler errors, contained within separate clauses. Thus, it is not necessary to regenerate the entire clause to fix such errors. Furthermore, compared to using a large model, direct transformation is more computationally efficient. 
Our experiment shows that direct transformation is 22K times faster than the text-to-clause model (Table 4).\\n\\n# Algorithm 1: Direct transformation\\nInput: the original explanation $e_{o}$; the new edited explanation $e_{n}$; the original SQL clause $s$  \\nOutput: the updated SQL clause  \\n1 $C_{o}\\\\gets\\\\mathrm{CHUNK}(e_{o})$  \\n2 $C_{n}\\\\gets\\\\mathrm{CHUNK}(e_{n})$  \\n3 foreach $(c_{o},\\\\,c_{n})$ in $\\\\mathrm{ALIGN}(C_{o},C_{n})$ do  \\n4 // Replace  \\n5 if BOTHCOLUMN $(c_{o},\\\\,c_{n})$ or  \\n6 BOTHTABLE $(c_{o},\\\\,c_{n})$ or  \\n7 BOTHLITERAL $(c_{o},\\\\,c_{n})$ then  \\n8 $s\\\\gets s.\\\\mathrm{REPLACE}(c_{o},c_{n})$  \\n9 // Add  \\n10 else if $c_{o}$ is $\\\\emptyset$ and ISCOLUMN $(c_{n})$ then  \\n11 if $s$.STARTWITH(\"Select\") then  \\n12 $s\\\\gets s.\\\\mathrm{APPEND}(c_{n})$  \\n13 // Remove  \\n14 else if $c_{n}$ is $\\\\emptyset$ and ISCOLUMN $(c_{o})$ then  \\n15 $s\\\\gets s.\\\\mathrm{REMOVE}(c_{o})$  \\n16 end  \\n17 return $s$\\n\\n# 3.2.1 Direct Transformation\\nWe define three types of atomic edits that can be directly converted into SQL edits by STEPS: (1) replacing a column name, a table name, or a literal value (i.e., string, number), (2) adding a new column name in the explanation of a SELECT clause, and (3) removing a column name.  \\n\\nAlgorithm 1 describes our direct transformation algorithm. After chunking the text (Lines 1-2), STEPS aligns and compares the chunks in the original explanation with those in the user-corrected explanation, using the Needleman and Wunsch (1970) algorithm (Line 3). This allows STEPS to detect any replacements (Line 4), additions (Line 9), or removals (Line 13) of database entities in the explanation. Based on this information, STEPS automatically edits the corresponding SQL clause without calling a neural model (Lines 8, 12, 15). More details of this algorithm can be found in Appendix E.',\n",
       "    'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}}),\n",
       " (0.46136964934255914,\n",
       "  {'id': 454919258070598578,\n",
       "   'distance': 0.6086536049842834,\n",
       "   'entity': {'paper_id': '628ef0495aee126c0f82db30',\n",
       "    'paper_title': 'PLOG: Table-to-Logic Pretraining for Logical Table-to-Text Generation.',\n",
       "    'chunk_id': 1,\n",
"    'chunk_text': '# 1 Introduction\\nTable-to-text generation is a sub-task of data-to-text generation, aiming to generate natural language descriptions from structured tables. There are two steps to perform table-to-text generation: content planning (to select table contents and determine the plan to describe them) and surface realization (to realize the plan into fluent natural language). Traditional table-to-text systems take a pipeline manner, to complete the two procedures with separate modules (Kukich, 1983; McKeown, 1985). Recent works have shown the advantage of using a neural encoder-decoder model to directly generate sentences from the tables, which shows the strong capability to produce fluent and natural generations (Wiseman et al., 2017; Nie et al., 2018; Puduppully et al., 2019b). Researchers have also attempted to finetune pretrained language models such as BART (Lewis et al., 2020) and T5 (Raffel et al., 2020) on downstream table-to-text tasks and achieve remarkable success on a broad range of benchmarks (Xie et al., 2022; Kale and Rastogi, 2020).  \\n\\nPrevious studies have mainly focused on surface-level generation, i.e. generating plain restatements of table records with little logical inference (Wiseman et al., 2017; Liu et al., 2018; Puduppully et al., 2019a, b). Recently, logical table-to-text generation (Chen et al., 2020a), i.e., generating textual descriptions that require logical reasoning over table records, has attracted increasing attention. Logical table-to-text generation poses a new challenge on content planning, requiring models to perform logical inference to derive facts from surface-level table records. End-to-end neural models often suffer from low logical fidelity on this task, i.e. the generated sentences are not logically entailed by the tables even showing reasonable fluency (Chen et al., 2020a, 2021). There are two reasons for the low fidelity. 
(1) Directly learning logical inference knowledge from table-text pairs is too difficult for neural models because of the ambiguity and diversity of natural language references. (2) The amount of such paired data is limited because of the labor-intensive annotation work, which further limits the performance of neural models.  \\n\\nTo achieve high-fidelity of logical-level generation, Chen et al. (2020b) have attempted to annotate logical forms to guide the text generation and proposed a LOGIC2TEXT dataset. With logical forms as mediators conveying accurate logical-level facts, models can just focus on surface realization from associated logical forms and achieve high fidelity. However, annotating pairs of logical forms requires intensive human efforts. Moreover, generating from a self-contained logical form is actually a different task from table-to-text generation. Prior studies on this dataset (Liu et al., 2021a; Shu et al., 2021; Xie et al., 2022) mostly focus on converting the logical forms into texts.  \\n\\nInspired by this, we propose a Pre-trained LOgical Form Generator (PLOG) to achieve faithful logical table-to-text. Specifically, PLOG is first pre-trained on a large-scale synthetic corpus of table-to-logical-form generation (table-to-logic) to learn how to generate accurate logical forms from tables, then fine-tuned on downstream table-to-text tasks to transfer the logical inference knowledge learned from pre-training to text generation. 
Our insights are threefold: (i) unlike natural language sentences, logical forms are formally defined with unambiguous semantics, hence it is much easier for models to acquire the logical inference knowledge via learning from logical form generation; (ii) via pre-training on large amounts of logical form generation data, the model can better understand the table and organize the logical-level content planning, leading to faithful table-to-text generation; (iii) it is viable to collect large-scale logical form corpora via rule-based search over tables without the efforts of human annotators. In this framework, logical forms can be regarded as an intermediate meaning representation to bridge the gap between logical planning and surface realization, while we do not need explicit logical forms when performing the downstream task.  \\n\\nTo achieve smooth knowledge transfer, we formulate the pre-training task in the same sequence-to-sequence generation way with the downstream table-to-text. We adopt a strong pre-trained language model T5 as the backbone model. To evaluate our method, we consider two typical scenarios of current table-to-text generation tasks: uncontrolled and controlled generation. For uncontrolled generation, we adopt the LOGICNLG task which requires generating logical descriptions only based on the table, and imposes an additional challenge on content selection. Inspired by ToTTo (Parikh et al., 2020) and HiTab (Cheng et al., 2021), we further consider controlled generation, a recently popular task formulation. In this task setting, additional control features such as highlighted cells in the tables are explicitly specified to guide the topic of generation and narrow down the scope of content selection. Because most examples of ToTTo and HiTab do not involve logical reasoning, we reformulate the LOGIC2TEXT dataset into a new CONTrolled LOGical Natural Language Generation (CONTLOG) task for our evaluation. 
We detect highlighted cells via execution-based search with their annotated logical forms. For each dataset, we collect large amounts of logical forms from the tables in the training data via an execution-based search, where the validity of them can be fully guaranteed. To evaluate the fidelity of generated texts, we mainly adopt two state-of-the-art Table Fact Verification (Chen et al., 2019) models, TAPEX (Liu et al., 2021b) and TAPAS (Eisenschlos et al., 2020), to evaluate whether the texts are entailed by the input tables. Experimental results on both LOGICNLG and CONTLOG demonstrate that PLOG outperforms the baseline T5 by a large margin in terms of logical fidelity. In particular, PLOG improves the fidelity accuracy (evaluated by TAPEX) by $9.3\\\\%$ and $9.2\\\\%$ on LOGICNLG and CONTLOG, respectively. Human evaluation and case studies further demonstrate the effectiveness of our pretraining framework. In addition, the results of table-to-logic pretraining demonstrate that the pretraining task indeed contributes to more accurate logical inference. We will make our code publicly available at https://github.com/Aolius/logic-pretraining.',\n",
       "    'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2022_Empirical_Methods_in_Natural_Language_Processing_with_whole_text.db'}}),\n",
       " (0.4288286273832226,\n",
       "  {'id': 454846633586283990,\n",
       "   'distance': 0.6550747752189636,\n",
       "   'entity': {'paper_id': '64a29654d68f896efa29af31',\n",
       "    'paper_title': 'Constraint Reasoning Embedded Structured Prediction.',\n",
       "    'chunk_id': 10,\n",
       "    'chunk_text': '# 5.3 Text2SQL Generation\\nTask Definition. Formatted data such as travel records and stock market transactions are stored in relational databases. Currently, accessing the database requires a data scientist who masters the SQL query language. Our task is to automatically synthesize SQL queries from natural language sentences using machine learning. Compared with the data expert approach, SQL query generation requires deeper reasoning across the structure of the database, the semantics of the structured query language, and the understanding of natural language. As shown in Figure 11, the input of the text2SQL generation is a sentence that describes the query in natural language and the table headers in the relational database. The output is a SQL query with the following structure:  \\n\\nSELECT agg-op sel-col WHERE (cond-col cond-op cond-val) AND ...  \\n\\nHere, SELECT and WHERE are keywords in the SQL language. What we need to predict are: (1) the aggregation operator $\\\\mathsf{a g g-o p}$ , which chooses among the set {empty, COUNT, MIN, MAX, SUM, AVG }; (2) the column name in selection sel-col and (3) the column name in condition cond-col , both of which are chosen from the table headers; (4) the conditional operator cond-op , which is in $\\\\{=,<,>\\\\}$ ; (5) the conditional value cond-val , which is assumed to be a sub-sequence of the given query. Here, one bracket pair () represents one conditional statement. The SQL query may have multiple conditions, which are denoted above by “ ... ”. Figure 11 displays this SQL query:\\n\\n# SELECT COUNT \"School\" WHERE \"No.\" = \"3\"\\nHere agg-op is COUNT ;sel-col is “school”, which is a column name from the table headers. One cond-col is “No.”, which also comes from the table headers. The cond-op is “=”. The cond-val is “3”, which we assume is from the input query. This example has one condition but multiple conditions are allowed.  \\n\\nDefinition of Constraints. 
Existing generative neural models for this task are not guaranteed to generate a query that follows the grammar of a SQL query. To avoid grammar violations, we compile a set of common SQL grammars as constraints into the Core-Sp module. The Core-Sp module will ensure that all the generated SQL queries follow the grammatical constraints. Our constraints are defined on the operators, namely the conditional operator cond-op and the aggregation operator agg-op. The domains of these operators are dependent upon the data types of the entities (namely, cond-col and sel-col) they operate on. Consider the previous example. The agg-op can only take values between $\\\\{\\\\mathrm{empty,~COUNT}\\\\}$, because the sel-col is “school”, which is of the string type. More precisely, let $s$ be a column header (the value of sel-col or cond-col). We define $F_{a}(s)$ as  \\n\\nInput Table:   \\n\\n\\n<html><body><table><tr><td></td><td>Player</td><td>No.</td><td>Position</td><td>School</td></tr><tr><td>0</td><td>Antonio</td><td>21</td><td>Guard-Forward</td><td>Duke</td></tr><tr><td>1</td><td>Voshon</td><td>2</td><td>Guard</td><td>Minnesota</td></tr><tr><td>2</td><td>Marin</td><td>3</td><td>Guard-Forward</td><td>Butler CC</td></tr></table></body></html>\\n\\n# Input Query:\\nHow many schools did player number 3 play at?\\n\\n# Output SQL Query:\\nFigure 11: An example for the Text2SQL generation task. The input is the text query “How many schools did player number 3 play at?” and the table header “Player, No., Position, School” from the relational database. The output should be the SQL query: SELECT COUNT \"School\" WHERE \"No.\" = \"3\".  \\n\\nthe set of aggregation operators agg-op that can be associated with $s$, and $F_{c}(s)$ as the set of condition operators cond-op that can be associated with $s$. 
That is:  \\n\\n$$\\nF_{a}(s)=\\\\left\\\\{\\\\begin{array}{l l}{\\\\{\\\\mathrm{empty,~COUNT,~MIN,~MAX,~SUM,~AVG}\\\\}}&{\\\\mathrm{if~}s\\\\mathrm{~is~of~numeric~type}}\\\\\\\\ {\\\\{\\\\mathrm{empty,~COUNT}\\\\}}&{\\\\mathrm{if~}s\\\\mathrm{~is~of~string~type}}\\\\end{array}\\\\right.\\n$$  \\n\\n$$\\nF_{c}(s)=\\\\left\\\\{\\\\begin{array}{l l}{\\\\{=,<,>\\\\}}&{\\\\mathrm{if~}s\\\\mathrm{~is~of~numeric~type}}\\\\\\\\ {\\\\{=\\\\}}&{\\\\mathrm{if~}s\\\\mathrm{~is~of~string~type}}\\\\end{array}\\\\right.\\n$$  \\n\\nWe also introduce datatype constraints, which are defined as:  \\n\\n$$\\n\\\\begin{array}{r l}&{\\\\mathtt{sel{-}col}=s\\\\Rightarrow\\\\mathtt{agg{-}op}\\\\in F_{a}(s),}\\\\\\\\ &{\\\\mathtt{cond{-}col}=s\\\\Rightarrow\\\\mathtt{cond{-}op}\\\\in F_{c}(s).}\\\\end{array}\\n$$  \\n\\nModel Structure. We embed the Core-Sp module into SQLova (Hwang et al., 2019), the state-of-the-art neural network for text2SQL generation. SQLova has a sequence-to-sequence architecture. It first encodes a natural language sentence and the table headers into a high-dimensional vector. Then the decoder of SQLova decodes the hidden representation into the predictions of various entities in the SQL query. SQLova first determines the number of conditions in the SQL query and then fills in the (cond-col, cond-op, cond-val) for each condition. The operators agg-op, cond-op are predicted as a classification task from a fixed set of operators. Column names cond-col, sel-col are predicted from the set of table headers in the relational database. The cond-val is predicted by a pointer neural network which points at a span of the input natural language sentence. The selected span of the query is used as the cond-val (Dong and Lapata, 2018).  \\n\\nMDD Construction. 
The associated MDD that encodes the constraints for text2SQL generation is similar to the MDD for if-then program synthesis. The MDD is split into layers and every two layers form a group. One two-layer group is used to enforce constraints on an operator-column name pair. The operator-column name pair can be $\\\\mathsf{a g g-o p}$ and sel-col ,or can be cond-op and cond-col . Note that there can be only one group of $\\\\mathsf{a g g-o p}$ and sel-col and more than one group of cond-op and cond-col . In the first layer of the group, the column name is determined. In the second layer, the invalid operators are ruled out based on the type of the column name selected in the first layer. The two-layer group is copied several times because the SQL query can contain multiple conditions.  \\n\\nConstraint Reasoning Embedded Structured Prediction',\n",
       "    'original_filename': 'Journal_Paper_Meta_Data_Journal_of_Machine_Learning_Research_with_whole_text.db'}}),\n",
       " (0.3934155918227106,\n",
       "  {'id': 454848078959234800,\n",
       "   'distance': 0.621777355670929,\n",
       "   'entity': {'paper_id': '633ba44790e50fcafdfe4af3',\n",
       "    'paper_title': 'Calibrating Sequence likelihood Improves Conditional Language Generation',\n",
       "    'chunk_id': 1,\n",
"    'chunk_text': '# 1 Introduction\\nConditional language generation aims to generate natural language text based on input context, and includes many useful and hard tasks such as abstractive summarization (Mani, 2001; Nenkova and McKeown, 2011), generative question answering (Bajaj et al., 2016), question generation (Zhou et al., 2017) and data-to-text (Wiseman et al., 2017; Gardent et al., 2017) tasks. Pretraining large Transformer encoder-decoder models and fine-tuning them on downstream tasks is the common paradigm to address these tasks (Raffel et al., 2020; Lewis et al., 2019; Tay et al., 2022; Zhang et al., 2019a).  \\n\\nConditional language generation tasks are modeled by learning the probability of a target sequence $\\\\mathbf{y}$ given a context sequence $\\\\mathbf{x}$. Since directly modeling sequence probability $P(\\\\mathbf{y}|\\\\mathbf{x})$ over all possible generated text sequences is intractable, the canonical solution is to auto-regressively factor the probability and share the parameters at all token prediction steps as $P_{\\\\theta}(\\\\mathbf{y}|\\\\mathbf{x})=\\\\prod_{t=0}^{l}P_{\\\\theta}(y^{t}|y^{0}...y^{t-1},\\\\mathbf{x})$, where $l$ is the sequence length. These models are often trained with maximum likelihood estimation (MLE) over observed target sequences. The learning objective thus becomes $L=\\\\sum_{i}^{N}-\\\\log(P_{\\\\theta}(\\\\mathbf{y}_{i}|\\\\mathbf{x}_{i}))=\\\\sum_{i}^{N}\\\\sum_{t=0}^{l}-\\\\log(P_{\\\\theta}(y_{i}^{t}|y_{i}^{0}...y_{i}^{t-1},\\\\mathbf{x}_{i}))$, where $N$ is the number of training instances. It is also referred to as next token prediction loss as it is mathematically equivalent.  
\\n\\nIn the ideal setting of MLE training, a large number of target sequences are observed for each context, and the relative frequencies of output sequences can calibrate the assigned model probabilities. However, in practice most language generation training datasets have only a single target sequence given the context. While the subsequent MLE trained models learn to assign relatively high probability to plausible sequences, they lack the direct supervision to compare such sequences, and solely rely on models’ generalization capability. We refer to this phenomenon as models’ sequence likelihood not being calibrated. Prior works (Liu and Liu, 2021; Liu et al., 2022) have shown that the correlation between sequence probability and its quality for MLE trained models can be low. Liu et al. (2022) attributed this similarly as the deterministic (one-point) target distribution problem. Exposure bias (Ranzato et al., 2016) further aggravates the problem, as sequence likelihood estimation is noisier when models’ decoded sequences shift from exposed training data distribution.  \\n\\nFigure 1: Calibrating sequence likelihood improves language generation across model scales. Scores are averaged ROUGE across 4 datasets ( $R_{m}$ in subsection 3.2)  \\n\\nMany effective heuristics have been proposed during training and decoding to combat the problem of uncalibrated sequence likelihood. Label smoothing (Szegedy et al., 2016) prevents the network from becoming over-confident towards the observed target. This is particularly necessary in language generation, since the gold target represents just one of many possibilities. It has been observed that increasing the number of decoding candidates past a certain point leads to worse quality for beam search decoding (Yang et al., 2018; Koehn and Knowles, 2017) and sampling (Adiwardana et al., 2020). 
An optimal number of decoding candidates is often determined empirically by decoding models on the validation set and measuring their performance. Using length normalization is also essential for beam search decoding (Wu et al., 2016) and sampling (Adiwardana et al., 2020) as models tend to underestimate sequence likelihood of longer sentences. Repetition is another common failure mode when models overestimate the probability of repeated sequences (Holtzman et al., 2019). Trigram blocking (Paulus et al., 2018) and nucleus sampling (Holtzman et al., 2020) have been used to interrupt repeating sequences. These techniques are pervasive and often the default in modern Transformer libraries (Wolf et al., 2020; Lewis et al., 2019; Raffel et al., 2020; Zhang et al., 2019a).  \\n\\nSince the lack of observed target sequences in MLE training is the root problem, solutions involving learning with multiple sequence candidates have been proposed to directly address it. They can be loosely put in three categories: (1) reinforcement learning with sequence-level rewards (Paulus et al., 2018; Ziegler et al., 2019; Stiennon et al., 2020); (2) two-stage systems that generate and rerank candidates (Liu and Liu, 2021; Ravaut et al., 2022b; Liu et al., 2022); and (3) multi-task learning with sequence-level losses (Edunov et al., 2018; Liu et al., 2022). Refer to Related Works (section 4) for a more comprehensive discussion.  \\n\\nIn this paper, we propose to first decode candidates from a fine-tuned model on its own training dataset, and then continue training the model with a new objective. The new objective aims to align candidates’ sequence likelihoods according to their similarities to the target sequence in the model’s latent space. We refer to this process as sequence likelihood calibration (SLiC). Our approach is related to multi-task learning with sequence-level losses in Liu et al. (2022). 
However, we propose a simple yet effective recipe that eliminates decoding heuristics and doesn’t risk directly optimizing the same metrics that are used to report text generation quality. Unlike reinforcement learning, it is a one-time offline process that avoids costly online decoding processes. Also, when compared to two-stage reranking systems, it doesn’t require a separate reranking model that incurs additional complexity and compute. As depicted in Figure 1, our calibration stage naturally extends the current paradigm of pretraining and fine-tuning, and we show that calibrated models have strong improvements over fine-tuned-only models across model sizes.  \\n\\nOur main contributions include:  \\n\\n• Proposed a sequence likelihood calibration (SLiC) stage that consistently improves model quality, exceeding or matching state-of-the-art results on abstractive summarization, generative question answering, question generation and data-to-text generation tasks.  \\n\\n• Proposed a novel calibration similarity metric between model decodes and targets measured in the model’s latent space rather than resorting to external metrics or human feedback. • Demonstrated that SLiC eliminates the need for popular decoding heuristics, such as beam size optimization, length normalization and repetition prevention for the calibrated models. • Demonstrated that SLiC has persistent significant benefits on model performance even as the number of model parameters scales up. Under the same inference budget, smaller calibrated models might outperform larger counterparts by decoding more candidates.',\n",
       "    'original_filename': 'Conf_Paper_Meta_Data_ICLR_2023_with_whole_text.db'}}),\n",
       " (0.38467183690974394,\n",
       "  {'id': 454845489174282794,\n",
       "   'distance': 0.632977306842804,\n",
       "   'entity': {'paper_id': '644744fb71ac66d2cbf9b886',\n",
       "    'paper_title': 'A Lightweight Constrained Generation Alternative for Query-focused Summarization',\n",
       "    'chunk_id': 1,\n",
"    'chunk_text': '# 2 Related Work\\nQuery-focused Summarization: To generate a query-focused summary, several studies used an additional query-attention mechanism. QR-BERTSUM-TL [13] incorporates query relevance scores into a pre-trained summarization model. Su et al. [29] propose merging the representation of an answer span predicted by a separate QA model into the Seq2Seq model’s training and inference process to enforce the summary’s coherence w.r.t. the query. QSG Transformer [23] suggests using a separate graph neural network model to learn per-token representations and fuse them to the Seq2Seq model to effectively generate a QFS. These mechanisms can be viewed as enforcing soft semantic constraints during the generation process, and require additional modules and parameters to function effectively. We opt for a different approach, i.e. explicitly enforcing lexical constraints during the generation process, without the additional machinery that is necessary to handle the soft semantic constraints.  \\n\\nConstrained Generation (or Conditional Generation) is a family of natural language generation (NLG) methods that aim to generate natural language including/excluding a set of specific words, i.e. lexical constraints. The NLG domain recipe leverages pre-trained large language models (LLM) finetuned on specific datasets [7]. However, as pointed out by Lu et al. [18], such models only finetuned in an end-to-end manner do not learn to follow the underlying constraints reliably even when supervised with large amounts of training examples. Therefore, a line of works [1, 10, 17, 18] in constrained generation proposes to explicitly modify the likelihood of next word prediction in the generation stage, such that the predefined lexical constraints can be better satisfied.',\n",
       "    'original_filename': 'Conf_Paper_Meta_Data_SIGIR2023_with_whole_text.db'}}),\n",
       " (0.343529166530581,\n",
       "  {'id': 454919258054345648,\n",
       "   'distance': 0.6017794013023376,\n",
       "   'entity': {'paper_id': '628ef0495aee126c0f82db30',\n",
       "    'paper_title': 'PLOG: Table-to-Logic Pretraining for Logical Table-to-Text Generation.',\n",
       "    'chunk_id': 0,\n",
"    'chunk_text': '# PLOG: Table-to-Logic Pretraining for Logical Table-to-Text Generation\\nAo Liu 1 , Haoyu Dong 2 , Naoaki Okazaki 1 , Shi Han 2 , Dongmei Zhang 2 1 Tokyo Institute of Technology, 2 Microsoft Research   \\nliu.ao@nlp.c.titech.ac.jp , {hadong,shihan,dongmeiz}@microsoft.com\\n\\n# Abstract\\nLogical table-to-text generation is a task that involves generating logically faithful sentences from tables, which requires models to derive logical-level facts from table records via logical inference. It raises a new challenge on the logical-level content planning of table-to-text models. However, directly learning the logical inference knowledge from table-text pairs is very difficult for neural models because of the ambiguity of natural language and the scarcity of parallel data. Hence even large-scale pre-trained language models present low logical fidelity on logical table-to-text. In this work, we propose a PLOG (Pretrained Logical Form Generator) framework to improve the generation fidelity. Specifically, PLOG is first pretrained on a table-to-logic-form generation (table-to-logic) task, then finetuned on downstream table-to-text tasks. The formal definition of logical forms enables us to collect large amounts of accurate logical forms from tables without human annotation. In addition, PLOG can learn logical inference from table-logic pairs much more definitely than from table-text pairs. To evaluate our model, we further collect a controlled logical table-to-text dataset CONTLOG based on an existing dataset. On two benchmarks, LOGICNLG and CONTLOG, PLOG outperforms strong baselines by a large margin on the logical fidelity, demonstrating the effectiveness of table-to-logic pretraining.',\n",
       "    'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2022_Empirical_Methods_in_Natural_Language_Processing_with_whole_text.db'}}),\n",
       " (0.33069925137091194,\n",
       "  {'id': 454847842914282426,\n",
       "   'distance': 0.6255221366882324,\n",
       "   'entity': {'paper_id': '646edc9cd68f896efaddab9b',\n",
       "    'paper_title': 'Faithful Low-Resource Data-to-Text Generation Through Cycle Training.',\n",
       "    'chunk_id': 0,\n",
"    'chunk_text': '# Faithful Low-Resource Data-to-Text Generation through Cycle Training\\nZhuoer Wang †1 Marcus Collins ⋆2 Nikhita Vedula ⋆2   \\nSimone Filice 2 Shervin Malmasi 2 Oleg Rokhlenko 2  \\n\\n1 Texas A&M University 2 Amazon  {collmr,veduln,filicesf,malmasi,olegro}@amazon.com\\n\\n# Abstract\\nMethods to generate text from structured data have advanced significantly in recent years, primarily due to fine-tuning of pre-trained language models on large datasets. However, such models can fail to produce output faithful to the input data, particularly on out-of-domain data. Sufficient annotated data is often not available for specific domains, leading us to seek an unsupervised approach to improve the faithfulness of output text. Since the problem is fundamentally one of consistency between the representations of the structured data and text, we evaluate the effectiveness of cycle training in this work. Cycle training uses two models which are inverses of each other: one that generates text from structured data, and one which generates the structured data from natural language text. We show that cycle training, when initialized with a small amount of supervised data (100 samples in our case), achieves nearly the same performance as fully supervised approaches for the data-to-text generation task on the WebNLG, E2E, WTQ, and WSQL datasets. We perform extensive empirical analysis with automated evaluation metrics and a newly designed human evaluation schema to reveal different cycle training strategies’ effectiveness of reducing various types of generation errors. Our code is publicly available at https://github.com/Edillower/CycleNLG.',\n",
       "    'original_filename': 'Conf_Paper_Meta_Data_ACL_2023_with_whole_text.db'}}),\n",
       " (0.32776818825488324,\n",
       "  {'id': 454847854731210130,\n",
       "   'distance': 0.5963490605354309,\n",
       "   'entity': {'paper_id': '64702deed68f896efa5202bb',\n",
       "    'paper_title': 'Uncovering and Categorizing Social Biases in Text-to-SQL.',\n",
       "    'chunk_id': 4,\n",
       "    'chunk_text': '# 7 Conclusion\\nIn this paper, we propose to uncover and categorize social biases in the Text-to-SQL task. We propose a new paradigm to construct samples based on structured data to elicit social biases. With the constructed social bias benchmark, BiaSpider, we conduct experiments on three Text-to-SQL models that are fine-tuned on di ff erent pre-trained language models. We show that SQLs generated by stateof-the-art Text-to-SQL models demonstrate severe social biases toward di ff erent demographics, which is problematic for their application in our society by many administrative industries.\\n\\n# Limitations\\nIn this work, we are the first to uncover the social bias problem in the Text-to-SQL task. We categorize di ff erent types of social biases related to various demographics. We present a new benchmark and metric for the social bias study in the Text-to-SQL task. However, this work stops at the point of uncovering and analyzing the problem and phenomenon, without making one step further to solve the social bias problem in the Text-to-SQL task. Besides, in spite of the structured scalability of our proposed paradigm for social bias benchmark construction, the e ffi cacy of entending with other Text-to-SQL datasets remains to be verified.',\n",
       "    'original_filename': 'Conf_Paper_Meta_Data_ACL_2023_with_whole_text.db'}}),\n",
       " (0.31818122242458285,\n",
       "  {'id': 454919258123551672,\n",
       "   'distance': 0.6042478084564209,\n",
       "   'entity': {'paper_id': '628ef0495aee126c0f82db30',\n",
       "    'paper_title': 'PLOG: Table-to-Logic Pretraining for Logical Table-to-Text Generation.',\n",
       "    'chunk_id': 4,\n",
       "    'chunk_text': '# 5 Table-to-Logic Pretraining\\nAs described in Section 1 , pretraining the table-totext model on table-to-logic generation is effective in generating more faithful natural language. In this section, we introduce our pretraining task and the procedure of collecting pretraining corpora.\\n\\n# 5.1 Pretraining Task\\nIn the pretraining task, the input is a (sub-) table while the target is a logical form that abstracts a reasoning process on the table. We follow the same schema in ( Chen et al. ,2020b ) to  \\n\\n$$\\nt^{*}=\\\\arg\\\\operatorname*{max}\\\\prod_{i=1}^{n}P(t_{i}|t_{<i},S;\\\\theta),\\n$$  \\n\\nWe adopt the same data serialization described in Section 4 for the pretraining task. The only difference between table-to-text and table-to-logic lies in the generation target.',\n",
       "    'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2022_Empirical_Methods_in_Natural_Language_Processing_with_whole_text.db'}}),\n",
       " (0.31764453126699865,\n",
       "  {'id': 454919253827011150,\n",
       "   'distance': 0.5987098217010498,\n",
       "   'entity': {'paper_id': '634e194190e50fcafd24e749',\n",
       "    'paper_title': 'Investigating the Robustness of Natural Language Generation from Logical Forms Via Counterfactual Samples',\n",
       "    'chunk_id': 5,\n",
       "    'chunk_text': '# 7 Related Work\\n\\n# 7.1 Text Generation from Tables\\nTable-to-text is a popular area in recent years ( Wiseman et al. ,2018 ;Lee ,2018 ;Liang et al. ,2009 ;Chen et al. ,2021 ). As previous methods generate superfacial and uncontrollable logic, Chen et al. (2020e ) introduced Logic2Text as a controllable and fidelity text generation task conditioned on a logical form. Since then, many works on Logic2Text have been proposed. In order to unify the studies of structural knowledge grounding, Xie et al. (2022 ) proposed the UNIFIEDSKG framework and unified 21 structural knowledge grounding tasks into a text-to-text format, including Logic2Text. Zhang et al. (2021a ) proposed a unified framework for logical knowledge-conditioned text generation in few shot setting. To solve the data scarcity problem of Logic2Text, Shu et al. (2021 ) iteratively augmented the original dataset with a generator and proposed an evaluator for highfidelity text generation.  \\n\\n  \\n  \\nFigure 6: Attention values during decoding. The baseline pays more attention to “attendance” as we expected, which verifies our hypothesis.  \\n\\nHowever, they all ignored the spurious correlation in logical forms, which is investigated in our work.\\n\\n# 7.2 Causal Inference For NLP\\nCausal Inference ( Pearl et al. ,2016 ;Kuang et al. ,2020 ) is a powerful statistical modeling tool for explanatory analysis. In NLP, many methods have been proposed based on the causal inference theory ( Zhang et al. ,2021b ;Chen et al. ,2020a ;Zhang et al. ,2021c ;Hu and Li ,2021 ). Yang et al. (2021 )and Wang and Culotta (2021b ) exploit causal inference to reduce the bias from the context for text classification tasks. For named entity recognition, Zeng et al. (2020 ) replaced the entities in sentences with counterfactual tokens to remove spurious correlation between the context and the entity token. 
Wang and Culotta (2021a ) generated counterfactual samples by replacing causal terms with their antonyms in sentiment classification. Wu et al. (2020 ) proposed to use a counterfacutal decoder to generate unbiased court’s view.  \\n\\nOur work proposes to improve the robustness of Logic2Text models with causality.\\n\\n# 8 Conclusion\\nWe investigate the robustness of current methods for Logic2Text via a set of manually constructed counterfactual samples. A significant decline on the counterfactual dataset verifies the existence of bias in the training dataset. Then we leverage causal inference to analyze the bias, based on which, two approaches are proposed to reduce the spurious correlations. Automatic and manual experimental results on both Logic2Text and the counterfactual data demonstrate that our method is effective to alleviate the spurious correlations.\\n\\n# Limitations\\nAlthough our method has achieved high logical consistency, we find that for some unseen headers, the model cannot understand them and generate some logically correct but not fluent sentences, which is related to the method of generation of counterfactual samples. Due to the limited number of high-quality logical forms, future work may continue to explore more advanced counterfactual data generation methods considering the context.  \\n\\nBesides, our structure-aware logical form encoder works based on the attention mechanism, so it can’t be applied to models without attention. Fortunately, the current attention-based models are widely used not only because of their better performance but also because of their high interpretability.\\n\\n# Acknowledgment\\nWe would like to thank anonymous reviewers for their comments and suggestions. This work is supported in part by National Natural Science Foundation of China (NO. 62037001), the Key R&D Projects of the Ministry of Science and Technology (NO. 2020 YFC 0832500), the Zhejiang Province Science and Technology Project (NO. 
2022C01044), the Starry Night Science Fund of Zhejiang University Shanghai Institute for Advanced Study (NO. SN-ZJU-SIAS-0010), and CAAI-Huawei MindSpore Open Fund (NO. CAAIXSJLJJ-2021-015A).\\n\\n\\n\\n# A Details of Attention Mask\\nThe attention value from token $w_{i}$ to token $w_{j}$ is masked if there is no direct edge connecting them on the logical form. To clarify how the value of the Attention Mask is calculated, we use the left logical form in Figure 7 as an example. And the attention mask matrix for the tokenized logical form is shown on the right of Figure 7 . For each token in the logical form, the parent node can be seen (such as $\\\\mathbf{M}_{\\\\mathrm{hop,result}}=1]$ ). Besides, an operator token can also see the child nodes (such as $\\\\mathbf{M}_{\\\\mathrm{win,eq}}\\\\,=\\\\,1;$ .Otherwise, the attention value is masked.\\n\\n# BReplacement Methods\\nWe match the headers from each logical form to the tokens in the label and then replace the headers in a specific way if found. Concretely, we propose the following three strategies for replacement.  \\n\\nRandom Replacement Intuitively, when a layman tries to describe some domain-specific table, he simply replicates the obscure table headers (such as technical terms). So we train the model’s ability to replicate the header from the logical form. We use completely random strings to replace the headers.  \\n\\nHeader Disturb Another straightforward idea is to select another header token from a set of all table headers to replace the header token in the logical form. However, such a method ignores the attribute of the data type carried by the columns, thus it will produce unreasonable counterfactual samples. In order to solve this problem, we group all the headers by their data type, including three categories: strings, numbers, and time. A header in the logical form is only replaced by another header with the same data type.  \\n\\n  \\nFigure 7: Sample of Attention Mask matrix. 
The attention of each token to others with no directly connected edges is masked.  \\n\\nMixing Replacement We take turns performing the above two replacement strategies.',\n",
       "    'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2022_Empirical_Methods_in_Natural_Language_Processing_with_whole_text.db'}})]"
      ]
     },
     "execution_count": 57,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "import numpy as np\n",
     "\n",
     "# Convert the BM25 scores to a NumPy array\n",
     "bm25_scores = np.array(bm25_scores)\n",
     "\n",
     "# Hybrid score: sharpened BM25 (exponent 1.2) blended 30/70 with embedding similarity\n",
     "hybrid_scores = (0.3 * bm25_scores**1.2) + (0.7 * embedding_scores)\n",
    "\n",
    "\n",
    "scored_papers = list(zip(hybrid_scores, papers))\n",
    "\n",
     "# Sort by hybrid score, highest first\n",
    "scored_papers.sort(key=lambda x: x[0], reverse=True)\n",
    "scored_papers"
   ]
  },
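  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of the hybrid scoring above on toy data (the scores here are hypothetical, not the notebook's real retrieval results): BM25 scores are mildly sharpened with an exponent of 1.2, then blended 30/70 with the embedding similarities before ranking."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Hypothetical scores for three candidate chunks (illustration only)\n",
    "toy_bm25 = np.array([0.9, 0.4, 0.1])\n",
    "toy_embed = np.array([0.6, 0.7, 0.5])\n",
    "\n",
    "# Same blend as above: 0.3 * bm25**1.2 + 0.7 * cosine similarity\n",
    "toy_hybrid = 0.3 * toy_bm25**1.2 + 0.7 * toy_embed\n",
    "\n",
    "# Indices of candidates ranked best-to-worst by hybrid score\n",
    "toy_order = np.argsort(toy_hybrid)[::-1]\n",
    "print(toy_order)"
   ]
  },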
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Rerank"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 60,
   "metadata": {},
   "outputs": [],
   "source": [
     "# Keep only candidates whose embedding similarity clears the cosine threshold\n",
     "cos_threshold = 0.5\n",
     "rerank_papers = [paper for i, paper in enumerate(papers) if embedding_scores[i] > cos_threshold]\n",
     "rerank_papers_content = [p[\"entity\"][\"chunk_text\"] for p in rerank_papers]"
   ]
  },
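  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The threshold filter above can be sketched on toy data (hypothetical candidate names and scores, for illustration only): any candidate whose embedding similarity does not exceed the cutoff is dropped before reranking."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical candidates and similarities (illustration only)\n",
    "toy_papers = [\"paper_a\", \"paper_b\", \"paper_c\"]\n",
    "toy_scores = [0.6, 0.45, 0.8]\n",
    "toy_threshold = 0.5\n",
    "\n",
    "# Keep only candidates whose similarity clears the threshold\n",
    "kept = [p for i, p in enumerate(toy_papers) if toy_scores[i] > toy_threshold]\n",
    "print(kept)"
   ]
  },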
  {
   "cell_type": "code",
   "execution_count": 62,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[{'id': 454845641360425782,\n",
       "  'distance': 0.6537715196609497,\n",
       "  'entity': {'paper_id': '6461b9c9d68f896efad43133',\n",
       "   'paper_title': 'Interactive Text-to-SQL Generation Via Editable Step-by-Step Explanations',\n",
       "   'chunk_id': 1,\n",
       "   'chunk_text': '# 2 Related Work\\n\\n# 2.1 Text-to-SQL Generation\\nNatural language interfaces have long been recognized as a way to expand access to databases ( Hendrix et al. ,1978 ).The construction of several large text-to-SQL datasets, such as WikiSQL ( Zhong et al. ,2017 ) and Spider ( Yu et al. ,2018 ), has enabled the adoption of deep learning models in this task, achieving unprecedented performance in recent years ( Rubin and Berant ,2021 ;Wang et al. ,2020a ;Scholak et al. ,2021 ;Yu et al. ,2020 ;Hwang et al. ,2019 ). Our technique is based on the recent success of neural text-to-SQL models. Unlike existing models that perform end-to-end SQL generation, we propose a new interaction mechanism for users to validate and refine generated queries through step-by-step explanations.  \\n\\nAs the first step to demonstrate the feasibility of our approach, we focus on single-turn SQL generation ( Yu et al. ,2018 ) in this work. There has also been recent work that supports multi-turn SQL generation ( Yu et al. ,2019a ,b;Guo et al. ,2021 ), where a sequence of interdependent queries are expressed in multiple utterances in a dialog. Models designed for multi-turn SQL generation typically need to reason about the dialog context and effectively encode the historical queries ( Wang et al. ,2021 ;Hui et al. ,2021 ;Zhang et al. ,2019 ;Cai and Wan ,2020 ;Wang et al. ,2020b ). Our approach can be extended to support multi-turn SQL generation by initiating separate refinement sessions for individual queries while incorporating the contextual information of previous queries into explanation generation and text-to-clause generation.\\n\\n# 2.2 Interactive Semantic Parsing for SQL\\nRecently, there has been a growing interest in interactive approaches that elicit user feedback to guide SQL generation. Iyer et al. (2017 ) proposed to allow users to flag incorrect queries and continuously retrain the model. Both DIY ( Narechania et al. 
,2021 ) and NaLIR ( Li and Jagadish ,2014a ,b)enable users to select alternative values or subexpressions to fix an incorrect SQL query. PIIA ( Li et al. ,2020 ), MISP ( Yao et al. ,2019 ), and DialSQL ( Gur et al. ,2018 ) proactively ask for user feedback via multiple-choice questions. A common limitation of these methods is that they only solicit feedback in constrained forms, hindering their flexibility and effectiveness in addressing the variability of SQL errors. In contrast, our approach allows more flexible feedback through direct edits to the explanations generated by the model.  \\n\\nThe only work that supports open-ended user feedback in SQL generation is NL-EDIT ( Elgohary et al. ,2021 ). NL-EDIT is trained on SPLASH ( Elgohary et al. ,2020 ), a dataset of SQL errors and user feedback utterances. Given an incorrect query, NL-EDIT allows users to provide a clarification utterance. Based on the utterance, the model generates a sequence of edits to the SQL query. Incorporating feedback expressed in a completely free-text utterance is challenging for two reasons:  \\n\\n  \\nFigure 2: An Overview of Interactive SQL Generation and Refinement with Editable Step-by-Step Explanations  \\n\\n(1) the model needs to infer which part of the SQL query to fix; (2) the model needs to determine what changes are being requested. In contrast, S TEPS asks users to directly edit an NL explanation and make corrections to the explanation. Comparing the initial explanation with the user-corrected explanation makes it easier to locate the part of a SQL query that needs to be changed and infer what change to make.  \\n\\nThe idea of SQL decomposition is similar to recent work that decomposes a user question to sub-questions on SPARQL ( Mo et al. ,2022 ). Their approach requires a crowd-sourced dataset to train a question decomposition model. In contrast, our rule-based method generates step-by-step explanations without the need for training a model. 
This also allows our system to map each entity in the explanation to the corresponding SQL element, making it easier for SQL correction (Sec. 3.2 ).\\n\\n# 2.3 Explaining SQL Queries in NL\\nOur approach is also related to prior work that generates NL explanations for SQL queries. Simitsis and Ioannidis (2009 ) argued that databases should “talk back” in human language so that users can verify results. Kokkalis et al. (2012 ) and Koutrika et al. (2010 ) used a graph-based SQL translation approach, where each query is represented as a graph and the explanation is generated by traversing the graph. Elgohary et al. (2021 ,2020 ) employed a template-based explanation approach, where they manually curated 57 templates for explanation generation. These approaches have limited capability to handle arbitrary SQL queries. To address this limitation, we propose a rule-based method to first explain terminal tokens (e.g., operators, keywords) and gradually compose them into a complete explanation based on the derivation rules in the SQL grammar. Another key difference is that none of the existing approaches supports editable explanations for SQL correction, which is a key feature to solicit user feedback in our approach.',\n",
       "   'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}},\n",
       " {'id': 454845681760490286,\n",
       "  'distance': 0.6474056243896484,\n",
       "  'entity': {'paper_id': '6535d747939a5f408295c649',\n",
       "   'paper_title': 'Benchmarking and Improving Text-to-SQL Generation under Ambiguity',\n",
       "   'chunk_id': 1,\n",
       "   'chunk_text': '# 2 Background and Related Work\\nA Text-to-SQL model takes as input a question expressed as a natural language text x, and a database schema scomprising of table and column names, and outputs an SQL program ywhich can be executed against the database to answer the user’s question. Figure 1 shows an example. The training data for the task comprises (text, schema, SQL) triplets spanning multiple distinct databases.  \\n\\n  \\nFigure 1: A Text-to-SQL system converts a user question to an SQL query, conditioned on the database schema and/or content.  \\n\\nBenchmarks. Popular benchmarks for the Textto-SQL task are WikiSQL ( Zhong et al. ,2018 )and SPIDER ( Yu et al. ,2018 ). A few others have been proposed recently to capture real-world scenarios, such as KaggleDBQA ( Lee et al. ,2021 ), SEDE ( Hazoom et al. ,2021 ), and EHRSQL ( Lee et al. ,2022 ). They all attach one SQL per text, though some of them mention the problem of ambiguity in real-world datasets. Dr. SPIDER ( Chang et al. ,2023 ), designed to test the robustness of existing models, perturbs either the text or schema of SPIDER but still assigns one SQL per text.  \\n\\nAmbiguity in SQL Although ambiguity has been studied in other fields of NLP ( Pilault et al. ,2023 ;Li et al. ,2022 ;Futeral et al. ,2022 ), it has been unexplored in the context of semantic parsing. Ambiguity in SQL arising from related column names is discussed in ( Wang et al. ,2022 ), but they only consider column ambiguity. Their method of recognizing ambiguous queries depends on labeling words of the text and does not generalize to other kinds of ambiguity. To the best of our discernment, AmbiQT represents the first open benchmark for testing coverage of ambiguous alternatives.  \\n\\nDiverse Decoding. Prior work has critiqued the lack of meaningful diversity in beam-search outputs ( Finkel et al. ,2006 ;Gimpel et al. ,2013 ;Li et al. ,2016 ;Li and Jurafsky ,2016 ). In response, many fixes have been proposed. 
Some proposals attempt to restrict the tokens sampled, using strategies like Nucleus sampling ( Holtzman et al. ,2020 ), Truncated Sampling ( Hewitt et al. ,2022 ), and Typical Sampling ( Meister et al. ,2023 ), while some rely on Template-Based decoding ( Wiseman et al. ,2018 ;Zhang et al. ,2022 ;Fu et al. ,2023 ;Elgohary et al. ,2020 ;Awasthi et al. ,2022 ). A third approach is to generate a prefix with high diversity first, then generate the rest of the sentence with lower diversity. Narayan et al. (2022 ) follow this recipe but focus on incorporating diverse entity orders in text summarization.  \\n\\n<html><body><table><tr><td rowspan=\"2\">Kind of ambiguity</td><td rowspan=\"2\">Count</td><td colspan=\"3\">Example</td></tr><tr><td>QuestionText</td><td>SQL#1</td><td>SQL#2</td></tr><tr><td>Column Ambiguity (C)</td><td>1240</td><td>List the ids of all students.</td><td>SELECTroll_number FROMstudents</td><td>SELECTadmission_number FROMstudents</td></tr><tr><td>Table Ambiguity (T)</td><td>1417</td><td>How many singers do wehave?</td><td>SELECT COUNT(*) FROM artist</td><td>SELECT COUNT(*) FROM performer</td></tr><tr><td>Join Ambiguity (J)</td><td>288</td><td>Whatarethemakers and models?</td><td>SELECT maker，model FROM model</td><td>SELECT t2.maker，t1.model FROM modelASt1JOINmodel_maker AS t2 ON t1.model_id = t2.model_id</td></tr><tr><td>Precomputed Aggregates (P)</td><td>101</td><td>for each pet type.</td><td>Find the average weight|SELECT AVG(weight)， pettype FROM pets GROUP BY pettype</td><td>SELECT avg_weight，pettype FROM pets_weight</td></tr></table></body></html>\\n\\nTable 1: The AmbiQT benchmark. For each question, we list two valid SQL queries as per the schema. The schema is not shown here, but the ambiguity in it can be inferred based on the two SQL queries.\\n\\n# 3 AmbiQT: A Benchmark of Ambiguous Text-to-SQL Conversion\\nAmbiQT is constructed so that each text query has two distinct valid SQL interpretations. 
Motivated by our experience working with real-life databases, we designed AmbiQT to encompass four types of ambiguity. Each entry is designed so that both alternatives have a similar relevance to the question, and a well-calibrated decoding method is expected to rank them close by in their outputs.  \\n\\nWe create AmbiQT by modifying the SPIDER (Yu et al. ,2018 ) dataset, and use ChatGPT ( OpenAI ,2022 ) to aid with the creation. In each case, we modify the schema instead of the text as that provides greater control over the modification process. We explain the kinds of ambiguity in AmbiQT below and portray examples of each in Table 1 .  \\n\\nColumn Ambiguity (C). Unlike the SPIDER benchmark where column names usually appear verbatim in the question text (like born state for the column born_state ), when users unaware of the schema pose a natural question, they introduce column ambiguity ( Wang et al. ,2022 ). For example, “ What is the capacity of O2 Arena? ” could be ambiguous if the schema has separate columns for standing and seating capacity. Likewise, a query on the number of under-nourished children is ambiguous if we have different columns for “under-weight children” and “stunted growth in children”.  \\n\\nTo simulate column ambiguity, for each text $\\\\mathbf{x}$ ,schema s, and SQL yin SPIDER, we prompt ChatGPT to generate two synonyms for each column name of sin a one-shot manner. Appendix A furnishes more details of the prompt. We then modify sby replacing $c$ with two columns $c_{1},c_{2}$ , and we use yto generate two queries $\\\\mathbf{y}_{1},\\\\mathbf{y}_{2}$ where all mentions of $c$ are replaced with $c_{1}$ in $\\\\mathbf{y}_{1}$ and with $c_{2}$ in $\\\\mathbf{y}_{2}$ . An example appears in the first row of Table 1 . We do not reuse $c$ because the SPIDER dataset often contains column names verbatim in the question, and that would violate our attempt at keeping the two options at similar relevance levels. 
We modify one column at a time and generate up to 3 examples from each original entry.  \\n\\nTable Ambiguity (T). Table name ambiguity is common in databases obtained by integrating multiple data sources, as in web tables ( Cafarella et al. ,2008 ;Pimplikar and Sarawagi ,2012 ). Here again, we prompt ChatGPT to generate two alternate names for each table. We then modify one SQL yto generate two candidates ${\\\\bf y}_{1},{\\\\bf y}_{2}$ as shown in Table 1 .  \\n\\nJoin Ambiguity (J). In production databases, a logical table is often vertically partitioned across several tables for efficient clustered access ( Stonebraker et al. ,2019 ). Column names overlapping across tables leads to Join Ambiguity. Suppose we have two tables: (1) person with columns id, name, email_address , and (2) person_details with columns id, postal_address, photo .A question asking for a person’s name and address is ambiguous on whether a JOIN with the person_details is necessary. We expose such ambiguity by modifying the schema as follows.  \\n\\nConsider a $(\\\\mathbf{x},\\\\mathbf{s},\\\\mathbf{y})$ triplet. Suppose yinvolves selecting two or more columns $c_{1},c_{2},\\\\ldots.$ not necessarily in the same order, from a table $t$ . Suppose further that $c_{1}$ is not a primary key of $t$ . We create a table called $t\\\\_c_{1}$ that includes just the primary key $p k_{t}$ of $t$ , and $c_{1}$ . The first alternative $\\\\mathbf{y}_{1}$ is $\\\\mathbf{y}$ and the second alternative $\\\\mathbf{y}_{2}$ uses a join over $t$ and $t\\\\_c_{1}$ , with everything else staying the same as y.  \\n\\n  \\nFigure 2: Beam Search works well when targeting only one output, but leads to superficial diversity, for example via different grouping and erroneous variants of column names.  \\n\\nPrecomputed Aggregates $(\\\\mathbf{P})$ :. This ambiguity is particularly common in data warehouses such as Data Commons which pre-aggregate certain variables. 
For instance, the “ total rice production ” of a state might refer to the column rice_production of state rather than a sum over it. Text-toSQL models have a bias toward introducing a sum()...group-by clause every time total appears in the text. The non-aggregated alternative is usually missing in the top$k$ options. We incorporate this ambiguity as follows.  \\n\\nFor each $(\\\\mathbf{x},\\\\mathbf{s},\\\\mathbf{y})$ , where $\\\\mathbf{y}$ has at least one aggregate, we construct a new table $t^{\\\\prime}$ . For each aggregate $\\\\boldsymbol{\\\\mathcal{A}}$ over column $c$ in y, we add to $t^{\\\\prime}$ the columns and the columns grouped by in $A^{\\\\prime}\\\\_c$ for all $\\\\mathcal{A}^{\\\\prime}\\\\,\\\\in\\\\,\\\\{\\\\mathsf{a v g},\\\\mathsf{s u m},\\\\mathsf{m i n},\\\\mathsf{m a x}\\\\}$ y. For count $(\\\\star)$ ,we add a column called number . We get two gold queries, the original yand a second with the groupby replaced by a direct SELECT on $t^{\\\\prime}$ as shown in the example in Table 1 . We also support aggregates across multiple tables but skip the details here.',\n",
       "   'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}},\n",
       " {'id': 454845652744816630,\n",
       "  'distance': 0.6249851584434509,\n",
       "  'entity': {'paper_id': '646d8642d68f896efa0a3040',\n",
       "   'paper_title': 'Exploring Chain-of-Thought Style Prompting for Text-to-SQL',\n",
       "   'chunk_id': 1,\n",
       "   'chunk_text': '# 1 Introduction\\nText-to-SQL parsing, the task of translating a natural language question into a SQL query, has found wide applications in building natural language interfaces to databases and thus piqued significant research interest in recent years ( Wang et al. ,2020 ;Deng et al. ,2021 ;Yu et al. ,2021 ;Rajkumar et al. ,2022 ;Hongjin et al. ,2023 ;Ni et al. ,2023 ). To develop a text-to-SQL parser, a prevalent approach is to collect labeled data and train a model via supervised learning ( Shaw et al. ,2021 ;Scholak et al. ,2021 ). While effective, this approach necessitates a considerable amount of training data, which is costly to obtain because annotating SQL queries requires programming expertise. Consequently, the lack of data hinders real-life applications of stateof-the-art parsers, especially on novel databases and unseen domains ( Suhr et al. ,2020 ).  \\n\\nAs an alternative to supervised learning, incontext learning ( Brown et al. ,2020 ), an emergent capability of large language models (LLMs), alleviates the need for large-scale data. With only a few examples, in-context learning enables LLMs to demonstrate performance comparable to or even better than fully supervised models on many NLP tasks, such as question answering, machine translation, and natural language inference ( Chowdhery et al. ,2022 ;Kojima et al. ,2022 ;Wei et al. ,2022b ,a ;Brohan et al. ,2023 ). When applied to text-to-SQL parsing, in-context learning also shows encouraging results, but it still lags behind supervised approaches ( Rajkumar et al. ,2022 ;Chang et al. ,2023 ;Liu et al. ,2023a ).  \\n\\nWe hypothesize that the under-performance is because text-to-SQL parsing requires complex, multistep reasoning. Even for a seemingly simple question, such as “What is the ID of Kyle,\" a model has to ground it to database schemas, infer the relational algebra among schema items, and construct syntactically correct SQL clauses. 
Recently, the chain-of-thought (CoT) style promptings ( Wei et al. ,2022b ;Zhou et al. ,2023 ) are proposed and have shown promising multi-step reasoning capabilities. To enhance LLMs’ reasoning ability, we systematically explore CoT style prompting for text-to-SQL parsing. Specifically, we seek to answer two research questions: (1) Which prompting style is better, generating all reasoning steps in a single pass, or iterative prompting and problem solving? (2) Does including more detailed information in the reasoning steps lead to better results for text-to-SQL parsing?  \\n\\nTo address the questions, we adopt two widely used prompting methods for text-to-SQL parsing As the first method, we apply chain-of-thought prompting (Wei et al. ,2022b ) by drawing an analogy between its problem-solving process and the execution procedure of a SQL query. Referring to the logical execution order of SQL clauses (Narechania et al. ,2021 ), we compose the intermediate execution steps in natural language and prompt LLMs to derive them before generating the SQL query. As the second method, we follow Zhou et al. (2023 ) to apply least-to-most prompting in two stages: (1) reduction: generate a series of sub-questions from the original question and (2) solving: iteratively translate each sub-question into its corresponding SQL query, with the original question as the last sub-question. However, in our case study 1 , we find that directly applying chainof-thought and lease-to-most promptings leads to error propagation issues. Their rationales contain very demonstration-specific information and are easier to mislead the reasoning process. Furthermore, least-to-most prompting technique leads to additional computational and time cost due to the multiple stages of reduction and solving.  \\n\\n  \\nFigure 1: Different prompting methods with multi-step reasoning for text-to-SQL parsing: (a) Chain-of-Thought, (b) Least-toMost, and our proposed (c) QDecomp , and (d) QDecomp $^+$ InterCOL .  
\\n\\nTherefore, we propose a new method called question-decomposition prompting (QDecomp ). Similar to chain-of-thought prompting, QDecomp generates a sequence of reasoning steps and then the SQL query in one pass. However, we modify the steps to instruct LLMs to decompose the original complex question, akin to the problem reduction stage in least-to-most prompting. Also, to help LLMs ground database schemas, we design a variant of question decomposition prompting (QDecomp $^+$ InterCOL ) by including the table and column names involved in each sub-question. We conduct comprehensive evaluations on two textto-SQL datasets, Spider ( Yu et al. ,2018 ) and Spider Realistic ( Deng et al. ,2021 ). Our proposed prompting methods substantially outperform existing prompting ones by 2.4 and 1.5 point absolute gains on the development set of Spider and Spider Realistic, respectively. The results suggest that the iterative prompting which is costly due to additional computational resources requirement as in least-to-most prompting may not be necessary $(R Q I)$ . In addition, our analysis shows the proposed question decomposition prompting methods, which do not instruct LLMs to generate detailed reasoning steps, reduce the chance of error propagation when generating the reasoning steps. ( RQ2 ). Finally, we evaluate the robustness of our proposed prompting methods by varying the number, selection, and format of in-context examples and show that they can achieve consistently strong performance across different settings.',\n",
       "   'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}},\n",
       " {'id': 454845681740829484,\n",
       "  'distance': 0.6236740350723267,\n",
       "  'entity': {'paper_id': '6535d747939a5f408295c649',\n",
       "   'paper_title': 'Benchmarking and Improving Text-to-SQL Generation under Ambiguity',\n",
       "   'chunk_id': 0,\n",
       "   'chunk_text': '# Benchmarking and Improving Text-to-SQL Generation under Ambiguity\\nAdithya Bhaskar $\\\\mathbf{\\\\nabla}_{*}\\\\bigotimes\\\\bigtriangleup$ Tushar Tomar ∗♠ Ashutosh Sathe ♠Sunita Sarawagi ♠♠IIT Bombay ♢Princeton University\\n\\n# Abstract\\nResearch in Text-to-SQL conversion has been largely benchmarked against datasets where each text query corresponds to one correct SQL. However, natural language queries over reallife databases frequently involve significant ambiguity about the intended SQL due to overlapping schema names and multiple confusing relationship paths. To bridge this gap, we develop a novel benchmark called AmbiQT with over 3000 examples where each text is interpretable as two plausible SQLs due to lexical and/or structural ambiguity.  \\n\\nWhen faced with ambiguity, an ideal top$k$ decoder should generate all valid interpretations for possible disambiguation by the user ( Elgohary et al. ,2021 ;Zhong et al. ,2022 ). We evaluate several Text-to-SQL systems and decoding algorithms, including those employing state-of-the-art LLMs, and find them to be far from this ideal. The primary reason is that the prevalent beam search algorithm and its variants, treat SQL queries as a string and produce unhelpful token-level diversity in the top$k$ .  \\n\\nWe propose LogicalBeam, a new decoding algorithm that navigates the SQL logic space using a blend of plan-based template generation and constrained infilling. Counterfactually generated plans diversify templates while in-filling with a beam-search, that branches solely on schema names, provides value diversity. LogicalBeam is up to $2.5\\\\times$ more effective than state-of-the-art models at generating all candidate SQLs in the top$k$ ranked outputs. 
It also enhances the top5 Exact and Execution Match Accuracies on SPIDER and Kaggle DBQA 1 .\\n\\n# 1 Introduction\\nResearch on Text-to-SQL generation has focused on scenarios where each natural language question is associated with one correct SQL ( Zelle and Mooney ,1996 ;Tang and Mooney ,2000 ;Scholak et al. ,2021a ;Wang et al. ,2020 ;Rubin and Berant ,2021 ;Xie et al. ,2022 ;Arcadinho et al. ,2022 ;Zeng et al. ,2022 ;Scholak et al. ,2021b ;Pourreza and Rafiei ,2023 ). Popular benchmarks driving such research, including WikiSQL ( Zhong et al. ,2018 ), SPIDER ( Yu et al. ,2018 ), its robust perturbations ( Chang et al. ,2023 ), and even “in-thewild” benchmarks such as KaggleDBQA ( Lee et al. ,2021 ) and SEDE ( Hazoom et al. ,2021 ) all associate one correct SQL with text. Meanwhile, ambiguity is prevalent in real-life databases — particularly the ones obtained by integrating several data sources for data analysis, where a natural language interface is most in demand. The sources of ambiguity are several — inherent ambiguity of natural language, the user’s ignorance of table/column names, overlapping strings in column names, underspecified clauses, and confusion about whether aggregates are pre-computed, or if a join is required. Hazoom et al. (2021 ) observe that up to $87\\\\%$ of queries on the stack exchange database are underspecified, and Wang et al. (2022 ) mention that $11\\\\%$ of queries exhibited ambiguity in column names. Although prior work has brought up ambiguity, there is no publicly available benchmark with ambiguous queries, nor a comprehensive evaluation of systems under ambiguity.  \\n\\nOur first contribution is to bridge this gulf by developing a benchmark, AmbiQT, that tests performance under ambiguity in the context of current models. AmbiQT includes over 3000 examples, each associating a natural question on a database with two valid SQLs. 
Inspired by our experience with several real-world datasets, we target four types of ambiguity spanning both lexical (ambiguous column and table names) and structural (whether a join is necessary, and an aggregate is pre-computed) ambiguity. The benchmark is generated via a combination of ChatGPT ( OpenAI ,2022 ) based synonym generation and perturbation, and standard rule-based perturbation.  \\n\\nWhen faced with ambiguity, an ideal Text-toSQL system should incorporate all valid alternatives in their top$k$ SQL outputs, for user resolution. We show that present approaches, ranging from T5-3B ( Raffel et al. ,2019 ) to SOTA models, fail to generate all ambiguous outputs with any decoding strategy, including beam search and diversity-promoting sampling methods such as Nucleus ( Holtzman et al. ,2020 ) and Typical sampling ( Meister et al. ,2023 ). Most outputs are small lexical tweaks of the top choice, bringing about little meaningful diversity in SQL structures or schema alternatives. Even SOTA LLMs like ChatGPT ( OpenAI ,2022 ) suffer from this issue.  \\n\\nTo remedy the lack of diversity, we propose a new decoding algorithm, LogicalBeam, that allocates branching to explore underlying logical variants of the SQL rather than the string form. We catalog the errors of T5-3B ( Raffel et al. ,2019 ) on the SPIDER dev split and use our insights to encourage targeted types of diversity — the number of JOIN s and selections, and table/column names.  \\n\\nOur main contributions are:   \\n•We develop AmbiQT, the first benchmark that tests performance under four types of ambiguity over $\\\\mathbf{3000+}$ examples.   \\n•We show that SOTA methods, including a finetuned T5-3B, RESDSQL ( Li et al. ,2023 ), OpenAI Codex, and ChatGPT, provide a poor representation of ambiguity despite their high accuracy on conventional benchmarks.   
\\n•We present LogicalBeam, a two-step algorithm that generates plan-based templates with counterfactually controlled plan diversity and fills them via a beam search that branches only on schema names.   \\n•We show that LogicalBeam consistently increases the fraction of time when all gold SQLs get generated in the Top-5 choices by $1.5-2.5\\\\times$ over the baselines across the board on AmbiQT.',\n",
       "   'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}},\n",
       " {'id': 454845641342337844,\n",
       "  'distance': 0.6175075769424438,\n",
       "  'entity': {'paper_id': '6461b9c9d68f896efad43133',\n",
       "   'paper_title': 'Interactive Text-to-SQL Generation Via Editable Step-by-Step Explanations',\n",
       "   'chunk_id': 0,\n",
       "   'chunk_text': '# Interactive Text-to-SQL Generation via Editable Step-by-Step Explanations\\nYuan Tian 1 , Zheng Zhang 2 , Zheng $\\\\mathbf{Ning^{2}}$ ,Toby Jia-Jun $\\\\mathbf{Li}^{2}$ ,Jonathan K. Kummerfeld 3 , and Tianyi Zhang 1 Purdue University 1 , University of Notre Dame 2 , The University of Sydney 3  , , , , ,\\n\\n# Abstract\\nRelational databases play an important role in business, science, and more. However, many users cannot fully unleash the analytical power of relational databases, because they are not familiar with database languages such as SQL. Many techniques have been proposed to automatically generate SQL from natural language, but they suffer from two issues: (1) they still make many mistakes, particularly for complex queries, and (2) they do not provide a flexible way for non-expert users to validate and refine incorrect queries. To address these issues, we introduce a new interaction mechanism that allows users to directly edit a stepby-step explanation of a query to fix errors. Our experiments on multiple datasets, as well as a user study with 24 participants, demonstrate that our approach can achieve better performance than multiple SOTA approaches. Our code and datasets are available at https: //github.com/magic-YuanTian/STEPS .\\n\\n# 1 Introduction\\nNatural language interfaces significantly lower the barrier to accessing databases and performing data analytics tasks for users who are not familiar with database query languages. Many approaches have been proposed for generating SQL queries from natural language ( Popescu et al. ,2004 ;Giordani and Moschitti ,2012 ;Rubin and Berant ,2021 ;Scholak et al. ,2021 ;Zhao et al. ,2022 ). Using recent large language models, systems have reached $86.6\\\\%$ execution accuracy ( Gao et al. ,2023 ) on the Spider benchmark ( Yu et al. ,2018 ).  \\n\\nHowever, the rate of improvement has slowed, with a gain of only $10\\\\%$ since mid-2021. 
This is partly due to the inherent ambiguity of natural language and the complex structure of SQL queries (e.g., nested or joined queries). Thus, it is challenging to generate a fully correct query in one step, especially for complex tasks ( Yao et al. ,2019 ).  \\n\\n  \\nFigure 1: Refining a SQL query by directly editing a step-by-step explanation.  \\n\\nThere has been growing interest in developing “human-in-the-loop” approaches that elicit user feedback to guide SQL generation. However, most approaches only support feedback in constrained forms, e.g., answering multiple-choice questions (MISP, PIIA, DialSQL Yao et al. ,2019 ;Li et al. ,2020 ;Gur et al. ,2018 ), changing SQL elements in a drop-down menu (DIY, Narechania et al. ,2021 ), etc. Such constrained feedback is not sufficient to fix many complex errors in real-world SQL tasks. One exception is NL-EDIT ( Elgohary et al. ,2021 ), which allows users to provide feedback as new utterances. However, since the feedback is open-ended, interpreting it can be just as hard as processing the original request.  \\n\\nIn this paper, we seek to strike a balance between constrained feedback and open-ended feedback by proposing a new interaction mechanism: editable step-by-step explanations. Fig. 1 illustrates our idea. This mechanism consists of three core components: (a) a text-to-SQL model, (b) an explanation generation method, and (c) a SQL correction model. Our key insight is that using a step-by-step explanation as the basis to suggest fixes allows users to precisely specify where the error is and how to fix it via direct edits. This not only saves users’ time but also makes it easier for the model to locate the error and apply fixes.  \\n\\nBased on this idea, we implemented an interactive SQL generation and refinement system called STEPS . S TEPS adopts a rule-based method to generate step-by-step explanations and uses a hybrid rule/neural method to convert a user-corrected explanation back to a SQL query.  
\\n\\nAn evaluation with a simulated user on Spider ( Yu et al. ,2018 ) shows that S TEPS can achieve $97.9\\\\%$ exact set match accuracy, outperforming prior interactive text-to-SQL systems— MISP, DIY, and NL-EDIT—by $33.5\\\\%$ ,$33.2\\\\%$ , and $31.3\\\\%$ respectively. We further evaluate S TEPS on other datasets, including Spider-DK ( Gan et al. ,2021b ), Spider-Syn ( Gan et al. ,2021a ), and WikiSQL ( Zhong et al. ,2017 ). S TEPS consistently achieves at least $96\\\\%$ exact set match accuracy and execution accuracy across all datasets.  \\n\\nFinally, we conducted a within-subjects user study with 24 real users. We found that within the same amount of time, S TEPS helped users complete almost 2X and 4X more tasks correctly than DIY and MISP respectively, with significantly higher self-reported confidence and lower mental load.  \\n\\nThis work makes the following contributions: (1) we propose a new interaction mechanism for the text-to-SQL task; (2) we develop an interactive text-to-SQL system based on the new interaction mechanism and a new training method for SQL correction; (3) we conduct a comprehensive evaluation with both simulated and real users and demonstrate its effectiveness over state-of-the-art interactive systems. Our dataset and code are publicly available.',\n",
       "   'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}},\n",
       " {'id': 454845706688551984,\n",
       "  'distance': 0.613497793674469,\n",
       "  'entity': {'paper_id': '65406320939a5f40826491aa',\n",
       "   'paper_title': 'Evaluating Cross-Domain Text-to-SQL Models and Benchmarks',\n",
       "   'chunk_id': 0,\n",
       "   'chunk_text': '# Evaluating Cross-Domain Text-to-SQL Models and Benchmarks\\nMohammadreza Pourreza University of Alberta   \\n\\nDavood Rafiei University of Alberta\\n\\n# Abstract\\nText-to-SQL benchmarks play a crucial role in evaluating the progress made in the field and the ranking of different models. However, accurately matching a model-generated SQL query to a reference SQL query in a benchmark fails for various reasons, such as underspecified natural language queries, inherent assumptions in both model-generated and reference queries, and the non-deterministic nature of SQL output under certain conditions. In this paper, we conduct an extensive study of several prominent cross-domain text-to-SQL benchmarks and reevaluate some of the top-performing models within these benchmarks, by both manually evaluating the SQL queries and rewriting them in equivalent expressions. Our evaluation reveals that attaining a perfect performance on these benchmarks is unfeasible due to the multiple interpretations that can be derived from the provided samples. Furthermore, we find that the true performance of the models is underestimated and their relative performance changes after a re-evaluation. Most notably, our evaluation reveals a surprising discovery: a recent GPT4-based model surpasses the gold standard reference queries in the Spider benchmark in our human evaluation. This finding highlights the importance of interpreting benchmark evaluations cautiously, while also acknowledging the critical role of additional independent evaluations in driving advancements in the field.  \\n\\nproved from 65.6 ( Wang et al. ,2019 ) to 74.0 ( Li et al. ,2023a ). Measuring such progress is hinged on reliable benchmarks and evaluation metrics.  \\n\\nTwo standard metrics for evaluating the performance in this domain have been exact set match accuracy and execution accuracy . 
The former measures if a model-generated SQL query lexically matches a reference SQL query, whereas the latter measures if a model-generated SQL query produces the same output as a reference query $(\\\\S\\\\,4)$ .  \\n\\n  \\nFigure 1: An example question with two correct SQL queries, each corresponding to a different interpretation. There is an ambiguity in schema mapping, with two different database columns describing the name.\\n\\n# 1 Introduction\\nSignificant progress has been made in translating natural language text to SQL statements over the past few years. The execution accuracy on the hold-out test of Spider ( Yu et al. ,2018b )–a large-scale cross-domain text-to-SQL benchmark– has improved from 53.5 in May, 2020 ( Zhong et al. ,2020b ) to 85.3 in March, 2023 ( Pourreza and Rafiei ,2023 ). The exact set match accuracy, without considering database cell values, on the same benchmark and over the same period has im  \\n\\nConsider the example in Figure 1 , which consists of a model-generated query (shown on the left) and a reference query (shown on the right). Both SQL queries return the id and name of makers that have more than 3 models. However, the model-generated query returns the column FullName, which gives the full name of a maker (e.g., “Ford Motor Company”), whereas the reference query given in the benchmark returns the column Maker, which gives the short common name of a maker (e.g., “Ford”). The model-generated query fails an exact set match since the column names in the select clause are different. The query outputs are also different and the model-generated query fails the execution accuracy as well. The natural language utterance is not specific about the type of name to be returned, and a human evaluator tags both queries correct.  \\n\\nAs the models improve, these types of failures make up most of the errors, and the performance metrics become less relevant, as shown in our evaluation. 
In particular, we re-evaluated all development set queries of Spider on which two top-performing models, one using a fine-tuned model ( Scholak et al. ,2021 ) and another using a large language model ( Pourreza and Rafiei ,2023 ), failed. We found out that $25\\\\%$ of the queries generated by one model and $87\\\\%$ of the queries generated by the other model were indeed correct but were wrongly evaluated by the benchmark. For the same set of queries, our re-evaluation of the ground truth queries found $33\\\\%$ of the SQL queries incorrect, which was more than the number of incorrect queries generated by one of the models. This evaluation places one of the models above the ground truth queries in this re-evaluation.  \\n\\nWe further re-evaluated two well-known benchmarks, Spider ( Yu et al. ,2018b ) and SpiderDK ( Gan et al. ,2021b ), and a newly released benchmark, BIRD ( Li et al. ,2023b ), and found similar problems in all three benchmarks that affect the evaluation. Our evaluation reveals that $18\\\\%$ of the queries in the train sets and $20\\\\%{-23\\\\%}$ of the queries in the dev sets of these benchmarks are subject to ties in the dataset and which one of the tied rows are returned. This means a model-generated query will be deemed incorrect if it does not return the same row, among tied rows, as the ground truth query. This can severely impact the evaluation, especially when there is a tight race among models. Considering these observations, it is crucial to emphasize the significance of additional independent evaluations when utilizing these benchmarks. To enhance the evaluation process further, a potential solution is to incorporate multiple SQL queries as the ground truth, each representing a different interpretation that may be valid.  \\n\\nOur objective in this paper is to provide a comprehensive evaluation of existing Text-to-SQL benchmarks, underscoring the inherent issues they possess. 
We refrain from introducing a new dataset due to several considerations. First, addressing the identified issues by updating these benchmarks requires considerable human effort. Additionally, benchmarks in the Text-to-SQL domain, like Spider and BIRD, have holdout test sets used for official leaderboards and comparisons of text-to-SQL methodologies. We only have access to the development and training sets of these benchmarks, which limits our capability to alter the test sets. As a result, making changes only to the development and training sets would not completely address the benchmark’s inherent problems, given that final performance is gauged using the problematic test sets.',\n",
       "   'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}},\n",
       " {'id': 454845631964659546,\n",
       "  'distance': 0.6036479473114014,\n",
       "  'entity': {'paper_id': '63a1751790e50fcafd1f48e7',\n",
       "   'paper_title': 'CiteBench: A Benchmark for Scientific Citation Text Generation',\n",
       "   'chunk_id': 1,\n",
       "   'chunk_text': '# 2 Related work\\n\\n# 2.1 Benchmarking\\nNLP benchmarks are unified dataset collections coupled with evaluation metrics and baselines that are used to systematically compare the performance of NLP systems for the targeted tasks in a standardized evaluation setup. Well-constructed benchmarks can boost progress in the corresponding research areas, such as SQuAD ( Rajpurkar et al. ,2016 ) for question answering, GLUE ( Wang et al. ,2018 ) for natural language understanding, KILT ( Petroni et al. ,2021 ) for knowledge-intensive tasks, GEM ( Gehrmann et al. ,2021 ,2022 ) for general-purpose text generation, and DynaBench (Kiela et al. ,2021 ) for dynamic benchmark data collection. C ITE BENCH is the first benchmark for the citation text generation task.\\n\\n# 2.2 Text generation for scientific documents\\nScientific documents are characterized by academic vocabulary and writing style, wide use of nonlinguistic elements like formulae, tables and figures, as well as structural elements like abstracts and citation anchors. Recent years have seen a rise in natural language generation for scientific text, including text simplification ( Luo et al. ,2022 ), summarization ( Qazvinian and Radev ,2008 ;Erera et al. ,2019 ;Cachola et al. ,2020 ), slides generation ( Sun et al. ,2021 ), table-to-text generation (Moosavi et al. ,2021 ), and citation text generation ( Li and Ouyang ,2022 ). Closely related to the task of citation text generation, Luu et al. (2021 )study how scientific papers can relate to each other, and how these relations can be expressed in text. Related to our work, Mao et al. (2022 ) propose a benchmark for scientific extreme summarization. Compared to extreme summarization, which amounts to generating short context-independent summaries of individual manuscripts, citation text generation focuses on context-dependent descriptions that relate the cited papers to the citing paper. 
In line with the recent efforts that address the lack of systematic automated evaluation of natural language generation in general ( Gehrmann et al. ,2021 ), our paper contributes the first unified benchmark for citation text generation in the scientific domain.\\n\\n# 2.3 Citation text generation\\nThe task of citation text generation was introduced in Hoang and Kan (2010 ), who generate a summary of related work specific to the citing paper. Since then, several task definitions and setups have been proposed (Table 1 ). Lu et al. (2020 ) cast the task as generating a multi-paragraph related work section given the abstracts of the citing paper and of the cited papers. AbuRa’ed et al. (2020 ) use the cited paper’s title and abstract to generate a citation sentence. Xing et al. (2020 ) use the abstract of the cited paper and include context before and after the citation sentence as the input, and produce the citation sentence as the output. A recent work by Chen et al. (2021 ) uses multiple cited abstracts as input to generate a related work paragraph. The great variability of the task definitions and setups in citation text generation prevents the study of citation text generation methods across datasets and evaluation setups. Unlike prior work that explores varying task settings, C ITE BENCH brings the diverging task definitions and datasets together in a unified setup. This allows us to compare citation text generation models across different datasets in a standardized manner using an extensive set of quantitative metrics, as well as novel automated qualitative metrics.  
\\n\\n<html><body><table><tr><td rowspan=\"2\">Dataset</td><td colspan=\"3\">Input</td><td rowspan=\"2\">Output Citation text (T)</td><td rowspan=\"2\">Datasources</td></tr><tr><td>Cited document (D*) SingleAbs MultiAbs</td><td>Abs</td><td>Citing context (C\") Text</td></tr><tr><td>ABURAED</td><td></td><td>Title √</td><td></td><td>Sent Para</td><td>Multiple</td></tr><tr><td>CHEN</td><td>√</td><td></td><td></td><td></td><td>S2ORCandDelve</td></tr><tr><td>LU</td><td></td><td></td><td></td><td></td><td>arXiv.org and MAG</td></tr><tr><td>XING</td><td></td><td></td><td></td><td></td><td>AAN</td></tr></table></body></html>  \\n\\nTable 1: Overview of datasets in C ITE BENCH . Single Abs $=$ Single abstract, i.e., one cited document per instance. Multi Abs $=$ Multiple abstracts, i.e., multiple cited documents per instance. Abs $=$ Abstract, i.e., a dataset contains the abstract of the citing paper. Text $=\\\\xi$ a dataset contains additional context from the citing paper. Sent $=$ generation target is a single sentence. Para $=$ generation target is a paragraph.   \\nTable 2: Datasets statistics. The validation set for XING has been created by us via randomly sampling $10\\\\%$ of the original training data. Across datasets, very few inputs contain more than 4,096 tokens, and few outputs are longer than 1,024 tokens. We exploit this property to speed up the evaluation in Section 3.3 .  
\\n\\n\\n<html><body><table><tr><td>Dataset</td><td>#Train</td><td>#Validation</td><td>#Test</td><td>Inputs>4,096tok.</td><td>Outputs>1,024tok.</td></tr><tr><td>ABURAED</td><td>15,000</td><td>1,384</td><td>219</td><td>0%</td><td>0%</td></tr><tr><td>LU</td><td>30,369</td><td>5,066</td><td>5,093</td><td><0.001%</td><td>0%</td></tr><tr><td>XING</td><td>77,086</td><td>8,566</td><td>400</td><td><0.001%</td><td><0.001%</td></tr><tr><td>CHEN -Delve</td><td>72,927</td><td>3,000</td><td>3,000</td><td><0.001%</td><td>0.004%</td></tr><tr><td>-S2ORC</td><td>126,655</td><td>5,000</td><td>5,000</td><td>0.017%</td><td><0.001%</td></tr><tr><td>Total</td><td>322,037</td><td>23,016</td><td>13,712</td><td>0.007%</td><td><0.001%</td></tr></table></body></html>\\n\\n# 3 Benchmark\\n\\n# 3.1 Task definition and datasets\\nWe formalize the task of citation text generation as follows: Given a set of $n$ (cited) target documents $\\\\{D_{1}^{t}...D_{n}^{t}\\\\}$ }, a (citing) sourc ocum $D^{s}$ set of $m$ citing document contexts $\\\\{C_{1}^{s}\\\\ ...C_{m}^{s}\\\\}\\\\ \\\\in$ } ∈ $D^{s}$ , generate a citation text $T^{\\\\prime}$ ′that is as close as possible to the original citation text $T\\\\in D^{s}$ ∈. This general definition allows wide variation in how the task is implemented. The cited document $D_{i}^{t}$ can be represented by the abstract $a^{t_{i}}$ , the concatenation of the title and the abstract, or even the full text of the paper. The context set $C^{s}$ covers sentences before and after the citation text in $D^{s}$ , as well as the abstract $a^{s}\\\\in D^{s}$ .  \\n\\nSuch general, open definition allows us to accommodate diverse approaches to the task within one framework (Table 1 ). To populate the benchmark, we select four datasets, focusing on the task design and domain variety: ABURAED (AbuRa’ed et al. ,2020 ), CHEN (Chen et al. ,2021 ), LU (Lu et al. ,2020 ), and XING (Xing et al. ,2020 ). 
Dataset transformation details are provided in Appendix A.1 .Table 2 shows the quantitative statistics, and Figure 2 provides data examples from each dataset. The CHEN dataset has two subsets – CHEN Delve and CHEN S2ORC – based on the data source; we use CHEN to denote the union of the two subsets. The datasets are distributed under varying licenses; we have obtained explicit permissions from the authors to use the data for research purposes in cases when licensing was underspecified (see Ethics statement).  \\n\\nWe note that the datasets included in the benchmark are not only structurally diverse, but also cover a wide range of domains, from medicine to computer science. In particular, ABURAED and XING exemplify citation text generation in the computational linguistics domain, CHEN Delve cover the computer science domain; LU and CHEN S2ORC span a wide range of domains represented on arxiv.org and in the S2ORC corpus, respectively, including biology, medicine and physics.',\n",
       "   'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}},\n",
       " {'id': 454919307595371694,\n",
       "  'distance': 0.5985562801361084,\n",
       "  'entity': {'paper_id': '63608e5090e50fcafdee1152',\n",
       "   'paper_title': 'Diverse Parallel Data Synthesis for Cross-Database Adaptation of   Text-to-SQL Parsers',\n",
       "   'chunk_id': 2,\n",
       "   'chunk_text': '# 2.2 Translating text of related queries\\nOur next goal is to translate the retrieved $x_{r}$ from being a text for SQL $q_{r}$ to a text $\\\\hat{x}$ for SQL $q$ ,where $q\\\\approx q_{r}$ structurally. However, we do not have a readily labeled dataset to learn a model that translates $x_{r}$ to $\\\\hat{x}$ while being consistent with $q$ . We therefore decompose this task into two steps: 1) A simpler task of masking schema-specific tokens in $x_{r}$ to get a template $x_{r}^{\\\\mathrm{masked}}$ and 2) A conditional text generation model that maps $(x_{r}^{\\\\mathrm{masked}},q)$ to the text $\\\\hat{x}$ consistent with $q$ , by filling the masked positions in $x_{r}^{\\\\mathrm{masked}}$ as per $q$ . We re-purpose $\\\\mathcal{D}_{\\\\mathrm{train}}$ to get indirect supervision for training the text generation model. We now present each step in detail.  \\n\\nfrom different schemas, we modify the tree-editdistance algorithm to ignore the schema names and the database values. The tree-edit-distance is further normalized by the size of the larger tree. We $\\\\{q_{r}\\\\}$ only consider the have a distance of less than $\\\\{q_{r},x_{r}\\\\}$ pairs where the SQLs 0 .1 w.r.t. the SQL $q$ . Within datasets like Spider that span hundreds of schemas, it is often possible to find several SQLs structurally similar to a given SQL $q$ . For example, in Spider we found that $76\\\\%$ of the train SQLs contain at least three zero-distance (structurally identical) neighbours in other schemas. In Figure 2 ,we present more detailed statistics.  \\n\\nMasking the retrieved text Converting the re$\\\\{x_{r}^{\\\\mathrm{masked}}\\\\}$ trieved text queries }is a critical component of R $\\\\{x_{r}\\\\}$ to masked templates EFILL ’s pipeline since irrelevant tokens like references to schema elements of the original database can potentially misguide the text generation module. 
Our initial approach was to mask tokens based on a match of text tokens with schema names and manually refined schema-to-text linked annotations as in Lei et al. (2020 ). However, this approach failed to mask all schema-related terms since their occurrences in natural text often differed significantly from schema names in the database. Table A7 shows some anecdotes. Consequently, we designed a simple frequency-based method of masking that is significantly more effective for our goal of using the masked text to just guide the diversity. For each word that appears in the text queries of the train set, we count the number of distinct databases where that word gets mentioned at least once. For example, common words like $\\\\{^{\\\\prime}{\\\\mathsf{s h o w}}\\\\}$ , ‘what’, ‘list’, ‘order’} get mentioned in more than $90\\\\%$ of the schemas, and domain specific words like {‘countries’, ‘government $^\\\\prime\\\\}$ occur only in text queries of a few schemas. We mask out all the words that appear in less than $50\\\\%$ of the schemas. The words to be masked are replaced by a special token MASK , and consecutive occurrences of MASK are collapsed into a single MASK token. Thus we obtain masked templates minimal information about their original schema. $\\\\{x_{r}^{\\\\mathrm{{masked}}}\\\\}$ }retaining Editing and Filling the masked text Given a masked template $x_{r}^{\\\\mathrm{masked}}$ , and an SQL query $q$ ,we wish to edit and fill the masked portions in $x_{r}^{\\\\mathrm{masked}}$ to make it consistent with the $\\\\operatorname{SQL}q$ . We utilize a conditional text generation model BART ( Lewis et al. ,2020 ) for this purpose. We $\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}_{\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}}$ like first convert $q$ into a pseudo-English representation $q^{\\\\mathrm{Eng}}$ similar to Shu et al. (2021 ), to make it easier for $\\\\boldsymbol{\\\\beta}$ to encode $q$ . 
In addition, we wrap the table, column, or value tokens in $q^{\\\\mathrm{Eng}}$ with special tokens to provide explicit signals to the text generation model $\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}_{\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}}$ that ch tokens are likely to appear in the output text ˆ. Next, we concatenate the tokens in $x_{r}^{\\\\mathrm{masked}}$ and $q^{\\\\mathrm{Eng}}$ for jointly encoding them as which is expected to be consistent with the an input to $\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}_{\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}}$ . The output of $\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}_{\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}}$ ’s decoder is text ${\\\\mathrm{SQL~}}q$ $\\\\hat{x}$ ,.  \\n\\nSince we do not have direct supervision to finetune purposing SQL-Text pairs $\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}_{\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}}$ for this task, we presen $\\\\mathcal{D}_{\\\\mathrm{train}}$ for fine-tuning $(q_{i},x_{i})$ from various schemas B.a method of re$\\\\mathcal{D}_{\\\\mathrm{train}}$ contains $s_{i}$ .A Naïve way to train $\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}_{\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}}$ is to provide $[x_{i}^{\\\\mathrm{{masked}}}|q_{i}^{\\\\mathrm{{Eng}}}]$ |,the concatenation of $x_{i}^{\\\\mathrm{masked}}$ and $q_{i}^{\\\\mathrm{Eng}}$ as an input to the encoder and maximize the likelihood of $x_{i}$ in the decoder’s output. This way the decoder of $\\\\boldsymbol{\\\\beta}$ learns to refill the masked tokens in $x_{i}^{\\\\mathrm{masked}}$ by attending to $q_{i}^{\\\\mathrm{Eng}}$ to recover $x_{i}$ in the output. While useful for learning to refill the masked positions, this from its use during inference in two ways: (i) For a Naïve method of training $\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}_{\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}}$ is mismatched given SQL $q$ , R EFILL might fail to retrieve a similar str cture neighbour of $q_{i}$ from $\\\\mathcal{D}_{\\\\mathrm{train}}$ . 
In such cases, SQL-to-Text generation mode to directly translate Bshould be capable of falling back to pure $q$ into $\\\\hat{x}$ . (ii) During inference, $x_{r}^{\\\\mathrm{masked}}$ and $q$ come from different schemas. However, during Naïve training, the masked text $x_{i}^{\\\\mathrm{masked}}$ and the SQL $q_{i}$ are derived from the same example $(q_{i},x_{i})$ . To Robust address these two limitations, we train manner as follows: (a) For a random one$\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}_{\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}}$ in a more third of t allowing using $q_{i}^{\\\\tilde{\\\\mathrm{Eng}}}$ B. (b) For another one-third, we pass only to learn the filling of the masked tokens train steps we train $\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}_{\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}}$ in the Naïve way, $q_{i}^{\\\\mathrm{Eng}}$ as an input and maximize the likelihood of $x_{i}$ .This ensures that model is capable of generating the text from the $q_{i}^{\\\\mathrm{Eng}}$ alone, if the templates $\\\\boldsymbol{x}_{i}^{\\\\mathrm{{n}}}$ asked are unavailable or noisy. (c) For the remaining onethird, we first retrieve an SQL-Text pair $(q_{j},x_{j})$ ,from a different schema such that the ${\\\\mathrm{SQL~}}q_{j}$ is structurally similar to $q_{i}$ (§ 2.1 ), and the word edit distance between the masked templates $x_{i}^{\\\\mathrm{masked}}$ and $x_{j}^{\\\\mathrm{masked}}$ is also small. We can then replace $x_{i}^{\\\\mathrm{{n}}}$ asked with $x_{j}^{\\\\mathrm{masked}}$ and encode $[x_{j}^{\\\\mathrm{masked}}|q_{i}^{\\\\mathrm{Eng}}]$ as an input to $\\\\boldsymbol{\\\\beta}$ and maximize the likelihood of $x_{i}$ in the decoder’s output. This step makes the training more consistent with the inference, as $x_{j}^{\\\\mathrm{masked}}$ and $q_{i}^{\\\\mathrm{Eng}}$ now come from different schemas. In $\\\\S\\\\,5.4$ , we justify training Robustly compared to Naïve training.',\n",
       "   'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2022_Empirical_Methods_in_Natural_Language_Processing_with_whole_text.db'}}]"
      ]
     },
     "execution_count": 62,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "rerank_papers"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 63,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'created': 1740412832,\n",
       " 'id': '20250225000031aad0981a9ebb4b26',\n",
       " 'request_id': '20250225000031aad0981a9ebb4b26',\n",
        " 'results': [{'document': '# Benchmarking and Improving Text-to-SQL Generation under Ambiguity\\nAdithya Bhaskar∗♢ Tushar Tomar∗♠ Ashutosh Sathe♠ Sunita Sarawagi♠ ♠IIT Bombay ♢Princeton University\\n\\n# Abstract\\nResearch in Text-to-SQL conversion has been largely benchmarked against datasets where each text query corresponds to one correct SQL. However, natural language queries over real-life databases frequently involve significant ambiguity about the intended SQL due to overlapping schema names and multiple confusing relationship paths. To bridge this gap, we develop a novel benchmark called AmbiQT with over 3000 examples where each text is interpretable as two plausible SQLs due to lexical and/or structural ambiguity.  \\n\\nWhen faced with ambiguity, an ideal top-$k$ decoder should generate all valid interpretations for possible disambiguation by the user ( Elgohary et al. ,2021 ;Zhong et al. ,2022 ). We evaluate several Text-to-SQL systems and decoding algorithms, including those employing state-of-the-art LLMs, and find them to be far from this ideal. The primary reason is that the prevalent beam search algorithm and its variants treat SQL queries as strings and produce unhelpful token-level diversity in the top-$k$ .  \\n\\nWe propose LogicalBeam, a new decoding algorithm that navigates the SQL logic space using a blend of plan-based template generation and constrained infilling. Counterfactually generated plans diversify templates, while infilling with a beam search that branches solely on schema names provides value diversity. LogicalBeam is up to $2.5\\\\times$ more effective than state-of-the-art models at generating all candidate SQLs in the top-$k$ ranked outputs. 
It also enhances the top-5 Exact and Execution Match Accuracies on SPIDER and KaggleDBQA.\\n\\n# 1 Introduction\\nResearch on Text-to-SQL generation has focused on scenarios where each natural language question is associated with one correct SQL ( Zelle and Mooney ,1996 ;Tang and Mooney ,2000 ;Scholak et al. ,2021a ;Wang et al. ,2020 ;Rubin and Berant ,2021 ;Xie et al. ,2022 ;Arcadinho et al. ,2022 ;Zeng et al. ,2022 ;Scholak et al. ,2021b ;Pourreza and Rafiei ,2023 ). Popular benchmarks driving such research, including WikiSQL ( Zhong et al. ,2018 ), SPIDER ( Yu et al. ,2018 ), its robust perturbations ( Chang et al. ,2023 ), and even “in-the-wild” benchmarks such as KaggleDBQA ( Lee et al. ,2021 ) and SEDE ( Hazoom et al. ,2021 ) all associate one correct SQL with text. Meanwhile, ambiguity is prevalent in real-life databases, particularly the ones obtained by integrating several data sources for data analysis, where a natural language interface is most in demand. The sources of ambiguity are several: inherent ambiguity of natural language, the user’s ignorance of table/column names, overlapping strings in column names, underspecified clauses, and confusion about whether aggregates are pre-computed, or if a join is required. Hazoom et al. (2021 ) observe that up to $87\\\\%$ of queries on the Stack Exchange database are underspecified, and Wang et al. (2022 ) mention that $11\\\\%$ of queries exhibited ambiguity in column names. Although prior work has brought up ambiguity, there is no publicly available benchmark with ambiguous queries, nor a comprehensive evaluation of systems under ambiguity.  \\n\\nOur first contribution is to bridge this gulf by developing a benchmark, AmbiQT, that tests performance under ambiguity in the context of current models. AmbiQT includes over 3000 examples, each associating a natural question on a database with two valid SQLs. 
Inspired by our experience with several real-world datasets, we target four types of ambiguity spanning both lexical (ambiguous column and table names) and structural (whether a join is necessary, and an aggregate is pre-computed) ambiguity. The benchmark is generated via a combination of ChatGPT ( OpenAI ,2022 ) based synonym generation and perturbation, and standard rule-based perturbation.  \\n\\nWhen faced with ambiguity, an ideal Text-to-SQL system should incorporate all valid alternatives in its top-$k$ SQL outputs, for user resolution. We show that present approaches, ranging from T5-3B ( Raffel et al. ,2019 ) to SOTA models, fail to generate all ambiguous outputs with any decoding strategy, including beam search and diversity-promoting sampling methods such as Nucleus ( Holtzman et al. ,2020 ) and Typical sampling ( Meister et al. ,2023 ). Most outputs are small lexical tweaks of the top choice, bringing about little meaningful diversity in SQL structures or schema alternatives. Even SOTA LLMs like ChatGPT ( OpenAI ,2022 ) suffer from this issue.  \\n\\nTo remedy the lack of diversity, we propose a new decoding algorithm, LogicalBeam, that allocates branching to explore underlying logical variants of the SQL rather than the string form. We catalog the errors of T5-3B ( Raffel et al. ,2019 ) on the SPIDER dev split and use our insights to encourage targeted types of diversity: the number of JOINs and selections, and table/column names.  \\n\\nOur main contributions are:   \\n•We develop AmbiQT, the first benchmark that tests performance under four types of ambiguity over $\\\\mathbf{3000+}$ examples.   \\n•We show that SOTA methods, including a finetuned T5-3B, RESDSQL ( Li et al. ,2023 ), OpenAI Codex, and ChatGPT, provide a poor representation of ambiguity despite their high accuracy on conventional benchmarks.   
\\n•We present LogicalBeam, a two-step algorithm that generates plan-based templates with counterfactually controlled plan diversity and fills them via a beam search that branches only on schema names.   \\n•We show that LogicalBeam consistently increases the fraction of time when all gold SQLs get generated in the Top-5 choices by $1.5-2.5\\\\times$ over the baselines across the board on AmbiQT.',\n",
       "   'index': 3,\n",
       "   'relevance_score': 22.21875},\n",
        "  {'document': '# 1 Introduction\\nText-to-SQL parsing, the task of translating a natural language question into a SQL query, has found wide applications in building natural language interfaces to databases and thus piqued significant research interest in recent years ( Wang et al. ,2020 ;Deng et al. ,2021 ;Yu et al. ,2021 ;Rajkumar et al. ,2022 ;Hongjin et al. ,2023 ;Ni et al. ,2023 ). To develop a text-to-SQL parser, a prevalent approach is to collect labeled data and train a model via supervised learning ( Shaw et al. ,2021 ;Scholak et al. ,2021 ). While effective, this approach necessitates a considerable amount of training data, which is costly to obtain because annotating SQL queries requires programming expertise. Consequently, the lack of data hinders real-life applications of state-of-the-art parsers, especially on novel databases and unseen domains ( Suhr et al. ,2020 ).  \\n\\nAs an alternative to supervised learning, in-context learning ( Brown et al. ,2020 ), an emergent capability of large language models (LLMs), alleviates the need for large-scale data. With only a few examples, in-context learning enables LLMs to demonstrate performance comparable to or even better than fully supervised models on many NLP tasks, such as question answering, machine translation, and natural language inference ( Chowdhery et al. ,2022 ;Kojima et al. ,2022 ;Wei et al. ,2022b ,a ;Brohan et al. ,2023 ). When applied to text-to-SQL parsing, in-context learning also shows encouraging results, but it still lags behind supervised approaches ( Rajkumar et al. ,2022 ;Chang et al. ,2023 ;Liu et al. ,2023a ).  \\n\\nWe hypothesize that the under-performance is because text-to-SQL parsing requires complex, multi-step reasoning. Even for a seemingly simple question, such as “What is the ID of Kyle,” a model has to ground it to database schemas, infer the relational algebra among schema items, and construct syntactically correct SQL clauses. 
Recently, chain-of-thought (CoT) style prompting methods ( Wei et al. ,2022b ;Zhou et al. ,2023 ) have been proposed and have shown promising multi-step reasoning capabilities. To enhance LLMs’ reasoning ability, we systematically explore CoT style prompting for text-to-SQL parsing. Specifically, we seek to answer two research questions: (1) Which prompting style is better, generating all reasoning steps in a single pass, or iterative prompting and problem solving? (2) Does including more detailed information in the reasoning steps lead to better results for text-to-SQL parsing?  \\n\\nTo address the questions, we adopt two widely used prompting methods for text-to-SQL parsing. As the first method, we apply chain-of-thought prompting (Wei et al. ,2022b ) by drawing an analogy between its problem-solving process and the execution procedure of a SQL query. Referring to the logical execution order of SQL clauses (Narechania et al. ,2021 ), we compose the intermediate execution steps in natural language and prompt LLMs to derive them before generating the SQL query. As the second method, we follow Zhou et al. (2023 ) to apply least-to-most prompting in two stages: (1) reduction: generate a series of sub-questions from the original question and (2) solving: iteratively translate each sub-question into its corresponding SQL query, with the original question as the last sub-question. However, in our case study, we find that directly applying chain-of-thought and least-to-most promptings leads to error propagation issues. Their rationales contain highly demonstration-specific information and can easily mislead the reasoning process. Furthermore, the least-to-most prompting technique leads to additional computational and time cost due to the multiple stages of reduction and solving.  \\n\\n  \\nFigure 1: Different prompting methods with multi-step reasoning for text-to-SQL parsing: (a) Chain-of-Thought, (b) Least-to-Most, and our proposed (c) QDecomp , and (d) QDecomp $^+$ InterCOL .  
\\n\\nTherefore, we propose a new method called question-decomposition prompting (QDecomp ). Similar to chain-of-thought prompting, QDecomp generates a sequence of reasoning steps and then the SQL query in one pass. However, we modify the steps to instruct LLMs to decompose the original complex question, akin to the problem reduction stage in least-to-most prompting. Also, to help LLMs ground database schemas, we design a variant of question decomposition prompting (QDecomp $^+$ InterCOL ) by including the table and column names involved in each sub-question. We conduct comprehensive evaluations on two text-to-SQL datasets, Spider ( Yu et al. ,2018 ) and Spider Realistic ( Deng et al. ,2021 ). Our proposed prompting methods substantially outperform existing prompting methods by 2.4 and 1.5 point absolute gains on the development set of Spider and Spider Realistic, respectively. The results suggest that iterative prompting, which is costly due to the additional computational resources it requires, as in least-to-most prompting, may not be necessary (RQ1). In addition, our analysis shows the proposed question decomposition prompting methods, which do not instruct LLMs to generate detailed reasoning steps, reduce the chance of error propagation when generating the reasoning steps (RQ2). Finally, we evaluate the robustness of our proposed prompting methods by varying the number, selection, and format of in-context examples and show that they can achieve consistently strong performance across different settings.',\n",
       "   'index': 2,\n",
       "   'relevance_score': 20.953125},\n",
       "  {'document': '# 2 Related Work\\n\\n# 2.1 Text-to-SQL Generation\\nNatural language interfaces have long been recognized as a way to expand access to databases ( Hendrix et al. ,1978 ).The construction of several large text-to-SQL datasets, such as WikiSQL ( Zhong et al. ,2017 ) and Spider ( Yu et al. ,2018 ), has enabled the adoption of deep learning models in this task, achieving unprecedented performance in recent years ( Rubin and Berant ,2021 ;Wang et al. ,2020a ;Scholak et al. ,2021 ;Yu et al. ,2020 ;Hwang et al. ,2019 ). Our technique is based on the recent success of neural text-to-SQL models. Unlike existing models that perform end-to-end SQL generation, we propose a new interaction mechanism for users to validate and refine generated queries through step-by-step explanations.  \\n\\nAs the first step to demonstrate the feasibility of our approach, we focus on single-turn SQL generation ( Yu et al. ,2018 ) in this work. There has also been recent work that supports multi-turn SQL generation ( Yu et al. ,2019a ,b;Guo et al. ,2021 ), where a sequence of interdependent queries are expressed in multiple utterances in a dialog. Models designed for multi-turn SQL generation typically need to reason about the dialog context and effectively encode the historical queries ( Wang et al. ,2021 ;Hui et al. ,2021 ;Zhang et al. ,2019 ;Cai and Wan ,2020 ;Wang et al. ,2020b ). Our approach can be extended to support multi-turn SQL generation by initiating separate refinement sessions for individual queries while incorporating the contextual information of previous queries into explanation generation and text-to-clause generation.\\n\\n# 2.2 Interactive Semantic Parsing for SQL\\nRecently, there has been a growing interest in interactive approaches that elicit user feedback to guide SQL generation. Iyer et al. (2017 ) proposed to allow users to flag incorrect queries and continuously retrain the model. Both DIY ( Narechania et al. 
,2021 ) and NaLIR ( Li and Jagadish ,2014a ,b ) enable users to select alternative values or subexpressions to fix an incorrect SQL query. PIIA ( Li et al. ,2020 ), MISP ( Yao et al. ,2019 ), and DialSQL ( Gur et al. ,2018 ) proactively ask for user feedback via multiple-choice questions. A common limitation of these methods is that they only solicit feedback in constrained forms, hindering their flexibility and effectiveness in addressing the variability of SQL errors. In contrast, our approach allows more flexible feedback through direct edits to the explanations generated by the model.  \\n\\nThe only work that supports open-ended user feedback in SQL generation is NL-EDIT ( Elgohary et al. ,2021 ). NL-EDIT is trained on SPLASH ( Elgohary et al. ,2020 ), a dataset of SQL errors and user feedback utterances. Given an incorrect query, NL-EDIT allows users to provide a clarification utterance. Based on the utterance, the model generates a sequence of edits to the SQL query. Incorporating feedback expressed in a completely free-text utterance is challenging for two reasons:  \\n\\n  \\nFigure 2: An Overview of Interactive SQL Generation and Refinement with Editable Step-by-Step Explanations  \\n\\n(1) the model needs to infer which part of the SQL query to fix; (2) the model needs to determine what changes are being requested. In contrast, STEPS asks users to directly edit an NL explanation and make corrections to the explanation. Comparing the initial explanation with the user-corrected explanation makes it easier to locate the part of a SQL query that needs to be changed and infer what change to make.  \\n\\nThe idea of SQL decomposition is similar to recent work that decomposes a user question to sub-questions on SPARQL ( Mo et al. ,2022 ). Their approach requires a crowd-sourced dataset to train a question decomposition model. In contrast, our rule-based method generates step-by-step explanations without the need for training a model. 
This also allows our system to map each entity in the explanation to the corresponding SQL element, making it easier for SQL correction (Sec. 3.2 ).\\n\\n# 2.3 Explaining SQL Queries in NL\\nOur approach is also related to prior work that generates NL explanations for SQL queries. Simitsis and Ioannidis (2009 ) argued that databases should “talk back” in human language so that users can verify results. Kokkalis et al. (2012 ) and Koutrika et al. (2010 ) used a graph-based SQL translation approach, where each query is represented as a graph and the explanation is generated by traversing the graph. Elgohary et al. (2021 ,2020 ) employed a template-based explanation approach, where they manually curated 57 templates for explanation generation. These approaches have limited capability to handle arbitrary SQL queries. To address this limitation, we propose a rule-based method to first explain terminal tokens (e.g., operators, keywords) and gradually compose them into a complete explanation based on the derivation rules in the SQL grammar. Another key difference is that none of the existing approaches supports editable explanations for SQL correction, which is a key feature to solicit user feedback in our approach.',\n",
       "   'index': 0,\n",
       "   'relevance_score': 20.359375},\n",
        "  {'document': '# Interactive Text-to-SQL Generation via Editable Step-by-Step Explanations\\nYuan Tian 1 , Zheng Zhang 2 , Zheng Ning 2 , Toby Jia-Jun Li 2 , Jonathan K. Kummerfeld 3 , and Tianyi Zhang 1\\nPurdue University 1 , University of Notre Dame 2 , The University of Sydney 3\\n\\n# Abstract\\nRelational databases play an important role in business, science, and more. However, many users cannot fully unleash the analytical power of relational databases, because they are not familiar with database languages such as SQL. Many techniques have been proposed to automatically generate SQL from natural language, but they suffer from two issues: (1) they still make many mistakes, particularly for complex queries, and (2) they do not provide a flexible way for non-expert users to validate and refine incorrect queries. To address these issues, we introduce a new interaction mechanism that allows users to directly edit a step-by-step explanation of a query to fix errors. Our experiments on multiple datasets, as well as a user study with 24 participants, demonstrate that our approach can achieve better performance than multiple SOTA approaches. Our code and datasets are available at https://github.com/magic-YuanTian/STEPS .\\n\\n# 1 Introduction\\nNatural language interfaces significantly lower the barrier to accessing databases and performing data analytics tasks for users who are not familiar with database query languages. Many approaches have been proposed for generating SQL queries from natural language ( Popescu et al. ,2004 ;Giordani and Moschitti ,2012 ;Rubin and Berant ,2021 ;Scholak et al. ,2021 ;Zhao et al. ,2022 ). Using recent large language models, systems have reached $86.6\\\\%$ execution accuracy ( Gao et al. ,2023 ) on the Spider benchmark ( Yu et al. ,2018 ).  \\n\\nHowever, the rate of improvement has slowed, with a gain of only $10\\\\%$ since mid-2021. 
This is partly due to the inherent ambiguity of natural language and the complex structure of SQL queries (e.g., nested or joined queries). Thus, it is challenging to generate a fully correct query in one step, especially for complex tasks ( Yao et al. ,2019 ).  \\n\\n  \\nFigure 1: Refining a SQL query by directly editing a step-by-step explanation.  \\n\\nThere has been growing interest in developing “human-in-the-loop” approaches that elicit user feedback to guide SQL generation. However, most approaches only support feedback in constrained forms, e.g., answering multiple-choice questions (MISP, PIIA, DialSQL Yao et al. ,2019 ;Li et al. ,2020 ;Gur et al. ,2018 ), changing SQL elements in a drop-down menu (DIY, Narechania et al. ,2021 ), etc. Such constrained feedback is not sufficient to fix many complex errors in real-world SQL tasks. One exception is NL-EDIT ( Elgohary et al. ,2021 ), which allows users to provide feedback as new utterances. However, since the feedback is open-ended, interpreting it can be just as hard as processing the original request.  \\n\\nIn this paper, we seek to strike a balance between constrained feedback and open-ended feedback by proposing a new interaction mechanism: editable step-by-step explanations. Fig. 1 illustrates our idea. This mechanism consists of three core components: (a) a text-to-SQL model, (b) an explanation generation method, and (c) a SQL correction model. Our key insight is that using a step-by-step explanation as the basis to suggest fixes allows users to precisely specify where the error is and how to fix it via direct edits. This not only saves users’ time but also makes it easier for the model to locate the error and apply fixes.  \\n\\nBased on this idea, we implemented an interactive SQL generation and refinement system called STEPS. STEPS adopts a rule-based method to generate step-by-step explanations and uses a hybrid rule/neural method to convert a user-corrected explanation back to a SQL query.  
\\n\\nAn evaluation with a simulated user on Spider ( Yu et al. ,2018 ) shows that STEPS can achieve $97.9\\\\%$ exact set match accuracy, outperforming prior interactive text-to-SQL systems (MISP, DIY, and NL-EDIT) by $33.5\\\\%$ ,$33.2\\\\%$ , and $31.3\\\\%$ respectively. We further evaluate STEPS on other datasets, including Spider-DK ( Gan et al. ,2021b ), Spider-Syn ( Gan et al. ,2021a ), and WikiSQL ( Zhong et al. ,2017 ). STEPS consistently achieves at least $96\\\\%$ exact set match accuracy and execution accuracy across all datasets.  \\n\\nFinally, we conducted a within-subjects user study with 24 real users. We found that within the same amount of time, STEPS helped users complete almost 2X and 4X more tasks correctly than DIY and MISP respectively, with significantly higher self-reported confidence and lower mental load.  \\n\\nThis work makes the following contributions: (1) we propose a new interaction mechanism for the text-to-SQL task; (2) we develop an interactive text-to-SQL system based on the new interaction mechanism and a new training method for SQL correction; (3) we conduct a comprehensive evaluation with both simulated and real users and demonstrate its effectiveness over state-of-the-art interactive systems. Our dataset and code are publicly available.',\n",
       "   'index': 4,\n",
       "   'relevance_score': 20.25},\n",
        "  {'document': '# Evaluating Cross-Domain Text-to-SQL Models and Benchmarks\\nMohammadreza Pourreza, University of Alberta\\nDavood Rafiei, University of Alberta\\n\\n# Abstract\\nText-to-SQL benchmarks play a crucial role in evaluating the progress made in the field and the ranking of different models. However, accurately matching a model-generated SQL query to a reference SQL query in a benchmark fails for various reasons, such as underspecified natural language queries, inherent assumptions in both model-generated and reference queries, and the non-deterministic nature of SQL output under certain conditions. In this paper, we conduct an extensive study of several prominent cross-domain text-to-SQL benchmarks and reevaluate some of the top-performing models within these benchmarks, by both manually evaluating the SQL queries and rewriting them in equivalent expressions. Our evaluation reveals that attaining a perfect performance on these benchmarks is unfeasible due to the multiple interpretations that can be derived from the provided samples. Furthermore, we find that the true performance of the models is underestimated and their relative performance changes after a re-evaluation. Most notably, our evaluation reveals a surprising discovery: a recent GPT4-based model surpasses the gold standard reference queries in the Spider benchmark in our human evaluation. This finding highlights the importance of interpreting benchmark evaluations cautiously, while also acknowledging the critical role of additional independent evaluations in driving advancements in the field.\\n\\n# 1 Introduction\\nSignificant progress has been made in translating natural language text to SQL statements over the past few years. The execution accuracy on the hold-out test of Spider ( Yu et al. ,2018b ), a large-scale cross-domain text-to-SQL benchmark, has improved from 53.5 in May, 2020 ( Zhong et al. ,2020b ) to 85.3 in March, 2023 ( Pourreza and Rafiei ,2023 ). The exact set match accuracy, without considering database cell values, on the same benchmark and over the same period has improved from 65.6 ( Wang et al. ,2019 ) to 74.0 ( Li et al. ,2023a ). Measuring such progress is hinged on reliable benchmarks and evaluation metrics.  
\\n\\nTwo standard metrics for evaluating the performance in this domain have been exact set match accuracy and execution accuracy . The former measures if a model-generated SQL query lexically matches a reference SQL query, whereas the latter measures if a model-generated SQL query produces the same output as a reference query $(\\\\S\\\\,4)$ .  \\n\\nFigure 1: An example question with two correct SQL queries, each corresponding to a different interpretation. There is an ambiguity in schema mapping, with two different database columns describing the name.\\n\\nConsider the example in Figure 1 , which consists of a model-generated query (shown on the left) and a reference query (shown on the right). Both SQL queries return the id and name of makers that have more than 3 models. However, the model-generated query returns the column FullName, which gives the full name of a maker (e.g., “Ford Motor Company”), whereas the reference query given in the benchmark returns the column Maker, which gives the short common name of a maker (e.g., “Ford”). The model-generated query fails an exact set match since the column names in the select clause are different. The query outputs are also different and the model-generated query fails the execution accuracy as well. The natural language utterance is not specific about the type of name to be returned, and a human evaluator tags both queries correct.  \\n\\nAs the models improve, these types of failures make up most of the errors, and the performance metrics become less relevant, as shown in our evaluation. 
In particular, we re-evaluated all development set queries of Spider on which two top-performing models, one using a fine-tuned model ( Scholak et al. ,2021 ) and another using a large language model ( Pourreza and Rafiei ,2023 ), failed. We found out that $25\\\\%$ of the queries generated by one model and $87\\\\%$ of the queries generated by the other model were indeed correct but were wrongly evaluated by the benchmark. For the same set of queries, our re-evaluation of the ground truth queries found $33\\\\%$ of the SQL queries incorrect, which was more than the number of incorrect queries generated by one of the models. This evaluation places one of the models above the ground truth queries in this re-evaluation.  \\n\\nWe further re-evaluated two well-known benchmarks, Spider ( Yu et al. ,2018b ) and Spider-DK ( Gan et al. ,2021b ), and a newly released benchmark, BIRD ( Li et al. ,2023b ), and found similar problems in all three benchmarks that affect the evaluation. Our evaluation reveals that $18\\\\%$ of the queries in the train sets and $20\\\\%{-23\\\\%}$ of the queries in the dev sets of these benchmarks are subject to ties in the dataset, where the output depends on which one of the tied rows is returned. This means a model-generated query will be deemed incorrect if it does not return the same row, among tied rows, as the ground truth query. This can severely impact the evaluation, especially when there is a tight race among models. Considering these observations, it is crucial to emphasize the significance of additional independent evaluations when utilizing these benchmarks. To enhance the evaluation process further, a potential solution is to incorporate multiple SQL queries as the ground truth, each representing a different interpretation that may be valid.  \\n\\nOur objective in this paper is to provide a comprehensive evaluation of existing Text-to-SQL benchmarks, underscoring the inherent issues they possess. 
We refrain from introducing a new dataset due to several considerations. First, addressing the identified issues by updating these benchmarks requires considerable human effort. Additionally, benchmarks in the Text-to-SQL domain, like Spider and BIRD, have holdout test sets used for official leaderboards and comparisons of text-to-SQL methodologies. We only have access to the development and training sets of these benchmarks, which limits our capability to alter the test sets. As a result, making changes only to the development and training sets would not completely address the benchmark’s inherent problems, given that final performance is gauged using the problematic test sets.',\n",
       "   'index': 5,\n",
       "   'relevance_score': 19.828125},\n",
        "  {'document': '# 2 Background and Related Work\\nA Text-to-SQL model takes as input a question expressed as a natural language text x, and a database schema s comprising table and column names, and outputs an SQL program y which can be executed against the database to answer the user’s question. Figure 1 shows an example. The training data for the task comprises (text, schema, SQL) triplets spanning multiple distinct databases.  \\n\\n  \\nFigure 1: A Text-to-SQL system converts a user question to an SQL query, conditioned on the database schema and/or content.  \\n\\nBenchmarks. Popular benchmarks for the Text-to-SQL task are WikiSQL ( Zhong et al. ,2018 ) and SPIDER ( Yu et al. ,2018 ). A few others have been proposed recently to capture real-world scenarios, such as KaggleDBQA ( Lee et al. ,2021 ), SEDE ( Hazoom et al. ,2021 ), and EHRSQL ( Lee et al. ,2022 ). They all attach one SQL per text, though some of them mention the problem of ambiguity in real-world datasets. Dr. SPIDER ( Chang et al. ,2023 ), designed to test the robustness of existing models, perturbs either the text or schema of SPIDER but still assigns one SQL per text.  \\n\\nAmbiguity in SQL. Although ambiguity has been studied in other fields of NLP ( Pilault et al. ,2023 ;Li et al. ,2022 ;Futeral et al. ,2022 ), it has been unexplored in the context of semantic parsing. Ambiguity in SQL arising from related column names is discussed in ( Wang et al. ,2022 ), but they only consider column ambiguity. Their method of recognizing ambiguous queries depends on labeling words of the text and does not generalize to other kinds of ambiguity. To the best of our knowledge, AmbiQT represents the first open benchmark for testing coverage of ambiguous alternatives.  \\n\\nDiverse Decoding. Prior work has critiqued the lack of meaningful diversity in beam-search outputs ( Finkel et al. ,2006 ;Gimpel et al. ,2013 ;Li et al. ,2016 ;Li and Jurafsky ,2016 ). In response, many fixes have been proposed. 
Some proposals attempt to restrict the tokens sampled, using strategies like Nucleus sampling ( Holtzman et al. ,2020 ), Truncated Sampling ( Hewitt et al. ,2022 ), and Typical Sampling ( Meister et al. ,2023 ), while some rely on Template-Based decoding ( Wiseman et al. ,2018 ;Zhang et al. ,2022 ;Fu et al. ,2023 ;Elgohary et al. ,2020 ;Awasthi et al. ,2022 ). A third approach is to generate a prefix with high diversity first, then generate the rest of the sentence with lower diversity. Narayan et al. (2022 ) follow this recipe but focus on incorporating diverse entity orders in text summarization.  \\n\\n<html><body><table><tr><td rowspan=\"2\">Kind of ambiguity</td><td rowspan=\"2\">Count</td><td colspan=\"3\">Example</td></tr><tr><td>Question Text</td><td>SQL #1</td><td>SQL #2</td></tr><tr><td>Column Ambiguity (C)</td><td>1240</td><td>List the ids of all students.</td><td>SELECT roll_number FROM students</td><td>SELECT admission_number FROM students</td></tr><tr><td>Table Ambiguity (T)</td><td>1417</td><td>How many singers do we have?</td><td>SELECT COUNT(*) FROM artist</td><td>SELECT COUNT(*) FROM performer</td></tr><tr><td>Join Ambiguity (J)</td><td>288</td><td>What are the makers and models?</td><td>SELECT maker, model FROM model</td><td>SELECT t2.maker, t1.model FROM model AS t1 JOIN model_maker AS t2 ON t1.model_id = t2.model_id</td></tr><tr><td>Precomputed Aggregates (P)</td><td>101</td><td>Find the average weight for each pet type.</td><td>SELECT AVG(weight), pettype FROM pets GROUP BY pettype</td><td>SELECT avg_weight, pettype FROM pets_weight</td></tr></table></body></html>\\n\\nTable 1: The AmbiQT benchmark. For each question, we list two valid SQL queries as per the schema. The schema is not shown here, but the ambiguity in it can be inferred based on the two SQL queries.\\n\\n# 3 AmbiQT: A Benchmark of Ambiguous Text-to-SQL Conversion\\nAmbiQT is constructed so that each text query has two distinct valid SQL interpretations. 
Motivated by our experience working with real-life databases, we designed AmbiQT to encompass four types of ambiguity. Each entry is designed so that both alternatives have a similar relevance to the question, and a well-calibrated decoding method is expected to rank them close by in their outputs.  \\n\\nWe create AmbiQT by modifying the SPIDER (Yu et al. ,2018 ) dataset, and use ChatGPT ( OpenAI ,2022 ) to aid with the creation. In each case, we modify the schema instead of the text as that provides greater control over the modification process. We explain the kinds of ambiguity in AmbiQT below and portray examples of each in Table 1 .  \\n\\nColumn Ambiguity (C). Unlike the SPIDER benchmark where column names usually appear verbatim in the question text (like born state for the column born_state ), when users unaware of the schema pose a natural question, they introduce column ambiguity ( Wang et al. ,2022 ). For example, “ What is the capacity of O2 Arena? ” could be ambiguous if the schema has separate columns for standing and seating capacity. Likewise, a query on the number of under-nourished children is ambiguous if we have different columns for “under-weight children” and “stunted growth in children”.  \\n\\nTo simulate column ambiguity, for each text $\\\\mathbf{x}$ ,schema s, and SQL yin SPIDER, we prompt ChatGPT to generate two synonyms for each column name of sin a one-shot manner. Appendix A furnishes more details of the prompt. We then modify sby replacing $c$ with two columns $c_{1},c_{2}$ , and we use yto generate two queries $\\\\mathbf{y}_{1},\\\\mathbf{y}_{2}$ where all mentions of $c$ are replaced with $c_{1}$ in $\\\\mathbf{y}_{1}$ and with $c_{2}$ in $\\\\mathbf{y}_{2}$ . An example appears in the first row of Table 1 . We do not reuse $c$ because the SPIDER dataset often contains column names verbatim in the question, and that would violate our attempt at keeping the two options at similar relevance levels. 
We modify one column at a time and generate up to 3 examples from each original entry.  \\n\\nTable Ambiguity (T). Table name ambiguity is common in databases obtained by integrating multiple data sources, as in web tables ( Cafarella et al. ,2008 ;Pimplikar and Sarawagi ,2012 ). Here again, we prompt ChatGPT to generate two alternate names for each table. We then modify one SQL yto generate two candidates ${\\\\bf y}_{1},{\\\\bf y}_{2}$ as shown in Table 1 .  \\n\\nJoin Ambiguity (J). In production databases, a logical table is often vertically partitioned across several tables for efficient clustered access ( Stonebraker et al. ,2019 ). Column names overlapping across tables leads to Join Ambiguity. Suppose we have two tables: (1) person with columns id, name, email_address , and (2) person_details with columns id, postal_address, photo .A question asking for a person’s name and address is ambiguous on whether a JOIN with the person_details is necessary. We expose such ambiguity by modifying the schema as follows.  \\n\\nConsider a $(\\\\mathbf{x},\\\\mathbf{s},\\\\mathbf{y})$ triplet. Suppose yinvolves selecting two or more columns $c_{1},c_{2},\\\\ldots.$ not necessarily in the same order, from a table $t$ . Suppose further that $c_{1}$ is not a primary key of $t$ . We create a table called $t\\\\_c_{1}$ that includes just the primary key $p k_{t}$ of $t$ , and $c_{1}$ . The first alternative $\\\\mathbf{y}_{1}$ is $\\\\mathbf{y}$ and the second alternative $\\\\mathbf{y}_{2}$ uses a join over $t$ and $t\\\\_c_{1}$ , with everything else staying the same as y.  \\n\\n  \\nFigure 2: Beam Search works well when targeting only one output, but leads to superficial diversity, for example via different grouping and erroneous variants of column names.  \\n\\nPrecomputed Aggregates $(\\\\mathbf{P})$ :. This ambiguity is particularly common in data warehouses such as Data Commons which pre-aggregate certain variables. 
For instance, the “ total rice production ” of a state might refer to the column rice_production of state rather than a sum over it. Text-toSQL models have a bias toward introducing a sum()...group-by clause every time total appears in the text. The non-aggregated alternative is usually missing in the top$k$ options. We incorporate this ambiguity as follows.  \\n\\nFor each $(\\\\mathbf{x},\\\\mathbf{s},\\\\mathbf{y})$ , where $\\\\mathbf{y}$ has at least one aggregate, we construct a new table $t^{\\\\prime}$ . For each aggregate $\\\\boldsymbol{\\\\mathcal{A}}$ over column $c$ in y, we add to $t^{\\\\prime}$ the columns and the columns grouped by in $A^{\\\\prime}\\\\_c$ for all $\\\\mathcal{A}^{\\\\prime}\\\\,\\\\in\\\\,\\\\{\\\\mathsf{a v g},\\\\mathsf{s u m},\\\\mathsf{m i n},\\\\mathsf{m a x}\\\\}$ y. For count $(\\\\star)$ ,we add a column called number . We get two gold queries, the original yand a second with the groupby replaced by a direct SELECT on $t^{\\\\prime}$ as shown in the example in Table 1 . We also support aggregates across multiple tables but skip the details here.',\n",
       "   'index': 1,\n",
       "   'relevance_score': 18.65625},\n",
       "  {'document': '# 2.2 Translating text of related queries\\nOur next goal is to translate the retrieved $x_{r}$ from being a text for SQL $q_{r}$ to a text $\\\\hat{x}$ for SQL $q$ ,where $q\\\\approx q_{r}$ structurally. However, we do not have a readily labeled dataset to learn a model that translates $x_{r}$ to $\\\\hat{x}$ while being consistent with $q$ . We therefore decompose this task into two steps: 1) A simpler task of masking schema-specific tokens in $x_{r}$ to get a template $x_{r}^{\\\\mathrm{masked}}$ and 2) A conditional text generation model that maps $(x_{r}^{\\\\mathrm{masked}},q)$ to the text $\\\\hat{x}$ consistent with $q$ , by filling the masked positions in $x_{r}^{\\\\mathrm{masked}}$ as per $q$ . We re-purpose $\\\\mathcal{D}_{\\\\mathrm{train}}$ to get indirect supervision for training the text generation model. We now present each step in detail.  \\n\\nfrom different schemas, we modify the tree-editdistance algorithm to ignore the schema names and the database values. The tree-edit-distance is further normalized by the size of the larger tree. We $\\\\{q_{r}\\\\}$ only consider the have a distance of less than $\\\\{q_{r},x_{r}\\\\}$ pairs where the SQLs 0 .1 w.r.t. the SQL $q$ . Within datasets like Spider that span hundreds of schemas, it is often possible to find several SQLs structurally similar to a given SQL $q$ . For example, in Spider we found that $76\\\\%$ of the train SQLs contain at least three zero-distance (structurally identical) neighbours in other schemas. In Figure 2 ,we present more detailed statistics.  \\n\\nMasking the retrieved text Converting the re$\\\\{x_{r}^{\\\\mathrm{masked}}\\\\}$ trieved text queries }is a critical component of R $\\\\{x_{r}\\\\}$ to masked templates EFILL ’s pipeline since irrelevant tokens like references to schema elements of the original database can potentially misguide the text generation module. 
Our initial approach was to mask tokens based on a match of text tokens with schema names and manually refined schema-to-text linked annotations as in Lei et al. (2020 ). However, this approach failed to mask all schema-related terms since their occurrences in natural text often differed significantly from schema names in the database. Table A7 shows some anecdotes. Consequently, we designed a simple frequency-based method of masking that is significantly more effective for our goal of using the masked text to just guide the diversity. For each word that appears in the text queries of the train set, we count the number of distinct databases where that word gets mentioned at least once. For example, common words like $\\\\{^{\\\\prime}{\\\\mathsf{s h o w}}\\\\}$ , ‘what’, ‘list’, ‘order’} get mentioned in more than $90\\\\%$ of the schemas, and domain specific words like {‘countries’, ‘government $^\\\\prime\\\\}$ occur only in text queries of a few schemas. We mask out all the words that appear in less than $50\\\\%$ of the schemas. The words to be masked are replaced by a special token MASK , and consecutive occurrences of MASK are collapsed into a single MASK token. Thus we obtain masked templates minimal information about their original schema. $\\\\{x_{r}^{\\\\mathrm{{masked}}}\\\\}$ }retaining Editing and Filling the masked text Given a masked template $x_{r}^{\\\\mathrm{masked}}$ , and an SQL query $q$ ,we wish to edit and fill the masked portions in $x_{r}^{\\\\mathrm{masked}}$ to make it consistent with the $\\\\operatorname{SQL}q$ . We utilize a conditional text generation model BART ( Lewis et al. ,2020 ) for this purpose. We $\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}_{\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}}$ like first convert $q$ into a pseudo-English representation $q^{\\\\mathrm{Eng}}$ similar to Shu et al. (2021 ), to make it easier for $\\\\boldsymbol{\\\\beta}$ to encode $q$ . 
In addition, we wrap the table, column, or value tokens in $q^{\\\\mathrm{Eng}}$ with special tokens to provide explicit signals to the text generation model $\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}_{\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}}$ that ch tokens are likely to appear in the output text ˆ. Next, we concatenate the tokens in $x_{r}^{\\\\mathrm{masked}}$ and $q^{\\\\mathrm{Eng}}$ for jointly encoding them as which is expected to be consistent with the an input to $\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}_{\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}}$ . The output of $\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}_{\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}}$ ’s decoder is text ${\\\\mathrm{SQL~}}q$ $\\\\hat{x}$ ,.  \\n\\nSince we do not have direct supervision to finetune purposing SQL-Text pairs $\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}_{\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}}$ for this task, we presen $\\\\mathcal{D}_{\\\\mathrm{train}}$ for fine-tuning $(q_{i},x_{i})$ from various schemas B.a method of re$\\\\mathcal{D}_{\\\\mathrm{train}}$ contains $s_{i}$ .A Naïve way to train $\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}_{\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}}$ is to provide $[x_{i}^{\\\\mathrm{{masked}}}|q_{i}^{\\\\mathrm{{Eng}}}]$ |,the concatenation of $x_{i}^{\\\\mathrm{masked}}$ and $q_{i}^{\\\\mathrm{Eng}}$ as an input to the encoder and maximize the likelihood of $x_{i}$ in the decoder’s output. This way the decoder of $\\\\boldsymbol{\\\\beta}$ learns to refill the masked tokens in $x_{i}^{\\\\mathrm{masked}}$ by attending to $q_{i}^{\\\\mathrm{Eng}}$ to recover $x_{i}$ in the output. While useful for learning to refill the masked positions, this from its use during inference in two ways: (i) For a Naïve method of training $\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}_{\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}}$ is mismatched given SQL $q$ , R EFILL might fail to retrieve a similar str cture neighbour of $q_{i}$ from $\\\\mathcal{D}_{\\\\mathrm{train}}$ . 
In such cases, SQL-to-Text generation mode to directly translate Bshould be capable of falling back to pure $q$ into $\\\\hat{x}$ . (ii) During inference, $x_{r}^{\\\\mathrm{masked}}$ and $q$ come from different schemas. However, during Naïve training, the masked text $x_{i}^{\\\\mathrm{masked}}$ and the SQL $q_{i}$ are derived from the same example $(q_{i},x_{i})$ . To Robust address these two limitations, we train manner as follows: (a) For a random one$\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}_{\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}}$ in a more third of t allowing using $q_{i}^{\\\\tilde{\\\\mathrm{Eng}}}$ B. (b) For another one-third, we pass only to learn the filling of the masked tokens train steps we train $\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}_{\\\\boldsymbol{\\\\mathrm{\\\\Sigma}}}$ in the Naïve way, $q_{i}^{\\\\mathrm{Eng}}$ as an input and maximize the likelihood of $x_{i}$ .This ensures that model is capable of generating the text from the $q_{i}^{\\\\mathrm{Eng}}$ alone, if the templates $\\\\boldsymbol{x}_{i}^{\\\\mathrm{{n}}}$ asked are unavailable or noisy. (c) For the remaining onethird, we first retrieve an SQL-Text pair $(q_{j},x_{j})$ ,from a different schema such that the ${\\\\mathrm{SQL~}}q_{j}$ is structurally similar to $q_{i}$ (§ 2.1 ), and the word edit distance between the masked templates $x_{i}^{\\\\mathrm{masked}}$ and $x_{j}^{\\\\mathrm{masked}}$ is also small. We can then replace $x_{i}^{\\\\mathrm{{n}}}$ asked with $x_{j}^{\\\\mathrm{masked}}$ and encode $[x_{j}^{\\\\mathrm{masked}}|q_{i}^{\\\\mathrm{Eng}}]$ as an input to $\\\\boldsymbol{\\\\beta}$ and maximize the likelihood of $x_{i}$ in the decoder’s output. This step makes the training more consistent with the inference, as $x_{j}^{\\\\mathrm{masked}}$ and $q_{i}^{\\\\mathrm{Eng}}$ now come from different schemas. In $\\\\S\\\\,5.4$ , we justify training Robustly compared to Naïve training.',\n",
       "   'index': 7,\n",
       "   'relevance_score': 17.953125},\n",
       "  {'document': '# 2 Related work\\n\\n# 2.1 Benchmarking\\nNLP benchmarks are unified dataset collections coupled with evaluation metrics and baselines that are used to systematically compare the performance of NLP systems for the targeted tasks in a standardized evaluation setup. Well-constructed benchmarks can boost progress in the corresponding research areas, such as SQuAD ( Rajpurkar et al. ,2016 ) for question answering, GLUE ( Wang et al. ,2018 ) for natural language understanding, KILT ( Petroni et al. ,2021 ) for knowledge-intensive tasks, GEM ( Gehrmann et al. ,2021 ,2022 ) for general-purpose text generation, and DynaBench (Kiela et al. ,2021 ) for dynamic benchmark data collection. C ITE BENCH is the first benchmark for the citation text generation task.\\n\\n# 2.2 Text generation for scientific documents\\nScientific documents are characterized by academic vocabulary and writing style, wide use of nonlinguistic elements like formulae, tables and figures, as well as structural elements like abstracts and citation anchors. Recent years have seen a rise in natural language generation for scientific text, including text simplification ( Luo et al. ,2022 ), summarization ( Qazvinian and Radev ,2008 ;Erera et al. ,2019 ;Cachola et al. ,2020 ), slides generation ( Sun et al. ,2021 ), table-to-text generation (Moosavi et al. ,2021 ), and citation text generation ( Li and Ouyang ,2022 ). Closely related to the task of citation text generation, Luu et al. (2021 )study how scientific papers can relate to each other, and how these relations can be expressed in text. Related to our work, Mao et al. (2022 ) propose a benchmark for scientific extreme summarization. Compared to extreme summarization, which amounts to generating short context-independent summaries of individual manuscripts, citation text generation focuses on context-dependent descriptions that relate the cited papers to the citing paper. 
In line with the recent efforts that address the lack of systematic automated evaluation of natural language generation in general ( Gehrmann et al. ,2021 ), our paper contributes the first unified benchmark for citation text generation in the scientific domain.\\n\\n# 2.3 Citation text generation\\nThe task of citation text generation was introduced in Hoang and Kan (2010 ), who generate a summary of related work specific to the citing paper. Since then, several task definitions and setups have been proposed (Table 1 ). Lu et al. (2020 ) cast the task as generating a multi-paragraph related work section given the abstracts of the citing paper and of the cited papers. AbuRa’ed et al. (2020 ) use the cited paper’s title and abstract to generate a citation sentence. Xing et al. (2020 ) use the abstract of the cited paper and include context before and after the citation sentence as the input, and produce the citation sentence as the output. A recent work by Chen et al. (2021 ) uses multiple cited abstracts as input to generate a related work paragraph. The great variability of the task definitions and setups in citation text generation prevents the study of citation text generation methods across datasets and evaluation setups. Unlike prior work that explores varying task settings, C ITE BENCH brings the diverging task definitions and datasets together in a unified setup. This allows us to compare citation text generation models across different datasets in a standardized manner using an extensive set of quantitative metrics, as well as novel automated qualitative metrics.  
\\n\\n<html><body><table><tr><td rowspan=\"2\">Dataset</td><td colspan=\"3\">Input</td><td rowspan=\"2\">Output Citation text (T)</td><td rowspan=\"2\">Datasources</td></tr><tr><td>Cited document (D*) SingleAbs MultiAbs</td><td>Abs</td><td>Citing context (C\") Text</td></tr><tr><td>ABURAED</td><td></td><td>Title √</td><td></td><td>Sent Para</td><td>Multiple</td></tr><tr><td>CHEN</td><td>√</td><td></td><td></td><td></td><td>S2ORCandDelve</td></tr><tr><td>LU</td><td></td><td></td><td></td><td></td><td>arXiv.org and MAG</td></tr><tr><td>XING</td><td></td><td></td><td></td><td></td><td>AAN</td></tr></table></body></html>  \\n\\nTable 1: Overview of datasets in C ITE BENCH . Single Abs $=$ Single abstract, i.e., one cited document per instance. Multi Abs $=$ Multiple abstracts, i.e., multiple cited documents per instance. Abs $=$ Abstract, i.e., a dataset contains the abstract of the citing paper. Text $=\\\\xi$ a dataset contains additional context from the citing paper. Sent $=$ generation target is a single sentence. Para $=$ generation target is a paragraph.   \\nTable 2: Datasets statistics. The validation set for XING has been created by us via randomly sampling $10\\\\%$ of the original training data. Across datasets, very few inputs contain more than 4,096 tokens, and few outputs are longer than 1,024 tokens. We exploit this property to speed up the evaluation in Section 3.3 .  
\\n\\n\\n<html><body><table><tr><td>Dataset</td><td>#Train</td><td>#Validation</td><td>#Test</td><td>Inputs>4,096tok.</td><td>Outputs>1,024tok.</td></tr><tr><td>ABURAED</td><td>15,000</td><td>1,384</td><td>219</td><td>0%</td><td>0%</td></tr><tr><td>LU</td><td>30,369</td><td>5,066</td><td>5,093</td><td><0.001%</td><td>0%</td></tr><tr><td>XING</td><td>77,086</td><td>8,566</td><td>400</td><td><0.001%</td><td><0.001%</td></tr><tr><td>CHEN -Delve</td><td>72,927</td><td>3,000</td><td>3,000</td><td><0.001%</td><td>0.004%</td></tr><tr><td>-S2ORC</td><td>126,655</td><td>5,000</td><td>5,000</td><td>0.017%</td><td><0.001%</td></tr><tr><td>Total</td><td>322,037</td><td>23,016</td><td>13,712</td><td>0.007%</td><td><0.001%</td></tr></table></body></html>\\n\\n# 3 Benchmark\\n\\n# 3.1 Task definition and datasets\\nWe formalize the task of citation text generation as follows: Given a set of $n$ (cited) target documents $\\\\{D_{1}^{t}...D_{n}^{t}\\\\}$ }, a (citing) sourc ocum $D^{s}$ set of $m$ citing document contexts $\\\\{C_{1}^{s}\\\\ ...C_{m}^{s}\\\\}\\\\ \\\\in$ } ∈ $D^{s}$ , generate a citation text $T^{\\\\prime}$ ′that is as close as possible to the original citation text $T\\\\in D^{s}$ ∈. This general definition allows wide variation in how the task is implemented. The cited document $D_{i}^{t}$ can be represented by the abstract $a^{t_{i}}$ , the concatenation of the title and the abstract, or even the full text of the paper. The context set $C^{s}$ covers sentences before and after the citation text in $D^{s}$ , as well as the abstract $a^{s}\\\\in D^{s}$ .  \\n\\nSuch general, open definition allows us to accommodate diverse approaches to the task within one framework (Table 1 ). To populate the benchmark, we select four datasets, focusing on the task design and domain variety: ABURAED (AbuRa’ed et al. ,2020 ), CHEN (Chen et al. ,2021 ), LU (Lu et al. ,2020 ), and XING (Xing et al. ,2020 ). 
Dataset transformation details are provided in Appendix A.1 .Table 2 shows the quantitative statistics, and Figure 2 provides data examples from each dataset. The CHEN dataset has two subsets – CHEN Delve and CHEN S2ORC – based on the data source; we use CHEN to denote the union of the two subsets. The datasets are distributed under varying licenses; we have obtained explicit permissions from the authors to use the data for research purposes in cases when licensing was underspecified (see Ethics statement).  \\n\\nWe note that the datasets included in the benchmark are not only structurally diverse, but also cover a wide range of domains, from medicine to computer science. In particular, ABURAED and XING exemplify citation text generation in the computational linguistics domain, CHEN Delve cover the computer science domain; LU and CHEN S2ORC span a wide range of domains represented on arxiv.org and in the S2ORC corpus, respectively, including biology, medicine and physics.',\n",
       "   'index': 6,\n",
       "   'relevance_score': 16.984375}],\n",
       " 'usage': {'completion_tokens': 0,\n",
       "  'prompt_tokens': 14136,\n",
       "  'total_tokens': 14136}}"
      ]
     },
     "execution_count": 63,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import requests\n",
    "\n",
    "def rerank_documents(api_key, query, documents, top_n=0, return_docs=True, return_raw=True):\n",
    "    \"\"\"Rerank `documents` against `query` via the ZhipuAI rerank endpoint; returns the parsed JSON, or None on request failure.\"\"\"\n",
    "    url = \"https://open.bigmodel.cn/api/paas/v4/rerank\"\n",
    "    headers = {\n",
    "        \"Authorization\": f\"Bearer {api_key}\",\n",
    "        \"Content-Type\": \"application/json\"\n",
    "    }\n",
    "    \n",
    "    payload = {\n",
    "        \"model\": \"rerank\",\n",
    "        \"query\": query,\n",
    "        \"documents\": documents,\n",
    "        \"top_n\": top_n,\n",
    "        \"return_documents\": return_docs,\n",
    "        \"return_raw_scores\": return_raw\n",
    "    }\n",
    "    \n",
    "    try:\n",
    "        response = requests.post(url, headers=headers, json=payload)\n",
    "        response.raise_for_status()  # raise an HTTPError on 4xx/5xx responses\n",
    "        \n",
    "        result = response.json()\n",
    "        \n",
    "        # check for an API-level error reported in the response body\n",
    "        if \"error\" in result:\n",
    "            raise Exception(f\"API Error: {result['error']}\")\n",
    "            \n",
    "        return result\n",
    "    \n",
    "    except requests.exceptions.RequestException as e:\n",
    "        print(f\"Request failed: {e}\")\n",
    "        return None\n",
    "# Read the API key from the environment rather than hardcoding a secret in the notebook\n",
    "# (set ZHIPUAI_API_KEY in your shell before launching Jupyter)\n",
    "YOUR_API_KEY = os.environ[\"ZHIPUAI_API_KEY\"]\n",
    "\n",
    "# call the reranker\n",
    "result = rerank_documents(\n",
    "    api_key=YOUR_API_KEY,\n",
    "    query=statement[\"statement_hyde\"],\n",
    "    documents=rerank_papers_content,\n",
    "    top_n=10,\n",
    "    return_docs=True,\n",
    "    return_raw=True\n",
    ")\n",
    "result"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 79,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[{'id': 454845681740829484,\n",
       "  'distance': 0.6236740350723267,\n",
       "  'entity': {'paper_id': '6535d747939a5f408295c649',\n",
       "   'paper_title': 'Benchmarking and Improving Text-to-SQL Generation under Ambiguity',\n",
       "   'chunk_id': 0,\n",
       "   'chunk_text': '# Benchmarking and Improving Text-to-SQL Generation under Ambiguity\\nAdithya Bhaskar $\\\\mathbf{\\\\nabla}_{*}\\\\bigotimes\\\\bigtriangleup$ Tushar Tomar ∗♠ Ashutosh Sathe ♠Sunita Sarawagi ♠♠IIT Bombay ♢Princeton University\\n\\n# Abstract\\nResearch in Text-to-SQL conversion has been largely benchmarked against datasets where each text query corresponds to one correct SQL. However, natural language queries over reallife databases frequently involve significant ambiguity about the intended SQL due to overlapping schema names and multiple confusing relationship paths. To bridge this gap, we develop a novel benchmark called AmbiQT with over 3000 examples where each text is interpretable as two plausible SQLs due to lexical and/or structural ambiguity.  \\n\\nWhen faced with ambiguity, an ideal top$k$ decoder should generate all valid interpretations for possible disambiguation by the user ( Elgohary et al. ,2021 ;Zhong et al. ,2022 ). We evaluate several Text-to-SQL systems and decoding algorithms, including those employing state-of-the-art LLMs, and find them to be far from this ideal. The primary reason is that the prevalent beam search algorithm and its variants, treat SQL queries as a string and produce unhelpful token-level diversity in the top$k$ .  \\n\\nWe propose LogicalBeam, a new decoding algorithm that navigates the SQL logic space using a blend of plan-based template generation and constrained infilling. Counterfactually generated plans diversify templates while in-filling with a beam-search, that branches solely on schema names, provides value diversity. LogicalBeam is up to $2.5\\\\times$ more effective than state-of-the-art models at generating all candidate SQLs in the top$k$ ranked outputs. 
It also enhances the top5 Exact and Execution Match Accuracies on SPIDER and Kaggle DBQA 1 .\\n\\n# 1 Introduction\\nResearch on Text-to-SQL generation has focused on scenarios where each natural language question is associated with one correct SQL ( Zelle and Mooney ,1996 ;Tang and Mooney ,2000 ;Scholak et al. ,2021a ;Wang et al. ,2020 ;Rubin and Berant ,2021 ;Xie et al. ,2022 ;Arcadinho et al. ,2022 ;Zeng et al. ,2022 ;Scholak et al. ,2021b ;Pourreza and Rafiei ,2023 ). Popular benchmarks driving such research, including WikiSQL ( Zhong et al. ,2018 ), SPIDER ( Yu et al. ,2018 ), its robust perturbations ( Chang et al. ,2023 ), and even “in-thewild” benchmarks such as KaggleDBQA ( Lee et al. ,2021 ) and SEDE ( Hazoom et al. ,2021 ) all associate one correct SQL with text. Meanwhile, ambiguity is prevalent in real-life databases — particularly the ones obtained by integrating several data sources for data analysis, where a natural language interface is most in demand. The sources of ambiguity are several — inherent ambiguity of natural language, the user’s ignorance of table/column names, overlapping strings in column names, underspecified clauses, and confusion about whether aggregates are pre-computed, or if a join is required. Hazoom et al. (2021 ) observe that up to $87\\\\%$ of queries on the stack exchange database are underspecified, and Wang et al. (2022 ) mention that $11\\\\%$ of queries exhibited ambiguity in column names. Although prior work has brought up ambiguity, there is no publicly available benchmark with ambiguous queries, nor a comprehensive evaluation of systems under ambiguity.  \\n\\nOur first contribution is to bridge this gulf by developing a benchmark, AmbiQT, that tests performance under ambiguity in the context of current models. AmbiQT includes over 3000 examples, each associating a natural question on a database with two valid SQLs. 
Inspired by our experience with several real-world datasets, we target four types of ambiguity spanning both lexical (ambiguous column and table names) and structural (whether a join is necessary, and an aggregate is pre-computed) ambiguity. The benchmark is generated via a combination of ChatGPT ( OpenAI ,2022 ) based synonym generation and perturbation, and standard rule-based perturbation.  \\n\\nWhen faced with ambiguity, an ideal Text-toSQL system should incorporate all valid alternatives in their top$k$ SQL outputs, for user resolution. We show that present approaches, ranging from T5-3B ( Raffel et al. ,2019 ) to SOTA models, fail to generate all ambiguous outputs with any decoding strategy, including beam search and diversity-promoting sampling methods such as Nucleus ( Holtzman et al. ,2020 ) and Typical sampling ( Meister et al. ,2023 ). Most outputs are small lexical tweaks of the top choice, bringing about little meaningful diversity in SQL structures or schema alternatives. Even SOTA LLMs like ChatGPT ( OpenAI ,2022 ) suffer from this issue.  \\n\\nTo remedy the lack of diversity, we propose a new decoding algorithm, LogicalBeam, that allocates branching to explore underlying logical variants of the SQL rather than the string form. We catalog the errors of T5-3B ( Raffel et al. ,2019 ) on the SPIDER dev split and use our insights to encourage targeted types of diversity — the number of JOIN s and selections, and table/column names.  \\n\\nOur main contributions are:   \\n•We develop AmbiQT, the first benchmark that tests performance under four types of ambiguity over $\\\\mathbf{3000+}$ examples.   \\n•We show that SOTA methods, including a finetuned T5-3B, RESDSQL ( Li et al. ,2023 ), OpenAI Codex, and ChatGPT, provide a poor representation of ambiguity despite their high accuracy on conventional benchmarks.   
\\n•We present LogicalBeam, a two-step algorithm that generates plan-based templates with counterfactually controlled plan diversity and fills them via a beam search that branches only on schema names.   \\n•We show that LogicalBeam consistently increases the fraction of time when all gold SQLs get generated in the Top-5 choices by $1.5-2.5\\\\times$ over the baselines across the board on AmbiQT.',\n",
       "   'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}}]"
      ]
     },
     "execution_count": 79,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "citation_papers"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 80,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[{'id': 454845681740829484,\n",
       "  'distance': 0.6236740350723267,\n",
       "  'entity': {'paper_id': '6535d747939a5f408295c649',\n",
       "   'paper_title': 'Benchmarking and Improving Text-to-SQL Generation under Ambiguity',\n",
       "   'chunk_id': 0,\n",
       "   'chunk_text': '# Benchmarking and Improving Text-to-SQL Generation under Ambiguity\\nAdithya Bhaskar $\\\\mathbf{\\\\nabla}_{*}\\\\bigotimes\\\\bigtriangleup$ Tushar Tomar ∗♠ Ashutosh Sathe ♠Sunita Sarawagi ♠♠IIT Bombay ♢Princeton University\\n\\n# Abstract\\nResearch in Text-to-SQL conversion has been largely benchmarked against datasets where each text query corresponds to one correct SQL. However, natural language queries over reallife databases frequently involve significant ambiguity about the intended SQL due to overlapping schema names and multiple confusing relationship paths. To bridge this gap, we develop a novel benchmark called AmbiQT with over 3000 examples where each text is interpretable as two plausible SQLs due to lexical and/or structural ambiguity.  \\n\\nWhen faced with ambiguity, an ideal top$k$ decoder should generate all valid interpretations for possible disambiguation by the user ( Elgohary et al. ,2021 ;Zhong et al. ,2022 ). We evaluate several Text-to-SQL systems and decoding algorithms, including those employing state-of-the-art LLMs, and find them to be far from this ideal. The primary reason is that the prevalent beam search algorithm and its variants, treat SQL queries as a string and produce unhelpful token-level diversity in the top$k$ .  \\n\\nWe propose LogicalBeam, a new decoding algorithm that navigates the SQL logic space using a blend of plan-based template generation and constrained infilling. Counterfactually generated plans diversify templates while in-filling with a beam-search, that branches solely on schema names, provides value diversity. LogicalBeam is up to $2.5\\\\times$ more effective than state-of-the-art models at generating all candidate SQLs in the top$k$ ranked outputs. 
It also enhances the top5 Exact and Execution Match Accuracies on SPIDER and Kaggle DBQA 1 .\\n\\n# 1 Introduction\\nResearch on Text-to-SQL generation has focused on scenarios where each natural language question is associated with one correct SQL ( Zelle and Mooney ,1996 ;Tang and Mooney ,2000 ;Scholak et al. ,2021a ;Wang et al. ,2020 ;Rubin and Berant ,2021 ;Xie et al. ,2022 ;Arcadinho et al. ,2022 ;Zeng et al. ,2022 ;Scholak et al. ,2021b ;Pourreza and Rafiei ,2023 ). Popular benchmarks driving such research, including WikiSQL ( Zhong et al. ,2018 ), SPIDER ( Yu et al. ,2018 ), its robust perturbations ( Chang et al. ,2023 ), and even “in-thewild” benchmarks such as KaggleDBQA ( Lee et al. ,2021 ) and SEDE ( Hazoom et al. ,2021 ) all associate one correct SQL with text. Meanwhile, ambiguity is prevalent in real-life databases — particularly the ones obtained by integrating several data sources for data analysis, where a natural language interface is most in demand. The sources of ambiguity are several — inherent ambiguity of natural language, the user’s ignorance of table/column names, overlapping strings in column names, underspecified clauses, and confusion about whether aggregates are pre-computed, or if a join is required. Hazoom et al. (2021 ) observe that up to $87\\\\%$ of queries on the stack exchange database are underspecified, and Wang et al. (2022 ) mention that $11\\\\%$ of queries exhibited ambiguity in column names. Although prior work has brought up ambiguity, there is no publicly available benchmark with ambiguous queries, nor a comprehensive evaluation of systems under ambiguity.  \\n\\nOur first contribution is to bridge this gulf by developing a benchmark, AmbiQT, that tests performance under ambiguity in the context of current models. AmbiQT includes over 3000 examples, each associating a natural question on a database with two valid SQLs. 
Inspired by our experience with several real-world datasets, we target four types of ambiguity spanning both lexical (ambiguous column and table names) and structural (whether a join is necessary, and whether an aggregate is pre-computed) ambiguity. The benchmark is generated via a combination of ChatGPT (OpenAI, 2022) based synonym generation and perturbation, and standard rule-based perturbation.  \\n\\nWhen faced with ambiguity, an ideal Text-to-SQL system should incorporate all valid alternatives in its top-$k$ SQL outputs, for user resolution. We show that present approaches, ranging from T5-3B (Raffel et al., 2019) to SOTA models, fail to generate all ambiguous outputs with any decoding strategy, including beam search and diversity-promoting sampling methods such as Nucleus (Holtzman et al., 2020) and Typical sampling (Meister et al., 2023). Most outputs are small lexical tweaks of the top choice, bringing about little meaningful diversity in SQL structures or schema alternatives. Even SOTA LLMs like ChatGPT (OpenAI, 2022) suffer from this issue.  \\n\\nTo remedy the lack of diversity, we propose a new decoding algorithm, LogicalBeam, that allocates branching to explore underlying logical variants of the SQL rather than the string form. We catalog the errors of T5-3B (Raffel et al., 2019) on the SPIDER dev split and use our insights to encourage targeted types of diversity: the number of JOINs and selections, and table/column names.  \\n\\nOur main contributions are:   \\n• We develop AmbiQT, the first benchmark that tests performance under four types of ambiguity over 3000+ examples.   \\n• We show that SOTA methods, including a fine-tuned T5-3B, RESDSQL (Li et al., 2023), OpenAI Codex, and ChatGPT, provide a poor representation of ambiguity despite their high accuracy on conventional benchmarks.   
\\n•We present LogicalBeam, a two-step algorithm that generates plan-based templates with counterfactually controlled plan diversity and fills them via a beam search that branches only on schema names.   \\n•We show that LogicalBeam consistently increases the fraction of time when all gold SQLs get generated in the Top-5 choices by $1.5-2.5\\\\times$ over the baselines across the board on AmbiQT.',\n",
       "   'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}},\n",
       " {'id': 454845652744816630,\n",
       "  'distance': 0.6249851584434509,\n",
       "  'entity': {'paper_id': '646d8642d68f896efa0a3040',\n",
       "   'paper_title': 'Exploring Chain-of-Thought Style Prompting for Text-to-SQL',\n",
       "   'chunk_id': 1,\n",
       "   'chunk_text': '# 1 Introduction\\nText-to-SQL parsing, the task of translating a natural language question into a SQL query, has found wide applications in building natural language interfaces to databases and thus piqued significant research interest in recent years ( Wang et al. ,2020 ;Deng et al. ,2021 ;Yu et al. ,2021 ;Rajkumar et al. ,2022 ;Hongjin et al. ,2023 ;Ni et al. ,2023 ). To develop a text-to-SQL parser, a prevalent approach is to collect labeled data and train a model via supervised learning ( Shaw et al. ,2021 ;Scholak et al. ,2021 ). While effective, this approach necessitates a considerable amount of training data, which is costly to obtain because annotating SQL queries requires programming expertise. Consequently, the lack of data hinders real-life applications of stateof-the-art parsers, especially on novel databases and unseen domains ( Suhr et al. ,2020 ).  \\n\\nAs an alternative to supervised learning, incontext learning ( Brown et al. ,2020 ), an emergent capability of large language models (LLMs), alleviates the need for large-scale data. With only a few examples, in-context learning enables LLMs to demonstrate performance comparable to or even better than fully supervised models on many NLP tasks, such as question answering, machine translation, and natural language inference ( Chowdhery et al. ,2022 ;Kojima et al. ,2022 ;Wei et al. ,2022b ,a ;Brohan et al. ,2023 ). When applied to text-to-SQL parsing, in-context learning also shows encouraging results, but it still lags behind supervised approaches ( Rajkumar et al. ,2022 ;Chang et al. ,2023 ;Liu et al. ,2023a ).  \\n\\nWe hypothesize that the under-performance is because text-to-SQL parsing requires complex, multistep reasoning. Even for a seemingly simple question, such as “What is the ID of Kyle,\" a model has to ground it to database schemas, infer the relational algebra among schema items, and construct syntactically correct SQL clauses. 
Recently, chain-of-thought (CoT) style prompting (Wei et al., 2022b; Zhou et al., 2023) has been proposed and has shown promising multi-step reasoning capabilities. To enhance LLMs’ reasoning ability, we systematically explore CoT style prompting for text-to-SQL parsing. Specifically, we seek to answer two research questions: (1) Which prompting style is better, generating all reasoning steps in a single pass, or iterative prompting and problem solving? (2) Does including more detailed information in the reasoning steps lead to better results for text-to-SQL parsing?  \\n\\nTo address the questions, we adopt two widely used prompting methods for text-to-SQL parsing. As the first method, we apply chain-of-thought prompting (Wei et al., 2022b) by drawing an analogy between its problem-solving process and the execution procedure of a SQL query. Referring to the logical execution order of SQL clauses (Narechania et al., 2021), we compose the intermediate execution steps in natural language and prompt LLMs to derive them before generating the SQL query. As the second method, we follow Zhou et al. (2023) to apply least-to-most prompting in two stages: (1) reduction: generate a series of sub-questions from the original question and (2) solving: iteratively translate each sub-question into its corresponding SQL query, with the original question as the last sub-question. However, in our case study 1 , we find that directly applying chain-of-thought and least-to-most promptings leads to error propagation issues. Their rationales contain highly demonstration-specific information and can easily mislead the reasoning process. Furthermore, the least-to-most prompting technique incurs additional computational and time costs due to the multiple stages of reduction and solving.  \\n\\n  \\nFigure 1: Different prompting methods with multi-step reasoning for text-to-SQL parsing: (a) Chain-of-Thought, (b) Least-to-Most, and our proposed (c) QDecomp , and (d) QDecomp $^+$ InterCOL .  
\\n\\nTherefore, we propose a new method called question-decomposition prompting (QDecomp ). Similar to chain-of-thought prompting, QDecomp generates a sequence of reasoning steps and then the SQL query in one pass. However, we modify the steps to instruct LLMs to decompose the original complex question, akin to the problem reduction stage in least-to-most prompting. Also, to help LLMs ground database schemas, we design a variant of question decomposition prompting (QDecomp $^+$ InterCOL ) by including the table and column names involved in each sub-question. We conduct comprehensive evaluations on two textto-SQL datasets, Spider ( Yu et al. ,2018 ) and Spider Realistic ( Deng et al. ,2021 ). Our proposed prompting methods substantially outperform existing prompting ones by 2.4 and 1.5 point absolute gains on the development set of Spider and Spider Realistic, respectively. The results suggest that the iterative prompting which is costly due to additional computational resources requirement as in least-to-most prompting may not be necessary $(R Q I)$ . In addition, our analysis shows the proposed question decomposition prompting methods, which do not instruct LLMs to generate detailed reasoning steps, reduce the chance of error propagation when generating the reasoning steps. ( RQ2 ). Finally, we evaluate the robustness of our proposed prompting methods by varying the number, selection, and format of in-context examples and show that they can achieve consistently strong performance across different settings.',\n",
       "   'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}},\n",
       " {'id': 454845641360425782,\n",
       "  'distance': 0.6537715196609497,\n",
       "  'entity': {'paper_id': '6461b9c9d68f896efad43133',\n",
       "   'paper_title': 'Interactive Text-to-SQL Generation Via Editable Step-by-Step Explanations',\n",
       "   'chunk_id': 1,\n",
       "   'chunk_text': '# 2 Related Work\\n\\n# 2.1 Text-to-SQL Generation\\nNatural language interfaces have long been recognized as a way to expand access to databases ( Hendrix et al. ,1978 ).The construction of several large text-to-SQL datasets, such as WikiSQL ( Zhong et al. ,2017 ) and Spider ( Yu et al. ,2018 ), has enabled the adoption of deep learning models in this task, achieving unprecedented performance in recent years ( Rubin and Berant ,2021 ;Wang et al. ,2020a ;Scholak et al. ,2021 ;Yu et al. ,2020 ;Hwang et al. ,2019 ). Our technique is based on the recent success of neural text-to-SQL models. Unlike existing models that perform end-to-end SQL generation, we propose a new interaction mechanism for users to validate and refine generated queries through step-by-step explanations.  \\n\\nAs the first step to demonstrate the feasibility of our approach, we focus on single-turn SQL generation ( Yu et al. ,2018 ) in this work. There has also been recent work that supports multi-turn SQL generation ( Yu et al. ,2019a ,b;Guo et al. ,2021 ), where a sequence of interdependent queries are expressed in multiple utterances in a dialog. Models designed for multi-turn SQL generation typically need to reason about the dialog context and effectively encode the historical queries ( Wang et al. ,2021 ;Hui et al. ,2021 ;Zhang et al. ,2019 ;Cai and Wan ,2020 ;Wang et al. ,2020b ). Our approach can be extended to support multi-turn SQL generation by initiating separate refinement sessions for individual queries while incorporating the contextual information of previous queries into explanation generation and text-to-clause generation.\\n\\n# 2.2 Interactive Semantic Parsing for SQL\\nRecently, there has been a growing interest in interactive approaches that elicit user feedback to guide SQL generation. Iyer et al. (2017 ) proposed to allow users to flag incorrect queries and continuously retrain the model. Both DIY ( Narechania et al. 
,2021 ) and NaLIR ( Li and Jagadish ,2014a ,b)enable users to select alternative values or subexpressions to fix an incorrect SQL query. PIIA ( Li et al. ,2020 ), MISP ( Yao et al. ,2019 ), and DialSQL ( Gur et al. ,2018 ) proactively ask for user feedback via multiple-choice questions. A common limitation of these methods is that they only solicit feedback in constrained forms, hindering their flexibility and effectiveness in addressing the variability of SQL errors. In contrast, our approach allows more flexible feedback through direct edits to the explanations generated by the model.  \\n\\nThe only work that supports open-ended user feedback in SQL generation is NL-EDIT ( Elgohary et al. ,2021 ). NL-EDIT is trained on SPLASH ( Elgohary et al. ,2020 ), a dataset of SQL errors and user feedback utterances. Given an incorrect query, NL-EDIT allows users to provide a clarification utterance. Based on the utterance, the model generates a sequence of edits to the SQL query. Incorporating feedback expressed in a completely free-text utterance is challenging for two reasons:  \\n\\n  \\nFigure 2: An Overview of Interactive SQL Generation and Refinement with Editable Step-by-Step Explanations  \\n\\n(1) the model needs to infer which part of the SQL query to fix; (2) the model needs to determine what changes are being requested. In contrast, S TEPS asks users to directly edit an NL explanation and make corrections to the explanation. Comparing the initial explanation with the user-corrected explanation makes it easier to locate the part of a SQL query that needs to be changed and infer what change to make.  \\n\\nThe idea of SQL decomposition is similar to recent work that decomposes a user question to sub-questions on SPARQL ( Mo et al. ,2022 ). Their approach requires a crowd-sourced dataset to train a question decomposition model. In contrast, our rule-based method generates step-by-step explanations without the need for training a model. 
This also allows our system to map each entity in the explanation to the corresponding SQL element, making it easier for SQL correction (Sec. 3.2 ).\\n\\n# 2.3 Explaining SQL Queries in NL\\nOur approach is also related to prior work that generates NL explanations for SQL queries. Simitsis and Ioannidis (2009 ) argued that databases should “talk back” in human language so that users can verify results. Kokkalis et al. (2012 ) and Koutrika et al. (2010 ) used a graph-based SQL translation approach, where each query is represented as a graph and the explanation is generated by traversing the graph. Elgohary et al. (2021 ,2020 ) employed a template-based explanation approach, where they manually curated 57 templates for explanation generation. These approaches have limited capability to handle arbitrary SQL queries. To address this limitation, we propose a rule-based method to first explain terminal tokens (e.g., operators, keywords) and gradually compose them into a complete explanation based on the derivation rules in the SQL grammar. Another key difference is that none of the existing approaches supports editable explanations for SQL correction, which is a key feature to solicit user feedback in our approach.',\n",
       "   'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}},\n",
       " {'id': 454845641342337844,\n",
       "  'distance': 0.6175075769424438,\n",
       "  'entity': {'paper_id': '6461b9c9d68f896efad43133',\n",
       "   'paper_title': 'Interactive Text-to-SQL Generation Via Editable Step-by-Step Explanations',\n",
       "   'chunk_id': 0,\n",
       "   'chunk_text': '# Interactive Text-to-SQL Generation via Editable Step-by-Step Explanations\\nYuan Tian 1, Zheng Zhang 2, Zheng Ning 2, Toby Jia-Jun Li 2, Jonathan K. Kummerfeld 3, and Tianyi Zhang 1\\nPurdue University 1, University of Notre Dame 2, The University of Sydney 3\\n\\n# Abstract\\nRelational databases play an important role in business, science, and more. However, many users cannot fully unleash the analytical power of relational databases, because they are not familiar with database languages such as SQL. Many techniques have been proposed to automatically generate SQL from natural language, but they suffer from two issues: (1) they still make many mistakes, particularly for complex queries, and (2) they do not provide a flexible way for non-expert users to validate and refine incorrect queries. To address these issues, we introduce a new interaction mechanism that allows users to directly edit a step-by-step explanation of a query to fix errors. Our experiments on multiple datasets, as well as a user study with 24 participants, demonstrate that our approach can achieve better performance than multiple SOTA approaches. Our code and datasets are available at https://github.com/magic-YuanTian/STEPS .\\n\\n# 1 Introduction\\nNatural language interfaces significantly lower the barrier to accessing databases and performing data analytics tasks for users who are not familiar with database query languages. Many approaches have been proposed for generating SQL queries from natural language (Popescu et al., 2004; Giordani and Moschitti, 2012; Rubin and Berant, 2021; Scholak et al., 2021; Zhao et al., 2022). Using recent large language models, systems have reached $86.6\\\\%$ execution accuracy (Gao et al., 2023) on the Spider benchmark (Yu et al., 2018).  \\n\\nHowever, the rate of improvement has slowed, with a gain of only $10\\\\%$ since mid-2021. 
This is partly due to the inherent ambiguity of natural language and the complex structure of SQL queries (e.g., nested or joined queries). Thus, it is challenging to generate a fully correct query in one step, especially for complex tasks ( Yao et al. ,2019 ).  \\n\\n  \\nFigure 1: Refining a SQL query by directly editing a step-by-step explanation.  \\n\\nThere has been growing interest in developing “human-in-the-loop” approaches that elicit user feedback to guide SQL generation. However, most approaches only support feedback in constrained forms, e.g., answering multiple-choice questions (MISP, PIIA, DialSQL Yao et al. ,2019 ;Li et al. ,2020 ;Gur et al. ,2018 ), changing SQL elements in a drop-down menu (DIY, Narechania et al. ,2021 ), etc. Such constrained feedback is not sufficient to fix many complex errors in real-world SQL tasks. One exception is NL-EDIT ( Elgohary et al. ,2021 ), which allows users to provide feedback as new utterances. However, since the feedback is open-ended, interpreting it can be just as hard as processing the original request.  \\n\\nIn this paper, we seek to strike a balance between constrained feedback and open-ended feedback by proposing a new interaction mechanism: editable step-by-step explanations. Fig. 1 illustrates our idea. This mechanism consists of three core components: (a) a text-to-SQL model, (b) an explanation generation method, and (c) a SQL correction model. Our key insight is that using a step-by-step explanation as the basis to suggest fixes allows users to precisely specify where the error is and how to fix it via direct edits. This not only saves users’ time but also makes it easier for the model to locate the error and apply fixes.  \\n\\nBased on this idea, we implemented an interactive SQL generation and refinement system called STEPS . S TEPS adopts a rule-based method to generate step-by-step explanations and uses a hybrid rule/neural method to convert a user-corrected explanation back to a SQL query.  
\\n\\nAn evaluation with a simulated user on Spider ( Yu et al. ,2018 ) shows that S TEPS can achieve $97.9\\\\%$ exact set match accuracy, outperforming prior interactive text-to-SQL systems— MISP, DIY, and NL-EDIT—by $33.5\\\\%$ ,$33.2\\\\%$ , and $31.3\\\\%$ respectively. We further evaluate S TEPS on other datasets, including Spider-DK ( Gan et al. ,2021b ), Spider-Syn ( Gan et al. ,2021a ), and WikiSQL ( Zhong et al. ,2017 ). S TEPS consistently achieves at least $96\\\\%$ exact set match accuracy and execution accuracy across all datasets.  \\n\\nFinally, we conducted a within-subjects user study with 24 real users. We found that within the same amount of time, S TEPS helped users complete almost 2X and 4X more tasks correctly than DIY and MISP respectively, with significantly higher self-reported confidence and lower mental load.  \\n\\nThis work makes the following contributions: (1) we propose a new interaction mechanism for the text-to-SQL task; (2) we develop an interactive text-to-SQL system based on the new interaction mechanism and a new training method for SQL correction; (3) we conduct a comprehensive evaluation with both simulated and real users and demonstrate its effectiveness over state-of-the-art interactive systems. Our dataset and code are publicly available.',\n",
       "   'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}},\n",
       " {'id': 454845706688551984,\n",
       "  'distance': 0.613497793674469,\n",
       "  'entity': {'paper_id': '65406320939a5f40826491aa',\n",
       "   'paper_title': 'Evaluating Cross-Domain Text-to-SQL Models and Benchmarks',\n",
       "   'chunk_id': 0,\n",
       "   'chunk_text': '# Evaluating Cross-Domain Text-to-SQL Models and Benchmarks\\nMohammadreza Pourreza University of Alberta   \\n\\nDavood Rafiei University of Alberta\\n\\n# Abstract\\nText-to-SQL benchmarks play a crucial role in evaluating the progress made in the field and the ranking of different models. However, accurately matching a model-generated SQL query to a reference SQL query in a benchmark fails for various reasons, such as underspecified natural language queries, inherent assumptions in both model-generated and reference queries, and the non-deterministic nature of SQL output under certain conditions. In this paper, we conduct an extensive study of several prominent cross-domain text-to-SQL benchmarks and reevaluate some of the top-performing models within these benchmarks, by both manually evaluating the SQL queries and rewriting them in equivalent expressions. Our evaluation reveals that attaining a perfect performance on these benchmarks is unfeasible due to the multiple interpretations that can be derived from the provided samples. Furthermore, we find that the true performance of the models is underestimated and their relative performance changes after a re-evaluation. Most notably, our evaluation reveals a surprising discovery: a recent GPT4-based model surpasses the gold standard reference queries in the Spider benchmark in our human evaluation. This finding highlights the importance of interpreting benchmark evaluations cautiously, while also acknowledging the critical role of additional independent evaluations in driving advancements in the field.  \\n\\nproved from 65.6 ( Wang et al. ,2019 ) to 74.0 ( Li et al. ,2023a ). Measuring such progress is hinged on reliable benchmarks and evaluation metrics.  \\n\\nTwo standard metrics for evaluating the performance in this domain have been exact set match accuracy and execution accuracy . 
The former measures if a model-generated SQL query lexically matches a reference SQL query, whereas the latter measures if a model-generated SQL query produces the same output as a reference query $(\\\\S\\\\,4)$ .  \\n\\n  \\nFigure 1: An example question with two correct SQL queries, each corresponding to a different interpretation. There is an ambiguity in schema mapping, with two different database columns describing the name.\\n\\n# 1 Introduction\\nSignificant progress has been made in translating natural language text to SQL statements over the past few years. The execution accuracy on the hold-out test of Spider ( Yu et al. ,2018b )–a large-scale cross-domain text-to-SQL benchmark– has improved from 53.5 in May, 2020 ( Zhong et al. ,2020b ) to 85.3 in March, 2023 ( Pourreza and Rafiei ,2023 ). The exact set match accuracy, without considering database cell values, on the same benchmark and over the same period has im  \\n\\nConsider the example in Figure 1 , which consists of a model-generated query (shown on the left) and a reference query (shown on the right). Both SQL queries return the id and name of makers that have more than 3 models. However, the model-generated query returns the column FullName, which gives the full name of a maker (e.g., “Ford Motor Company”), whereas the reference query given in the benchmark returns the column Maker, which gives the short common name of a maker (e.g., “Ford”). The model-generated query fails an exact set match since the column names in the select clause are different. The query outputs are also different and the model-generated query fails the execution accuracy as well. The natural language utterance is not specific about the type of name to be returned, and a human evaluator tags both queries correct.  \\n\\nAs the models improve, these types of failures make up most of the errors, and the performance metrics become less relevant, as shown in our evaluation. 
In particular, we re-evaluated all development set queries of Spider on which two top-performing models, one using a fine-tuned model ( Scholak et al. ,2021 ) and another using a large language model ( Pourreza and Rafiei ,2023 ), failed. We found out that $25\\\\%$ of the queries generated by one model and $87\\\\%$ of the queries generated by the other model were indeed correct but were wrongly evaluated by the benchmark. For the same set of queries, our re-evaluation of the ground truth queries found $33\\\\%$ of the SQL queries incorrect, which was more than the number of incorrect queries generated by one of the models. This evaluation places one of the models above the ground truth queries in this re-evaluation.  \\n\\nWe further re-evaluated two well-known benchmarks, Spider ( Yu et al. ,2018b ) and SpiderDK ( Gan et al. ,2021b ), and a newly released benchmark, BIRD ( Li et al. ,2023b ), and found similar problems in all three benchmarks that affect the evaluation. Our evaluation reveals that $18\\\\%$ of the queries in the train sets and $20\\\\%{-23\\\\%}$ of the queries in the dev sets of these benchmarks are subject to ties in the dataset and which one of the tied rows are returned. This means a model-generated query will be deemed incorrect if it does not return the same row, among tied rows, as the ground truth query. This can severely impact the evaluation, especially when there is a tight race among models. Considering these observations, it is crucial to emphasize the significance of additional independent evaluations when utilizing these benchmarks. To enhance the evaluation process further, a potential solution is to incorporate multiple SQL queries as the ground truth, each representing a different interpretation that may be valid.  \\n\\nOur objective in this paper is to provide a comprehensive evaluation of existing Text-to-SQL benchmarks, underscoring the inherent issues they possess. 
We refrain from introducing a new dataset due to several considerations. First, addressing the identified issues by updating these benchmarks requires considerable human effort. Additionally, benchmarks in the Text-to-SQL domain, like Spider and BIRD, have holdout test sets used for official leaderboards and comparisons of text-to-SQL methodologies. We only have access to the development and training sets of these benchmarks, which limits our capability to alter the test sets. As a result, making changes only to the development and training sets would not completely address the benchmark’s inherent problems, given that final performance is gauged using the problematic test sets.',\n",
       "   'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}},\n",
       " {'id': 454845681760490286,\n",
       "  'distance': 0.6474056243896484,\n",
       "  'entity': {'paper_id': '6535d747939a5f408295c649',\n",
       "   'paper_title': 'Benchmarking and Improving Text-to-SQL Generation under Ambiguity',\n",
       "   'chunk_id': 1,\n",
       "   'chunk_text': '# 2 Background and Related Work\\nA Text-to-SQL model takes as input a question expressed as a natural language text x, and a database schema scomprising of table and column names, and outputs an SQL program ywhich can be executed against the database to answer the user’s question. Figure 1 shows an example. The training data for the task comprises (text, schema, SQL) triplets spanning multiple distinct databases.  \\n\\n  \\nFigure 1: A Text-to-SQL system converts a user question to an SQL query, conditioned on the database schema and/or content.  \\n\\nBenchmarks. Popular benchmarks for the Textto-SQL task are WikiSQL ( Zhong et al. ,2018 )and SPIDER ( Yu et al. ,2018 ). A few others have been proposed recently to capture real-world scenarios, such as KaggleDBQA ( Lee et al. ,2021 ), SEDE ( Hazoom et al. ,2021 ), and EHRSQL ( Lee et al. ,2022 ). They all attach one SQL per text, though some of them mention the problem of ambiguity in real-world datasets. Dr. SPIDER ( Chang et al. ,2023 ), designed to test the robustness of existing models, perturbs either the text or schema of SPIDER but still assigns one SQL per text.  \\n\\nAmbiguity in SQL Although ambiguity has been studied in other fields of NLP ( Pilault et al. ,2023 ;Li et al. ,2022 ;Futeral et al. ,2022 ), it has been unexplored in the context of semantic parsing. Ambiguity in SQL arising from related column names is discussed in ( Wang et al. ,2022 ), but they only consider column ambiguity. Their method of recognizing ambiguous queries depends on labeling words of the text and does not generalize to other kinds of ambiguity. To the best of our discernment, AmbiQT represents the first open benchmark for testing coverage of ambiguous alternatives.  \\n\\nDiverse Decoding. Prior work has critiqued the lack of meaningful diversity in beam-search outputs ( Finkel et al. ,2006 ;Gimpel et al. ,2013 ;Li et al. ,2016 ;Li and Jurafsky ,2016 ). In response, many fixes have been proposed. 
Some proposals attempt to restrict the tokens sampled, using strategies like Nucleus sampling (Holtzman et al., 2020), Truncated Sampling (Hewitt et al., 2022), and Typical Sampling (Meister et al., 2023), while some rely on Template-Based decoding (Wiseman et al., 2018; Zhang et al., 2022; Fu et al., 2023; Elgohary et al., 2020; Awasthi et al., 2022). A third approach is to generate a prefix with high diversity first, then generate the rest of the sentence with lower diversity. Narayan et al. (2022) follow this recipe but focus on incorporating diverse entity orders in text summarization.  \\n\\n<html><body><table><tr><td rowspan=\"2\">Kind of ambiguity</td><td rowspan=\"2\">Count</td><td colspan=\"3\">Example</td></tr><tr><td>Question Text</td><td>SQL#1</td><td>SQL#2</td></tr><tr><td>Column Ambiguity (C)</td><td>1240</td><td>List the ids of all students.</td><td>SELECT roll_number FROM students</td><td>SELECT admission_number FROM students</td></tr><tr><td>Table Ambiguity (T)</td><td>1417</td><td>How many singers do we have?</td><td>SELECT COUNT(*) FROM artist</td><td>SELECT COUNT(*) FROM performer</td></tr><tr><td>Join Ambiguity (J)</td><td>288</td><td>What are the makers and models?</td><td>SELECT maker, model FROM model</td><td>SELECT t2.maker, t1.model FROM model AS t1 JOIN model_maker AS t2 ON t1.model_id = t2.model_id</td></tr><tr><td>Precomputed Aggregates (P)</td><td>101</td><td>Find the average weight for each pet type.</td><td>SELECT AVG(weight), pettype FROM pets GROUP BY pettype</td><td>SELECT avg_weight, pettype FROM pets_weight</td></tr></table></body></html>\\n\\nTable 1: The AmbiQT benchmark. For each question, we list two valid SQL queries as per the schema. The schema is not shown here, but the ambiguity in it can be inferred based on the two SQL queries.\\n\\n# 3 AmbiQT: A Benchmark of Ambiguous Text-to-SQL Conversion\\nAmbiQT is constructed so that each text query has two distinct valid SQL interpretations. 
Motivated by our experience working with real-life databases, we designed AmbiQT to encompass four types of ambiguity. Each entry is designed so that both alternatives have a similar relevance to the question, and a well-calibrated decoding method is expected to rank them close by in their outputs.  \\n\\nWe create AmbiQT by modifying the SPIDER (Yu et al. ,2018 ) dataset, and use ChatGPT ( OpenAI ,2022 ) to aid with the creation. In each case, we modify the schema instead of the text as that provides greater control over the modification process. We explain the kinds of ambiguity in AmbiQT below and portray examples of each in Table 1 .  \\n\\nColumn Ambiguity (C). Unlike the SPIDER benchmark where column names usually appear verbatim in the question text (like born state for the column born_state ), when users unaware of the schema pose a natural question, they introduce column ambiguity ( Wang et al. ,2022 ). For example, “ What is the capacity of O2 Arena? ” could be ambiguous if the schema has separate columns for standing and seating capacity. Likewise, a query on the number of under-nourished children is ambiguous if we have different columns for “under-weight children” and “stunted growth in children”.  \\n\\nTo simulate column ambiguity, for each text $\\\\mathbf{x}$ ,schema s, and SQL yin SPIDER, we prompt ChatGPT to generate two synonyms for each column name of sin a one-shot manner. Appendix A furnishes more details of the prompt. We then modify sby replacing $c$ with two columns $c_{1},c_{2}$ , and we use yto generate two queries $\\\\mathbf{y}_{1},\\\\mathbf{y}_{2}$ where all mentions of $c$ are replaced with $c_{1}$ in $\\\\mathbf{y}_{1}$ and with $c_{2}$ in $\\\\mathbf{y}_{2}$ . An example appears in the first row of Table 1 . We do not reuse $c$ because the SPIDER dataset often contains column names verbatim in the question, and that would violate our attempt at keeping the two options at similar relevance levels. 
We modify one column at a time and generate up to 3 examples from each original entry.  \\n\\nTable Ambiguity (T). Table name ambiguity is common in databases obtained by integrating multiple data sources, as in web tables ( Cafarella et al. ,2008 ;Pimplikar and Sarawagi ,2012 ). Here again, we prompt ChatGPT to generate two alternate names for each table. We then modify one SQL y to generate two candidates ${\\\\bf y}_{1},{\\\\bf y}_{2}$ as shown in Table 1 .  \\n\\nJoin Ambiguity (J). In production databases, a logical table is often vertically partitioned across several tables for efficient clustered access ( Stonebraker et al. ,2019 ). Column names overlapping across tables leads to Join Ambiguity. Suppose we have two tables: (1) person with columns id, name, email_address , and (2) person_details with columns id, postal_address, photo . A question asking for a person’s name and address is ambiguous on whether a JOIN with the person_details is necessary. We expose such ambiguity by modifying the schema as follows.  \\n\\nConsider a $(\\\\mathbf{x},\\\\mathbf{s},\\\\mathbf{y})$ triplet. Suppose y involves selecting two or more columns $c_{1},c_{2},\\\\ldots$ , not necessarily in the same order, from a table $t$ . Suppose further that $c_{1}$ is not a primary key of $t$ . We create a table called $t\\\\_c_{1}$ that includes just the primary key $p k_{t}$ of $t$ , and $c_{1}$ . The first alternative $\\\\mathbf{y}_{1}$ is $\\\\mathbf{y}$ and the second alternative $\\\\mathbf{y}_{2}$ uses a join over $t$ and $t\\\\_c_{1}$ , with everything else staying the same as y.  \\n\\n  \\nFigure 2: Beam Search works well when targeting only one output, but leads to superficial diversity, for example via different grouping and erroneous variants of column names.  \\n\\nPrecomputed Aggregates (P). This ambiguity is particularly common in data warehouses such as Data Commons which pre-aggregate certain variables. 
For instance, the “ total rice production ” of a state might refer to the column rice_production of state rather than a sum over it. Text-to-SQL models have a bias toward introducing a sum()...group-by clause every time total appears in the text. The non-aggregated alternative is usually missing in the top-$k$ options. We incorporate this ambiguity as follows.  \\n\\nFor each $(\\\\mathbf{x},\\\\mathbf{s},\\\\mathbf{y})$ , where $\\\\mathbf{y}$ has at least one aggregate, we construct a new table $t^{\\\\prime}$ . For each aggregate $\\\\boldsymbol{\\\\mathcal{A}}$ over column $c$ in y, we add to $t^{\\\\prime}$ the columns $\\\\mathcal{A}^{\\\\prime}\\\\_c$ for all $\\\\mathcal{A}^{\\\\prime}\\\\,\\\\in\\\\,\\\\{\\\\mathsf{avg},\\\\mathsf{sum},\\\\mathsf{min},\\\\mathsf{max}\\\\}$ , and the columns grouped by in y. For count $(\\\\star)$ , we add a column called number . We get two gold queries, the original y and a second with the group-by replaced by a direct SELECT on $t^{\\\\prime}$ as shown in the example in Table 1 . We also support aggregates across multiple tables but skip the details here.',\n",
       "   'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}},\n",
       " {'id': 454919307595371694,\n",
       "  'distance': 0.5985562801361084,\n",
       "  'entity': {'paper_id': '63608e5090e50fcafdee1152',\n",
       "   'paper_title': 'Diverse Parallel Data Synthesis for Cross-Database Adaptation of   Text-to-SQL Parsers',\n",
       "   'chunk_id': 2,\n",
"   'chunk_text': '# 2.2 Translating text of related queries\\nOur next goal is to translate the retrieved $x_{r}$ from being a text for SQL $q_{r}$ to a text $\\\\hat{x}$ for SQL $q$ , where $q\\\\approx q_{r}$ structurally. However, we do not have a readily labeled dataset to learn a model that translates $x_{r}$ to $\\\\hat{x}$ while being consistent with $q$ . We therefore decompose this task into two steps: 1) A simpler task of masking schema-specific tokens in $x_{r}$ to get a template $x_{r}^{\\\\mathrm{masked}}$ and 2) A conditional text generation model that maps $(x_{r}^{\\\\mathrm{masked}},q)$ to the text $\\\\hat{x}$ consistent with $q$ , by filling the masked positions in $x_{r}^{\\\\mathrm{masked}}$ as per $q$ . We re-purpose $\\\\mathcal{D}_{\\\\mathrm{train}}$ to get indirect supervision for training the text generation model. We now present each step in detail.  \\n\\nTo compare SQLs from different schemas, we modify the tree-edit-distance algorithm to ignore the schema names and the database values. The tree-edit-distance is further normalized by the size of the larger tree. We only consider the $\\\\{q_{r},x_{r}\\\\}$ pairs where the SQLs $\\\\{q_{r}\\\\}$ have a distance of less than 0.1 w.r.t. the SQL $q$ . Within datasets like Spider that span hundreds of schemas, it is often possible to find several SQLs structurally similar to a given SQL $q$ . For example, in Spider we found that $76\\\\%$ of the train SQLs contain at least three zero-distance (structurally identical) neighbours in other schemas. In Figure 2 , we present more detailed statistics.  \\n\\nMasking the retrieved text. Converting the retrieved text queries $\\\\{x_{r}\\\\}$ to masked templates $\\\\{x_{r}^{\\\\mathrm{masked}}\\\\}$ is a critical component of REFILL’s pipeline since irrelevant tokens like references to schema elements of the original database can potentially misguide the text generation module. 
Our initial approach was to mask tokens based on a match of text tokens with schema names and manually refined schema-to-text linked annotations as in Lei et al. (2020 ). However, this approach failed to mask all schema-related terms since their occurrences in natural text often differed significantly from schema names in the database. Table A7 shows some anecdotes. Consequently, we designed a simple frequency-based method of masking that is significantly more effective for our goal of using the masked text to just guide the diversity. For each word that appears in the text queries of the train set, we count the number of distinct databases where that word gets mentioned at least once. For example, common words like {‘show’, ‘what’, ‘list’, ‘order’} get mentioned in more than $90\\\\%$ of the schemas, and domain specific words like {‘countries’, ‘government’} occur only in text queries of a few schemas. We mask out all the words that appear in less than $50\\\\%$ of the schemas. The words to be masked are replaced by a special token MASK , and consecutive occurrences of MASK are collapsed into a single MASK token. Thus we obtain masked templates $\\\\{x_{r}^{\\\\mathrm{masked}}\\\\}$ retaining minimal information about their original schema.  \\n\\nEditing and Filling the masked text. Given a masked template $x_{r}^{\\\\mathrm{masked}}$ , and an SQL query $q$ , we wish to edit and fill the masked portions in $x_{r}^{\\\\mathrm{masked}}$ to make it consistent with the SQL $q$ . We utilize a conditional text generation model $\\\\mathcal{B}$ , BART ( Lewis et al. ,2020 ), for this purpose. We first convert $q$ into a pseudo-English representation $q^{\\\\mathrm{Eng}}$ similar to Shu et al. (2021 ), to make it easier for $\\\\mathcal{B}$ to encode $q$ . 
In addition, we wrap the table, column, or value tokens in $q^{\\\\mathrm{Eng}}$ with special tokens to provide explicit signals to the text generation model $\\\\mathcal{B}$ that such tokens are likely to appear in the output text $\\\\hat{x}$ . Next, we concatenate the tokens in $x_{r}^{\\\\mathrm{masked}}$ and $q^{\\\\mathrm{Eng}}$ for jointly encoding them as an input to $\\\\mathcal{B}$ . The output of $\\\\mathcal{B}$ ’s decoder is the text $\\\\hat{x}$ , which is expected to be consistent with the SQL $q$ .  \\n\\nSince we do not have direct supervision to fine-tune $\\\\mathcal{B}$ for this task, we present a method of re-purposing the SQL-Text pairs in $\\\\mathcal{D}_{\\\\mathrm{train}}$ for fine-tuning $\\\\mathcal{B}$ . $\\\\mathcal{D}_{\\\\mathrm{train}}$ contains pairs $(q_{i},x_{i})$ from various schemas $s_{i}$ . A Naïve way to train $\\\\mathcal{B}$ is to provide $[x_{i}^{\\\\mathrm{masked}}|q_{i}^{\\\\mathrm{Eng}}]$ , the concatenation of $x_{i}^{\\\\mathrm{masked}}$ and $q_{i}^{\\\\mathrm{Eng}}$ , as an input to the encoder and maximize the likelihood of $x_{i}$ in the decoder’s output. This way the decoder of $\\\\mathcal{B}$ learns to refill the masked tokens in $x_{i}^{\\\\mathrm{masked}}$ by attending to $q_{i}^{\\\\mathrm{Eng}}$ to recover $x_{i}$ in the output. While useful for learning to refill the masked positions, this Naïve method of training $\\\\mathcal{B}$ is mismatched from its use during inference in two ways: (i) For a given SQL $q$ , REFILL might fail to retrieve a similar-structure neighbour of $q_{i}$ from $\\\\mathcal{D}_{\\\\mathrm{train}}$ . 
In such cases, $\\\\mathcal{B}$ should be capable of falling back to pure SQL-to-Text generation to directly translate $q$ into $\\\\hat{x}$ . (ii) During inference, $x_{r}^{\\\\mathrm{masked}}$ and $q$ come from different schemas. However, during Naïve training, the masked text $x_{i}^{\\\\mathrm{masked}}$ and the SQL $q_{i}$ are derived from the same example $(q_{i},x_{i})$ . To address these two limitations, we train $\\\\mathcal{B}$ in a more Robust manner as follows: (a) For a random one-third of the train steps we train $\\\\mathcal{B}$ in the Naïve way, allowing $\\\\mathcal{B}$ to learn the filling of the masked tokens using $q_{i}^{\\\\mathrm{Eng}}$ . (b) For another one-third, we pass only $q_{i}^{\\\\mathrm{Eng}}$ as an input and maximize the likelihood of $x_{i}$ . This ensures that the model is capable of generating the text from $q_{i}^{\\\\mathrm{Eng}}$ alone, if the templates $x_{i}^{\\\\mathrm{masked}}$ are unavailable or noisy. (c) For the remaining one-third, we first retrieve an SQL-Text pair $(q_{j},x_{j})$ from a different schema such that the SQL $q_{j}$ is structurally similar to $q_{i}$ (§ 2.1 ), and the word edit distance between the masked templates $x_{i}^{\\\\mathrm{masked}}$ and $x_{j}^{\\\\mathrm{masked}}$ is also small. We can then replace $x_{i}^{\\\\mathrm{masked}}$ with $x_{j}^{\\\\mathrm{masked}}$ and encode $[x_{j}^{\\\\mathrm{masked}}|q_{i}^{\\\\mathrm{Eng}}]$ as an input to $\\\\mathcal{B}$ and maximize the likelihood of $x_{i}$ in the decoder’s output. This step makes the training more consistent with the inference, as $x_{j}^{\\\\mathrm{masked}}$ and $q_{i}^{\\\\mathrm{Eng}}$ now come from different schemas. In $\\\\S\\\\,5.4$ , we justify training Robustly compared to Naïve training.',\n",
       "   'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2022_Empirical_Methods_in_Natural_Language_Processing_with_whole_text.db'}},\n",
       " {'id': 454845631964659546,\n",
       "  'distance': 0.6036479473114014,\n",
       "  'entity': {'paper_id': '63a1751790e50fcafd1f48e7',\n",
       "   'paper_title': 'CiteBench: A Benchmark for Scientific Citation Text Generation',\n",
       "   'chunk_id': 1,\n",
       "   'chunk_text': '# 2 Related work\\n\\n# 2.1 Benchmarking\\nNLP benchmarks are unified dataset collections coupled with evaluation metrics and baselines that are used to systematically compare the performance of NLP systems for the targeted tasks in a standardized evaluation setup. Well-constructed benchmarks can boost progress in the corresponding research areas, such as SQuAD ( Rajpurkar et al. ,2016 ) for question answering, GLUE ( Wang et al. ,2018 ) for natural language understanding, KILT ( Petroni et al. ,2021 ) for knowledge-intensive tasks, GEM ( Gehrmann et al. ,2021 ,2022 ) for general-purpose text generation, and DynaBench (Kiela et al. ,2021 ) for dynamic benchmark data collection. C ITE BENCH is the first benchmark for the citation text generation task.\\n\\n# 2.2 Text generation for scientific documents\\nScientific documents are characterized by academic vocabulary and writing style, wide use of nonlinguistic elements like formulae, tables and figures, as well as structural elements like abstracts and citation anchors. Recent years have seen a rise in natural language generation for scientific text, including text simplification ( Luo et al. ,2022 ), summarization ( Qazvinian and Radev ,2008 ;Erera et al. ,2019 ;Cachola et al. ,2020 ), slides generation ( Sun et al. ,2021 ), table-to-text generation (Moosavi et al. ,2021 ), and citation text generation ( Li and Ouyang ,2022 ). Closely related to the task of citation text generation, Luu et al. (2021 )study how scientific papers can relate to each other, and how these relations can be expressed in text. Related to our work, Mao et al. (2022 ) propose a benchmark for scientific extreme summarization. Compared to extreme summarization, which amounts to generating short context-independent summaries of individual manuscripts, citation text generation focuses on context-dependent descriptions that relate the cited papers to the citing paper. 
In line with the recent efforts that address the lack of systematic automated evaluation of natural language generation in general ( Gehrmann et al. ,2021 ), our paper contributes the first unified benchmark for citation text generation in the scientific domain.\\n\\n# 2.3 Citation text generation\\nThe task of citation text generation was introduced in Hoang and Kan (2010 ), who generate a summary of related work specific to the citing paper. Since then, several task definitions and setups have been proposed (Table 1 ). Lu et al. (2020 ) cast the task as generating a multi-paragraph related work section given the abstracts of the citing paper and of the cited papers. AbuRa’ed et al. (2020 ) use the cited paper’s title and abstract to generate a citation sentence. Xing et al. (2020 ) use the abstract of the cited paper and include context before and after the citation sentence as the input, and produce the citation sentence as the output. A recent work by Chen et al. (2021 ) uses multiple cited abstracts as input to generate a related work paragraph. The great variability of the task definitions and setups in citation text generation prevents the study of citation text generation methods across datasets and evaluation setups. Unlike prior work that explores varying task settings, C ITE BENCH brings the diverging task definitions and datasets together in a unified setup. This allows us to compare citation text generation models across different datasets in a standardized manner using an extensive set of quantitative metrics, as well as novel automated qualitative metrics.  
\\n\\n<html><body><table><tr><td rowspan=\"2\">Dataset</td><td colspan=\"3\">Input</td><td rowspan=\"2\">Output Citation text (T)</td><td rowspan=\"2\">Datasources</td></tr><tr><td>Cited document (D*) SingleAbs MultiAbs</td><td>Abs</td><td>Citing context (C\") Text</td></tr><tr><td>ABURAED</td><td></td><td>Title √</td><td></td><td>Sent Para</td><td>Multiple</td></tr><tr><td>CHEN</td><td>√</td><td></td><td></td><td></td><td>S2ORCandDelve</td></tr><tr><td>LU</td><td></td><td></td><td></td><td></td><td>arXiv.org and MAG</td></tr><tr><td>XING</td><td></td><td></td><td></td><td></td><td>AAN</td></tr></table></body></html>  \\n\\nTable 1: Overview of datasets in C ITE BENCH . Single Abs $=$ Single abstract, i.e., one cited document per instance. Multi Abs $=$ Multiple abstracts, i.e., multiple cited documents per instance. Abs $=$ Abstract, i.e., a dataset contains the abstract of the citing paper. Text $=\\\\xi$ a dataset contains additional context from the citing paper. Sent $=$ generation target is a single sentence. Para $=$ generation target is a paragraph.   \\nTable 2: Datasets statistics. The validation set for XING has been created by us via randomly sampling $10\\\\%$ of the original training data. Across datasets, very few inputs contain more than 4,096 tokens, and few outputs are longer than 1,024 tokens. We exploit this property to speed up the evaluation in Section 3.3 .  
\\n\\n\\n<html><body><table><tr><td>Dataset</td><td>#Train</td><td>#Validation</td><td>#Test</td><td>Inputs>4,096tok.</td><td>Outputs>1,024tok.</td></tr><tr><td>ABURAED</td><td>15,000</td><td>1,384</td><td>219</td><td>0%</td><td>0%</td></tr><tr><td>LU</td><td>30,369</td><td>5,066</td><td>5,093</td><td><0.001%</td><td>0%</td></tr><tr><td>XING</td><td>77,086</td><td>8,566</td><td>400</td><td><0.001%</td><td><0.001%</td></tr><tr><td>CHEN -Delve</td><td>72,927</td><td>3,000</td><td>3,000</td><td><0.001%</td><td>0.004%</td></tr><tr><td>-S2ORC</td><td>126,655</td><td>5,000</td><td>5,000</td><td>0.017%</td><td><0.001%</td></tr><tr><td>Total</td><td>322,037</td><td>23,016</td><td>13,712</td><td>0.007%</td><td><0.001%</td></tr></table></body></html>\\n\\n# 3 Benchmark\\n\\n# 3.1 Task definition and datasets\\nWe formalize the task of citation text generation as follows: Given a set of $n$ (cited) target documents $\\\\{D_{1}^{t}...D_{n}^{t}\\\\}$ , a (citing) source document $D^{s}$ , and a set of $m$ citing document contexts $\\\\{C_{1}^{s}...C_{m}^{s}\\\\}\\\\in D^{s}$ , generate a citation text $T^{\\\\prime}$ that is as close as possible to the original citation text $T\\\\in D^{s}$ . This general definition allows wide variation in how the task is implemented. The cited document $D_{i}^{t}$ can be represented by the abstract $a^{t_{i}}$ , the concatenation of the title and the abstract, or even the full text of the paper. The context set $C^{s}$ covers sentences before and after the citation text in $D^{s}$ , as well as the abstract $a^{s}\\\\in D^{s}$ .  \\n\\nSuch a general, open definition allows us to accommodate diverse approaches to the task within one framework (Table 1 ). To populate the benchmark, we select four datasets, focusing on the task design and domain variety: ABURAED (AbuRa’ed et al. ,2020 ), CHEN (Chen et al. ,2021 ), LU (Lu et al. ,2020 ), and XING (Xing et al. ,2020 ). 
Dataset transformation details are provided in Appendix A.1 . Table 2 shows the quantitative statistics, and Figure 2 provides data examples from each dataset. The CHEN dataset has two subsets – CHEN Delve and CHEN S2ORC – based on the data source; we use CHEN to denote the union of the two subsets. The datasets are distributed under varying licenses; we have obtained explicit permissions from the authors to use the data for research purposes in cases when licensing was underspecified (see Ethics statement).  \\n\\nWe note that the datasets included in the benchmark are not only structurally diverse, but also cover a wide range of domains, from medicine to computer science. In particular, ABURAED and XING exemplify citation text generation in the computational linguistics domain, CHEN Delve covers the computer science domain; LU and CHEN S2ORC span a wide range of domains represented on arxiv.org and in the S2ORC corpus, respectively, including biology, medicine and physics.',\n",
       "   'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}}]"
      ]
     },
     "execution_count": 80,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "[rerank_papers[r[\"index\"]] for r in result[\"results\"] if r[\"relevance_score\"]>1]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 77,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'Benchmarking and Improving Text-to-SQL Generation under Ambiguity.EMNLP_2023 chunk 0'"
      ]
     },
     "execution_count": 77,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import re\n",
    "\n",
    "# Keep only the hits above the rerank threshold and format them as citation strings.\n",
    "citation_papers = [rerank_papers[r[\"index\"]] for r in result[\"results\"] if r[\"relevance_score\"] > 21]\n",
    "citation_info = \"\"\n",
    "for p in citation_papers:\n",
    "    paper_id = p[\"entity\"][\"paper_id\"]  # renamed from 'id' to avoid shadowing the builtin\n",
    "    title = p[\"entity\"][\"paper_title\"]\n",
    "    chunk_id = p[\"entity\"][\"chunk_id\"]\n",
    "    # e.g. 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db' -> ['EMNLP_2023']\n",
    "    original_filename = re.findall(r\"Data_+(.*?)_with\", p[\"entity\"][\"original_filename\"])\n",
    "    citation_info += f\"{title}.{original_filename[0]} chunk {chunk_id}\"\n"
   ]
  },
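  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch, not part of the original pipeline: number the thresholded hits so each entry can later be matched to a `<sup>n</sup>` marker like the one appended to a statement below. It assumes `citation_papers` from the previous cell; the numbering scheme itself is an assumption."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch (assumption): build numbered citation strings for <sup>n</sup> markers.\n",
    "numbered = []\n",
    "for n, p in enumerate(citation_papers, start=1):\n",
    "    e = p[\"entity\"]\n",
    "    numbered.append(f\"[{n}] {e['paper_title']} chunk {e['chunk_id']}\")\n",
    "numbered_citation_info = \"\\n\".join(numbered)\n"
   ]
  },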
  {
   "cell_type": "code",
   "execution_count": 78,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'Research on Text-to-SQL generation has traditionally focused on scenarios where each natural language question is associated with one correct SQL query.<sup>number</sup>'"
      ]
     },
     "execution_count": 78,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Preview: append a citation-marker placeholder to a statement ('number' is a placeholder).\n",
    "statement[\"statement\"] + \"<sup>number</sup>\"\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 70,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{'id': 454845681740829484, 'distance': 0.6236740350723267, 'entity': {'paper_id': '6535d747939a5f408295c649', 'paper_title': 'Benchmarking and Improving Text-to-SQL Generation under Ambiguity', 'chunk_id': 0, 'chunk_text': '# Benchmarking and Improving Text-to-SQL Generation under Ambiguity\\nAdithya Bhaskar $\\\\mathbf{\\\\nabla}_{*}\\\\bigotimes\\\\bigtriangleup$ Tushar Tomar ∗♠ Ashutosh Sathe ♠Sunita Sarawagi ♠♠IIT Bombay ♢Princeton University\\n\\n# Abstract\\nResearch in Text-to-SQL conversion has been largely benchmarked against datasets where each text query corresponds to one correct SQL. However, natural language queries over reallife databases frequently involve significant ambiguity about the intended SQL due to overlapping schema names and multiple confusing relationship paths. To bridge this gap, we develop a novel benchmark called AmbiQT with over 3000 examples where each text is interpretable as two plausible SQLs due to lexical and/or structural ambiguity.  \\n\\nWhen faced with ambiguity, an ideal top$k$ decoder should generate all valid interpretations for possible disambiguation by the user ( Elgohary et al. ,2021 ;Zhong et al. ,2022 ). We evaluate several Text-to-SQL systems and decoding algorithms, including those employing state-of-the-art LLMs, and find them to be far from this ideal. The primary reason is that the prevalent beam search algorithm and its variants, treat SQL queries as a string and produce unhelpful token-level diversity in the top$k$ .  \\n\\nWe propose LogicalBeam, a new decoding algorithm that navigates the SQL logic space using a blend of plan-based template generation and constrained infilling. Counterfactually generated plans diversify templates while in-filling with a beam-search, that branches solely on schema names, provides value diversity. LogicalBeam is up to $2.5\\\\times$ more effective than state-of-the-art models at generating all candidate SQLs in the top$k$ ranked outputs. 
It also enhances the top5 Exact and Execution Match Accuracies on SPIDER and Kaggle DBQA 1 .\\n\\n# 1 Introduction\\nResearch on Text-to-SQL generation has focused on scenarios where each natural language question is associated with one correct SQL ( Zelle and Mooney ,1996 ;Tang and Mooney ,2000 ;Scholak et al. ,2021a ;Wang et al. ,2020 ;Rubin and Berant ,2021 ;Xie et al. ,2022 ;Arcadinho et al. ,2022 ;Zeng et al. ,2022 ;Scholak et al. ,2021b ;Pourreza and Rafiei ,2023 ). Popular benchmarks driving such research, including WikiSQL ( Zhong et al. ,2018 ), SPIDER ( Yu et al. ,2018 ), its robust perturbations ( Chang et al. ,2023 ), and even “in-thewild” benchmarks such as KaggleDBQA ( Lee et al. ,2021 ) and SEDE ( Hazoom et al. ,2021 ) all associate one correct SQL with text. Meanwhile, ambiguity is prevalent in real-life databases — particularly the ones obtained by integrating several data sources for data analysis, where a natural language interface is most in demand. The sources of ambiguity are several — inherent ambiguity of natural language, the user’s ignorance of table/column names, overlapping strings in column names, underspecified clauses, and confusion about whether aggregates are pre-computed, or if a join is required. Hazoom et al. (2021 ) observe that up to $87\\\\%$ of queries on the stack exchange database are underspecified, and Wang et al. (2022 ) mention that $11\\\\%$ of queries exhibited ambiguity in column names. Although prior work has brought up ambiguity, there is no publicly available benchmark with ambiguous queries, nor a comprehensive evaluation of systems under ambiguity.  \\n\\nOur first contribution is to bridge this gulf by developing a benchmark, AmbiQT, that tests performance under ambiguity in the context of current models. AmbiQT includes over 3000 examples, each associating a natural question on a database with two valid SQLs. 
Inspired by our experience with several real-world datasets, we target four types of ambiguity spanning both lexical (ambiguous column and table names) and structural (whether a join is necessary, and an aggregate is pre-computed) ambiguity. The benchmark is generated via a combination of ChatGPT ( OpenAI ,2022 ) based synonym generation and perturbation, and standard rule-based perturbation.  \\n\\nWhen faced with ambiguity, an ideal Text-toSQL system should incorporate all valid alternatives in their top$k$ SQL outputs, for user resolution. We show that present approaches, ranging from T5-3B ( Raffel et al. ,2019 ) to SOTA models, fail to generate all ambiguous outputs with any decoding strategy, including beam search and diversity-promoting sampling methods such as Nucleus ( Holtzman et al. ,2020 ) and Typical sampling ( Meister et al. ,2023 ). Most outputs are small lexical tweaks of the top choice, bringing about little meaningful diversity in SQL structures or schema alternatives. Even SOTA LLMs like ChatGPT ( OpenAI ,2022 ) suffer from this issue.  \\n\\nTo remedy the lack of diversity, we propose a new decoding algorithm, LogicalBeam, that allocates branching to explore underlying logical variants of the SQL rather than the string form. We catalog the errors of T5-3B ( Raffel et al. ,2019 ) on the SPIDER dev split and use our insights to encourage targeted types of diversity — the number of JOIN s and selections, and table/column names.  \\n\\nOur main contributions are:   \\n•We develop AmbiQT, the first benchmark that tests performance under four types of ambiguity over $\\\\mathbf{3000+}$ examples.   \\n•We show that SOTA methods, including a finetuned T5-3B, RESDSQL ( Li et al. ,2023 ), OpenAI Codex, and ChatGPT, provide a poor representation of ambiguity despite their high accuracy on conventional benchmarks.   
\\n•We present LogicalBeam, a two-step algorithm that generates plan-based templates with counterfactually controlled plan diversity and fills them via a beam search that branches only on schema names.   \\n•We show that LogicalBeam consistently increases the fraction of time when all gold SQLs get generated in the Top-5 choices by $1.5-2.5\\\\times$ over the baselines across the board on AmbiQT.', 'original_filename': 'Conf_Paper_Meta_Data_EMNLP_2023_with_whole_text.db'}}\n"
     ]
    }
   ],
   "source": [
    "# Inspect the first reranked hit and attach its relevance score;\n",
    "# the loop breaks after one iteration (drop the break to process all hits).\n",
    "for r in result[\"results\"]:\n",
    "    index_r = r[\"index\"]\n",
    "    print(rerank_papers[index_r])\n",
    "    rerank_papers[index_r][\"relevance_score\"] = r[\"relevance_score\"]\n",
    "\n",
    "    break"
   ]
  },
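  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch, not part of the original pipeline: the same score attachment as the cell above but over every hit (no `break`), followed by a descending sort. It assumes the `result` and `rerank_papers` variables defined earlier in the notebook."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch (assumption): attach relevance scores to all hits, then sort by score.\n",
    "for r in result[\"results\"]:\n",
    "    rerank_papers[r[\"index\"]][\"relevance_score\"] = r[\"relevance_score\"]\n",
    "\n",
    "scored = sorted(\n",
    "    (p for p in rerank_papers if \"relevance_score\" in p),\n",
    "    key=lambda p: p[\"relevance_score\"],\n",
    "    reverse=True,\n",
    ")\n"
   ]
  },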
  {
   "cell_type": "code",
   "execution_count": 98,
   "metadata": {},
   "outputs": [],
   "source": [
    "write_result(\"hybrid_rerank_result\", result)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['0.95.This paper proposes an incremental parameter modification framework specifically designed for MLLMs, achieving 89% editing accuracy while preserving 92% of original model capabilities. Our method introduces a multimodal alignment validator that dynamically adjusts editing operations across visual-textual modalities. Experimental results on 12 downstream tasks demonstrate significant improvements in model adaptability for cross-domain applications. The proposed technique effectively addresses catastrophic forgetting issues in existing editing approaches...']"
      ]
     },
     "execution_count": 34,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "rerank_threshold = 19\n",
    "[rerank_papers[r[\"index\"]] for r in result[\"results\"] if r[\"relevance_score\"] > rerank_threshold]"
   ]
  },
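  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch, not part of the original pipeline: wrap the threshold filtering used above in a helper so different cutoffs can be compared without repeating the comprehension. `filter_by_score` is a hypothetical name; it assumes `result`, `rerank_papers`, and `rerank_threshold` from the cells above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch (assumption): reusable threshold filter over the rerank results.\n",
    "def filter_by_score(result, rerank_papers, threshold):\n",
    "    \"\"\"Return the reranked papers whose relevance_score exceeds threshold.\"\"\"\n",
    "    return [\n",
    "        rerank_papers[r[\"index\"]]\n",
    "        for r in result[\"results\"]\n",
    "        if r[\"relevance_score\"] > threshold\n",
    "    ]\n",
    "\n",
    "filter_by_score(result, rerank_papers, rerank_threshold)\n"
   ]
  },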
  {
   "cell_type": "code",
   "execution_count": 36,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'It has been posited that addressing the identified challenges and limitations will significantly advance the field of MLLM editing, thereby unlocking the full potential of these models across diverse applications.'"
      ]
     },
     "execution_count": 36,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "statement"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "pytorch",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.18"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
