{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "5495744f",
   "metadata": {},
   "source": [
    "## arXiv\n",
    "\n",
    "arXiv是一个由康奈尔大学维护的在线预印本论文存储库，它提供了物理、数学、计算机科学、定量生物学、定量金融学和统计学的开放获取服务。arXiv成立于1991年，最初是由物理学家保罗·金斯帕格（Paul Ginsparg）创建，目的是为了提供一个电子化的方式供物理学家共享研究成果。随着时间的推移，它逐渐扩展到其他学科。\n",
    "### arXiv的特点和功能\n",
    "1. **预印本平台**：arXiv允许科研人员在将论文提交给学术期刊发表之前，先行发布其研究成果。这样做的目的是为了加快科学知识的传播。\n",
    "2. **开放获取**：所有在arXiv上发表的论文都可以免费获取，这有助于全球的研究人员、学者和学生获取最新的科学进展。\n",
    "3. **同行评审**：虽然arXiv上的论文未经正式的同行评审，但它们通常会在提交前由arXiv的志愿者或编委会进行审核，以确保论文的基本质量和主题相关性。\n",
    "4. **分类和标签**：论文按照学科分类，并且可以使用关键词进行检索，便于用户找到自己感兴趣的研究领域。\n",
    "5. **版本控制**：作者可以上传论文的新版本，更新研究成果或回应同行评审的反馈。每个版本都会被记录，确保了研究过程的透明性。\n",
    "6. **引用和统计**：arXiv提供论文的引用次数和下载次数，这可以作为衡量论文影响力的一个指标。\n",
    "### 如何使用arXiv\n",
    "1. **浏览和搜索**：用户可以直接在arXiv网站上浏览最新上传的论文，或者使用搜索功能查找特定主题的论文。\n",
    "2. **订阅和通知**：用户可以订阅特定主题或作者的更新，当有新的论文上传时，arXiv会通过电子邮件通知订阅者。\n",
    "3. **提交论文**：研究人员可以通过arXiv网站提交自己的论文。提交前需要注册账号，并遵守arXiv的提交指南。\n",
    "4. **评论和讨论**：虽然arXiv本身不提供评论功能，但有些第三方平台允许用户对arXiv上的论文进行评论和讨论。\n",
    "### arXiv在中国的影响\n",
    "在中国，arXiv同样被广泛使用，它为国内外的科研人员提供了一个宝贵的信息共享平台。中国的科研机构和大学鼓励研究人员使用arXiv来展示他们的研究成果，以促进国际合作和学术交流。同时，中国的科研人员也通过arXiv获取全球科学研究的最新进展。\n",
    "arXiv对全球科学研究的开放性和可获取性作出了重要贡献，符合全球科学共同体共同推动科学知识传播和学术交流的愿景。\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "id": "316b4630",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_openai import ChatOpenAI, OpenAI\n",
    "\n",
    "openai_api_key = \"EMPTY\"\n",
    "openai_api_base = \"http://127.0.0.1:1234/v1\"\n",
    "model = ChatOpenAI(\n",
    "    openai_api_key=openai_api_key,\n",
    "    openai_api_base=openai_api_base,\n",
    "    temperature=0.1,\n",
    ")\n",
    "llm = model"
   ]
  },
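  {
   "cell_type": "markdown",
   "id": "b1a2c3d4",
   "metadata": {},
   "source": [
    "A quick smoke test of the connection above (a minimal sketch: it assumes the local OpenAI-compatible server at `http://127.0.0.1:1234/v1` is actually running, and the prompt is purely illustrative):\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b1a2c3d5",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical check: this fails unless the local server configured above is up.\n",
    "response = llm.invoke(\"In one sentence, what is arXiv?\")\n",
    "print(response.content)"
   ]
  },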
  {
   "cell_type": "code",
   "execution_count": 21,
   "id": "976f198d",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'Published: 2016-05-26\\nTitle: Heat-bath random walks with Markov bases\\nAuthors: Caprice Stanley, Tobias Windisch\\nSummary: Graphs on lattice points are studied whose edges come from a finite set of\\nallowed moves of arbitrary length. We show that the diameter of these graphs on\\nfibers of a fixed integer matrix can be bounded from above by a constant. We\\nthen study the mixing behaviour of heat-bath random walks on these graphs. We\\nalso state explicit conditions on the set of moves so that the heat-bath random\\nwalk, a generalization of the Glauber dynamics, is an expander in fixed\\ndimension.'"
      ]
     },
     "execution_count": 21,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain_community.utilities import ArxivAPIWrapper\n",
    "\n",
    "arxiv = ArxivAPIWrapper()\n",
    "docs = arxiv.run(\"1605.08386\")\n",
    "docs"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "id": "9742640c",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\"Published: 2023-09-21\\nTitle: Nearly Gorenstein projective monomial curves of small codimension\\nAuthors: Sora Miyashita\\nSummary: In this paper, we characterize nearly Gorenstein projective monomial curves\\nof codimension 2 and 3.\\n\\nPublished: 2023-03-03\\nTitle: On SORA for High-Risk UAV Operations under New EU Regulations: Perspectives for Automated Approach\\nAuthors: Hamed Habibi, D. M. K. K. Venkateswara Rao, Jose Luis Sanchez-Lopez, Holger Voos\\nSummary: In this paper, we investigate requirements to prepare an application for\\nSpecific Operations Risk Assessment (SORA), regulated by European Union\\nAviation Safety Agency (EASA) to obtain flight authorization for Unmanned\\nAerial Vehicles (UAVs) operations and propose some perspectives to automate the\\napproach based on our successful application. Preparation of SORA requires\\nexpert knowledge as it contains technicalities. Also, the whole process is an\\niterative and time-consuming one. It is even more challenging for higher-risk\\noperations, such as those in urban environments, near airports, and multi- and\\ncustomized models for research activities. SORA process limits the potential\\nsocio-economic impacts of innovative UAV capabilities. Therefore, in this\\npaper, we present a SORA example, review the steps and highlight challenges.\\nAccordingly, we propose an alternative workflow, considering the same steps,\\nwhile addressing the challenges and pitfalls, to shorten the whole process.\\nFurthermore, we present a comprehensive list of preliminary technical\\nprocedures, including the pre/during/post-flight checklists, design and\\ninstallation appraisal, flight logbook, operational manual, training manual,\\nand General Data Protection Regulation (GDPR), which are not explicitly\\ninstructed in SORA manual. 
Moreover, we propose the initial idea to create an\\nautomated SORA workflow to facilitate obtaining authorization, which is\\nsignificantly helpful for operators, especially the scientific community, to\\nconduct experimental operations.\\n\\nPublished: 2024-03-20\\nTitle: Mora: Enabling Generalist Video Generation via A Multi-Agent Framework\\nAuthors: Zhengqing Yuan, Ruoxi Chen, Zhaoxu Li, Haolong Jia, Lifang He, Chi Wang, Lichao Sun\\nSummary: Sora is the first large-scale generalist video generation model that garnered\\nsignificant attention across society. Since its launch by OpenAI in February\\n2024, no other video generation models have paralleled {Sora}'s performance or\\nits capacity to support a broad spectrum of video generation tasks.\\nAdditionally, there are only a few fully published video generation models,\\nwith the majority being closed-source. To address this gap, this paper proposes\\na new multi-agent framework Mora, which incorporates several advanced visual AI\\nagents to replicate generalist video generation demonstrated by Sora. In\\nparticular, Mora can utilize multiple visual agents and successfully mimic\\nSora's video generation capabilities in various tasks, such as (1)\\ntext-to-video generation, (2) text-conditional image-to-video generation, (3)\\nextend generated videos, (4) video-to-video editing, (5) connect videos and (6)\\nsimulate digital worlds. Our extensive experimental results show that Mora\\nachieves performance that is proximate to that of Sora in various tasks.\\nHowever, there exists an obvious performance gap between our work and Sora when\\nassessed holistically. In summary, we hope this project can guide the future\\ntrajectory of video generation through collaborative AI agents.\""
      ]
     },
     "execution_count": 22,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain_community.utilities import ArxivAPIWrapper\n",
    "\n",
    "arxiv = ArxivAPIWrapper()\n",
    "docs = arxiv.run(\"sora\")\n",
    "docs"
   ]
  },
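  {
   "cell_type": "markdown",
   "id": "c2d3e4f5",
   "metadata": {},
   "source": [
    "The wrapper can also be tuned to return fewer, shorter results. A sketch using its `top_k_results` and `doc_content_chars_max` options (present in current `langchain_community` releases, but check your installed version):\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c2d3e4f6",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_community.utilities import ArxivAPIWrapper\n",
    "\n",
    "# Return at most 2 papers and truncate each document to 1000 characters.\n",
    "arxiv_short = ArxivAPIWrapper(top_k_results=2, doc_content_chars_max=1000)\n",
    "print(arxiv_short.run(\"sora\"))"
   ]
  },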
  {
   "cell_type": "code",
   "execution_count": 23,
   "id": "32c7e774",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "arxiv.Search(query='gpt4', id_list=[], max_results=5, sort_by=<SortCriterion.Relevance: 'relevance'>, sort_order=<SortOrder.Descending: 'descending'>)"
      ]
     },
     "execution_count": 23,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import arxiv\n",
    "\n",
    "search = arxiv.Search(\n",
    "    query = \"gpt4\",\n",
    "    max_results = 5,\n",
    "    sort_by = arxiv.SortCriterion.Relevance\n",
    ")\n",
    "search"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "id": "5ae63ad0",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<itertools.islice at 0x1f7e4e53790>"
      ]
     },
     "execution_count": 24,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "client = arxiv.Client()\n",
    "results = client.results(search)\n",
    "\n",
    "results"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "id": "22a8126b",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "http://arxiv.org/abs/2309.12732v1\n",
      "http://arxiv.org/abs/2403.02839v1\n",
      "http://arxiv.org/abs/2307.09744v1\n",
      "http://arxiv.org/abs/2308.01497v3\n",
      "http://arxiv.org/abs/2309.09437v2\n"
     ]
    }
   ],
   "source": [
    "papers = []\n",
    "for item in results:\n",
    "    print(item)\n",
    "    papers.append(item)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "id": "8e4fe393",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "arxiv.Result(entry_id='http://arxiv.org/abs/2309.12732v1', updated=datetime.datetime(2023, 9, 22, 9, 31, 39, tzinfo=datetime.timezone.utc), published=datetime.datetime(2023, 9, 22, 9, 31, 39, tzinfo=datetime.timezone.utc), title=\"OpenAi's GPT4 as coding assistant\", authors=[arxiv.Result.Author('Lefteris Moussiades'), arxiv.Result.Author('George Zografos')], summary='Lately, Large Language Models have been widely used in code generation. GPT4\\nis considered the most potent Large Language Model from Openai. In this paper,\\nwe examine GPT3.5 and GPT4 as coding assistants. More specifically, we have\\nconstructed appropriate tests to check whether the two systems can a) answer\\ntypical questions that can arise during the code development, b) produce\\nreliable code, and c) contribute to code debugging. The test results are\\nimpressive. The performance of GPT4 is outstanding and signals an increase in\\nthe productivity of programmers and the reorganization of software development\\nprocedures based on these new tools.', comment='10 pages', journal_ref=None, doi=None, primary_category='cs.AI', categories=['cs.AI', 'cs.SE'], links=[arxiv.Result.Link('http://arxiv.org/abs/2309.12732v1', title=None, rel='alternate', content_type=None), arxiv.Result.Link('http://arxiv.org/pdf/2309.12732v1', title='pdf', rel='related', content_type=None)])"
      ]
     },
     "execution_count": 26,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "papers[0]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "id": "29fd684e",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['http://arxiv.org/html/2309.12732v1',\n",
       " 'http://arxiv.org/html/2403.02839v1',\n",
       " 'http://arxiv.org/html/2307.09744v1',\n",
       " 'http://arxiv.org/html/2308.01497v3',\n",
       " 'http://arxiv.org/html/2309.09437v2']"
      ]
     },
     "execution_count": 27,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "htmlUrls = []\n",
    "\n",
    "for item in papers:\n",
    "    url = item.entry_id.replace(\"abs\",\"html\")\n",
    "    htmlUrls.append(url)\n",
    "\n",
    "    \n",
    "htmlUrls\n"
   ]
  },
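  {
   "cell_type": "markdown",
   "id": "d3e4f5a6",
   "metadata": {},
   "source": [
    "Besides linking to the HTML pages, the `arxiv` package can download a paper's PDF directly via `Result.download_pdf` (a sketch; the target filename is illustrative, and the call writes into the given directory):\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d3e4f5a7",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Download the first result's PDF; download_pdf returns the saved file's path.\n",
    "pdf_path = papers[0].download_pdf(dirpath=\".\", filename=\"gpt4_coding_assistant.pdf\")\n",
    "print(pdf_path)"
   ]
  },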
  {
   "cell_type": "code",
   "execution_count": 40,
   "id": "7f60c780",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2309.12732v1\n"
     ]
    }
   ],
   "source": [
    "import urllib.parse\n",
    "\n",
    "url = \"http://arxiv.org/html/2309.12732v1\"\n",
    "# 使用urllib.parse.urlsplit来处理URL，它会更智能地处理带参数的URL\n",
    "url_parts = urllib.parse.urlsplit(url)\n",
    "\n",
    "# 获取路径部分\n",
    "path = url_parts.path\n",
    "\n",
    "# 分割路径来获取最后一部分\n",
    "filename = path.split('/')[-1]\n",
    "\n",
    "print(filename)  # 输出\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "id": "d4e0d630",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[Document(page_content='OPENAI’S GPT4 AS CODING ASSISTANT\\nLefteris Moussiades\\nComputer Science Department\\nInternational Hellenic University\\nGreece, Kavala PA 65404\\nlmous@cs.ihu.gr\\nGeorge Zografos\\nComputer Science Department\\nInternational Hellenic University\\nGreece, Kavala PA 65404\\ngezozra@cs.ihu.gr\\nSeptember 25, 2023\\nABSTRACT\\nLately, Large Language Models have been widely used in code generation. GPT4 is considered the\\nmost potent Large Language Model from Openai. In this paper, we examine GPT3.5 and GPT4\\nas coding assistants. More specifically, we have constructed appropriate tests to check whether\\nthe two systems can a) answer typical questions that can arise during the code development, b)\\nproduce reliable code, and c) contribute to code debugging. The test results are impressive. The\\nperformance of GPT4 is outstanding and signals an increase in the productivity of programmers and\\nthe reorganization of software development procedures based on these new tools.\\n1\\nIntroduction\\nAmong other features, Large Language Models (LLM) can generate code in various programming languages [1].\\nRecently, many publications have recommended and evaluated LLMs specialized in code generation.\\nCodeBERT is a bimodal pre-trained model designed for programming and natural language tasks, like code search\\nand documentation generation. It’s developed using a Transformer-based architecture [2] and trained with a unique\\nobjective function to effectively use paired and unpaired data from programming and natural language sources [3].\\nCodex is a GPT language model fine-tuned on public GitHub code, and a version of it powers GitHub Copilot. When\\nevaluated on the HumanEval set, designed to gauge program synthesis from docstrings, Codex solves 28.8% of the tasks,\\noutperforming GPT-3 and GPT-J. The study also uncovers that multiple samplings from Codex enhance problem-solving\\nsuccess rates. 
Additionally, the paper discusses the challenges and broader implications of advanced code generation\\ntechnologies [4].\\nThe capabilities of large language models in synthesizing Python programs from natural language prompts using two\\nnew benchmarks, MBPP and MathQA-Python, are explored by [5]. The study reveals that as model size increases,\\nsynthesis performance also improves, with the largest models being able to correctly generate solutions to nearly 60%\\nof MBPP problems through few-shot learning. The models also benefit from human feedback, cutting error rates in half,\\nbut struggle to predict the outputs of the generated programs when provided with specific inputs.\\nStudy [6] introduces a novel approach to code completion using an \"external\" context, emulating human behaviour of\\nreferencing related code snippets. The proposed framework combines retrieval techniques with traditional language\\nmodels to better predict code, factoring in direct copying and semantically similar code references. When tested on\\nPython and Java, this method achieves state-of-the-art performance on the CodeXGLUE benchmark.\\nPaper [7] explores LLMs trained on unlabeled code corpora for code generation. It introduces CERT, a two-step method\\nthat creates a basic code outline and then fills in the details. The study also presents two new benchmarks, PandasEval\\nand NumpyEval, for evaluating library-oriented code generation.\\nPanGu-Coder is a pre-trained language model built on the PanGu-Alpha architecture designed to generate code from\\nnatural language descriptions. The model is trained using a two-stage strategy, starting with raw programming data,\\nfollowed by task-focused training using Causal and Masked Language Modelling objectives [8]\\narXiv:2309.12732v1  [cs.AI]  22 Sep 2023\\nLi et al. 
introduced AlphaCode, a deep-learning model built with self-supervised learning and an encoder-decoder\\ntransformer, which approximates human-level performance in computer programming competitions on the Codeforces\\nplatform. Authors argue that this advancement could significantly boost programmers’ productivity and reshape\\nprogramming culture, where humans primarily define problems and machine learning handles code generation and\\nexecution [9].\\nCODEGEN is a family of large language models trained on natural language and programming data to advance program\\nsynthesis. The study also explores a multi-step approach to program synthesis, revealing improved performance\\nwhen tasks are broken down into multiple prompts, and introduces an open benchmark, the Multi-Turn Programming\\nBenchmark (MTPB), for this purpose [10].\\nPaper [11] investigates the impact of LLMs, like OpenAI Codex, on developers’ code security. Through a user study\\ninvolving 58 student programmers, the research examines the code’s security when implementing a specific C-based\\ntask with the assistance of LLMs. The findings suggest that using LLMs does not substantially increase the risk of\\nintroducing critical security vulnerabilities in such coding tasks.\\nRepoCoder [12] is a framework designed for repository-level code completion that efficiently leverages information\\nscattered across different files in a repository. RepoCoder uses a combination of a similarity-based retriever and a\\npre-trained code language model, along with an innovative iterative retrieval-generation approach, to improve code\\ncompletion at various levels of granularity. RepoCoder has been tested on a new benchmark called RepoEval.\\nPaper [13] thoroughly surveys 27 large language models geared explicitly towards the NL2Code task, which involves\\ngenerating code from natural language descriptions. 
The study evaluates these models using the HumanEval benchmark\\nand derives that success in this domain hinges on \"Large Size, Premium Data, Expert Tuning\". The authors also\\nintroduce a dedicated website to monitor ongoing advancements and discuss the gap between model performance and\\nhuman capabilities in the NL2Code realm.\\nThe BigCode community has unveiled StarCoder and StarCoderBase, advanced Large Language Models designed for\\ncode generation and infilling, with StarCoderBase trained on a vast dataset called The Stack and StarCoder being a\\nfine-tuned version for Python [14].\\nWizardCoder is a model that empowers Code Large Language Models (Code LLMs) with complex instruction fine-\\ntuning by adapting the Evol-Instruct method to the domain of code. It has been introduced in a paper [15] and has\\ndemonstrated exceptional performance in code-related tasks.\\nStudy [16] investigates the use of large language models (LLMs) to aid in deductive coding, a method in qualitative\\nanalysis where data is labelled based on predetermined codebooks. The approach reached satisfactory alignment with\\nexpert-labelled outcomes by integrating GPT-3 with expert-created codebooks for a specific task related to coding\\ncuriosity-driven questions. The paper highlights the potential and challenges of employing LLMs in qualitative data\\ncoding and broader applications.\\nOne result of all this development is the addition of intelligent assistants to many well-known IDEs. For example, Visual\\nStudio Code is supported by IntelliCode, PyCharm by Code With Me, Eclipse by Code Recommenders, NetBeans by\\nDeep Learning, IntelliJ IDEA by Code With Me, and Xcode by SourceKit-LSP [17].\\nIn March 2023, Openai published the GPT-4 system card, which [18] analyzes the capabilities of GPT-4, including\\ncode generation. However, to date, we have not found any publication evaluating the coding capabilities of GPT-4. 
This\\npaper evaluates GPT-4 and GPT-3.5 as coding assistants.\\n2\\nMethodology\\nWe consider three tasks for which a coding assistant should be helpful: Code development, Code Debugging, and\\nanswering questions related to code. Code development and Code debugging are self-explanatory concepts. The human\\nprogrammer often has questions during code writing, such as details on the syntax of a command. For this reason, we\\ncheck that GPT-3.5 and 4 can answer questions about the code satisfactorily.\\nThere are many source code datasets, several mentioned in the introduction. However, these are geared to check LLMs’\\ncode production specifically. In addition, problems of a prototypical nature often arise in the production environment.\\nAlthough we do not know exactly which data sets GPT-3.5 and 4 are trained on, it is reasonable to assume that they\\nare trained on public data sets whose purpose is to evaluate LLMs’ coding capabilities. For the reasons above, our\\ntests do not rely on such data sets. Instead, we have carefully constructed 3 test suites: one for testing code generation\\ncapabilities, one for testing debugging capabilities, and one for answering questions. The tests were designed to limit\\nthe chances that GPT3.5 and 4 were trained on exactly those requested codes. The tests were submitted through the\\n2\\nweb interface of GPT3.5 and 4. The prompt engineering of the tests follows the GPT best practices of Openai [19]. The\\nresults were evaluated based on an expert human reviewer or compared to another reliable source. As the tests are about\\nchecking different capabilities, more details about the test configuration and the evaluation of the results are given with\\nthe description of each test. Java was used as the programming language. 
All code and other answers generated by\\nGPT3.5 and 4 is on GitHub [19].\\n3\\nAnswering questions\\nIn this task, we test the assistants to see if they can answer questions that often arise for developers when developing\\ncode. For this purpose, we constructed three questions of relative difficulty. We list the relevant prompts and then\\nevaluate the assistants’ answers.\\n• Question 1 (Prompt):\\nDoes Java support passing a function as an argument to a function? What is the syntax?\\n• Question 2 (Prompt):\\nConsider the code\\nSystem.out.print(s==s1+\" \"+s.equals(s1));\\nI expected it to display two boolean values, but it displays only one. Explain why?\\n• Question 3 (Prompt):\\nNon-abstract methods have an implementation. The same applies to the default methods.\\nNon-abstract methods are inherited and can be overwritten. The same applies to default methods.\\nWhat is the difference between default methods and non-abstract ones? Answer briefly.\\nResponse\\nGPT3.5 and 4 responses were evaluated by a human expert and found to answer all three questions satisfactorily.\\nResponses can be found on Github [20].\\n4\\nCode Development Assistance\\nFor code development, we constructed two tests. The first asks for developing a power function, and the second for\\nimplementing a tic-tac-toe application with predetermined classes.\\n4.1\\nPower function (PF)\\nIn this task, we asked GPT3.5 and 4 to implement a function that calculates the power of a real number raised to an\\ninteger exponent. Although the task seems simple at first glance, it is demanding when high calculation precision is\\nrequired. The difficulty arises from the approximate nature of real numbers. Due to the approximate nature of real\\nnumbers, the results of operations lack precision. When there are many intermediate operations, the deviations from\\neach operation accumulate, and the final result may present a significant deviation. 
So, this is a complex implementation\\nwhen precision is required in the calculations. Moreover, it is a feature, not a concern for application developers, as all\\nlanguages provide a ready-made power function. Besides, after an exhaustive search on the web, we could not find a\\nhigh-precision implementation.\\nEvaluation\\nThe generated functions were compared with the Java Math.pow function. The Math.pow() function is implemented\\nin Java as a native method, which means that it is implemented in the underlying platform’s native code. The\\nimplementation of Math.pow() varies depending on the platform and the underlying hardware architecture. The\\nalgorithm is optimized for speed and accuracy and is presumed to be relatively accurate. The results were checked\\nbased on the following procedure.\\nLet GPT4.pow be the function produced by GPT4 and r(f,b,e) the result of the function f with base b and exponent\\ne. For each b from 500 to 1000 with step 1 and each e from 0 to 9 with step 1, the values r(GPT4.pow,b,e) and\\nr(Math.pow,b,e) are calculated. Assume that for each pair of these values, even one is non-infinite, and they differ from\\neach other by more than 4.9E-324 (the smallest real value represented by Java double type). In that case, the absolute\\nvalue of their difference is added to an appropriate adder. Then, the adder is divided by the number of terms in the sum\\nand, thus, the average deviation of the GPT4.pow results from the Math.pow results are calculated. The same process is\\n3\\nrepeated to compare GPT3.5.pow to Math.pow. The whole process is repeated for exponents from -1 to -9.\\nPF Prompt #1\\nDevelop a Java function that calculates the power of a real number raised to an integer exponent.\\nSpecifications:\\n1. Interface: public static double pow(double b, int e)\\n2. Don’t use Math.pow or BigDecimal.pow\\n3. 
Achieve the maximum possible precision\\nResponse\\nBoth systems responded by providing a satisfactory implementation based on the exponentiation by squaring algorithm.\\nThe algorithm has time complexity O(log n), where n is the exponent. The implementations are almost identical, with\\nonly two minor differences:\\n• GPT4 checks if the exponent is odd by performing a bitwise and with 1 ((e&1) == 1) while GPT3.5 performs\\nan integer division remainder calculation (e%2 == 1)\\n• GPT4 performs a right shift by 1 to divide the exponent by 2 (e >>= 1), whereas GPT3.5 performs integer\\ndivision (e/ = 2) for the same purpose.\\nThe algorithms presented the same average deviation with respect to Math.pow, which was 2.356527240763158E10 for\\npositive exponents and 1.7112490986192953E-22 for negative exponents.\\nPF Prompt #2\\nCan you improve the precision of your function? I checked it against Math.pow and found significant discrepancies.\\nExamples:\\nbase = 502, exponent= 9, GPT.pow = 2.0245730632526733E24, Math.pow = 2.024573063252673E24, diference =\\n2.68435456E8\\nbase = 504, exponent = 9, GPT.pow = 2.098335016107156E24, Math.pow = 2.0983350161071556E24, diference =\\n2.68435456E8\\nResponse\\nGPT3.5 responded with a function that implements the Taylor series expansion [21] algorithm, which increases time\\ncomplexity to O(e2). GPT4 again used exponentiation by squaring but used the BigDecimal class [22], recommended\\nfor cases requiring precision in calculations.\\nThe mean deviation of GPT3.5 worsened to 2.2292150579952536E25 for positive exponents and 1.0012331308931004\\nfor negative ones.\\nThe mean deviation of GPT4 improved to 2.3037066373333335E9 for positive exponents and 2.1726446876877912E-2\\nfor negative ones.\\n4.2\\nTic-Tac-Toe application (TTT)\\nIn this task, we asked GPT to develop a tic-tac-toe application following especial specifications. 
We set certain\\nspecifications to minimize the chance that a tic-tac-toe app would be found ready-made and delivered intact.\\nTTT Prompt #1\\nDevelop a command-line tic-tac-toe application consisting of the following classes:\\nPlayer, Board, LivePlayer, RBPlayer, and Game.\\n• Player: Is an Abstract class containing , final char id, abstract method Board move(Board board)\\n4\\n• Class Board: Represents the game board. It contains the following public function members:\\nvoid displayBoard(): It displays the game board on its current status\\nchar win(): It returns the winner’s id. If there is no winner, it returns a white character.\\n• Class LivePlayer: Represents a human player. It is a concrete class implementation inherited from Player.\\n• Class RBPlayer: Represents an artificial Rule-based Player. It is based on the following rules:\\nA. If there is a movement to win, select it.\\nB. If the opponent has a movement to win, select it to block the opponent from winning.\\n• Game: Uses the above-described classes to implement a tic-tac-toe game.\\nResponse\\nGPT4 respond with a fully functional application that meets all our requirements. The code quality is good, including a\\nwarning that the used Board object could have been declared final.\\nGPT3.5 responded with code that contained compile time errors. We performed the following communication to\\ninvestigate its ability to produce correct code.\\nTTT Prompt #1.1\\nYour code compiles with errors. Examples:\\n• error: cells has private access in Board\\nboard.cells[i][j] = id;\\n• error: cannot assign a value to final variable board\\nboard = currentPlayer.move(board);\\nRewrite code to avoid compile-time errors.\\nGPT3.5 replied with code containing logical errors. We prompt it as follows:\\nTTT Prompt #1.2\\nYour code has logical errors. 
Here is the output of your code after two movements of each player\\nPlayer X, enter your move (row [0-2] and column [0-2]): 1 1\\n————-\\n|\\n|\\n|\\n|\\n————-\\n|\\n| X |\\n|\\n————-\\n|\\n|\\n|\\n|\\n————-\\n————-\\n| O |\\n|\\n|\\n————-\\n|\\n|\\n|\\n|\\n————-\\n|\\n|\\n|\\n|\\n————-\\nAfter the second fix, in the third version of the application, GPT3.5 responded with functional code.\\nGPT4 respond with a fully functional application that meets all our requirements. The code quality is good, including a\\nwarning that the used Board object could have been declared final.\\nNext, we requested a new class representing an artificial player based on the minimax [23] algorithm. The minimax\\nimplements a perfect player, i.e., a player who never loses. Therefore, the worst possible outcome minimax may give is\\n5\\na draw.\\nTTT Prompt #2\\nCan you add the class MinimaxPlayer representing an artificial player based on the well-known minimax algorithm?\\nResponse\\nGPT4 responded with a fully functional minimax player. GPT3.5 replayed with an erroneous version of a minimax\\nplayer. A communication ensued in which we attempted to inform GPT3.5 of its errors, but it failed to present a\\nsatisfactory solution. Finally, we prompt GPT3.5 as follows:\\nTTT Prompt #2.1\\nNo improvement. It’s still straightforward for anyone to win your MinimaxPlayer. I’m giving you the game board if it\\ncan help you. Please don’t give me the same wrong algorithm again. If you can’t do better, just let me know.\\nPlayer X, enter your move (row [0-2] and column [0-2]): 2 0\\n————-\\n| O | O | X |\\n————-\\n| O | X |\\n|\\n————-\\n| X |\\n|\\n|\\n————-\\nPlayer X wins!\\nHere, GPT3.5 explained the difficulties of implementing the algorithm and suggested that we study the matter more or\\nlook for a ready-made solution on GitHub.\\n5\\nDebugging Assistance (DA)\\nTo test the debugging capabilities, we designed two tests. 
One includes code that throws an exception, and the other\\nincludes code containing a logic error.\\n5.1\\nException (E)\\nIn this task, we provided a code that crashes with IndexOutOfBoundsException and asked GPT3.5 and 4 to explain the\\nproblem and fix the code.\\nDA-E Prompt #1\\nThe Code below fails with IndexOutOfBoundsException.\\nimport java.util.ArrayList;\\nimport java.util.List;\\npublic class Debug2 {\\nstatic ArrayList<String> l=new ArrayList<>();\\nstatic void load() {\\nl.add(\"Green\");\\nl.add(\"Black\");\\nl.add(\"Blue\");\\nl.add(\"White\");\\nl.add(\"Pink\");\\nl.add(\"Black\");\\n}\\nstatic void delAll(List<String> l, String target) {\\n6\\nint size=l.size();\\nfor (int i=0; i<size; i++)\\nif (target.equals(l.get(i))) {\\nl.remove(i);\\n}\\n}\\npublic static void main(String[] args) {\\nload();\\ndelAll(l,\"Black\");\\n}\\n}\\nExplain the error and correct the code.\\nExplanation of the error\\nFirst, the exception is raised in the delAll function, which is responsible for deleting all the target elements from the\\nlist l. The function stores the list size in the local variable size and then, in the iterative process, tries to delete every\\nelement equal to the target. However, after deleting the first element, the list size is reduced by 1. However, delAll tries\\nto access the list for its original size, which leads to the exception.\\nResponce\\nBoth assistants solved the problem successfully. While GPT3.5 proposed a solution based on an Iterator, GPT4 proposed\\ntwo alternatives. In the first solution, the for control expression replaces the size variable with the function that returns\\nthe list size (l.size()); inside the for, decrements i by one each time it deletes an element. 
The second solution traverses\\nthe list from the end (l.size()-1) to the beginning, thus ensuring no IndexOutOfBoundsException issue.\\n5.2\\nLogical Error (LE)\\nDA-LE Prompt #1\\nThe code below contains logical errors.\\nExpected Output: [1, 2, 3, 4, 0, 5, 6]\\nActual Output: [1, 2, 3, 4, 5, 6, 0, 0, 0, 0]\\nExplain the errors and correct the code.\\n// Code containing logical error\\nimport java.util.Arrays;\\npublic class Debugging {\\nstatic int[] resize(int[] input, int newSize) {\\nreturn Arrays.copyOf(input, newSize < input.length ? newSize : input.length);\\n}\\nstatic int add(int[] array, int data, int index) {\\nfor (int i = 0; i <= index; i++) {\\nif (array[i] == data) {\\nreturn index;\\n}\\n}\\narray[index++] = data;\\nreturn index;\\n}\\nstatic int[] generateSet(int... array) {\\nint[] set = new int[array.length];\\nint idx = 0;\\nfor (int element : array) {\\n7\\nidx = add(set, element, idx);\\n}\\nresize(set, idx);\\nreturn set;\\n}\\nstatic int[] concat(int[] array1, int[] array2) {\\nint[] rslt = new int[array1.length + array2.length];\\nSystem.arraycopy(array1, 0, rslt, 0, array1.length);\\nSystem.arraycopy(array2, 0, rslt, array1.length, array2.length);\\nreturn generateSet(rslt);\\n}\\npublic static void main(String[] args) {\\nint[] set1 = generateSet(1, 2, 3, 4, 0),\\nset2 = generateSet(0, 3, 4, 5, 6);\\nint[] union = concat(set1, set2);\\nSystem.out.println(Arrays.toString(union));\\n}\\n}\\nExplanation of the error\\nThere are two bugs in the code. The first one is found in generateSet, which calls the function resize but does not assign\\nthe array returned by resize to the set variable. Thus, the set retains its original size and data. 
So the fix needed here is\\nreturn resize(set, idx); instead of resize(set,idx); return set; The second error is within the add function, which iterates\\nwhile i<=index, whereas the correct condition is i<index.\\nResponse\\nFirst, GPT3.5 and GPT4 correctly explained the problems in the add and generateSet functions. In addition, they\\nidentified a resize problem when there is none. More specifically, GPT3.5 commented:\\n1. The resize method is not updating the size of the array correctly. It creates a new array of the specified size but\\ndoesn’t copy the elements from the original array.\\n2. Use Arrays.copyOf to create a new array of the desired size and copy the elements from the original array to\\nthe new one.\\nAnd GPT4 commented:\\n1. Resize method: In the current implementation, if newSize is larger than input.length, it would return an array\\nof the same size as input. This does not match the intended behavior of resizing the array to newSize.\\nThese comments are wrong.\\nHowever, the generated codes are functional as they correctly fix both add and generateSet, while the change they make\\nto resize does not affect the specific code. More specifically, both systems converted resize so that it does not support\\nreducing the size of the input table. Indeed, size reduction is not needed in this code. Of course, a resize that helps\\nreduce a table’s length (with possible data loss) might be helpful elsewhere.\\n6\\nConclusions\\nIn this work, we examined the potential of GPT3.5 and 4 as coding assistants for three distinct tasks: Answering\\nquestions and providing Development and Debugging assistance. In answering questions, both LLMs proved to be\\nefficient. In Development assistance, GPT4 proved superior to GPT3.5. Both in creating the pow function, it achieved a\\nsignificant improvement in accuracy, and in the requirements for the tic-tac-toe application, it immediately responded\\nwith complete success. 
Moreover, it added a player based on the Minimax algorithm with ease. This is a requirement,\\naccording to our estimation, that is far from easy to implement. GPT3.5 failed to meet this requirement. In testing the\\ndebugging capabilities, GPT3.5 and 4 responded promptly and successfully to exception and logical error investigations.\\nThese conclude that GPT4 can provide substantial and reliable help as a coding assistant for all three properties tested.\\nAs expected, GPT3.5 appeared inferior to GPT4, but its capabilities are still impressive. Recently, a heated debate has\\nbeen about whether artificial intelligence will replace human programmers. We believe the answer to this question is\\n8\\nimpossible, as no one can predict the future. However, currently, GPT4 can provide meaningful and reliable assistance\\nto coding and dramatically improve the productivity of human developers. Such a thing is sure to reorganize the\\nsoftware production processes and possibly will not leave the job market of programmers unaffected. Whether its effect\\nwill increase the amount of software produced or unemployment in the developer industry remains to be seen.\\nReferences\\n[1] A. Tamkin, M. Brundage, J. Clark, and D. Ganguli, “Understanding the Capabilities, Limitations, and Societal\\nImpact of Large Language Models,” 2021.\\n[2] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, “Attention\\nIs All You Need,” Aug. 2023. arXiv:1706.03762 [cs].\\n[3] Z. Feng, D. Guo, D. Tang, N. Duan, X. Feng, M. Gong, L. Shou, B. Qin, T. Liu, D. Jiang, and M. Zhou,\\n“CodeBERT: A Pre-Trained Model for Programming and Natural Languages,” 2020.\\n[4] M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. d. O. Pinto, J. Kaplan, H. Edwards, Y. Burda, N. Joseph, G. Brockman,\\nA. Ray, R. Puri, G. Krueger, M. Petrov, H. Khlaaf, G. Sastry, P. Mishkin, B. Chan, S. Gray, N. Ryder, M. Pavlov,\\nA. Power, L. Kaiser, M. Bavarian, C. Winter, P. Tillet, F. P. 
Such, D. Cummings, M. Plappert, F. Chantzis,\\nE. Barnes, A. Herbert-Voss, W. H. Guss, A. Nichol, A. Paino, N. Tezak, J. Tang, I. Babuschkin, S. Balaji,\\nS. Jain, W. Saunders, C. Hesse, A. N. Carr, J. Leike, J. Achiam, V. Misra, E. Morikawa, A. Radford, M. Knight,\\nM. Brundage, M. Murati, K. Mayer, P. Welinder, B. McGrew, D. Amodei, S. McCandlish, I. Sutskever, and\\nW. Zaremba, “Evaluating Large Language Models Trained on Code,” 2021.\\n[5] J. Austin, A. Odena, M. Nye, M. Bosma, H. Michalewski, D. Dohan, E. Jiang, C. Cai, M. Terry, Q. Le, and\\nC. Sutton, “Program Synthesis with Large Language Models,” 2021.\\n[6] S. Lu, N. Duan, H. Han, D. Guo, S.-w. Hwang, and A. Svyatkovskiy, “ReACC: A Retrieval-Augmented Code\\nCompletion Framework,” 2022.\\n[7] D. Zan, B. Chen, D. Yang, Z. Lin, M. Kim, B. Guan, Y. Wang, W. Chen, and J.-G. Lou, “CERT: Continual\\nPre-Training on Sketches for Library-Oriented Code Generation,” 2022.\\n[8] F. Christopoulou, G. Lampouras, M. Gritta, G. Zhang, Y. Guo, Z. Li, Q. Zhang, M. Xiao, B. Shen, L. Li, H. Yu,\\nL. Yan, P. Zhou, X. Wang, Y. Ma, I. Iacobacci, Y. Wang, G. Liang, J. Wei, X. Jiang, Q. Wang, and Q. Liu,\\n“PanGu-Coder: Program Synthesis with Function-Level Language Modeling,” 2022.\\n[9] Y. Li, D. Choi, J. Chung, N. Kushman, J. Schrittwieser, R. Leblond, T. Eccles, J. Keeling, F. Gimeno, A. Dal Lago,\\nT. Hubert, P. Choy, C. De Masson d’Autume, I. Babuschkin, X. Chen, P.-S. Huang, J. Welbl, S. Gowal,\\nA. Cherepanov, J. Molloy, D. J. Mankowitz, E. Sutherland Robson, P. Kohli, N. De Freitas, K. Kavukcuoglu, and\\nO. Vinyals, “Competition-level code generation with AlphaCode,” Science, vol. 378, pp. 1092–1097, Dec. 2022.\\n[10] E. Nijkamp, B. Pang, H. Hayashi, L. Tu, H. Wang, Y. Zhou, S. Savarese, and C. Xiong, “CodeGen: An Open\\nLarge Language Model for Code with Multi-Turn Program Synthesis,” 2022.\\n[11] G. Sandoval, H. Pearce, T. Nys, R. Karri, S. Garg, and B. 
Dolan-Gavitt, “Lost at C: A User Study on the Security\\nImplications of Large Language Model Code Assistants,” 2022.\\n[12] F. Zhang, B. Chen, Y. Zhang, J. Liu, D. Zan, Y. Mao, J.-G. Lou, and W. Chen, “RepoCoder: Repository-Level\\nCode Completion Through Iterative Retrieval and Generation,” 2023.\\n[13] D. Zan, B. Chen, F. Zhang, D. Lu, B. Wu, B. Guan, Y. Wang, and J.-G. Lou, “Large Language Models Meet\\nNL2Code: A Survey,” 2022.\\n[14] R. Li, L. B. Allal, Y. Zi, N. Muennighoff, D. Kocetkov, C. Mou, M. Marone, C. Akiki, J. Li, J. Chim, Q. Liu,\\nE. Zheltonozhskii, T. Y. Zhuo, T. Wang, O. Dehaene, M. Davaadorj, J. Lamy-Poirier, J. Monteiro, O. Shliazhko,\\nN. Gontier, N. Meade, A. Zebaze, M.-H. Yee, L. K. Umapathi, J. Zhu, B. Lipkin, M. Oblokulov, Z. Wang,\\nR. Murthy, J. Stillerman, S. S. Patel, D. Abulkhanov, M. Zocca, M. Dey, Z. Zhang, N. Fahmy, U. Bhattacharyya,\\nW. Yu, S. Singh, S. Luccioni, P. Villegas, M. Kunakov, F. Zhdanov, M. Romero, T. Lee, N. Timor, J. Ding,\\nC. Schlesinger, H. Schoelkopf, J. Ebert, T. Dao, M. Mishra, A. Gu, J. Robinson, C. J. Anderson, B. Dolan-Gavitt,\\nD. Contractor, S. Reddy, D. Fried, D. Bahdanau, Y. Jernite, C. M. Ferrandis, S. Hughes, T. Wolf, A. Guha, L. von\\nWerra, and H. de Vries, “StarCoder: may the source be with you!,” 2023.\\n[15] Z. Luo, C. Xu, P. Zhao, Q. Sun, X. Geng, W. Hu, C. Tao, J. Ma, Q. Lin, and D. Jiang, “WizardCoder: Empowering\\nCode Large Language Models with Evol-Instruct,” 2023.\\n[16] Z. Xiao, X. Yuan, Q. V. Liao, R. Abdelghani, and P.-Y. Oudeyer, “Supporting Qualitative Analysis with Large\\nLanguage Models: Combining Codebook with GPT-3 for Deductive Coding,” in 28th International Conference\\non Intelligent User Interfaces, (Sydney NSW Australia), pp. 75–78, ACM, Mar. 2023.\\n9\\n[17] F.\\nOkeke,\\n“The\\n12\\nbest\\nIDEs\\nfor\\nprogramming.”\\nhttps://www.techrepublic.com/article/\\nbest-ide-software/, July 2022. 
[Accessed 14-09-2023].\\n[18] “GPT-4 System Card | Data Science Association.” http://www.datascienceassn.org/content/\\ngpt-4-system-card. [Accessed 14-09-2023].\\n[19] “OpenAI Platform.” https://platform.openai.com/docs/guides/gpt-best-practices. [Accessed 14-\\n09-2023].\\n[20] “GitHub\\n-\\nlmous/openai-gpt4-coding-assistantt\\n—\\ngithub.com.”\\nhttps://github.com/lmous/\\nopenai-gpt4-coding-assistant. [Accessed 15-09-2023].\\n[21] “Taylor Series – from Wolfram MathWorld — mathworld.wolfram.com.” https://mathworld.wolfram.com/\\nTaylorSeries.html. [Accessed 15-09-2023].\\n[22] “BigDecimal (Java Platform SE 8 ).” https://docs.oracle.com/javase/8/docs/api/java/math/\\nBigDecimal.html. [Accessed 15-09-2023].\\n[23] J. von Neumann, O. Morgenstern, and A. Rubinstein, Theory of Games and Economic Behavior (60th Anniversary\\nCommemorative Edition). Princeton University Press, 1944.\\n10\\n', metadata={'Published': '2023-09-22', 'Title': \"OpenAi's GPT4 as coding assistant\", 'Authors': 'Lefteris Moussiades, George Zografos', 'Summary': 'Lately, Large Language Models have been widely used in code generation. GPT4\\nis considered the most potent Large Language Model from Openai. In this paper,\\nwe examine GPT3.5 and GPT4 as coding assistants. More specifically, we have\\nconstructed appropriate tests to check whether the two systems can a) answer\\ntypical questions that can arise during the code development, b) produce\\nreliable code, and c) contribute to code debugging. The test results are\\nimpressive. The performance of GPT4 is outstanding and signals an increase in\\nthe productivity of programmers and the reorganization of software development\\nprocedures based on these new tools.'})]"
      ]
     },
     "execution_count": 21,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain_community.document_loaders import ArxivLoader\n",
    "\n",
    "# 按 arXiv 编号检索并加载论文全文及元数据；load_max_docs 限制最多加载的文档数\n",
    "docs = ArxivLoader(query=\"2309.12732v1\", load_max_docs=2).load()\n",
    "docs"
   ]
  },
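  {
   "cell_type": "markdown",
   "id": "3f7a1c92",
   "metadata": {},
   "source": [
    "除了全文，`ArxivLoader` 还会在 `metadata` 中返回论文的基本信息（如 Published、Title、Authors、Summary，见上方输出末尾），可以单独查看："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8c2d5e41",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 查看第一篇文档的元数据字典\n",
    "docs[0].metadata"
   ]
  },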
  {
   "cell_type": "code",
   "execution_count": 38,
   "id": "2e8ddb20",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_core.output_parsers import StrOutputParser\n",
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "\n",
    "# 构造提示模板：先注入论文全文，再要求模型用中文详解并提炼要点\n",
    "prompt = ChatPromptTemplate.from_template(\"{article}\\n\\n\\n请使用中文详细讲解上面这篇文章内容,并将核心的要点提炼出来\")\n",
    "output_parser = StrOutputParser()\n",
    "\n",
    "# LCEL 管道：提示模板 -> 聊天模型 -> 纯字符串输出\n",
    "chain = prompt | llm | output_parser"
   ]
  },
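  {
   "cell_type": "markdown",
   "id": "6b9e2f10",
   "metadata": {},
   "source": [
    "在正式调用之前，可以先用一小段占位文本预览模板渲染后的消息，确认 `{article}` 的拼接方式符合预期（下面的示例文本只是任意占位内容）："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0d4c7a3e",
   "metadata": {},
   "outputs": [],
   "source": [
    "# ChatPromptTemplate 本身也是 Runnable，invoke 返回渲染后的 ChatPromptValue\n",
    "prompt.invoke({\"article\": \"这里是示例文本\"})"
   ]
  },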
  {
   "cell_type": "code",
   "execution_count": 39,
   "id": "f143be71",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'本文主要介绍了OpenAI GPT-4在编程领域中的应用，并提供了一个基于GPT-4的代码生成和辅助工具。该工具可以用于多种编程语言，包括Python、Java、C++等。它可以帮助程序员快速编写代码，提高开发效率。此外，该工具还可以进行代码补全、代码重构、代码审查等功能，从而进一步提高了程序员的开发体验。\\n\\n核心要点：\\n1. OpenAI GPT-4在编程领域中的应用；\\n2. 基于GPT-4的代码生成和辅助工具；\\n3. 支持多种编程语言，包括Python、Java、C++等；\\n4. 提供代码补全、代码重构、代码审查等功能。'"
      ]
     },
     "execution_count": 39,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 将第一篇论文的全文注入链中执行\n",
    "chain.invoke({\"article\": docs[0].page_content})"
   ]
  },
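  {
   "cell_type": "markdown",
   "id": "5e1f8b27",
   "metadata": {},
   "source": [
    "长论文全文可能超出本地模型的上下文窗口。下面是一个简单的截断示意（`max_chars` 的取值只是假设，请按所用模型的实际上下文长度调整）："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9a6d3c18",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 假设的字符预算，请按实际模型的上下文长度调整\n",
    "max_chars = 8000\n",
    "\n",
    "# 只取全文的前 max_chars 个字符再交给链处理\n",
    "chain.invoke({\"article\": docs[0].page_content[:max_chars]})"
   ]
  },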
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a9bc017e",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
